“Trust is the highest form of human motivation,” says Stephen Covey. It is the foundation of meaningful relationships and the key to effective collaboration. Without it, you face major obstacles; with it, you can move mountains. Trust and transparency in data are the basis for doing business collaboratively today, giving stakeholders the accurate and reliable information they need to make better decisions. Without real data, validated through available technology, stakeholders are left at the mercy of models and estimations that are error-prone and grossly inefficient, making it harder to maintain standards of excellence and drive better policy decisions.
When it comes to calculating greenhouse gas emissions, the industry has struggled to develop reliable algorithms that meet universal acceptance. Because of this, data collection and modelled calculation have become the typical methodology for estimating the shipping industry’s carbon output. The problem is exacerbated by the fact that no standardized means or data definitions exist for universally measuring and validating things such as how to measure carbon reduction, what constitutes a voyage, or even what ‘green’ actually means. Without industry-wide transparent data to accurately benchmark results, some parties may embellish their reports to show better results. Alisdair Pettigrew, Managing Director of BLUECOMMS, said, “There have been a number of instances where false claims of fuel efficiency and reductions in SOx and NOx have been made.” (1) Whether it’s additive providers who inflate the efficacy of their elixirs or shippers who make questionable decarbonization claims, without agreed measurements anyone can say nearly anything without repercussions.
Fortunately, digitalization has accelerated across the industry in recent years, producing considerable amounts of readily available data for aggregation and analysis. The question today is how we use this data, along with AI-based analysis and backpropagation, to arrive at meaningful conclusions about how to calculate, with exacting accuracy, things such as the actual amount of carbon produced and reduced. Using technology, we can validate or modify a model and progressively reduce the error between its predicted and actual outputs, as the sketch below illustrates.
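As a minimal illustration of reducing the gap between predicted and actual outputs, the Python sketch below compares a fixed-factor estimate against measured values and quantifies the error a model would need to close. Every figure in it is a hypothetical placeholder, not real fleet data.

```python
# Minimal sketch: quantify the gap between modelled and measured CO2 output.
# All numbers below are illustrative placeholders, not real fleet data.

measured_kg_per_mt = [310.2, 298.7, 305.1, 312.9]  # hypothetical sensor-derived values
modelled_kg_per_mt = 419.0                         # hypothetical fixed-factor estimate

# Per-observation error between the model's estimate and what was measured.
errors = [modelled_kg_per_mt - m for m in measured_kg_per_mt]
mean_error = sum(errors) / len(errors)
print(f"Mean over-estimate: {mean_error:.1f} kg CO2 per MT of fuel")
```

It is this measured error, computed continuously against real-world data, that gives a model something concrete to refine against.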
“All models are wrong, but some are useful” (2) is a common aphorism intended to imply that while models may fall short of reality, they are still very useful. For example, a map (model) of a city may become outdated over time, but that doesn’t make the old map totally useless, because in the absence of anything better it can still get you close enough to where you need to be. But maps, like good models, need continual refinement: how good would Google Maps be if it weren’t continually validated and updated for accuracy? “Validation becomes necessary as soon as we want to use a model to make informed statements about the real world,” says Claudius Graebner in his seminal work on the validation of models. (3)
Training any model, especially AI models that improve automatically through experience and the use of data, introduces significant risk of bias, miscalculation, and flawed algorithms. That is why backpropagation – a method applied to mathematical and AI models to validate algorithms and help ensure consistently accurate outcomes – is critical. In artificial intelligence and data science, backpropagation has become the standard means of training neural networks, making models more reliable by systematically improving their ability to generalize; it is how the machine learns and refines its algorithms through experiential use of the data. By taking a wide array of real-world data (such as what can be collected across the maritime industry) and running it through an AI model, predicted outputs are compared to actual outcomes and an error (or loss) is calculated. That error is then propagated back through the model (network), and weights are adjusted to reduce the loss on future predictions. The model is trained with multiple inputs until it can predict with high accuracy and maintain that accuracy in the face of changing environments. For example, simple calculations of ship fuel consumption may result in carbon dioxide outputs of 419 kg/MT (4), whereas computational analysis of better data may instead reveal an output of 305.83 kg/MT (5,6) – a marked difference leading to more informed decisions and perhaps saving thousands in applicable penalties.
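To make that loop concrete, here is a minimal Python sketch of backpropagation on a tiny two-layer network. The data, features, and dimensions are synthetic stand-ins invented for illustration (not FuelTrust’s model or real voyage data); a production system would use far richer inputs.

```python
import numpy as np

# Illustrative sketch of the cycle described above: forward pass, loss,
# backward pass, weight update. Inputs are synthetic stand-ins for voyage
# data (e.g. fuel burned, speed, load) and targets for measured CO2 output.

rng = np.random.default_rng(0)
X = rng.random((64, 3))  # 64 hypothetical voyages, 3 hypothetical features
y = (X @ np.array([2.0, -1.0, 0.5]))[:, None] + 0.1 * rng.standard_normal((64, 1))

W1, b1 = rng.standard_normal((3, 8)) * 0.1, np.zeros((1, 8))
W2, b2 = rng.standard_normal((8, 1)) * 0.1, np.zeros((1, 1))
lr = 0.05  # learning rate

for step in range(2000):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    # Loss: mean squared error between predicted and actual outputs.
    loss = np.mean((pred - y) ** 2)
    # Backward pass: propagate the error back and compute gradients.
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0, keepdims=True)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)  # derivative of tanh
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)
    # Update: adjust weights to reduce the loss on future predictions.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print(f"final training loss: {loss:.5f}")
```

Each pass compares predictions to known outcomes, computes the loss, and pushes the resulting gradients back through the network to nudge the weights – exactly the cycle described in the paragraph above.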
Today, almost all models applied to carbon emission calculations produce outcomes greater than the actual exhaust outputs of individual ships, their respective fleets, or the global fleet. With the mandate to drastically reduce carbon footprints over the next 30 years, analysis of large real-world data sets, with standardized definitions and validated via backpropagation, can give industry, regulators, and policymakers a relevant representation of results. This will require mass sharing and secure validation, through scientifically acceptable models, until predicted results mirror actual outputs.
FuelTrust’s patent-pending bunker technology allows for the secure, traceable and controlled sharing of this industry information, while enabling advanced AI-based and backpropagated analysis that can power better business decisions.
1. https://www.blue-comms.com/greenwashing-shipping-communications-environmental/
2. https://www.lacan.upc.edu/admoreWeb/2018/05/all-models-are-wrong-but-some-are-useful-george-e-p-box/
3. http://jasss.soc.surrey.ac.uk/21/3/8.html
4. https://www.eia.gov/environment/emissions/co2_vol_mass.php
5. https://www.sciencedirect.com/book/9781845697273/advances-in-clean-hydrocarbon-fuel-processing
6. http://www.petroleum.co.uk/how-hydrocarbons-burn