The shipping industry is facing a turning point: deciding whether or not to keep relying on noon reports.
Today, access to improved satellite technology is widespread and more shipping companies are embracing methods of high-frequency sensor data collection.
The biggest challenges ship owners must overcome are related to capturing and processing high-frequency data, unifying it in the context of other important datasets, and, ultimately, ensuring that all teams have access to the insights that it generates. All of that is to say that high-frequency data alone has limited value, but hidden inside it are valuable insights one would never find relying on manually reported data.
Generating these insights requires sophistication in managing time-series data, data modelling, and machine learning that most shipping companies don’t have internally.
With the proper software partner, though, they can turn high-frequency data into their competitive advantage, and at Nautilus Labs, that’s just the type of work we’re doing with our clients. But first, how can our clients be confident that data outputted by sensors on their vessels is accurate?
As part of collecting high-frequency sensor data, maintaining confidence in sensor health and accuracy is of utmost importance. At Nautilus Labs, we’ve built custom, vessel-specific mathematical models based on troves of historical sensor data, allowing us not only to better predict future vessel behaviour but also to understand current vessel behaviour and sensor data. We monitor sensor health in two ways.
Historical sensor data evaluation over time
First, we compare sensor data against its historical output over time. If a sensor value deviates from its average by multiple standard deviations (under the appropriate correlating conditions), we alert our users.
If this behaviour persists over several consecutive data points, we flag the sensor for further investigation into its functionality.
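The check above can be sketched in a few lines of Python. This is a minimal illustration, not Nautilus Labs’ actual implementation: the three-sigma threshold, the three-point escalation rule, and the function name are all assumptions made for the example.

```python
import statistics

def flag_deviant_readings(history, new_readings, n_sigma=3, consecutive=3):
    """Flag readings more than `n_sigma` standard deviations from the
    historical mean; escalate to "investigate" when the deviation
    persists for `consecutive` points in a row.

    Thresholds and names are illustrative, not Nautilus Labs' own.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    alerts, run = [], 0
    for i, value in enumerate(new_readings):
        deviant = abs(value - mean) > n_sigma * stdev
        run = run + 1 if deviant else 0
        if deviant:
            alerts.append((i, value, "alert"))
        if run >= consecutive:
            # Sustained deviation: upgrade the latest alert.
            alerts[-1] = (i, value, "investigate")
    return alerts
```

A real pipeline would condition the baseline on vessel state (speed, draft, weather) rather than use a single global mean, but the alert/escalate pattern is the same.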
Leveraging our vessel-specific mathematical model, we also compare the sensor’s output to what the model would expect under the vessel’s conditions at that point in time.
If the sensor value falls outside the expected range, we examine the sensor further and, with the approval of our clients, filter the point out of our data set.
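A minimal sketch of that model-based filter, assuming a generic `model_predict(conditions)` callable standing in for the vessel-specific model and an illustrative 10% tolerance band:

```python
def check_against_model(readings, model_predict, tolerance=0.10):
    """Split readings into those within the model's expected band and
    those flagged for review.

    `model_predict` and the tolerance are assumptions for this sketch;
    flagged points are only filtered out with client approval.
    """
    kept, flagged = [], []
    for conditions, value in readings:
        expected = model_predict(conditions)
        if abs(value - expected) <= tolerance * abs(expected):
            kept.append((conditions, value))
        else:
            flagged.append((conditions, value, expected))
    return kept, flagged
```

For example, with a toy cubic speed-to-fuel model, a reading far above the model’s expectation at the same speed lands in the flagged list for review.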
Programmatic evaluation of sensor data against noon data
Second, we check sensor outputs via our noon verification tool. We monitor crew-reported noon data to determine whether each sensor output is within multiple standard deviations of the corresponding noon-reported value. If a data point is beyond an acceptable error, we investigate it and determine whether it is acceptable for our models.
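The noon check above amounts to comparing one reported number against the distribution of the day’s high-frequency readings. A minimal sketch, with an illustrative two-sigma threshold (the real tool’s thresholds and fields are its own):

```python
import statistics

def verify_against_noon(sensor_values, noon_value, n_sigma=2):
    """Check whether a crew-reported noon value sits within `n_sigma`
    standard deviations of the day's high-frequency sensor readings.

    Threshold and return fields are illustrative assumptions.
    """
    mean = statistics.fmean(sensor_values)
    stdev = statistics.stdev(sensor_values)
    deviation = abs(noon_value - mean)
    return {
        "sensor_mean": mean,
        "deviation": deviation,
        "acceptable": deviation <= n_sigma * stdev,
    }
```

A disagreement flagged here could mean either a reporting error or a sensor fault, which is why flagged points go to investigation rather than automatic deletion.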
In the future, we also aim to incorporate cross-verification matrices that validate a sensor value by examining correlated values. This will allow us to determine with greater accuracy whether deviant sensor values result from sensor error or are in fact valid, based on corroborating data from other ship sensors.
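Since this is a future direction rather than a shipped feature, the following is purely a sketch of what one such cross-check might look like: if the ratio between two correlated sensors (say, fuel flow and shaft power) stays consistent with its history, a deviant reading is treated as corroborated real behaviour rather than a sensor error. All names and thresholds are assumptions.

```python
import statistics

def cross_verify(value, partner_value, ratio_history, n_sigma=2):
    """Treat a deviant reading as corroborated if its ratio to a
    correlated partner sensor is consistent with the historical ratio.

    Illustrative sketch only; not a Nautilus Labs feature.
    """
    mean = statistics.fmean(ratio_history)
    stdev = statistics.stdev(ratio_history)
    ratio = value / partner_value
    return abs(ratio - mean) <= n_sigma * stdev
```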
Exposing sensor health with Nautilus
Because we approach sensor health monitoring from an unbiased programmatic point of view, we are able to objectively detect both noon reporting inaccuracies as well as sensor issues.
For example, for a particular client’s vessel we conducted an analysis comparing sensor vs noon reported values for forward, mid, and aft draft readings. The goal was to plot the three for a given vessel to expose the discrepancies and uncover potential sensor health issues.
Potential issues in sensor readings of draft may present larger problems for our clients because their understanding of vessel performance relies on an accurate understanding of draft at any point in time.
Using Nautilus Labs, once a problem with a draft sensor is uncovered, we can adjust our machine learning models to remove any residual impact it may have on our performance understanding of the vessel.
If and when we expose a potential sensor health issue, we’re able to programmatically include or exclude specific variables in our machine learning models to ensure that our predictive models are not impacted by sensor health issues.
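Mechanically, excluding a compromised variable from a model can be as simple as dropping its column before (re)training. A minimal sketch, with hypothetical feature names; the actual Nautilus pipeline is its own:

```python
def select_features(feature_matrix, feature_names, unhealthy):
    """Drop columns tied to sensors flagged as unhealthy so a bad
    sensor cannot bias model training. Names are illustrative.
    """
    keep = [i for i, name in enumerate(feature_names) if name not in unhealthy]
    rows = [[row[i] for i in keep] for row in feature_matrix]
    return rows, [feature_names[i] for i in keep]
```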
Our team is also working on building heuristics on top of exposed data issues that allow us to determine and handle future issues programmatically.
Our clients are finding a lot of value in our automated monitoring of erroneous noon reported data entries pertaining to fuel consumption. More specifically, high-frequency sensor fuel flow data should reconcile with day-over-day remaining-on-board (ROB) noon report entries. In the instances we find that they don’t, Nautilus Platform alerts our clients for further investigation.
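The ROB reconciliation described above is a simple balance check: yesterday’s ROB plus any bunkered fuel, minus today’s ROB, should match the sensor-integrated consumption. A sketch, with an illustrative 2% tolerance and hypothetical field names:

```python
def reconcile_rob(rob_yesterday, rob_today, bunkered, sensor_consumption,
                  tolerance=0.02):
    """Reconcile noon-report remaining-on-board (ROB) figures with
    high-frequency fuel-flow data. Tolerance and names are assumptions.
    """
    reported_consumption = rob_yesterday + bunkered - rob_today
    discrepancy = abs(reported_consumption - sensor_consumption)
    ok = discrepancy <= tolerance * max(sensor_consumption, 1e-9)
    return reported_consumption, discrepancy, ok
```

When the check fails, the mismatch could sit on either side: a mistyped ROB entry or a drifting flow meter, so the alert prompts investigation rather than an automatic correction.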
As with any technology, sensors providing high-frequency data may malfunction. While our clients may experience sensor glitches, Nautilus Labs’ ability to detect them programmatically and remove their residual impact on dependent machine learning models reduces the client’s exposure to, effectively, zero.
To learn more about how Nautilus Labs can help you monitor your sensor health and accuracy, please get in touch through the enquiry form on this page.