Tags: Artificial intelligence, Evaluation Factor, MASS, Risk management and Risk source
Abstract:
Alongside the implementation of MASS, Artificial Intelligence (AI) is becoming a prominent issue. As part of this development, it is necessary to identify possible risk sources and a reasonable evaluation mechanism. International standards on AI, machine learning, and risk assessment already exist and can be considered and interpreted to fit the maritime sector. This article aims to identify risk sources for AI in the maritime sector based on the document on AI concepts and terminology (ISO/IEC 22989), such as the level of automation, lack of transparency and explainability, complexity of the environment, system life cycle issues, system hardware issues, and technology readiness. It also proposes evaluation factors that can be applied in practice to those risk sources: robustness, reliability, resilience, controllability, explainability, predictability, transparency, fairness, jurisdictional issues, precision, recall, accuracy, and F1 score, drawn from international standards on risk management, AI, and Machine Learning (ML). The article also reviews two MASS-related guidelines, from Det Norske Veritas (DNV) and the American Bureau of Shipping (ABS), to establish which risk sources are currently contemplated and how. The proposed combination of risk sources and evaluation factors can thus be applied to evaluate AI in practice, after being adjusted to fit the specific context of application. All kinds of MASS risk stakeholders, such as risk and test managers, equipment makers, ship owners, and classification societies, are potential users of these factors and methods.
Considerable Risk Sources and Evaluation Factors for Artificial Intelligence in Maritime Autonomous Systems