ESREL2023: EUROPEAN SAFETY AND RELIABILITY CONFERENCE 2023
PROGRAM FOR WEDNESDAY, SEPTEMBER 6TH

09:00-09:40 Session 14: Plenary session - Professor Jin Wang - Effects of offshore safety case regulations on vessel/platform collision incidents

Prof. Jin Wang has served as Associate Dean (Research) in the Faculty of Engineering and Technology of LJMU since 2015 and as Director of the LOOM Research Institute since 2003. Prof. Wang joined LJMU as a lecturer in 1995 and was promoted to Reader in Marine Engineering and Professor of Marine Technology in 1999 and 2002, respectively. His research interests are in risk-based design and operation of large maritime engineering systems. He has published extensively in this area, placing him among the top 70 researchers in Civil Engineering worldwide, in terms of publications and citations, in the World Ranking of Scientists since 2020. He has authored or co-authored over 500 technical research outputs, including 2 research monographs and 200 SCI-cited journal papers. He is Editor-in-Chief of the Journal of Marine Engineering and Technology, and Chair of the UK-Malaysia University Consortium (UK-MUC), which brings together 16 UK universities and all 20 public Malaysian universities to expand international higher education collaboration between the UK and Malaysia.

09:45-11:00 Session 15A: Prognostics and Systems Health Management IV

Location: Room 100/3023
09:45
On the joint use of multiple linear residual generators for improving fault detection in wind turbines

ABSTRACT. In this work, we propose a SCADA-data-based method for fault detection in wind turbines. The proposed method relies on an ensemble of linear residual generators. In previous work [1], a model generation process for the monitoring of wind turbines using a multi-turbine residual indicator was developed and applied to several fault cases impacting different components. This process is based on SCADA variables and uses simple linear normal-behaviour models composed of only three regressors, to keep the models interpretable for an operator and robust to industrial constraints related to data quality and cost of implementation. While the results showed good detection performance on real cases, some improvements must be made to enable fleet-wide deployment of the proposed monitoring tool: the false alarm rate remains high on fault-free periods, and some faults remain undetected. The significant false positive rate can be explained by the chosen modelling approach: wind turbine operating variables are linked by several non-linear relationships that cannot be fully captured by linear models, hence generating high prediction errors. The non-detections can be explained by the fact that a given fault can have a similar impact on several correlated variables, including the variable to be estimated. As the model generation process focuses only on the estimated performance on normal-behaviour data, it may select regressors that are correlated during a faulty period; the health indicator from the associated model is then unable to detect the fault. In both cases, using a larger number of variables in the model generation process may solve the problem, taking advantage of the diversity of the available SCADA database to capture different faulty behaviours. In this study, we propose a method to automatically generate a set of tri-variate linear models predicting the evolution of the same variable. The models are learnt on fault-free periods and tailored to be industrially deployable on a fleet of wind turbines. The process is based on a constrained greedy forward selection algorithm generating five unique linear models; each model is made different by forcing the variable selection algorithm to use different input variables. An additional model is also considered, composed of variables linked to the output variable but whose variations are not impacted by wind turbine faults. Multi-turbine health indicators are then built using these models, together with a signal comparing the variation of the variable to be estimated at the wind farm level. The detection decisions obtained by comparing these seven generators to a threshold are merged using a "median vote" in order to reach a global decision. The method is evaluated on six fault cases where the false alarm rate of the previous approach remained high or where the fault was not detected. The results show that the studied indicators have complementary detection characteristics, allowing a significant improvement of the monitoring performance on these fault cases based on the global decision.
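As an illustration of the ensemble idea, the sketch below (synthetic data and invented variable subsets, not the authors' implementation; it builds five of the seven generators) fits tri-variate linear normal-behaviour models on fault-free data, thresholds each residual, and merges the decisions with a median vote:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic SCADA-like data: 6 operating variables, one target (e.g. a temperature).
X = rng.normal(size=(2000, 6))
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + 0.1 * rng.normal(size=2000)

# Five tri-variate models, each forced to use a different subset of regressors.
subsets = [[0, 1, 2], [0, 1, 3], [0, 2, 4], [1, 3, 5], [2, 4, 5]]
models = [LinearRegression().fit(X[:1000][:, s], y[:1000]) for s in subsets]

# Residual generators: prediction error of each model.
def residuals(X_new, y_new):
    return np.array([y_new - m.predict(X_new[:, s])
                     for m, s in zip(models, subsets)])

# Per-generator thresholds calibrated on fault-free validation data (3 sigma).
r_val = residuals(X[1000:], y[1000:])
thr = 3.0 * r_val.std(axis=1, keepdims=True)

# Median vote: alarm if the majority of generators exceed their threshold.
def median_vote(X_new, y_new):
    votes = np.abs(residuals(X_new, y_new)) > thr   # (n_models, n_samples)
    return np.median(votes.astype(float), axis=0) > 0.5

print(median_vote(X[1000:1010], y[1000:1010] + 2.0))  # offset mimics a fault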

10:00
Data Driven Approach for Diagnostic and Prognostic of Vertical Motor-Driven Pump

ABSTRACT. The paper presents a risk-informed predictive maintenance strategy to achieve condition-based maintenance. It describes a data architecture used to collect heterogeneous data from vertical motor-driven pumps, and how the collected data are used by a feature engineering module to extract salient features associated with different faults. Once fault signatures are developed, a diagnostic model such as eXtreme Gradient Boosting is used to automate the fault classification process. Given the diagnostic outcome, a prognostic model such as the AutoRegressive Integrated Moving Average is used to forecast the health condition of the motor-driven pump. Along with predictions for 1-hour, 24-hour, and 48-hour horizons, uncertainty bounds are also computed.
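A minimal sketch of such a diagnostics-plus-prognostics pipeline (synthetic data; scikit-learn's GradientBoostingClassifier stands in for XGBoost to avoid the extra dependency, and statsmodels' ARIMA produces the forecast with uncertainty bounds; not the authors' code):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# Diagnostics: classify fault type from engineered features (synthetic example).
X = rng.normal(size=(500, 8))                      # e.g. vibration/current features
fault = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)  # hypothetical fault signature
clf = GradientBoostingClassifier().fit(X[:400], fault[:400])
print("diagnostic accuracy:", clf.score(X[400:], fault[400:]))

# Prognostics: forecast a health index with ARIMA, with 95% uncertainty bounds.
t = np.arange(300)
health = 100 - 0.05 * t + rng.normal(scale=0.5, size=300)   # slowly degrading index
res = ARIMA(health, order=(1, 1, 1)).fit()
for horizon in (1, 24, 48):
    fc = res.get_forecast(steps=horizon)
    lo, hi = fc.conf_int(alpha=0.05)[-1]
    print(f"{horizon:>2} h ahead: {fc.predicted_mean[-1]:.1f} (95% CI {lo:.1f}..{hi:.1f})")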

10:15
Remaining Useful Life Control of a Deteriorating Wind Turbine with Flexible-shaft Drive-train

ABSTRACT. In this paper, we propose a degradation-aware control approach that makes it possible to control the remaining useful life of a deteriorating wind turbine system. We consider in particular the degradation caused by the energy dissipated in the drive-train, and we aim to control it by acting on the control gain of the generator torque imposed at the output of the drive-train. We propose an observation and control structure for this degradation control problem. By applying control techniques such as optimal control and state-feedback control, we control the degradation process while guaranteeing the stability of the wind turbine system. A numerical case study illustrates the advantages of controlling the degradation using the proposed approach for a system suffering from load effects, with the aim of correcting its remaining useful life.
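A toy sketch of the underlying idea (all dynamics, parameters and gains invented; the paper's observer and state-feedback design are not reproduced): the generator torque gain is adjusted whenever the predicted remaining useful life drifts from the target, trading load against dissipated-energy-driven damage:

import numpy as np

dt, T_end = 1.0, 8000.0            # hours
K, K_nom = 1.0, 1.0                # generator torque gain (toy units)
D, target_rul = 0.0, 10000.0       # damage in [0, 1], target RUL in hours
c_damage = 1.2e-5                  # damage per unit dissipated energy (invented)

for t in np.arange(0.0, T_end, dt):
    omega = 1.0 + 0.2 * np.sin(2 * np.pi * t / 24)   # rotor speed proxy (toy wind cycle)
    torque = K * omega**2                            # below-rated torque control law
    p_loss = 0.05 * torque**2                        # dissipated power in the drive-train
    D += c_damage * p_loss * dt                      # damage accumulation
    rul = (1.0 - D) / max(c_damage * p_loss, 1e-12)  # RUL at the current damage rate
    # Degradation-aware gain adjustment (proportional action on the RUL error):
    K = np.clip(K_nom + 2e-6 * (rul - target_rul), 0.5, 1.0)

print(f"final damage {D:.3f}, final gain {K:.2f}, predicted RUL {rul:.0f} h")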

10:30
Assessment of fault detection and monitoring techniques for effective digitalization

ABSTRACT. As a result of digitalization, data is nowadays collected at every level of production as an enabler for decision-making. However, including more electronics to collect additional information does not directly increase system reliability, but rather raises challenges for optimal data utilization. This work presents the implementation of an approach based on FMSA (failure mode and symptoms analysis) and FMECA (failure mode, effects and criticality analysis). The approach is applied to a manufacturing system to evaluate the suitability of its currently implemented detection and monitoring equipment and strategies. The objective is to maximize the confidence level in diagnosis and prognosis while, based on the system's KPIs, optimizing sensor utilization and data collection. Since the FMEA family of methodologies presents shortcomings, such as the bias and uncertainty associated with results that rely on expert and user inputs, this work also focuses on mitigating these effects when obtaining the monitoring priority numbers and their respective categorization and prioritization. The approach is exemplified through a case study of a test-bed feed-drive system, and the benefits of its implementation are illustrated.
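For illustration, monitoring priority numbers can be computed RPN-style as a product of expert rankings; the sketch below uses invented rankings and factor names, and takes the median across experts as one simple way of damping individual bias (the paper's exact FMSA factors and mitigation scheme may differ):

import numpy as np

# Rankings (1-10) from three experts for each failure mode; columns:
# severity, symptom detectability, diagnosis confidence, prognosis confidence.
experts = {
    "bearing wear":  np.array([[8, 4, 6, 5], [7, 5, 6, 4], [8, 3, 7, 5]]),
    "belt slack":    np.array([[4, 7, 8, 7], [5, 6, 7, 8], [4, 7, 8, 6]]),
    "encoder drift": np.array([[6, 2, 4, 3], [7, 3, 3, 3], [6, 2, 5, 4]]),
}

for mode, r in experts.items():
    consensus = np.median(r, axis=0)   # median across experts dampens outlier bias
    mpn = int(np.prod(consensus))      # monitoring priority number (RPN-style)
    print(f"{mode:>13}: consensus {consensus}, MPN {mpn}")

# High MPN -> prioritize improved sensing/monitoring for that failure mode.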

10:45
Framework for the monitoring of complex surfaces based on optical assessment

ABSTRACT. The optical perception of surfaces manufactured with high precision is an important quality feature for most products. The respective manufacturing process is rather complex and depends on a variety of process parameters which, in most cases, have a direct impact on the surface shape and topography.

Surface shapes, topographies and colourings are mostly measured using classical methods (roughness measuring devices, gloss measuring devices, spectrophotometers, computed tomography, or tactile coordinate measuring instruments). To improve the conventional methods of condition monitoring, here represented by the surface, a new image processing approach is needed to enable faster and more cost-effective analysis of manufactured surfaces. For this reason, different image-based optical techniques have been developed over the past years.

In this paper, a framework for surface monitoring is outlined and discussed in detail for every single step of the monitoring process. For this purpose, the study differentiates between the application of descriptive statistics and the application of artificial intelligence. Both applications are mostly based on the same data sources, though on different sample sizes, and provide answers to differing questions that complement each other. For the application of machine learning algorithms, the proposed framework distinguishes between supervised, semi-supervised and unsupervised learning techniques, depending on the data and the availability of target values.

Since data is one of the key elements, the influence of the amount of data, data quality, and data structure is discussed with regard to the uncertainty of the models and the final results. Furthermore, the influence of the aforementioned measurements, mostly used as target variables, on the results is discussed as well.

This paper has a generic character and can be applied to many technical products. Nevertheless, many of the individual steps of the framework are explained based on joint projects carried out by academia and industry.

09:45-11:00 Session 15B: S.26: Collaborative Intelligence in Safety Critical Systems I

The topics covered in this session should be at the intersection of the following:

  • Modelling the dynamics of system behaviours for the production processes, IoT systems, and critical infrastructures (System Safety Engineering)
  • Designing and implementing processes capable of monitoring interactions between automated systems and the humans intended to use them (Human Factors/Neuroergonomics)
  • Using data analytics and AI to create novel human-in-the-loop automation paradigms to support decision making and/or anticipate critical scenarios
  • Managing the Legal and Ethical implications in the use of physiology-recording wearable sensors and human performance data in AI algorithms.
09:45
Framework of a Neuroergonomic Assessment in Human-Robot Collaboration
PRESENTER: Carlo Caiazzo

ABSTRACT. Human-Robot Collaboration (HRC) is a relevant research field dealing with socio-technical and economic issues in manufacturing industries. A human-robot team, in which human and robot partners are committed to reaching a common goal through collaboration, is the highest grade of interaction among the different modes of integrating robots into manufacturing workplaces. In this regard, collaborative robots, or cobots, have found enthusiastic application in manufacturing assembly activities. However, implementing a cobot in the manufacturing workplace can be challenging, as it requires a changeover of the environment and depends critically on the task defined. Despite these drawbacks, the benefits highlighted by previous research appear to positively impact the physical and mental health of operators working alongside these machines. This paper examines the impact of cobots on operators and the surrounding work environment from a neuroergonomic point of view. The article proposes a comparative analysis in a laboratory workstation set up for manufacturing assembly tasks, in which the operator accomplishes an assembly task with and without robot assistance. The presence of the robot is the element of comparison in the experimental design of the assembly task. The paper presents a comparative evaluation of the mental workload of the operator performing the task with and without the machine. The collection and analysis of physiological data through electroencephalogram (EEG) devices enables an ergonomic evaluation of the cognitive state of the operator during the HRC application.
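One common way of quantifying mental workload from EEG, not necessarily the metric used by the authors, is a band-power ratio such as theta over alpha; a minimal sketch on a synthetic signal:

import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)               # one minute of synthetic EEG
rng = np.random.default_rng(2)
eeg = (np.sin(2 * np.pi * 6 * t)           # theta component (4-8 Hz)
       + 0.5 * np.sin(2 * np.pi * 10 * t)  # alpha component (8-13 Hz)
       + rng.normal(scale=0.5, size=t.size))

f, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(f, psd, lo, hi):
    m = (f >= lo) & (f < hi)
    return psd[m].sum() * (f[1] - f[0])    # integrate PSD over the band

theta = band_power(f, psd, 4, 8)
alpha = band_power(f, psd, 8, 13)
print(f"theta/alpha workload index: {theta / alpha:.2f}")  # higher -> higher workload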

09:57
A Comprehensive Framework for Ensuring the Trustworthiness of AI Systems
PRESENTER: Monika Reif

ABSTRACT. Legislators and authorities are working to establish a high level of trust in AI applications as they become more prevalent in our daily lives. As AI systems evolve and enter critical domains like healthcare and transportation, trust becomes essential, necessitating consideration of multiple aspects. AI systems must ensure fairness and impartiality in their decision-making processes to align with ethical standards. Autonomy and control are necessary to ensure the system remains aligned with societal values while being efficient and effective. Transparency in AI systems facilitates understanding of decision-making processes, while reliability is paramount in diverse conditions, including errors, bias, or malicious attacks. Safety is of utmost importance in critical AI applications to prevent harm and adverse outcomes. This paper proposes a framework that utilizes various approaches to establish qualitative requirements and quantitative metrics for the entire application, employing a risk-based approach. These measures are then utilized to evaluate the AI system. To meet the requirements, various means (such as processes, methods, and documentation) are established at the system level and then detailed and supplemented for different dimensions to achieve sufficient trust in the AI system. The results of the measures are evaluated individually and across dimensions to assess the extent to which the AI system meets the trustworthiness requirements.

10:09
Understanding and Quantifying Human Factors in Programming from Demonstration: A user study proposal
PRESENTER: Shakra Mehak

ABSTRACT. Programming by demonstration (PbD) is a promising method for robots to learn from direct, non-expert human interaction. This approach enables the interactive transfer of human skills to the robot. As the non-expert user is at the center of PbD, the efficacy of the learned skill is largely dependent on the demonstrations provided. Although PbD methods have been extensively developed and validated in the field of robotics, there has been inadequate confirmation of their effectiveness from the perspective of human teachability. To address this gap, we propose to experimentally investigate the impact of communicating the robot's learning process on the efficacy of the transferred skills. This paper outlines the preliminary steps in designing experiments to identify human-related performance-shaping factors in PbD. The purpose of this article is to establish the foundation for an experimental study that focuses on the human component in PbD algorithms and provides new insights into human factors in PbD design.

10:21
BlackAUT: Concepts for Blackout Resilience from a Comprehensive Strategic Exercise in Austria
PRESENTER: Stefan Schauer

ABSTRACT. A widespread and long-lasting power outage, i.e., a blackout, is one of the most severe threats to society at a national level, possibly evolving into an international incident. Due to the cross-sectoral role of electrical energy, almost all parts of society are directly affected by a blackout, including most of the critical infrastructures (CIs) in the concerned regions. Therefore, relevant measures to prepare for and recover from a blackout require structured coordination in advance between national government, municipalities, local authorities, and the CIs. In late 2022, a comprehensive strategic exercise (BlackAUT) was carried out by the Competence Center for a Safe & Secure Austria (KSOe), the Austrian Institute of Technology (AIT) and the Federation of Austrian Industries (IV) in Austria, involving about 80 experts from CI operators of multiple domains and from public administration. Additionally, a Cascading Effects Simulation (CES) was implemented as part of the strategic exercise to give the participants a better overview of the potential side effects a blackout might have throughout the country. In this paper, we present an overview of the scenario setting and describe the integration and intended benefits of the CES. We discuss in detail the main reactions of the CI operators from the different domains, highlight particular dependencies among them that were identified during the exercise, and sketch the main findings and take-aways from this strategic exercise. As a main result, we discuss the implications these findings have on preparation efforts to increase the resilience of critical infrastructures in Austria and across Europe.

10:33
Analysing "Human-in-the-loop" for advances in process safety: a design of experiment in a simulated process control room
PRESENTER: Chidera Amazu

ABSTRACT. Control room operators are crucial to ensuring safety in safety-critical environments, such as major-hazard process plants, especially when addressing critical process deviations that could lead to disruptions or accidents. These operators face increased cognitive load, being more involved in tasks that exert their cognition and less in manual or physical engagement. Therefore, process safety analysis should integrate key dynamic elements, including the operators' cognitive states, to allow better predictions. We aim to investigate the impact of human-system interfaces, in this case two conditions of alarm design (prioritized vs. non-prioritized) and of intervention procedures (paper vs. computer-based), on the operators' cognition (e.g., situational awareness, mental workload) for a set of scenarios, and how these impact operator performance and process safety. We also assess how other performance-shaping factors, such as task complexity and communication during process operations (alarm handling and intervention), contribute to managing safety. We therefore present a design of experiment and a case study of a simulated formaldehyde production plant, with which we plan to investigate operator and system behavior during abnormal process operation. Results are yet to be obtained from this study. Subsequent work on modeling process safety for early warnings and optimization can benefit from this experimental design and the data to be collected, in particular from including data on operators' cognition in the analysis.

09:45-11:00 Session 15C: Safety Nuclear Systems III
09:45
Program of Training Critical Nuclear Power Plant Personnel to Ensure a Specific Response
PRESENTER: Dana Prochazkova

ABSTRACT. Based on the solution of the next stage of the National Action Plan for the response of the Temelin Nuclear Power Plant and the region to the worst-case long-term power blackout (denoted SBO) using the Feed and Bleed (FB) method, we must ensure the readiness of the response. We have a technical solution for the SBO, the sources of risks that may disrupt the response to the SBO, and the risk management plan for the response. To ensure the readiness of the Temelin Nuclear Power Plant and the region to realize this response safely, it is further necessary to ensure a set of highly interconnected technical and organizational works that guarantee correct coordination of the works according to schedule and real conditions. This readiness (operational capability) therefore means ensuring the organizational, technical and professional readiness of the sources, forces and assets of the Temelin Nuclear Power Plant and the region. The conditions for operational capability include: a high-quality personnel team, adequate equipment and good management of the response process. The quality of the personnel team is conditioned both by the knowledge and skills of a sufficient number of team members and by training cooperation in the implementation of the response work schedule. Good management of the response process depends on compliance with the timeline of the linked works and on the readiness of the necessary equipment and resources. Proven tools for ensuring readiness are the readiness of: persons; material and technical means; objects, including security, services, etc.; and the surroundings, i.e., in the case under consideration, the preparedness of the South Bohemian Region. In the present paper, we deal with the content of training for the critical personnel of the response to the worst SBO, in order to ensure the new competencies that the implementation of this response requires. We divide the response process into sub-sections that fall under the responsibility of individual response managers. Based on the analysis of the requirements for individual tasks and the organizational instructions to ensure coordination, we determine the content of the knowledge and skills that critical response staff and individual managers must have for their quality execution. Since the response to the worst SBO is specific, we include in the crisis preparedness plans of the entities involved, and in the risk management plan, a requirement for regular training and regular verification of the knowledge and skills of critical personnel. We also introduce requirements for verifying the cooperation of sub-sections in responding to the worst SBO, because since 2002 only the cooperation of the integrated response systems, the Temelin Nuclear Power Plant and the South Bohemian Region in responding to a design-basis accident has been regularly tested.

10:00
Impact of adaptive automation for supporting operation of a nuclear plant. An explorative study.
PRESENTER: Rossella Bisio

ABSTRACT. The complexity of operating a nuclear power plant can benefit from advances in technology, allowing real-time access to huge amounts of information, enhanced processing capability and possibilities to automate new functions. New forms of automation, more flexible, better informed and more widely supportive of the operators, are possible and could have a positive impact in a wider range of situations than in the past. Adaptive automation seeks to balance task distribution between the technical system and the human to increase performance, having the ability to monitor plant status and recognize significant situations. The main expected advantage of adaptive automation is an adjustment of workload that keeps the operator in the loop, while avoiding cognitive overload and its associated negative influences on human performance, optimizing efficiency and safety without disrupting the human operator in the decision-making process. Adaptive automation has been employed in many areas and safety-critical fields, showing promising results. However, to our knowledge there has not yet been a practical implementation in a nuclear power plant. Even though some authors have explored the concept in this specific context, there is no comprehensive data on its influence on human performance. To further knowledge in this area, we have developed a prototype of an adaptive automation system, with an accompanying interface, to assist nuclear operators in the ramp-up of the turbine. The prototype was integrated into a full-scope simulator of a generic pressurized water reactor, and a first evaluation was conducted with one crew of four licensed operators who individually went through a short scenario. They were encouraged to think aloud and explain their thought processes, and afterwards they were interviewed. The study took a naturalistic and explorative approach, giving the participants little information or instruction on how to behave and use the system. The main aspects of interest in this explorative study were a general evaluation and first impression of the adaptive automation system, its intuitiveness, the interface design, advantages and disadvantages of such a system, its perceived influence on workload, other potential use cases, and suggestions for improvements of the interface and the underlying adaptive automation. The feedback from the operators was generally very positive, and all four individuals appreciated having the adaptive automation system to assist them in completing the scenario. The two most prominent points that the operators found helpful were (1) the assistance and capability of the automated system while remaining transparent and controllable, and (2) the accessibility of the necessary procedures in the interface. We also gathered several other insights that will be discussed in the paper, spanning additional positive as well as negative aspects. From this initial, qualitative assessment of the developed prototype and the underlying principle of adaptive automation, we conclude that this kind of technology is very promising for implementation in nuclear power plants, especially when thinking about new plant designs such as Small Modular Reactors. However, there is a strong need for additional research on this topic to gauge its influence on human performance.

10:15
Development of Post-Processing Methods for Dynamic Event Analyzer, DICE
PRESENTER: Yuntae Gwak

ABSTRACT. The DDET (Discrete Dynamic Event Tree) and MCET (Monte Carlo Event Tree) methods contextually reflect the reliability of components and operator interventions in safety analysis over time. Both methods combine deterministic analysis using physical simulations with probabilistic state changes, where the stochastic features can be used to find undetermined scenarios that are not covered by conventional deterministic or probabilistic analysis. DICE (Dynamic Integrated Consequence Evaluation) is a tool based on the DDET and MCET methods. In the MCET method, the distribution of scenarios can be examined because stochastic considerations, including recovery of components and operator actions, are involved. In order to utilize this method, it is necessary to perform some post-processing for the user's convenience. This study demonstrates the results of post-processing techniques for scenarios generated through DICE, with the example of a LBLOCA (Large Break Loss of Coolant Accident).
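A minimal sketch of the kind of post-processing such dynamic event tree tools call for (invented scenario records and a toy surrogate in place of the physics simulation; DICE's actual output format and end-state criteria differ): sampled scenarios are grouped by end state, and their frequencies and branch statistics are aggregated:

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)

# Each sampled scenario: stochastic branch variables plus the resulting end state.
def sample_scenario():
    recovery_time = rng.exponential(600.0)      # s, stochastic component recovery
    operator_delay = rng.lognormal(5.0, 0.5)    # s, stochastic operator action
    peak_temp = 800 + 0.4 * recovery_time + 0.3 * operator_delay  # toy surrogate, deg C
    end_state = "core damage" if peak_temp > 1204 else "safe"     # PCT limit, deg C
    return end_state, recovery_time, operator_delay

groups = defaultdict(list)
N = 10000
for _ in range(N):
    state, rec, delay = sample_scenario()
    groups[state].append((rec, delay))

# Post-processing: frequency of each end state and branch statistics per group.
for state, rows in groups.items():
    rec, delay = np.array(rows).T
    print(f"{state:>11}: p = {len(rows)/N:.4f}, "
          f"mean recovery {rec.mean():.0f} s, mean operator delay {delay.mean():.0f} s")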

10:30
Influence of physical parameters on the overflow probability of a radioactive waste near-surface repository: the case of the Abadia de Goiás repository in Brazil

ABSTRACT. A model [1] was constructed for evaluating the overflow probability for risk-informed decision-making in the analysis of water infiltration inside the near-surface repository of Abadia de Goiás, Brazil. Water infiltration inside the repository is influenced by a set of design and physical parameters, whose treatment requires specific approaches. The purpose of this paper is to discuss the influence of each of these parameters on the overflow probability of the repository and to show ways of taking them into account more precisely. The parameters are: internal area of the repository base; repository base width, length, and thickness; evapotranspiration rate; degradation function of the repository ceiling; irrigation rate in the repository; hydraulic conductivity of concrete; repository wall thickness; internal porosity of the repository; average rainfall rate; surface runoff; height of the liquid column inside the repository; and initial value of the infiltrated liquid height. It is simpler to analyze the influence of parameters that are not rates (such as the repository internal base area), because an eventual variability (for example, a rectangular versus a squared base) brings no difficulty, being just a matter of varying the shape and dimensions under some restriction such as, for example, a constant perimeter. Some parameters, like the internal porosity of the repository, are dimensionless, and eventual fluctuations are also easier to consider. The analysis of rate parameters (such as the rainfall rate) is more complicated. It is standard to establish an institutional control period for the repository, and this parameter may vary from some tens of years to some hundreds (in the case of the Abadia de Goiás repository it is equal to 300 yr [2]). If one considers the highest available rainfall rate, which for the Abadia de Goiás may be found in [3], the overflow probability may be very high, about 99%, which is an unrealistic number [1]. There are cases in which the rainfall rate is collected in m/hr, so that it is not trivial to simply transform this unit to m/yr, because its variability has been registered for hourly periods of time. In this case it is more appropriate to use simulation approaches to produce more realistic results.

References

[1] L. Gabcan, A.S.M. Alves, F.C. da Silva, D.G. Teixeira, and P.F. Frutuoso e Melo, Evaluation of the Overflow Probability from the Abadia de Goiás Repository by the Fokker-Planck Equation Using the Trotter's Formula, to be submitted to Nuclear Engineering and Design.
[2] Tranjan Filho, A., de Martin Alves, A.S., dos Santos, C.D.P., dos Passos, E.V., Coutinho, F.P.M., 1997. Repository of Radioactive Cesium Waste – Abadia de Goiás Conception and Design (in Portuguese). Available at: https://www.ipen.br/biblioteca/cd/go10anosdep/Cnen/doc/manu20.pdf (accessed Feb. 28, 2019), Goiânia, Brazil.
[3] Marcuzzo, F.F.N., Cardozo, M.R.D., Faria, T.G., 2012. Rain at Cerrado of Brazil Middle-West Region: Historical Analysis and Trends (in Portuguese). Geogr. Studio, 6, pp. 112-130.
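The closing remark on hourly rainfall suggests a simulation approach; the sketch below (all geometry, rates and the crude daily water balance are invented, and it does not reproduce the Fokker-Planck treatment of [1]) estimates an overflow probability by sampling rainfall and propagating a liquid-height balance over the institutional control period:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)

N_RUNS, DAYS = 200, 300 * 365    # institutional control period of 300 yr [2]
H_MAX = 2.0                      # m, liquid height at which the repository overflows
H_0 = 0.05                       # m, initial infiltrated liquid height
AREA_FRAC = 0.1                  # fraction of daily rainfall that infiltrates
K_OUT = 2e-4                     # 1/day, toy leakage through concrete walls/base

overflows = 0
for _ in range(N_RUNS):
    wet_p = rng.uniform(0.25, 0.35)                 # run-to-run climate variability
    wet = rng.random(DAYS) < wet_p                  # wet days
    rain = np.where(wet, rng.gamma(2.0, 6e-3, DAYS), 0.0)   # m/day on wet days
    inflow = AREA_FRAC * rain
    # Linear water balance h[t] = (1 - K_OUT) h[t-1] + inflow[t], solved as a filter:
    h = lfilter([1.0], [1.0, -(1.0 - K_OUT)], inflow)
    h += H_0 * (1.0 - K_OUT) ** np.arange(1, DAYS + 1)      # decaying initial height
    if h.max() > H_MAX:
        overflows += 1

print(f"estimated overflow probability over {DAYS // 365} yr: {overflows / N_RUNS:.3f}")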

10:45
Development of a reference book on common cause failures in German nuclear power plants
PRESENTER: Michael Homann

ABSTRACT. GRS has been analysing reportable events from nuclear power plants (NPPs) in Germany for more than 40 years. This also includes the consideration of common cause failures (CCF). For this purpose, GRS has created a database containing CCF events. Amongst others, this database has been applied to calculate CCF probabilities as input parameters for probabilistic safety analyses (PSA). In the frame of a recent research and development project, the collected data are being used in their entirety for a generic analysis of CCF of components in German NPPs. The research activity aims at providing a comprehensive reference book with respect to CCF. For this purpose, the events recorded in the database will be sorted by different categories, such as "component affected" or "characteristic aspect". In this context, a characteristic aspect is a keyword describing, for example, the cause of the event, such as "corrosion" or "incorrect or missing specifications". Commonalities of the events will be identified and described. This paper presents an overview of the database contents, a brief description of the planned analysis and the methodology applied, and first results.

11:00
Methodology of Compiling a Steam Generator Maintenance Plan
PRESENTER: Dana Prochazkova

ABSTRACT. In order to ensure the safety of a nuclear power plant, it is necessary to maintain, under all conditions, the limits that were set in the design for elements, components, systems and their interfaces. In the presented paper, we focus on a critical piece of equipment of the nuclear power plant, namely the steam generator. Its task is to ensure that the reactor operates within the permissible range of temperatures and pressures by means of controlled cooling of the primary-circuit water, which is at high temperature and high pressure. In the WWER 1000 reactor at the Temelín Nuclear Power Plant, the steam generator is a horizontal heat exchanger with a large heat transfer area consisting of a bundle of "U" pipes, designed for maximum temperature values of 320 °C and a pressure of 16 MPa. The device transfers heat generated in the nuclear reactor into feedwater and steam in the secondary circuit. Temperature and pressure conditions in the steam generator are set in such a way that intensive steam generation occurs on the surface of the pipes; the steam flows through the steam collector and steam pipes to the turbine, where it drives the turbo-generator. The steam generator is a category 1 device, and it is therefore required to minimize breakdowns. A device care strategy is thus chosen that ensures high reliability, i.e., minimizes the occurrence of failures and does not tolerate functional failures between established maintenance cycles. The maintenance program is based on the principle of a tiered approach to equipment. The basic requirement for the implementation of an effective preventive maintenance strategy is knowledge of the condition and performance of the operated equipment. Based on the knowledge of these parameters, the maintenance program is optimized to achieve the required level of safety, performance and reliability. The preventive maintenance program is based on data on: the design of the steam generator; the importance of the steam generator for the safety of the nuclear power plant; the importance of the steam generator for power generation; and operational experience. The methodology of compiling a steam generator maintenance program is based on the concept of preventive maintenance for specific items. The conditions of the items are assessed on a five-step scale: very good condition; good condition; acceptable condition; poor condition; and critical condition. Using a checklist, the sources of risk for each item are assessed, and their frequencies are evaluated according to the data in the design and the operation logs, the size of the impacts, and the average time at which a defined condition is reached, which is the latest time maintenance is required. A DSS is used to determine the contributions of the failure of individual items to the total risk caused by the failure of the steam generator. For economic reasons, the maintenance program is governed by the maintenance of the items that contribute most to the failure of the steam generator.

09:45-11:00 Session 15D: S.34: Risk Analysis and Safety in Standardisation II
Location: Room 2A/2065
09:45
Implementation of STPA methodology into military jet aircraft certification process according to EMAR certification criteria for safety
PRESENTER: Milan Pšenička

ABSTRACT. Increasing requirements on the reliability and safety of aircraft are emerging not only in civil aviation but also in the military aviation industry. In order to eliminate all possible safety risks, or to minimize them where they cannot be eliminated, numerous conventional methods are used, such as FMEA, FMECA, FHA, FTA, etc. These are excellent system safety engineering methods, widely used to ensure system operational integrity during the initial aircraft certification process. The EMAR regulations explicitly mention the conventional methods as acceptable means of compliance for all safety-related paragraphs. Nowadays, however, new approaches are emerging that attempt to overcome some of the limitations of the conventional ones. One of the most promising is the Systems-Theoretic Accident Model and Processes (STAMP) and the Systems-Theoretic Process Analysis (STPA) based on it. By nature, this method is based on qualitative analysis which, while very useful in the development phase of an aircraft, makes it difficult to directly connect the outputs of the analysis to the requirements of the military EMAR regulations, which explicitly call for some quantitative outputs. This paper presents a few cases where the STPA fits EMACC, including how such a qualitative method could be extended to deliver some of the required quantitative outputs.

10:00
Withstanding capacity of insulating panels used in machinery assemblies
PRESENTER: Fabio Pera

ABSTRACT. In assemblies of machinery and production lines it is sometimes possible to isolate workers from noise and other emissions by means of cabins and walls built with a polyurethane core. Sheet sandwich panels with an insulating core of polyurethane foam, used for the construction of infill walls, internal partitions and false ceilings of buildings and prefabricated construction sites, are a common solution for this purpose. These panels, having an external sheet of aluminium, steel or other materials, are also able to protect workers from temperature, coolants and swarf, but they are usually not designed to protect against impacts due to ejections of workpieces or tool parts. Even if the initial aim is to protect against the effects of noise, there is a residual risk, especially in machines not fully enclosed by fixed and movable guards, such as large machining centres, lathes or woodworking machines. In this paper, the withstanding capability of either a single or two coupled (double) sandwich panels of the type described above is investigated, and the so-called ballistic limit for these panels is discussed. The real protective characteristics of these insulating panel walls are presented with reference to the requirements of ISO 14120:2015. Moreover, the stiffening effect on withstanding capability when panels are assembled in a multiple-layer disposition, probably due to the ribbed surfaces of the metallic outer sheets, is discussed.

10:15
Emerging Technology Certification Risk Assessment with ETHICIST
PRESENTER: Shamal Faily

ABSTRACT. Risk owners need help attending to the certification challenges associated with emerging technologies on critical systems. Assessing emerging technology risk needs to account for its relationship with existing technology and standards, and the processes and tools need to be accessible to different stakeholders. In this paper, we present ETHICIST: a systematic approach for assessing and managing emerging technology certification risk. Our approach uses multi-criteria decision analysis and concept mapping to account for different attributes of certification risk. It also visualises the cascading impact on other technologies, regulations, and systems. We illustrate this approach by considering the certification risk of additively manufactured wearable computing elements for a military air system.
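To illustrate the multi-criteria decision analysis ingredient, a weighted-sum sketch with invented criteria, weights, scores and a one-hop cascade over a concept-map adjacency matrix (ETHICIST's actual scoring scheme is not reproduced here):

import numpy as np

criteria = ["maturity", "standards gap", "supply chain", "safety impact"]
weights = np.array([0.2, 0.3, 0.2, 0.3])          # sum to 1, elicited from stakeholders

# Scores (0-10, higher = riskier) for three emerging technologies.
techs = ["additive manufacturing", "wearable compute", "novel battery"]
scores = np.array([
    [4, 7, 5, 6],
    [6, 8, 4, 7],
    [3, 5, 6, 4],
])

base_risk = scores @ weights                      # weighted-sum MCDA score

# Concept map: technology i cascades onto technology j with strength A[i, j].
A = np.array([
    [0.0, 0.4, 0.0],
    [0.0, 0.0, 0.2],
    [0.1, 0.0, 0.0],
])
total_risk = base_risk + A.T @ base_risk          # one-hop cascading impact

for name, b, t in zip(techs, base_risk, total_risk):
    print(f"{name:>22}: base {b:.2f}, with cascade {t:.2f}")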

10:30
Full guard testing for ejection in machines, from standard requirements to real specification
PRESENTER: Luca Landi

ABSTRACT. The Machinery Directive (2006/42/EC) states the requirements for designing machine safeguards. The standard ISO 14120 and its annexes deal with the design of guards for almost all types of machinery. Thus, the robustness, mechanical capability, and the stress and strain state that can be reached during a ballistic impact must be proven and validated by a well-equipped laboratory. These requirements imply the necessity of finding "the weakest point of the guard," which is difficult to fulfil with real testing devices. The paper presents the design of a new gas cannon device built for maximum flexibility during the test phase and able to shoot at any desired point of very large guards. Examples of weak points and design errors highlighted during tests are presented and discussed with respect to the weakest-point requirement. Opportunities to modify the state of the art of such tests are discussed at the end of the paper.

09:45-11:00 Session 15E: S.30: Overcoming data and label scarcity for machine learning-based risk and reliability assessment I

The special session will focus on contributions made towards addressing scarcity in data (inputs and/or targets), including scenarios where inputs and/or labels are scarce (or not available at all) or are not sufficiently representative of all operating conditions of the system. Specifically, the following approaches for building ML models for risk and reliability assessment in the low-data regime will be considered:

  • Reliability of machine learning models built from digital twins
  • Physics-informed machine learning
  • Uncertainty quantification of deep learning models (Bayesian models, ensemble methods, and others)
  • ML approaches to handle scarcity in input and target data, and poor representativeness of training data (active learning, semi-supervised learning, unsupervised learning, generative modelling, data augmentation, and others).
Location: Room 100/4013
09:45
Constructing health indicators for systems with few failure instances using unsupervised learning

ABSTRACT. Health indicators are crucial to assess the health of systems and to predict their Remaining Useful Life (RUL). Most health indicators are developed using physics-based models. These models, however, are often not available for complicated systems consisting of multiple components. As such, in recent years, several studies have developed data-driven health indicators using machine learning. However, most safety-critical or expensive systems are preventively maintained before failure. There are therefore not enough failure instances to train a supervised learning model when constructing a health indicator, i.e., the data is unlabelled (the actual RUL or health is not known). In this study, we therefore propose an unsupervised learning model to construct a health indicator for a system with few failure instances.

We consider a system that is operated under highly varying operating conditions. A health indicator is often constructed by detecting deviations of the sensor measurements from a normal range. However, the normal range of the sensor measurements depends on these highly-varying operating conditions. In this study, we therefore develop a health indicator by training a neural network to construct the sensor measurements at a certain time, based on the operating conditions at that time. We train this neural network solely with the sensor measurements of just-installed systems. The constructed sensor measurements therefore deviate from the actual sensor measurements when a system degrades over time. Based on these construction errors, we construct a health indicator for the system.

We apply this approach to develop a health indicator for the aircraft turbofan engines of the N-CMAPSS [1] dataset, which are operated under highly-varying operating conditions (varying altitude, speed etc.). The resulting health indicators have a high monotonicity, prognosability and trendability, and can therefore be effectively used for predictive maintenance planning.

References: 1. Arias Chao, M., Kulkarni, C., Goebel, K., & Fink, O. (2021). Aircraft engine run-to-failure dataset under real flight conditions for prognostics and diagnostics. Data, 6(1), 5.
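A minimal sketch of this type of health indicator (synthetic data; the authors' network architecture and the N-CMAPSS preprocessing are not reproduced): a regressor is trained on healthy data to construct the sensor readings from the operating conditions, and the construction error then serves as the health indicator:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Operating conditions (e.g. altitude, speed) and the sensor response they induce.
def sensors(cond, degradation=0.0):
    return (1.5 * cond[:, 0] - 0.7 * cond[:, 1]
            + degradation                          # degradation shifts the response
            + 0.05 * rng.normal(size=cond.shape[0]))

cond_healthy = rng.normal(size=(2000, 2))
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(cond_healthy, sensors(cond_healthy))     # trained on healthy systems only

# In operation, degradation grows over time; the construction error tracks it.
for age, deg in [("new", 0.0), ("mid-life", 0.3), ("worn", 1.0)]:
    cond = rng.normal(size=(500, 2))
    err = sensors(cond, deg) - model.predict(cond)
    print(f"{age:>8}: health indicator (mean |error|) = {np.abs(err).mean():.3f}")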

10:00
Explainable artificial intelligence for understanding the ageing classes of reinforced concrete bridge components

ABSTRACT. This article proposes an approach to the identification and interpretation of homogeneous ageing classes for reinforced concrete bridge components. The approach is articulated into three phases: in the first phase, homogeneous ageing classes are identified by considering the results of visual inspections and the time sequence of condition states of the bridge components, applying a cluster analysis based on the k-means algorithm; in the second phase, the ageing class is predicted by means of a random forest algorithm, considering features of the bridge and of the components; in the third phase, the prediction is explained by applying a SHAP analysis. The results reveal that the prediction of the ageing class is influenced by the year of construction of the bridge, and therefore of the component. This result opens up a multiplicity of interpretations, which are considered in the article. The dependence of the ageing class on other variables is also discussed.
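The three phases map naturally onto standard tooling; a compact sketch with synthetic data and hypothetical features (assumes the shap package is installed):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
import shap   # assumed available

rng = np.random.default_rng(6)

# Phase 1: cluster condition-state time sequences into ageing classes.
condition_histories = rng.normal(size=(300, 10))        # 10 inspection campaigns
ageing_class = KMeans(n_clusters=3, n_init=10).fit_predict(condition_histories)

# Phase 2: predict the ageing class from bridge/component features.
features = rng.normal(size=(300, 4))       # e.g. year built, span, traffic, environment
features[:, 0] += ageing_class             # make "year built" informative
rf = RandomForestClassifier(n_estimators=200).fit(features, ageing_class)

# Phase 3: explain the prediction with SHAP values.
sv = shap.TreeExplainer(rf).shap_values(features[:50])
# Older shap returns a list with one (n_samples, n_features) array per class;
# newer versions return a single 3-D array. Assuming the list convention here:
imp = np.mean([np.abs(s).mean(axis=0) for s in sv], axis=0)
print("mean |SHAP| per feature:", np.round(imp, 3))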

10:15
A Generic Fully Unsupervised Framework for Machine-Learning-Based Anomaly Detection

ABSTRACT. One of the main challenges of applying machine learning algorithms for industrial fault detection is the scarcity of annotated data, especially from faulty or degraded regimes. Commonly used approaches resort to residual-based anomaly detection (AD), thereby training machine learning models with normal, anomaly-free data exclusively, and detecting deviations from normal behavior during deployment. However, in real-world industrial and operational systems, it is often the case that the training data is completely unlabeled, and may contain anomalies. Thus, training residual-based AD models with unlabeled, potentially contaminated data may result in reduced AD performance.

In this work we present a novel approach to the refinement of contaminated training data in an entirely unsupervised manner, enabling high performance AD despite the data contamination. The proposed framework is generic and can be applied to any residual-based model, whether reconstruction-based (such as Principal Component Analysis or Autoencoder neural networks), or regression-based (from linear regression to deep neural networks). We demonstrate the application of the framework to two public data sets of time series data: acoustic signals from industrial machines, and aircraft engine data. The two examples differ in their physical systems as well as in their fault dynamics (sudden failures vs. slow degradation). We show the superiority of the framework over the naive approach of training blindly with contaminated data. In addition, we compare its performance to the ideal reference case of AD with anomaly-free training data. We show that the proposed framework is similar and sometimes outperforms this ideal baseline.
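One plausible instantiation of such unsupervised refinement (not necessarily the authors' scheme): iteratively fit a residual model on the current training set, score all samples, and discard the most anomalous fraction before refitting. A PCA-based sketch on synthetic contaminated data:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# Unlabeled training data: mostly normal samples, contaminated with anomalies.
normal = rng.normal(size=(950, 5)) @ rng.normal(size=(5, 5))   # correlated normal data
anomalies = rng.normal(loc=4.0, size=(50, 5))
X = np.vstack([normal, anomalies])

def residual_scores(model, X):
    recon = model.inverse_transform(model.transform(X))
    return np.linalg.norm(X - recon, axis=1)

train = X.copy()
for it in range(5):                               # iterative refinement
    pca = PCA(n_components=2).fit(train)
    scores = residual_scores(pca, train)
    keep = scores < np.quantile(scores, 0.95)     # drop the 5% most anomalous
    train = train[keep]

# Final residual-based detector, trained on the refined (decontaminated) set.
final_scores = residual_scores(pca, X)
threshold = np.quantile(residual_scores(pca, train), 0.99)
print(f"flagged {np.sum(final_scores > threshold)} of {len(X)} samples as anomalous")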

10:30
Automated and self-adapting approach to AI-based anomaly detection
PRESENTER: Sheng Ding

ABSTRACT. AI has emerged as a promising solution to enhance Time Series Anomaly Detection (TSAD). However, current practice lacks the self-adapting ability and the knowledge needed to choose the best-suited model in different contexts. To overcome these challenges, we have integrated various algorithms using a unified data interface and an automated training-testing process, and have incorporated automated hyperparameter optimization and architecture selection. Additionally, we conducted further experiments that demonstrated the advantages of a smart switch mechanism which selects the most appropriate TSAD method based on statistical features of the data, resulting in improved detection performance. This dynamic switch mechanism has been integrated into our TSAD platform.
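A minimal sketch of a feature-based switch (the statistical features and dispatch rules below are invented stand-ins; the platform's actual switch is more elaborate):

import numpy as np

def seasonality_strength(x, period):
    """Autocorrelation at the candidate seasonal lag."""
    x = x - x.mean()
    return float(np.corrcoef(x[:-period], x[period:])[0, 1])

def choose_detector(x, period=24):
    season = seasonality_strength(x, period)
    trend = abs(np.corrcoef(np.arange(len(x)), x)[0, 1])
    if season > 0.6:
        return "seasonal-decomposition residual detector"
    if trend > 0.6:
        return "detrended rolling z-score detector"
    return "isolation-forest detector"

rng = np.random.default_rng(8)
t = np.arange(24 * 60)
series = {
    "seasonal": np.sin(2 * np.pi * t / 24) + 0.2 * rng.normal(size=t.size),
    "trending": 0.01 * t + rng.normal(size=t.size),
    "irregular": rng.normal(size=t.size),
}
for name, x in series.items():
    print(f"{name:>9} -> {choose_detector(x)}")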

10:45
Complex-valued AE for Structural Health Monitoring with Frequency Modulated Continuous Wave Radar

ABSTRACT. Frequency Modulated Continuous Wave (FMCW) radar is a low-power, compact mechanism which can be used for non-destructive health monitoring and inspection of surface and subsurface materials. This enables the detection of defects that are internal to the analyzed structural element and not visible. The key benefits of this technology are that it offers a non-contact monitoring tool at reduced cost, reduced risk and reduced inspection time [1]. Recent work has proposed to assess the capability of FMCW radar sensing for composite material characterization of wind turbine blades [2]. While it showed promising results for the robust classification of turbine blades of different thickness or inner composite materials, it has not yet been applied in the context of health monitoring. In this work, we study the feasibility of FMCW radar for detecting anomalies in monolithic surfaces. This task uses adapted signal processing and machine learning methods to analyze the return signal of the radar. Since the return signal is based on the difference between the received and transmitted signals, the resulting signal can be very sensitive to the echo delay. We therefore propose to consider the analytic representation of the signal to reduce the impact of the echo delay. In addition, we propose a complex-valued autoencoder neural network with a new activation function adapted to the complex-valued input signal. The autoencoder is trained on healthy samples only, and the residual is used as a health indicator to distinguish healthy surfaces from surfaces with defects. To demonstrate the performance of our approach, we consider a monolithic composite containing engineered defects, including a dry zone lacking the composite resin used to cure the area. The FMCW scanning was performed at three different distances from the material: 5, 10 and 15 cm. We compare our anomaly detection strategy to other state-of-the-art methods, such as support vector data description, against which our proposed approach demonstrates better performance. Moreover, we show that using the analytic representation of the return signal also leads to better performance than other return-signal representations.

[1] Blanche, J., Mitchell, D., Gupta, R., Tang, A., & Flynn, D. (2020, November). Asset integrity monitoring of wind turbine blades with non-destructive radar sensing. In 2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON) (pp. 0498-0504). IEEE.

[2] Tang, W., Mitchell, D., Blanche, J., Gupta, R., & Flynn, D. (2021, August). Machine Learning Analysis of Non-Destructive Evaluation Data from Radar Inspection of Wind Turbine Blades. In 2021 IEEE International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC) (pp. 122-128). IEEE.
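To illustrate why the analytic representation helps (an echo delay becomes a complex scalar factor), the sketch below replaces the paper's complex-valued autoencoder with a linear complex subspace model fitted by SVD; signals are synthetic:

import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(9)

def beat_signal(delay_phase, defect=0.0):
    """Toy FMCW beat signal; the echo delay shifts the phase, a defect adds a tone."""
    t = np.linspace(0, 1, 512)
    x = np.cos(2 * np.pi * 40 * t + delay_phase) + defect * np.cos(2 * np.pi * 90 * t)
    return x + 0.05 * rng.normal(size=t.size)

# Analytic representation (Hilbert transform) reduces sensitivity to echo delay.
healthy = np.array([hilbert(beat_signal(rng.uniform(0, 2 * np.pi)))
                    for _ in range(200)])

# Linear complex "autoencoder": rank-4 subspace fitted to healthy analytic signals.
_, _, Vh = np.linalg.svd(healthy, full_matrices=False)
P = Vh[:4].conj().T @ Vh[:4]                     # projector onto the healthy subspace

def health_indicator(z):
    residual = z - z @ P                         # reconstruction residual
    return np.linalg.norm(residual) / np.linalg.norm(z)

for label, defect in [("healthy", 0.0), ("dry zone", 0.6)]:
    z = hilbert(beat_signal(rng.uniform(0, 2 * np.pi), defect))
    print(f"{label:>8}: residual ratio {health_indicator(z):.3f}")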

11:00
Exploiting Explanations to Detect Misclassifications of Deep Learning Models in Power Grid Visual Inspection

ABSTRACT. In the context of automatic visual inspection of infrastructures by drones, Deep Learning (DL) models are used to automatically process images for fault diagnostics. While explainable Artificial Intelligence (AI) algorithms can provide explanations to assess whether the DL models focus on relevant and meaningful parts of the input, the task for domain experts of examining all the explanations can become exceedingly tedious, especially when dealing with a large number of captured images. In this work, we propose a novel framework to identify misclassifications of DL models by automatically processing the related explanations. The proposed framework comprises a supervised DL classifier, an explainable AI method and an anomaly detection algorithm that can distinguish between explanations generated by correctly classified images and those generated by misclassifications.
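A sketch of the detection stage (explanation maps abstracted as precomputed feature vectors; the classifier and XAI method are out of scope here): an anomaly detector is fitted on explanations of correctly classified images and flags deviating explanation patterns:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(10)

# Hypothetical explanation features: flattened, pooled saliency maps (8x8 -> 64).
expl_correct = rng.normal(size=(500, 64))             # focused, consistent patterns
expl_miscls = (rng.normal(size=(50, 64))              # scattered, atypical patterns
               + rng.normal(scale=2.0, size=(50, 64)) * (rng.random((50, 64)) < 0.3))

# Fit on explanations of correctly classified validation images only.
detector = IsolationForest(contamination=0.05, random_state=0).fit(expl_correct)

flags = detector.predict(expl_miscls)                 # -1 = anomalous explanation
print(f"flagged {np.sum(flags == -1)} of {len(expl_miscls)} "
      "misclassification explanations as anomalous")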

09:45-11:00 Session 15F: S.07: Computational and Simulation-based Risk Assessment I
Location: Room 100/5017
09:45
A transformer outage duration model with application to asset management decision support

ABSTRACT. Transformers are key components in the power system and transformer failures can cause long power outages with high costs to society. Transformer failures are rare, and each case is unique with respect to its consequences. This shapes the data and statistics we have available to predict future failures and related consequences. Models to support risk assessments and asset management decisions for these critical assets should rely on practical approaches to include both available data as well as expert judgements. This paper looks at outage duration, an important parameter in risk evaluation and asset management decisions. It presents a transformer outage duration model which can be conditioned on relevant asset management input variables. A use case is constructed to exemplify the usage of the model in an asset management decision context.

10:00
Consideration of Uncertainty in Reliability Demonstration Test Planning
PRESENTER: Martin Dazer

ABSTRACT. In reliability engineering, empirical life data from reliability tests is very often used for reliability demonstration. With the help of inferential statistics, the sample information is transferred to the population, using interval estimation methods. These determine the confidence interval that safeguards against a false statement regarding the service life distribution. Such methods are well known in reliability engineering. The risk of a failed test (the type-II statistical error), however, is rarely considered in either End-of-Life (EoL) or Zero-Failure Tests (ZFT). Grundler, Dazer and Herzig [1] propose a numerical-simulative and an analytical-approximative method for calculating the statistical power, calling it the Probability of Test Success to refer to the reliability context. The approach treats a reliability test as a hypothesis test. Thus, the test and sample size able to demonstrate the reliability target, consisting of lifetime, reliability and confidence, with a high probability can be determined. As with any other hypothesis test, prior knowledge is necessary to estimate the location and scatter of the alternative hypothesis. Especially if this prior knowledge stems from life tests with very small sample sizes, the uncertainty must be taken into account when planning the test. This paper presents an approach to consider this uncertainty in planning within a simulative bootstrap approach. However, uncertainty can also be considered in the analytic-approximative approach, which is based on the central limit theorem. The approach rests on the fact that the location of the approximated lifetime quantile is itself subject to scatter stemming from the uncertainty of the prior knowledge; this scatter must be considered in the calculation in addition to the scatter of the test to be planned. For zero-failure tests, too, the uncertainty of prior knowledge can be considered in test planning: the beta-binomial distribution is well suited for this purpose, with the uncertainty of the prior knowledge (expressed as the scattering failure probability) described as a beta-distributed parameter of the binomial distribution. The implications are illustrated for both EoL and ZFT using exemplary results. References: [1] Grundler, A.; Dazer, M.; Herzig, T. Statistical Power Analysis in Reliability Demonstration Testing: The Probability of Test Success. Appl. Sci. 2022, 12, 6190. https://doi.org/10.3390/app12126190
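For the zero-failure case the calculation is compact; an illustrative sketch (numbers invented): the success-run sample size follows from R^n = 1 - C, the Probability of Test Success is the probability of observing zero failures in n units, and a beta prior on the failure probability turns this into a beta-binomial expectation:

import numpy as np
from scipy.special import betaln

R_target, C = 0.9, 0.9                                # reliability target, confidence
n = int(np.ceil(np.log(1 - C) / np.log(R_target)))    # success-run theorem
print(f"zero-failure sample size: n = {n}")           # 22 units

# Point prior: true reliability assumed known, Pts = P(0 failures in n).
R_true = 0.97
print(f"Pts with point prior: {R_true**n:.3f}")

# Beta prior on the failure probability p ~ Beta(a, b) encodes prior-knowledge
# uncertainty; Pts = E[(1-p)^n] = B(a, b+n) / B(a, b)  (beta-binomial with k = 0).
a, b = 1.5, 48.5                                      # prior mean p = 0.03, scattered
pts = np.exp(betaln(a, b + n) - betaln(a, b))
print(f"Pts with beta prior:  {pts:.3f}")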

10:15
Estimating uncertainty in reliability of supply analysis considering component condition
PRESENTER: Håkon Toftaker

ABSTRACT. The reliability of the electric power transmission system depends on the reliability of its components. As components age, their technical condition degrades and the probability of failure increases. Consequently, to estimate the reliability of a transmission system it is valuable to include the effect of deteriorating components. Recent work has demonstrated how this can be done. However, condition-dependent reliability models introduce new sources of uncertainty that need to be accounted for and that may be especially important over a long time horizon. This work presents a novel approach to propagate the uncertainty in input parameters through the system reliability analysis. Monte Carlo simulation is used to create an ensemble that spans the sample space of reliability-of-supply indices. The effect of each source of uncertainty may be seen separately, or the effect of several sources may be seen jointly. The methodology is demonstrated using a failure model for high-voltage power transformers in the transmission system. The example illustrates that the methodology can identify which sources of uncertainty have a significant impact on the uncertainty of system reliability indices, and to what degree system uncertainty is amplified or moderated by interactions between the sources of uncertainty. Moreover, it is shown that the uncertainty does not necessarily increase uniformly over time.
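A minimal sketch of this kind of ensemble propagation on a toy two-transformer supply (all parameter distributions invented): input-parameter uncertainty is sampled, a reliability-of-supply index is computed per ensemble member, and individual sources are examined by freezing the others at their medians:

import numpy as np

rng = np.random.default_rng(11)
N = 5000

def eens(lam1, lam2, repair_h=72.0, load_mw=50.0):
    """Toy expected energy not supplied (MWh/yr) for two parallel transformers."""
    u1, u2 = lam1 * repair_h / 8760, lam2 * repair_h / 8760   # unavailabilities
    return u1 * u2 * load_mw * 8760                           # both down at once

# Uncertain, condition-dependent failure rates (1/yr), e.g. from health-index models.
lam1 = rng.lognormal(np.log(0.05), 0.4, N)
lam2 = rng.lognormal(np.log(0.08), 0.6, N)   # older unit: higher rate, more uncertain

ensemble = eens(lam1, lam2)
print(f"EENS: median {np.median(ensemble):.3f}, 90% interval "
      f"[{np.quantile(ensemble, 0.05):.3f}, {np.quantile(ensemble, 0.95):.3f}]")

# Effect of one source in isolation: freeze the other at its median.
only1 = eens(lam1, np.median(lam2))
only2 = eens(np.median(lam1), lam2)
print(f"spread from unit 1 alone: {np.std(only1):.3f}, from unit 2 alone: {np.std(only2):.3f}")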

10:30
Automatic Reliability Assessment of Data Paths in Sensor Networks

ABSTRACT. The integration of sensor networks (SNs) into large-scale structural components enables area-wide monitoring of operational loads and the resulting structural responses for the whole part. During the development phase of these components, FE models of operational loads can be used to find suitable sensor positions and network configurations for this task. Since the network components cannot be replaced after manufacturing, due to their integration inside the part, assessing the SN's system reliability is important. Because finding a network configuration which fulfils the reliability requirements can be time-consuming, an automatic reliability assessment of a sensor network aids the development. In this paper, an algorithmic solution for the automatic calculation of the system reliability is presented. The calculation is based on an analysis of the data paths and the creation of a reliability block diagram (RBD). In order to show the applicability of this algorithm, it is tested on exemplary scenarios in a case study.
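The data-path idea can be illustrated with a small Monte Carlo sketch on a graph (invented topology; the paper derives an RBD analytically rather than by sampling; assumes networkx is available): the system is up if every sensor retains a working data path to the sink:

import numpy as np
import networkx as nx   # assumed available

rng = np.random.default_rng(12)

# Toy in-part network: sensors S1, S2 route through nodes A, B to the sink.
G = nx.Graph([("S1", "A"), ("S1", "B"), ("S2", "B"), ("A", "sink"), ("B", "sink")])
p_up = {"S1": 0.99, "S2": 0.99, "A": 0.95, "B": 0.95, "sink": 1.0}

def system_up(state):
    H = G.subgraph([n for n, up in state.items() if up])
    return all(n in H and nx.has_path(H, n, "sink") for n in ("S1", "S2"))

N = 20000
ok = sum(system_up({n: rng.random() < p_up[n] for n in G.nodes}) for _ in range(N))
print(f"estimated system reliability: {ok / N:.4f}")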

10:45
Overall Markov diagram design and simulation example for scalable safety analysis of autonomous vehicles
PRESENTER: Teo Puig Walz

ABSTRACT. Markov models are a promising tool for assessing the availability, safety, security, and reliability of autonomous driving functions. The paper addresses challenges regarding the overall functional and static modeling of the system and the related design options for the overall Markov diagram. To this end, the model space is presented, extending the main functions consisting of the extended sensory system, decision and control, and vehicle platform manipulation. Sample transition models from the literature are used. It is shown how to color-label overall Markov system product states in terms of their level of criticality, independent of the multiplicity of failures. This is used to model the effect of structural and functional redundancies, e.g., of redundant sensors and sensors of different technology. The modeling approach allows comparing the effect of redundancy options on a systemic level, as well as identifying the need for further aggregation or subdivision of Markov states or for refinement of the transition modeling and simulation approach, for instance by providing statistical assessment of historic events or by using simulation results of specific autonomous driving scenarios, e.g., interaction with vulnerable road users in case of darkness, bad weather, and partial sensor degradation. The paper presents Markov modeling results with a focus on the modeling of sensor redundancies.
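A minimal sketch of the criticality-labelling idea is given below for a two-sensor redundancy: a three-state continuous-time Markov chain whose states are coloured green/yellow/red, with illustrative failure and repair rates (not values from the paper).

```python
# Transient state probabilities for a CTMC of a redundant sensor pair,
# with states colour-labelled by criticality rather than failure count.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-4, 1e-2   # failure / repair rates per hour (assumed)
# States: 0 = both sensors up, 1 = one up (degraded), 2 = none up.
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,           mu,  -mu],
])
colour = {0: "green", 1: "yellow", 2: "red"}  # criticality labels

p0 = np.array([1.0, 0.0, 0.0])
for t in (1e2, 1e3, 1e4):                     # hours
    p = p0 @ expm(Q * t)                      # p(t) = p0 * exp(Qt)
    label = ", ".join(f"{colour[i]}={p[i]:.4f}" for i in range(3))
    print(f"t={t:>7.0f} h: {label}")
```

Comparing the red-state probability across alternative Q matrices is the systemic redundancy comparison the abstract describes.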

09:45-11:00 Session 15G: S.27: Advances in Maritime Autonomous Surface Ships (MASS) I
09:45
Analysing the need for safety crew onboard autonomous passenger ships – A case study on urban passenger transport in Norwegian waters

ABSTRACT. Autonomous ships are commonly associated with uncrewed ships that are able to navigate by themselves based on a combination of novel technologies such as situational awareness, collision avoidance and remote control centres. Autonomous passenger ships, as opposed to cargo ships, come with the added complexity of ensuring passenger safety in all operational scenarios. So far, systems and solutions enabling safe navigation have been the main focus area towards the realisation of autonomous ships. However, these solutions alone will not yield a viable business case for autonomous passenger ships until procedures and solutions assuring passenger safety are developed and adapted to operations with reduced or no crew. Today, the number of crew required to secure the daily operation of a small conventional passenger ship (e.g. navigation, manoeuvring, machine watch) is very often equal to the number of crew required to maintain the necessary passenger safety level should an abnormal situation occur (e.g. fire, flooding, grounding, collision and evacuation). Any reduction in crew, compared to the current industry standard, must be approved by the relevant authorities using a risk-based approach, where one needs to prove that the ship design, in combination with work procedures and installed technology, meets the requirements assuring the safety of the passengers and the daily operation. An important input for such an approval is the crew safety instructions. Given the technical solutions and safety equipment installed on board, the instructions specify how tasks and responsibilities are distributed across the defined roles, and ultimately what the crew is expected to carry out in abnormal situations. Based on the assumption that autonomy is understood as the automation of tasks that today require human intervention, we have analysed the safety instructions for one high-speed passenger craft and one low-speed passenger ship, both operating in sheltered waters. The purpose of the analysis was to identify which tasks can be automated in a 3-5 year perspective, given certain presumptions about expected technology development. The analysis resulted in a set of tasks that can be automated and a set that appears more difficult, the latter representing further research and development needs. We argue that, given the current technology gaps and short-term expected developments, there will still be a need for a reduced safety crew onboard autonomous passenger ships. Further, based on the analysis, we propose a definition of a safety responsible, and also suggest the allocation of tasks between the safety responsible and the supporting operator at the remote control centre. Requirements for the development of novel safety equipment, paving the way for approval of passenger ships with a single safety-responsible crew member, are also derived. The results of the study and the definition of the safety responsible are non-exhaustive, but provide input to the future regulatory and framework discussions needed to obtain a reduced-crew approval for autonomous passenger ships that can yield a realisable business case.

10:00
A criticism of proposed levels of autonomy for MASS

ABSTRACT. "Levels of autonomy" (LOA) is a popular subject in both scientific and regulatory literature. This applies to many different types of autonomous systems, but this paper focuses on "Maritime Autonomous Surface Ship" (MASS). Most of today's definitions of LOA for MASS look at where human control resides (remote or onboard) and some form of graduated control capability for the automation system, e.g. ranging from decision support, human approval or vetoing of automated actions, to full automation. MASS has some characteristic factors that make the use of these LOAs difficult: 1. Ships are costly assets and are expected to always operate under human supervision. Thus, there is a human available for doing control tasks if necessary. It may not be cost-effective to design the MASS for full autonomy. 2. MASS operations may last for several days or weeks, and for most of the time relatively little attention is needed from operators. This is not an ideal scenario for an operator if he or she always must be ready and able to determine when intervention is needed. 3. Ships move relatively slowly, and dangerous situations develop over time. This means that it may be possible for automation to warn the operator well before intervention is needed. Thus, in most cases we are looking at limited autonomy for the MASS, where a human operator can assist automation when necessary. To ensure an acceptable risk level for MASS, many technical and operational measures are needed, but relevant to the cooperative human/automation issue, the following are central: 1. It is necessary to define the operational limitations of the complete MASS system as well as the automation system to allow efficient testing and qualification. This also defines the conditions when humans need to take control. 2. One must design the system so that it is clear what party is in control at any point in time, automation or human. This requires and helps to establish trust in automation. 3. One must include facilities in the automation system to give an alarm to the human before intervention is necessary. This can avoid the out-of-the-loop problem often associated with human-automation interactions. 4. One must define the hand-over processes between automation and human so that both parties always know who is in control. The handover must be done in a manner that ensures sufficient situational awareness by the human or automation before actions are necessary. We believe that we are looking at a situation where "constrained autonomy" fully controls the ship under certain conditions and where a human takes over control when necessary or desired. While there is a cooperation between human and automation, only one will be in control at any given point in time. It is important how the human-automation interface is designed, but this is not, in our opinion, related to levels of autonomy.

10:15
Successful autonomous transport – The need for coordination and integration of strategical and operational management
PRESENTER: Trine Stene

ABSTRACT. The rapid pace of technological and societal change creates a strong need for competence, standards and regulations that allow the benefits of new technology to be exploited without operating at an unacceptable risk level. To be successful, resilience perspectives may be used to identify future functionality and adaptation requirements, including flexibility of operation and interrelations between actors. This includes identifying principles for handling both normal operations and anomalies. The Norwegian research project MARMAN (Maritime Resilience Management of an Integrated Transport System) emphasises the system challenges and requirements arising from increased automation and connectivity, including the implementation of MASS (Maritime Autonomous Surface Ships). Particular attention is paid to integrated planning at different management levels (from government to operational practice) and the interrelations between the levels. The purpose of this paper is to examine how a future Maritime Transport System (MTS) can prepare for the successful implementation of MASS in an increasingly automated transport system. This includes identifying hazards, risks, operational procedures and challenges, collaboration within the MTS, deviation management and standardisation, in addition to the planning capabilities to cope with them. The paper describes the automation of the maritime transport system, related risks and integrated planning. Further, the paper discusses the main challenges for the successful implementation of MASS and the strategic- and operational-level management needed to handle them. This includes resilience perspectives, e.g. potential resources in case of procedure deviations and emergency preparedness.

10:30
Quantitative Risk Assessment of a Periodically Unattended Bridge
PRESENTER: Mert Yildiz

ABSTRACT. A periodically unmanned bridge is a likely use case often cited for MASS technologies [1]. The German-funded B ZERO project aims to develop and demonstrate these capabilities for navigating a cargo ship for up to 8 hours within a predefined operational design domain [2]. From a risk perspective, MASS technology, and thus the implementation of B ZERO, must be as safe as conventional technology [3], which is why a safety assessment of the B ZERO concept is executed according to the IMO formal safety assessment guidelines [4]. This paper outlines the results from the hazard identification and risk analysis executed along the Bow-Tie Model. This includes a special focus on assessing the process of manually taking over the watch from an autonomous navigation system in the context of periodically unattended bridges [5]. Starting with a thorough introduction to safety assessment and risk modelling techniques, the problem definition is given and the identified lists of hazards derived. The risk analysis focuses on quantitative methods, such as fault and event tree analysis modelling, for the relative comparison of manned and autonomous operations, instead of qualitative expert-based rating in risk matrices. This includes insights into how risks associated with MASS can be quantitatively modelled by identifying scientifically accepted probabilities from the literature, or how these can be derived from maritime databases such as AIS data.

[1] Lutz Kretschmann, Hans-Christoph Burmeister, Carlos Jahn, Analyzing the economic benefit of unmanned autonomous ships: An exploratory cost-comparison between an autonomous and a conventional bulk carrier, Research in Transportation Business & Management, Volume 25, 2017, Pages 76-86,

[2] Ugé C., Hochgeschurz S.: Learning to Swim - How Operational Design Parameters Determine the Grade of Autonomy of Ships. TransNav, the International Journal on Marine Navigation and Safety of Sea Transportation, Vol. 15, No. 3, doi:10.12716/1001.15.03.02, pp. 501-509, 2021

[3] Rødseth Ø.J., Burmeister H.C.: Risk Assessment for an Unmanned Merchant Ship. TransNav, the International Journal on Marine Navigation and Safety of Sea Transportation, Vol. 9, No. 3, doi:10.12716/1001.09.03.08, pp. 357-364, 2015

[4] IMO (2007) Formal safety assessment. Consolidated text of the guidelines for formal safety assessment (FSA) for use in the IMO rule-making process. Guidelines MSC/Circ.1023 and MEPC/Circ.392, IMO, London

[5] Hochgeschurz S., Dalinger E., Motz F.: Modelling the Processes of Taking Over the Watch From an Autonomous Navigation System. TransNav, the International Journal on Marine Navigation and Safety of Sea Transportation, Vol. 15, No. 1, doi:10.12716/1001.15.01.11, pp. 117-124, 2021
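To make the quantitative comparison concrete, here is a miniature fault-tree sketch in the spirit of the analysis described above; the gate structure and all probabilities are invented placeholders, not results from B ZERO.

```python
# Toy fault-tree quantification comparing manned vs. periodically
# unattended operation. Top event: collision, requiring a dangerous
# encounter AND a failed detection/avoidance function.
def and_gate(*p):   # independent basic events
    out = 1.0
    for x in p:
        out *= x
    return out

def or_gate(*p):    # union of independent events via complements
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

p_encounter = 1e-3                 # dangerous encounter per voyage hour (assumed)
# Manned branch: lookout fails OR officer reacts too late.
p_manned_fail = or_gate(1e-2, 5e-3)
# Unattended branch: autonomous system misses the target OR the
# take-over by the on-call officer fails (the process assessed in [5]).
p_auto_fail = or_gate(4e-3, 8e-3)

print("collision freq, manned    :", and_gate(p_encounter, p_manned_fail))
print("collision freq, unattended:", and_gate(p_encounter, p_auto_fail))
```

The point of such a model is the relative comparison of the two branches, not the absolute numbers.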

09:45-11:00 Session 15H: S.02: Reliability and Resilience of Interdependent Cyber-Physical Systems II
09:45
Risk Assessment in the Implementation of Resilient Sustainable Smart Cities

ABSTRACT. The degree of complexity of a smart city project and the management of multiple technologies are expected to advance year after year. The project manager must manage the technological resources, ensure their implementation and maintenance throughout the project, and provide all necessary training and documentation for the work team; the challenge is enormous.

This study aims to conduct a risk assessment of implementing a smart city, considering project management methodologies and highlighting the main organizational topics, risks, and impacts for major stakeholders. The study also reviews the guidelines of different project management methodologies, seeking an effective contribution to success in implementing such a sustainable and innovative project.

As a methodological approach, the authors collected qualitative data from experienced field specialists and project managers. A matrix was used to list opportunities, risks, and impacts of the construction and implementation of a smart city project.

As a result, the study shows vital information that can contribute effectively to project managers involved in smart city projects and help meet the requirements of time, cost, and quality.

The contribution is significant since it covers the critical points of the risk management methodology in implementing a smart city, and the project management from the first stage of discussion and conception with municipal leaders through to implementation.

The findings can impact a project's success and help in understanding performance and safety during its lifecycle. Although conducted in a specific city in Brazil, the study can be generalized to other cities and countries whose safety is affected by risk issues resulting in waste, rework, and unnecessary energy consumption. The study can change the practice and thinking of professionals dealing with risk assessment in implementing a smart city.

10:00
Hybrid threats on air traffic
PRESENTER: Corinna Köpke

ABSTRACT. Air traffic in general is vulnerable to various hazards, ranging from natural hazards to technical failures or attacks, which can be both cyber and physical. These threats impact airports, but air traffic management can also be affected, influencing the overall air traffic. Here, we analyze the resilience of air traffic by studying performance degradation and recovery at airports due to hybrid threats. An airport consists of many coupled network systems such as the public announcement system (PAS), flight information display system (FIDS), access control system (ACS), baggage handling system (BHS), and resource management system (RMS). These systems can be interrelated based on the configuration and setup of the network. Consisting of physical assets such as servers and routers connected through an airport-internal network, these systems are vulnerable to physical and cyber threats. In this work, a flexible and modular approach is presented to combine various threats and apply a series of attacks onto the air traffic model by impacting single airports. In contrast to existing work, the nodes of the air traffic model do not reduce their performance to zero but follow pre-estimated resilience curves. Thus, the overall resilience of the air traffic model can be assessed in a dynamic way, demonstrated here for air traffic over Germany.
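A minimal sketch of the resilience-curve mechanism follows: each attacked airport drops to a reduced performance level and recovers linearly, and network resilience is taken as the traffic-weighted mean performance over the assessment window. Airports, weights and curve parameters are invented for illustration.

```python
import numpy as np

def resilience_curve(t, t_attack, drop, t_recover):
    """Performance in [0, 1]: instant drop at t_attack, linear recovery."""
    perf = np.ones_like(t)
    rec = (t >= t_attack) & (t < t_attack + t_recover)
    perf[rec] = (1 - drop) + drop * (t[rec] - t_attack) / t_recover
    return perf

t = np.linspace(0, 48, 481)                        # hours
airports = {                                       # hypothetical threat impacts
    "FRA": dict(t_attack=6.0, drop=0.7, t_recover=12.0),
    "MUC": dict(t_attack=6.0, drop=0.4, t_recover=8.0),
    "HAM": None,                                   # unaffected
}
weights = {"FRA": 0.5, "MUC": 0.3, "HAM": 0.2}     # traffic shares (assumed)

total = np.zeros_like(t)
for name, p in airports.items():
    perf = np.ones_like(t) if p is None else resilience_curve(t, **p)
    total += weights[name] * perf

print(f"network resilience (mean weighted performance): {total.mean():.3f}")
```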

10:15
Information-Sharing in Cross-border Critical Infrastructure Resilience: evaluating the benefits of a digital platform
PRESENTER: Boris Petrenj

ABSTRACT. Modern Critical Infrastructure (CI) systems are becoming increasingly interconnected across international borders. Even minor disruptions to these complex systems can have significant impacts on the economic and social functions of the affected country and beyond. To increase the resilience of CI, stakeholder organizations must collaborate and exchange information at the local level throughout the Emergency Management (EM) cycle. Public-Private Collaborations (PPCs) bring the stakeholders together and allow for a more coordinated and effective response to threats and emergencies. The Critical Infrastructure Platform (PIC) is an ICT tool aimed at supporting a cross-border regional resilience strategy between the Lombardy Region (Italy) and Canton Ticino (Switzerland) by enabling secure and effective information-sharing, inter-organizational risk assessment, monitoring, and operational coordination under critical operating conditions and severe disruptive events. The paper evaluates the benefits of PIC in improving the resilience of networked CI systems in a cross-border region through its capacity to address common barriers and challenges of inter-organizational information-sharing and collaboration.

10:30
Application of Adaptive Time-Stepping in the Resilience Analysis of Interdependent Infrastructure Systems Using an Iterative Optimization-based Simulation Framework
PRESENTER: Hamed Hafeznia

ABSTRACT. Civil Infrastructure Systems (CISs) play a crucial role in the socioeconomic development of communities. CISs are also critical urban systems because they provide essential commodities and services. Due to complexity and interdependency, a disruption in the function of CISs may result in cascading failures and degradation of the performance of other infrastructure systems, such as water supply or communication systems. Hence, the resilience of CISs against natural hazards has attracted stakeholders' attention. Improving the resilience of infrastructure systems can reduce damage to the CISs and the economic losses of urban communities. This study introduces an Iterative Optimization-based Simulation (IOS) framework to quantify the resilience of interdependent infrastructure systems to natural disasters. The IOS framework comprises five modules: risk assessment, database, simulation, optimization, and controller. The role of the risk assessment module is to simulate the hazard and assess its impacts on the functionality of infrastructure network components. After evaluating the components' vulnerability, data on the post-disaster status of the infrastructure networks are transferred to the database module. Next, these data are called by the simulation module to trace the evolution of infrastructure network performance through the recovery process, step by step. The data generated by the simulation module at each step are populated into the database. Then, the simulated data are used by the optimization model to find the optimal flow of services within the networks. According to the optimal solution stored in the database, the simulation module changes the supply and demand patterns, simultaneously models the recovery process, and updates the functionality level of the components in the interdependent CISs, setting the stage for the next step of the recovery process. In the meantime, the controller module computes the relevant resilience metrics. The procedure iterates between the simulation, database, optimization, and controller modules until a stopping criterion, typically formulated as reaching a fraction of the pre-disaster CIS functionality levels, is met. Most research studies considering an optimization-based framework have applied Equal Time-Stepping (ETS) over the resilience assessment period (e.g., one day). This approach is applicable to small and deterministic case studies, but the computational burden of the probabilistic resilience assessment of large-scale interdependent infrastructure networks is a serious and multifaceted challenge, especially when ETS is used; in this study, for example, the optimization module of the proposed IOS framework solves a Mixed-Integer Linear Programming (MILP) problem. Thus, to reduce the computational cost, this study uses an Adaptive Time-Stepping (ATS) approach, which changes the size of the time steps during the resilience assessment period of the interdependent CISs. The interdependent infrastructure networks (power, natural gas, and water) located in Shelby County (TN), USA, constitute the case study to test ATS. The seismic resilience of Shelby County's infrastructure networks was evaluated against an earthquake with a magnitude of 8.2 and an epicenter located at 35.3 N and 90.3 W, using both the Adaptive Time-Stepping (ATS) and Equal Time-Stepping (ETS) approaches. In the ATS approach, the length of the time steps varies...
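The following toy loop illustrates the ATS control logic described above, with a simple analytic recovery rate standing in for the framework's simulation and MILP optimization modules; the step-size bounds and tolerance are arbitrary.

```python
# Adaptive time-stepping: enlarge the recovery-simulation step when the
# system functionality changes slowly, shrink it when changes are fast.
def recovery_rate(q):          # functionality gain per day (illustrative)
    return 0.08 * (1.0 - q)

q, t = 0.35, 0.0               # post-disaster functionality, time [days]
dt, dt_min, dt_max = 1.0, 0.25, 8.0
tol = 0.02                     # max allowed functionality change per step

while q < 0.95:                # stop at a fraction of pre-disaster level
    dq = recovery_rate(q) * dt
    if dq > tol and dt > dt_min:   # step too coarse: halve and retry
        dt = max(dt / 2, dt_min)
        continue
    q, t = q + dq, t + dt          # accept the step
    if dq < tol / 4:               # step unnecessarily fine: grow it
        dt = min(dt * 2, dt_max)
    print(f"t = {t:6.2f} d, dt = {dt:5.2f}, functionality = {q:.3f}")
```

Each accepted step would, in the real framework, trigger one simulation/optimization round, so fewer and larger steps directly cut the number of MILP solves.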

09:45-11:00 Session 15I: S.31: Case Studies on Modern Predictive Reliability: Industrial Perspective
09:45
RAMS never dies! Applying the approach to IT/OT converged systems.
PRESENTER: Arno Kok

ABSTRACT. The reliability of industrial systems is challenged by the increasing use of digital technology. One of these challenges is the reliability of digital technology in combination with physical assets, in so-called 'IT/OT' converged systems. To ensure the reliability of physical assets, the RAMS (Reliability, Availability, Maintainability, and Safety) methodology has become a widely accepted approach for designing and evaluating their performance. Unfortunately, the RAMS method is less commonly applied and evaluated in the context of IT/OT converged systems. This research discusses the application of a five-step RAMS method within the context of IT/OT converged systems. The outcomes of this RAMS method on a traditional OT system and an IT/OT converged system are compared using a real case study carried out with the main Dutch railway operator. The case study shows that current RAMS application processes should be adapted for use on IT/OT converged systems. Several design principles are presented that can guide the better application of RAMS within an IT/OT converged environment.

10:00
Predictive maintenance model based on anomaly detection in induction motors: a machine learning approach using real-time IoT data

ABSTRACT. With the support of Internet of Things (IoT) devices, it is possible to acquire data on degradation phenomena and design data-driven models to perform anomaly detection in industrial equipment. This approach not only identifies potential anomalies but can also serve as a first step toward building predictive maintenance policies. In this work, we demonstrate a novel anomaly detection system on induction motors used in pumps, compressors, fans, and other industrial machines. This work evaluates a combination of pre-processing techniques and machine learning (ML) models with a low computational cost. We use pre-processing techniques such as the Fast Fourier Transform (FFT), Wavelet Transform (WT), and binning, which are well-known approaches for extracting features from raw data. We also aim to guarantee an optimal balance between multiple conflicting parameters, such as the anomaly detection rate, false positive rate, and inference speed of the solution. To this end, multiobjective optimization and analysis are performed on the evaluated models, and Pareto-optimal solutions are presented to select the models with the best results regarding classification metrics and computational effort. Unlike most works in this field, which use publicly available datasets to validate their models, we propose an end-to-end solution combining low-cost and readily available IoT sensors; the approach is validated by acquiring a custom dataset from induction motors. Furthermore, we fuse vibration, temperature, and noise data from these sensors as the input to the proposed ML model. In this way, we aim to propose a methodology general enough to be applied in different industrial contexts in the future.
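The sketch below illustrates one of the ingredient combinations named above: FFT-plus-binning features from a vibration window feeding a low-cost anomaly detector (scikit-learn's IsolationForest is used here as a stand-in detector; the signals are synthetic).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
fs, n = 5_000, 4_096                       # sampling rate [Hz], window length

def fft_band_features(x, n_bands=16):
    """FFT magnitude spectrum reduced by binning into band averages."""
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    return np.array([b.mean() for b in np.array_split(spec, n_bands)])

def window(fault=False):
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(n)  # line freq
    if fault:                              # bearing-like high-frequency tone
        x += 0.4 * np.sin(2 * np.pi * 987 * t)
    return fft_band_features(x)

X_train = np.stack([window() for _ in range(300)])        # healthy data only
model = IsolationForest(random_state=0).fit(X_train)

X_test = np.stack([window(fault=f) for f in (False, True)])
print(model.predict(X_test))               # +1 = normal, -1 = anomaly
```

Fusing temperature and noise channels, as the abstract describes, would simply append further feature columns before fitting.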

10:15
Stress-test Based Transition Model for Lifetime Drift Estimation and RUL Prediction of Discrete Parameters in Semiconductor Devices

ABSTRACT. In recent years, self-driving technologies in cars have become more and more mature. This affects the whole automotive industry. Autonomous cars are expected to have more up-time and more total usage time compared to the current generation of non-autonomous vehicles.

In the semiconductor industry for automotive applications, functionality over lifetime is a quality target. With the increasing usage time of self-driving cars, new challenges arise in the prediction of remaining useful life (RUL) in the context of prognostics and health management (PHM). Predictions of remaining useful life are important both for on-line monitoring and for product testing before shipping. For this, statistical lifetime models based on accelerated stress tests are needed.

We propose a semi-parametric transition model for the calculation of the lifetime drift of discrete electrical parameters based on accelerated stress tests. We further discuss methods for extrapolating the projected drift to calculate interval estimators for the remaining useful life. Accelerated stress tests are used in the semiconductor industry to simulate the lifetime of devices in a shorter-than-real time frame. Electrical parameters of the devices are first measured; the devices are then subjected to harsher-than-usual stress conditions, e.g., extreme heat, cold, or humidity. The parameters are measured again at certain predefined times, called readout times, the devices are put back into the stress test, and so on. The electrical parameters of the devices change as the devices age. This drift of parameters is called lifetime drift and is taken as an indication of the level of degradation within the device. A statistical model for the lifetime drift is needed to guarantee customer quality and calculate the RUL.

A model for continuous lifetime drift has already been proposed in previous work of the authors, based on work by Hofer et al. We now introduce models for discrete parameters in the case of both discretized and truly discrete data. The model for discretized data is based on an adaptation of existing methods, and the model for truly discrete data uses non-parametric estimation of transition probabilities to obtain a Markov model for the lifetime drift. We further discuss extensions of the models to extrapolate future behavior and compare them with different regression-based methods for the calculation of the RUL. We discuss both quantile and expectile regression methods and also propose a regression method based on calculated model quantiles to obtain interval estimates for the remaining useful lifetime.
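As a toy illustration of the truly discrete case, the sketch below estimates readout-to-readout transition matrices non-parametrically from synthetic drift data and propagates the state distribution forward under the Markov assumption; the bin count, drift probability and out-of-spec region are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
levels, devices = 5, 400                 # parameter bins and sample size

# Synthetic readouts: each device drifts up one bin with probability 0.3.
states = np.zeros(devices, dtype=int)
readouts = [states.copy()]
for _ in range(3):
    states = np.clip(states + (rng.random(devices) < 0.3), 0, levels - 1)
    readouts.append(states.copy())

def transition_matrix(s_from, s_to):
    """Empirical bin-to-bin transition probabilities for one readout interval."""
    P = np.zeros((levels, levels))
    for i, j in zip(s_from, s_to):
        P[i, j] += 1
    return P / P.sum(axis=1, keepdims=True).clip(min=1)  # unvisited rows stay zero

P01 = transition_matrix(readouts[0], readouts[1])
P12 = transition_matrix(readouts[1], readouts[2])

# Propagate the initial bin distribution forward; reading the top bins as
# "out of spec" turns this into a RUL-style probability statement.
pi0 = np.bincount(readouts[0], minlength=levels) / devices
pi2 = pi0 @ P01 @ P12
print("P(out-of-spec bin) after 2 readouts:", round(float(pi2[-2:].sum()), 4))
```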

The work has been performed in the project ArchitectECA2030 under grant agreement No 877539. The project is co-funded by grants from Germany, the Netherlands, the Czech Republic, Austria and Norway, and by the Electronic Component Systems for European Leadership Joint Undertaking (ECSEL JU). All ArchitectECA2030-related communication reflects only the author's view, and the ECSEL JU and the Commission are not responsible for any use that may be made of the information it contains.

10:30
Expert-in-the-Loop Design Assurance Framework with MBSE-assisted Automatic FMEA Generation

ABSTRACT. While significant progress has been made with the development and adoption of computer modelling and simulation tools to assist with systems design, many of the analysis methods, in particular those focussed on robustness and reliability for design and process assurance, such as FMEA, still remain expert-centred and time and resource intensive. This paper presents the development of an MBSE-assisted automatic FMEA generation tool, underpinned by systematic cause-and-effect reasoning supported by ontologies, enabling integration and traceability of requirements and critical characteristics across the levels of decomposition of the system. A real-world case study of a robotic manufacturing process design for an electric drive unit is used to illustrate the use of the tool in an industrial context. The case study illustrates the cascade of the critical characteristics across multiple levels of process and tooling design analysis, supporting the synthesis of robust process control plans for an Industry 4.0 implementation. As well as discussing the effectiveness of the tool, a reflection on the interaction between the analyst, the MBSE model and the FMEA, facilitated by the tool, is provided. The principle guiding the design of the interaction with the tool is that the analyst should evaluate the validity of the FMEA generated, with corrections applied to the MBSE model rather than the FMEA outcome. In effect, this means that the tool provides the analyst with the amplified intelligence required for design integrity assurance through the systematic and iterative review of the MBSE system model. The practical benefits are the robustness of the system model and design assurance documentation, and their governance for future use.
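A highly simplified sketch of the generation idea follows: failure modes attached to components of a small system model are traversed along dependency links to derive end effects, emitting FMEA rows automatically. The model structure and all entries are invented, not the paper's ontology.

```python
# Toy cause-and-effect traversal: each component lists failure modes and
# the downstream functions it feeds; end effects are found by following
# the 'feeds' links to the top of the decomposition.
system_model = {
    "gripper":   {"modes": ["drops part", "misaligns part"], "feeds": ["press_fit"]},
    "press_fit": {"modes": ["insufficient force"], "feeds": ["edu_assembly"]},
    "edu_assembly": {"modes": [], "feeds": []},
}

def end_effects(component, model):
    """Follow 'feeds' links to the top-level function(s) affected."""
    feeds = model[component]["feeds"]
    if not feeds:
        return [component]
    out = []
    for nxt in feeds:
        out.extend(end_effects(nxt, model))
    return out

fmea_rows = [
    (comp, mode, " / ".join(end_effects(comp, system_model)))
    for comp, data in system_model.items()
    for mode in data["modes"]
]
for comp, mode, effect in fmea_rows:
    print(f"{comp:12s} | {mode:20s} | end effect on: {effect}")
```

In the paper's expert-in-the-loop workflow, a wrong row would be corrected by fixing the system model, after which the rows are simply regenerated.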

10:45
A data-driven failure prediction method for offshore wind turbines using Long Short-Term Memory model

ABSTRACT. Operating in a harsh marine environment, offshore wind turbines suffer high failure rates, which significantly affect the efficiency and reliability of wind power generation. To improve power generation capacity and decrease breakdown time, this paper proposes an early failure prediction model for offshore wind turbines using a Long Short-Term Memory (LSTM) model. From the SCADA data, a main feature is identified for each failure mode by correlation coefficient analysis. The LSTM is used to capture the relationship between the main feature and other related features during normal operation. When a failure occurs, the consistency of this relationship changes dramatically. Therefore, a rule based on a residual-value indicator is set to distinguish normal operation from failures. Using the SCADA data set of an offshore wind farm provided by EDP, it is shown that the algorithm can warn of hydraulic group, bearing and transformer failures about 31 hours, 5 hours and 15 hours in advance, respectively.
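A compact sketch of the residual scheme follows: an LSTM is fitted to predict a stand-in main feature from related features on synthetic normal-operation data, and the alarm threshold is set from the normal residual level. The network size, threshold rule and data are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
T, d_in = 200, 4
x = torch.randn(64, T, d_in)            # related SCADA features, normal operation
y = x.mean(dim=2, keepdim=True)         # stand-in for the selected main feature

class Predictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(d_in, 32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        h, _ = self.lstm(x)
        return self.head(h)             # per-time-step prediction of the main feature

model = Predictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(50):                     # learn the normal-behaviour relationship
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():                   # residual indicator on normal data
    resid = float((model(x) - y).abs().mean())
threshold = 3 * resid                   # e.g. alarm at 3x the normal residual
print(f"normal residual {resid:.4f}, alarm threshold {threshold:.4f}")
```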

11:00
Fast and Accurate Industrial Reliability Predictions with Data Mining and AI Methods
PRESENTER: Marco Bonato

ABSTRACT. The accuracy of reliability predictions for industrial components is a key element of product development in modern industrial processes. The main goal of this presentation is to show, through concrete examples, how a Data Mining strategy can support, improve and accelerate the predictive reliability approach. The agility of such an approach requires new and complementary competences for implementing state-of-the-art techniques. Data Mining is the process of generating insights from large data sets, involving methods at the intersection of data science (i.e. artificial intelligence), statistics, and database systems. Machine Learning and Deep Learning algorithms and models can be used to manage high volumes of data in a short amount of time, and Natural Language Processing can be applied to knowledge extraction from unstructured databases. Thanks to this, even datasets built from complex, non-standardized documentation can become an asset and contribute to predictive reliability assessment. This paper highlights several examples from the automotive industry dealing with Data Mining and predictive reliability. Indeed, making such an approach popular and "easy to use" favors its deployment and success. The presentation will also show how even complex mathematical methods can be formalized into user-friendly tools, shared within a worldwide company, distributed to all contributors to project development, and successfully deployed.

11:00-11:30 Coffee Break
11:30-12:45 Session 16A: Prognostics and Systems Health Management V

Prognostics and Systems Health Management V

Location: Room 100/3023
11:30
Optimized predictive maintenance strategy for Railways using the iPN method.

ABSTRACT. The safety, availability, and running costs of railway transport are sensitive to the maintenance strategy, which makes optimizing it vitally important. The interrelations between railway assets make it challenging to find the optimal maintenance strategy. The intelligent Petri net (iPN) model, which merges Reinforcement Learning (RL) and Petri nets (PN), is an adequate tool for such a problem: the PN is suitable for modelling complex systems with heterogeneous information and any kind of distribution, while the RL can explore the possible policies and find an optimal one by interacting with the environment. This study found an optimal predictive maintenance strategy using the iPN model. The predictive maintenance strategy was built on the remaining useful life (RUL) of the railway tracks, which is determined by taking into account various track geometry and usage profile parameters. In addition, the proposed method relies on physics-based and data-driven models to describe defect initiation and defect evolution on a rail for a given rail traffic tonnage.
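The RL half of such a scheme can be pictured with a tabular Q-learning toy: discretised RUL states, a wait/maintain action pair, and invented costs and degradation probabilities (the actual iPN couples this learning with a Petri net model of the railway system).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 6                  # RUL bins: 5 = like new ... 0 = failed
ACTIONS = ("wait", "maintain")
Q = np.zeros((n_states, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    if a == 1:                                   # maintain: pay, restore RUL
        return n_states - 1, -50.0
    if s == 0:                                   # failure: heavy penalty, replace
        return n_states - 1, -500.0
    s2 = s - int(rng.random() < 0.4)             # stochastic degradation
    return s2, -1.0                              # small running cost

s = n_states - 1
for _ in range(200_000):                         # epsilon-greedy Q-learning
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

for s in range(n_states):
    print(f"RUL state {s}: best action = {ACTIONS[int(Q[s].argmax())]}")
```

With these costs the learned policy typically maintains only at low RUL states, which is the condition-based behaviour the abstract aims for.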

11:45
FBG thermal response analysis for electro-mechanical components monitoring in aerospace systems

ABSTRACT. Electromechanical components are extensively used in the most important aerospace systems. In general, their adoption in this sector is regulated by stringent requirements regarding reliability, safety and the capability to withstand even particularly hostile environmental conditions. To achieve and maintain this goal over time, it is necessary to carefully monitor specific physical parameters, such as temperature, strain and vibration. Thermal monitoring is particularly important: overheating of electromechanical components can cause malfunctions or irreparable damage to the entire system in which they operate. Detecting temperature instantaneously and accurately is therefore essential to carry out effective prognostics and diagnostics on the system. For this purpose, optical fiber-based sensors, such as FBGs (Fibre Bragg Gratings), can be particularly strategic. FBGs are photo-etched directly into the optical fiber and act as a filter for the radiation passing through them, reflecting a specific wavelength (called the Bragg wavelength). This wavelength depends on the sensor's geometry, so the optical structure is sensitive to temperature variations through the thermal expansion acting on it. FBG sensors have several advantages over other sensors for this application, including their small size and low weight, their high sensitivity, and above all their electrical passivity and immunity to electromagnetic interference. Furthermore, being a non-invasive optical fiber-based technology, it is possible to monitor temperature at many distant points using a single optical fiber and a single data acquisition system. However, installing FBG sensors requires defining a specific integration strategy on the component to avoid cross-sensitivity problems, and a dedicated thermal calibration of the sensor is always required. The current study examines the performance of FBGs, assessing their ability to read short-term thermal transients and comparing them to a conventional thermal probe (PT100). All instrumentation was placed in a climatic chamber and subjected to different thermal cycles. Specifically, an experimental set-up was developed to compare FBG sensitivity under different fiber integration strategies. Initially, two fibers were used: the first with the FBG sensor area covered by the external coating, and the second without this outer layer. These two fibers were mounted so that the fiber section containing the FBGs did not come into direct contact with the plate on which they were installed. This enabled the optical sensors to provide readings as independent as possible of the materials on which they were mounted. Subsequently, the same thermal cycle was applied to test FBGs with the fiber placed in different external casings. The performances of the various solutions adopted were duly compared and supported by statistical analysis. Tests have shown that optical sensors have an extremely high sensitivity and a much shorter reaction time than the PT100 probe. Moreover, it emerged that when FBGs are integrated into another material, they detect their support's temperature effectively in real time. The data collected in this work support considering the use of FBGs for thermal monitoring as strategic, being a minimally invasive and extremely accurate technology.
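For orientation, the following numeric sketch converts a Bragg-wavelength shift into a temperature change using textbook coefficients for silica fiber; an installed sensor would still need the dedicated calibration discussed above, and the measured shift is an invented example.

```python
# Bare-FBG temperature reading from a Bragg-wavelength shift:
# Delta(lambda)/lambda = (alpha + xi) * Delta(T)
alpha = 0.55e-6   # thermal expansion coefficient of silica [1/degC]
xi    = 6.7e-6    # thermo-optic coefficient of silica [1/degC]

lambda_b = 1550.0          # nominal Bragg wavelength [nm]
d_lambda = 0.112           # measured wavelength shift [nm] (example value)

d_T = d_lambda / (lambda_b * (alpha + xi))
print(f"sensitivity = {lambda_b * (alpha + xi) * 1e3:.1f} pm/degC, "
      f"Delta T = {d_T:.1f} degC")
```

The roughly 11 pm/degC sensitivity this yields at 1550 nm is why interrogators with picometre resolution can resolve sub-degree transients.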

12:00
A Fault Diagnosis Method Based on Temperature and Vibration Characteristics for High-speed Train Axle Box Bearing

ABSTRACT. As high-speed train running speeds continue to increase, higher demands are placed on the monitoring of vehicle running status and on operational safety. Excessive axle box bearing temperature shortens bearing service life, and the axle box bearing is closely related to the safe operation of the train, so characterizing the temperature behaviour of axle box bearings and developing fault diagnosis methods for them provides a valuable reference for engineering practice. For the fault diagnosis of high-speed train axle box bearings, a deep learning network-based method considering both temperature and vibration features is proposed in this paper. A two-channel CNN is constructed from a 2D-CNN, which takes the infrared image as input, and a 1D-CNN, which takes the vibration signal as input. Convolution and pooling are carried out in each branch, the outputs are stretched into feature vectors and spliced in an aggregation layer, and classification is carried out through a fully connected network layer. This method realizes an effective fusion of one-dimensional vibration features and two-dimensional temperature field features and improves classification accuracy. The performance of the proposed model is analyzed on a high-speed train bearing test. The results show that, using a deep learning network for bearing fault diagnosis without manual feature extraction, the average accuracy on the training set is 100% and that on the validation set is 98.02%. The infrared thermal imaging channel is more sensitive to lubrication faults than the vibration channel, while the vibration channel is better suited to mechanical faults; by fusing the two channels through the learning algorithm, the weakness of each sensor is eliminated and the fault diagnosis effect is improved markedly. Furthermore, when infrared images are used for fault diagnosis, information loss can be effectively reduced by not directly transforming the infrared image into a grayscale image.
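A structural sketch of such a two-branch network is given below (PyTorch, with illustrative layer sizes and input shapes; not the authors' exact architecture).

```python
import torch
import torch.nn as nn

class TwoChannelCNN(nn.Module):
    """2D-CNN branch for infrared images + 1D-CNN branch for vibration,
    spliced in an aggregation layer before the classifier."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.ir = nn.Sequential(                 # infrared image branch
            nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                        # -> 16*4*4 = 256 features
        )
        self.vib = nn.Sequential(                # vibration signal branch
            nn.Conv1d(1, 8, 64, stride=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, 16, stride=4), nn.ReLU(), nn.AdaptiveAvgPool1d(16),
            nn.Flatten(),                        # -> 16*16 = 256 features
        )
        self.head = nn.Sequential(               # aggregation + classification
            nn.Linear(256 + 256, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )
    def forward(self, img, sig):
        return self.head(torch.cat([self.ir(img), self.vib(sig)], dim=1))

model = TwoChannelCNN()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 4096))
print(logits.shape)                              # torch.Size([2, 4])
```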

11:30-12:45 Session 16B: S.26: Collaborative Intelligence in Safety Critical Systems II

The topics covered in this session should be at the intersections of the followings:

  • Modelling the dynamics of system behaviours for the production processes, IoT systems, and critical infrastructures (System Safety Engineering)
  • Designing and implementing processes capable of monitoring interactions between automated systems and the humans destined to use them (Human Factors/ Neuroergonomics)
  • Using data analytics and AI to create novel human-in-the-loop automation paradigms to support decision making and/or anticipate critical scenarios
  • Managing the Legal and Ethical implications in the use of physiology-recording wearable sensors and human performance data in AI algorithms.
11:30
Dynamic Influence Diagram-Based Deep Reinforcement Learning Framework and Application for Decision Support for Operators in Control Rooms

ABSTRACT. In today's complex industrial environment, operators are often faced with challenging situations that require quick and accurate decision-making. The human-machine interface (HMI) can display too much information, leading to information overload and potentially compromising the operator's ability to respond effectively. To address this challenge, decision support models are needed to assist operators in identifying and responding to potential safety incidents. In this paper, we present an experiment to evaluate the effectiveness of various recommendation systems in addressing the challenge of information overload. The case study focuses on a formaldehyde production simulator and examines the performance of an improved HMI with alarm rationalization and procedure display, as well as the use of an AI-based recommendation system utilizing a Bayesian network in conjunction with reinforcement learning. The preliminary results indicate the potential of these methods to aid operators in decision-making during challenging situations and to enhance process safety in the industry.

11:42
Revising the "ability corners" approach: A new strategy to assessing human capabilities in industrial domains

ABSTRACT. Human capabilities refer to an individual's innate and acquired abilities that enable them to complete a given task. These capabilities comprise physical, mental and cognitive abilities. In an industrial environment, the complexity and nature of duties vary, and different tasks require different levels and types of human capabilities. For example, on an assembly line, a task that demands assembling small and fragile parts requires a high level of manual dexterity and precision, whereas a duty that requires lifting and moving heavy components needs an elevated level of physical strength. Understanding the human capabilities required for a task and matching them with the worker's capabilities is crucial for designing and implementing tasks in industrial settings. The term "ability corners" describes equipment (hardware and software) for evaluating and measuring human capabilities in industrial workplaces. The set of "ability corners" initially considered consists of four tests (the Precision test, Both-Hands test, Methodology test, and Memory test) that evaluate the manual dexterity, memory retention capacity and physical skill of the operators. They were administered outside the working environment or workflow. The results of these tests are used to match workers with the specific capabilities needed at a particular workstation. This study proposes improving the "ability corners" by addressing some of their limitations: the insufficient number of tests for assessing human capabilities, and the lack of consideration for workers' motivation, personality traits and other factors that might affect their performance on the task. Furthermore, the study in which they were adopted does not consider the dynamic nature of assembly line work or the possible changes in workers' capabilities over time due to factors such as experience, training or fatigue. The present revision aims to enhance the accuracy and effectiveness of the "ability corners" approach by integrating new techniques, devices and benchmarks into the current method, to guarantee that the worker is well-suited for the job and can execute it safely and competently.

This work is part of our research activity within the Collaborative Intelligence for Safety-Critical Systems (CISC) project. https://www.ciscproject.eu/

11:54
Building Resilient Governance Frameworks for Human-Robot Collaboration: Towards a More Interdisciplinary Understanding of Risk and Ethics in European Regulation

ABSTRACT. As the pace of innovation in the fields of Artificial Intelligence (AI) and robotics picks up, and before its outputs become widely commercialised, the EU has recognised the need for adequate regulatory frameworks to govern them (Smuha et al., 2021). The new wave of legislative initiatives combines both risk and ethics as a basis for said regulation, with the Artificial Intelligence Act as a prime example of the former, and the Ethics Guidelines For Trustworthy AI, of the latter (Dunn & De Gregorio, 2022; Alemanno, 2012; Gellert, 2020).

In order to capture the effects of this regulatory trend, the research draws upon a thematic review of a sample of 110 peer-reviewed articles on the topic of AI policy and ethics, from a grand total of 2753 relevant entries found in the Web of Science database starting in 2016 [1]. The criterion for article selection is a focus on risk-/ethics-based assessments of AI-powered devices in human-robot collaboration in the workplace. A joint analysis of these publications reveals the different conceptions of, and relations between, risk, safety and resilience present in social and technical disciplines, and how they spill over into the legislative domain. This study helps us to understand how the two regulatory approaches (ethics and risk) can sometimes be at odds with each other (Smuha et al., 2021). However, it also reveals the common ground on which to develop a more all-encompassing and future-proof definition than is usually deployed in the EU acquis communautaire (Bisconti et al., 2022).

The emerging findings reveal that the key to reining in risk in an increasingly interconnected world lies in identifying the intersections between what was previously treated as separate spheres. This includes understanding that an ecological risk cannot be disentangled from its socioeconomic repercussions, just as a technical breakthrough cannot properly be assessed without bringing human factors into the equation (Berendt, 2022). Similarly, the reliability of any device in the context of a sociotechnical interaction can only be tested after accounting for the geopolitical order, or resistances from the workforce or the consumer base (Weinberg, 2022).

All in all, this paper sets out to forge compatibility between the two most prominent European frames of reference in the regulation of AI and robotics: the ethics- and risk-based approaches (Veale & Zuiderveen Borgesius, 2021). Simultaneously, it contributes to ongoing research in European regulatory circles towards identifying clear risk thresholds and proportionate measures in response to the inappropriate, abusive or irresponsible use of these data-driven applications (Barkane, 2022). Finally, the paper closes by testing the key findings on a wider range of AI-driven tools, with the purpose of helping inform the development of EU legislative frameworks which can successfully encompass the vast diversity of AI-powered devices (Ferenbok et al., 2016; Sandvik, 2020; Behnke, 2022).

[1] For the purposes of this research, 2016 was chosen as the starting year, as it is often referred to as the dawn of the current prolific period in AI academic research (Kerr et al., 2020).

[References in document attached]

12:06
Visual Mental Workload Assessment from EEG in Manual Assembly Task
PRESENTER: Miloš Pušica

ABSTRACT. The use of electroencephalography (EEG) to assess mental workload (MWL) has been the subject of many studies, and there have been many efforts to achieve task-independent MWL estimation, most recently in the field of machine learning (ML). However, the estimation still remains highly dependent on the specific task used for ML model training. Furthermore, there is a shortage of research focused on developing an estimator that would function across multiple different tasks within a specific task domain. The creation of the dataset described in this work is a step towards developing a task-independent ML estimator within the scope of visual cognition. An experiment designed for ML model training collects EEG signals for different levels of MWL during manual assembly in which assembly instructions must be visually processed by operators. It includes an idle operator state as well as two complexity levels of the visual instructions. EEG data are collected using a wireless EEG-recording cap that can easily be incorporated into everyday assembly line environments.
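A common baseline for such data, sketched below on synthetic signals, is band-power feature extraction (theta/alpha/beta) followed by a standard classifier; the channel count, band limits and simulated workload effect are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs, n_ch = 250, 8                      # sampling rate [Hz], channels

def band_power(x):
    """Mean PSD per channel in the theta, alpha and beta bands."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2, axis=-1)
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.concatenate(
        [pxx[..., (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in bands]
    )

def trial(load):                       # toy effect: load raises frontal theta
    x = rng.standard_normal((n_ch, fs * 10))
    t = np.arange(fs * 10) / fs
    x[:2] += (0.5 + load) * np.sin(2 * np.pi * 6 * t)
    return band_power(x)

X = np.stack([trial(l) for l in ([0] * 40 + [1] * 40)])
y = np.array([0] * 40 + [1] * 40)      # idle vs. complex instructions
clf = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```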

12:18
On the Construction of Numerical Models through a Prime Convolutional Approach
PRESENTER: Doaa Almhaithawi

ABSTRACT. In this paper we apply neural network models to a set of natural numbers in order to classify the congruence classes modulo a given integer m ∈ {2, 3, ..., 10}. We compare the performances of two kinds of architectures and of several input data representations. It turns out that these tasks are fully solved using a convolutional architecture and a special representation for the input data that exploits the prime factor decomposition of numbers.
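The input representation can be pictured as follows: each number is mapped to the exponent vector of its prime factorisation, on which the convolutional model then operates. The sketch below (using sympy, with an arbitrary prime cut-off) is one reading of that encoding, not the authors' exact pipeline.

```python
from sympy import primerange, factorint

PRIMES = list(primerange(2, 30))          # first 10 primes: 2, 3, ..., 29

def prime_encoding(n):
    """Exponent vector of n's prime factorisation over a fixed prime basis."""
    exponents = factorint(n)              # e.g. 12 -> {2: 2, 3: 1}
    return [exponents.get(p, 0) for p in PRIMES]

m = 7                                     # modulus whose classes are predicted
for n in (12, 35, 60):
    print(n, prime_encoding(n), "-> class", n % m)
```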

12:30
Collaborative intelligence, and Human-AI Teaming in Safety Critical Applications: Key challenges and opportunities

ABSTRACT. The continuous march of technology is increasingly opening new possibilities for the application of automation in domains as diverse as the process industry, manufacturing and autonomous driving. All these sectors anticipate huge benefits, in terms of cost, productivity and safety, from the large-scale implementation of advanced automated systems over the coming decade. However, few understand the importance of fully considering how the humans that are supposed to use these systems should interface with the technology to realise the anticipated benefits, and even fewer know how to address this problem. The human factors discipline promotes the consideration of human and organisational factors, particularly in safety-critical industries, where breakdowns between the automated system and the human operator can have fatal consequences. However, the methods and approaches used by the discipline have not kept pace with the development of technology (Vicente, 2010). The HF and HRA communities need to address this shortcoming and the wider agenda of developing new models of socio-technical system performance, so that human factors can be core to fast technology development. This ambitious aim will require the development of modelling and assessment capacities directly linked with the needs of industry and society, to test human-machine interface paradigms for safety-critical domains as shaped by AI. As stated by Wilson and Daugherty (2018), "Organizations that use machines merely to displace workers through automation will miss the full potential of AI… Tomorrow's leaders will instead be those that embrace collaborative intelligence, transforming their operations, their industries and, no less important, their workforces." In collaborative intelligent systems, humans need to perform three crucial roles: they must train machines to perform certain tasks; explain the outcomes of those tasks, especially when the results are counterintuitive or controversial; and sustain the responsible use of machines (by, for example, preventing robots from harming humans). On the other side, AI can amplify human cognitive strengths, for example by filtering data to provide users with information about the status of a safety-critical plant (e.g. a distillation column) and suggesting possible procedures to cope with plant upsets. Furthermore, AI systems in collaborative robotics can embody human skills to extend our physical capabilities. In these collaborations, the end users should not be subject to a decision based solely on automated processing, and there should always be human oversight. Humans need to be aware that they are interacting with an AI system, as both the AI system and the related human decisions are subject to the principle of explainability, as required by the EU guidelines on ethics in artificial intelligence (2019). The development of collaborative intelligence systems requires an interdisciplinary skillset blending expertise in AI with expertise in Human Factors, Human Reliability Analysis, Neuroergonomics and System Safety Engineering. This paper presents some of the key challenges and opportunities.

11:30-12:45 Session 16C: Safety and Reliability in Oil and Gas Industry I
11:30
A study on safety analysis during the construction period of Oil and gas reservoir-type storage based on STAMP-STPA
PRESENTER: Xiaowen Fan

ABSTRACT. In order to improve the safety level during the construction period of oil and gas reservoir-type gas storage, and to overcome the shortcoming of traditional safety analysis methods, namely that they cannot comprehensively consider the interactions between components in non-linear complex systems, a risk analysis method based on STAMP-STPA is used to qualitatively analyse the unsafe control behaviour of the construction process of oil and gas reservoir-type gas storage from three perspectives: control, feedback and coordination. Based on the STAMP model, the interaction problem of system components is treated as a control and feedback problem, and a control and feedback model for construction operations is established; unsafe control behaviour is then analysed with the STPA method. Taking the underground drilling operation and the installation of surface engineering facilities as examples, the key risk factors analysed include defects in the installation of the gas injection compressor, deviations in the test pressure indicating poor casing gas tightness, construction-technology defects destroying the stability of the reservoir cavity, errors in the special drilling operation through the cap rock, and inadequate technical handover. At the same time, the fault tree method and the HAZOP method were used to analyse the drilling and construction operations. A comparison of the results of the three risk analysis methods showed that the STAMP-STPA method was 38.5% more effective in identifying risks in construction operations and provided more comprehensive results in terms of information transfer and personnel psychology. The results show that the STAMP-STPA-based safety analysis method fits well with the construction phase of oil and gas reservoir-type storage, fully integrates human, technical and organisational factors, and effectively solves the difficult problem of risk identification with non-linear and non-stationary characteristics during the construction phase, providing strong support for risk management during this phase.


11:45
Selection of oil-well configurations at design phase: proposal of integrity and production indicators.

ABSTRACT. According to the Oil and Natural Gas Production Bulletin published by the Brazilian oil and gas regulator (the National Agency of Petroleum, Natural Gas and Biofuels, ANP), in November 2022 Brazil produced 3.978 million barrels of oil equivalent per day. About 97% of this production comes from offshore operations. The pre-salt areas stand out among the offshore fields, with a flow of 2.964 million barrels of oil equivalent per day, about 74.5% of Brazil's total production. Given the importance of the pre-salt fields, an analysis of the offshore operations carried out there is in order. The pre-salt fields are located approximately 290 km from the coast, in water depths of over 2,000 m, and the reservoirs lie about 7,000 meters below the sea surface. It is easy to appreciate the enormous risks of this operation: a blowout in a pre-salt well could leak an average of 20,000 barrels per day (62% of the average flow in the Macondo well accident of 2010). The technological and logistical challenges involved are also evident: on the technological side, corrosion, pressure, and temperature, among others; on the logistical side, the distance from the coast makes any repair or replacement of equipment difficult and costly. A vital element in these fields is the oil well. The design of these wells represents an important phase of their life cycle, since the project entails operating and maintenance conditions that cannot easily be changed later, and operation must be safe and profitable to justify the costs incurred in construction and operation. During field development, various configurations are presented as alternatives in the design phase of the well. These configurations are then compared to choose the one expected to perform best throughout the entire life cycle. In order to evaluate the configurations, indicators must be defined that portray the evaluators' expectations about the quality of the well in several respects; two relevant aspects are production and the integrity of the proposed configuration. This article presents the results of a survey of production and integrity indicators for offshore oil wells. The first section of the paper introduces the problem; the following section summarizes the literature review process and its findings; the third section discusses the rationale adopted to define the functions represented by the indicators; the fourth section introduces a proposal for the indicators, showing their formulations; and the last section presents conclusions and recommendations for future work.

12:00
IDENTIFICATION OF KEY FACTORS IN THE DECOMMISSIONING OF OFFSHORE OIL AND GAS INSTALLATIONS.
PRESENTER: Joe Ford

ABSTRACT. Decommissioning of ageing installations continues to be a crucial concern for the offshore oil and gas industry. Within the next decade, several structures are anticipated to undergo the decommissioning process. With the removal of these installations comes the management of waste materials in line with current regulations: prior to the reuse, recycling, or disposal of any materials, they must be decontaminated of hazardous waste. This paper builds on previous research (Ford et al., 2021), which identified knowledge of current decommissioning legislation as one of the critical issues. Expert judgements have been analysed using the analytic hierarchy process (AHP). This, in turn, has been used to refine a Bayesian network to further determine the key factors in the decommissioning process.

Ford, JL, Loughney, S, Blanco-Davis, E, Shahrokhi, A, Calder, J, Ogilvie, D and MacEachern, E (2021) Benchmarking and Compliance in the UK Offshore Decommissioning Hazardous Waste Stream. In: Proceedings of the 31st European Safety and Reliability Conference (ESREL 2021). pp. 2555-2561. (Proceedings of the 31st European Safety and Reliability Conference, 19 September 2021 - 23 September 2021, Angers, France).
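
For readers unfamiliar with the analytic hierarchy process step, a minimal Python sketch of how priority weights and a consistency check are typically derived from a pairwise comparison matrix is given below. The 3x3 matrix is an invented placeholder, not the experts' actual judgements from the paper.

import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # assumed pairwise comparisons
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

w = np.ones(len(A)) / len(A)
for _ in range(100):                  # power iteration -> principal eigenvector
    w = A @ w
    w /= w.sum()

lam_max = (A @ w / w).mean()          # principal eigenvalue estimate
n = len(A)
CI = (lam_max - n) / (n - 1)          # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
print("weights:", w, "CR:", CI / RI)  # CR < 0.1 is conventionally acceptable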

12:15
A Knowledge Graph Method for Risk Factor Analysis of Underground Gas Storage
PRESENTER: Mingyuan Wu

ABSTRACT. In recent years, the data from underground gas storage stations have become more complex and larger in scale. This paper proposes a knowledge graph method for risk factor analysis that exploits textual information, such as production reports, from the operation period of underground gas storage. The technique extracts relationships from textual data of the gas storage operation period, identifies risk factors using a Bi-directional Long Short-Term Memory network with a Conditional Random Field layer (Bi-LSTM-CRF), finds the connections among them, and builds a knowledge graph of risk factors from the extraction results using the Neo4j graph database. In addition, this paper compares Bi-LSTM-CRF with other models; its precision, recall, and F1 metrics are improved by 3.6%, 2.9%, and 3.2%, respectively. The results show that the Bi-LSTM-CRF risk identification method has the highest accuracy rate, 94.3%, and the best results in unstructured text extraction for gas storage reservoirs. This paper argues that the proposed knowledge graph-based risk factor analysis method can characterize the relationships between risk factors and effectively improve the risk management capability of underground gas storage sites.
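
As an illustration of the graph-construction step only, the sketch below loads already-extracted (risk factor, relation, risk factor) triples into Neo4j with the official Python driver. The triples, connection URI and credentials are placeholders, and the upstream Bi-LSTM-CRF extraction is assumed to have produced the triples.

from neo4j import GraphDatabase

triples = [("casing corrosion", "CAUSES", "gas leakage"),      # assumed extractor output
           ("gas leakage", "LEADS_TO", "pressure anomaly")]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for head, rel, tail in triples:
        # relationship types cannot be query parameters, hence the interpolation
        session.run(
            "MERGE (a:RiskFactor {name: $h}) "
            "MERGE (b:RiskFactor {name: $t}) "
            "MERGE (a)-[:%s]->(b)" % rel,
            h=head, t=tail)
driver.close()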

12:30
Efficiency evaluation of emergency resource allocation for urban gas pipeline leakage accidents based on DEA
PRESENTER: Sanfeng Yang

ABSTRACT. In the daily operation of city gas pipelines, due to the characteristics of the gas itself and the influence of the environment in which the pipeline is located, leakage accidents are prone to occur and can cause serious consequences. When a gas pipeline leaks, the accident is handled according to the emergency plan. Emergency resources are an important part of this plan: complete emergency resources can substantially reduce repair time, thereby reducing the economic losses caused by the leakage accident. The goals are to improve the utilization rate of emergency resources for urban gas pipeline leakage accidents, reduce their redundancy, and improve the efficiency of emergency repair. The purpose of this study is to evaluate the efficiency of emergency resource allocation for third-party damage leakage accidents in city gas pipelines using DEA models. Through the analysis of the efficiency values, under-investment or redundancy in each gas company's resource allocation is identified, and corresponding suggestions are made to improve the gas companies' emergency response capability for gas leakage accidents and to provide a reference basis for emergency resource allocation for urban gas pipeline leakage accidents.
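
For concreteness, here is a minimal sketch of an input-oriented CCR DEA model of the kind the study applies, solved as a linear program with SciPy. The decision-making units, inputs (e.g., repair crews and vehicles) and output (completed repairs) are invented placeholders, not the gas companies' data.

import numpy as np
from scipy.optimize import linprog

X = np.array([[4, 2], [6, 3], [5, 5]], dtype=float)   # inputs, one row per DMU
Y = np.array([[10], [12], [11]], dtype=float)         # outputs, one row per DMU

n = len(X)
for o in range(n):
    # decision variables z = [theta, lambda_1 .. lambda_n]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(-1, 1), X.T]
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    print(f"DMU {o}: efficiency = {res.x[0]:.3f}")    # 1.0 means efficient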

11:30-12:45 Session 16D: Occupational Safety I
Location: Room 2A/2065
11:30
Injuries at work: a methodology for outlining and analyzing the “Seveso sector”

ABSTRACT. The "Seveso" legislation aims at the prevention of major-accident hazards involving dangerous substances in industrial plants, based on well-defined thresholds. Inail (National Institute for Insurance against Accidents at Work) is a technical body in charge of regulatory enforcement. At national level, the technicians provide for the fulfilment of the decree and support the institutional actions with a specific research activity also to provide targeted methodological guidelines. In the "Seveso" field, most of the studies and publications focus on environmental or plant-engineering problems, while the aspects connected to workers are not always evident. Instead, this paper wants to focus on the workers and it wants to allow to find this hidden dimension by applying a specific methodology. Through the use and the processing of appropriate statistical data and information, contained in internal and external to Inail databases, the main variables connected with the accidents occurred in “Seveso” field are analysed. The idea of this methodology was born in the transition period between the previous legislative framework and subsequent new outlined legislation (Seveso III Directive and Italian Decree 105/15). The latter, while confirming the main guidelines adopted for the control of the plants with major accident hazard, has totally reorganized the matter in the Seveso area. One of the most important innovations was the new planning introduced for carrying out the inspection activity and the formulation of the various attachments in which technical and operational indications were collected. An important aspect is that the implementation of a management system to control the risk of major accidents is an obligation for all types of establishments and over time, a system that is properly implemented strengthens the awareness necessary for management of the risks present in its production reality. Hand in hand with this aspect, the control system carried out by the competent Authorities and technical Bodies is now effective, have a new structure and organization than in the past. As previously said, in many studies in the “Seveso” field, the aspects related to health and safety of workers, in terms of occurrence of accidents, are not widely evident. The developed methodology, on the other hand, aims to carry out an analysis of accidents that occurred in this sector, which was properly articulated by the authors in four areas: production and distribution of metals, chemical industry, storage, depots and distribution, other activities. Moreover, it allows to overcome difficulties caused by the transversality of the ATECO codes, i.e. the Italian classification for economic activities arising from the NACE nomenclature created by Eurostat: the dangerous substances indicated in the Seveso Directive, in fact, are present in several industrial processes that fall into different sectors of economic activities, and the most of them are not subject to the Seveso legislation. In this papaer the results of the analysis carried out for the period 2017-2020 will be presented. Moreover, some considerations will be provided not only about the trend of accidents in the above period, but also on the previously analyzed one (2012-2016).

11:45
Occupational Health exposure risk factors related to Lower Back Pain Amongst Drivers/Operators of Articulated Vehicles and non-Drivers of Articulated Vehicles at the Ngqura Container Terminal, in the Eastern Cape.
PRESENTER: Martha Chadyiwa

ABSTRACT. Background: Lower back pain remains one of the most common work-related complaints in developed and developing countries. The purpose of this study was to examine how body mass index, duration of driving and the vehicle seat's condition were associated with lower back pain in drivers/operators of articulated vehicles and non-drivers of articulated vehicles at the Ngqura Container Terminal, Eastern Cape. Objectives: The objective of this study was to examine the prevalence of occupational health exposure risk factors associated with lower back pain in drivers/operators of articulated vehicles and non-drivers of articulated vehicles at the Ngqura Container Terminal, in the Eastern Cape. Methods: Primary data were obtained through a structured questionnaire administered by interview. The data were cleaned and entered into software for analysis. Frequencies and percentages were identified using descriptive statistical analysis. Crude odds ratios were calculated using SPSS, and multivariate logistic regression was used to obtain adjusted odds ratios for the occupational risk factors associated with lower back pain. Confidence intervals were used to establish statistical significance within the variables. The data were then presented using figures and tables. Results: Overall, 350 (60.4%) of 579 participants belonged to the drivers of articulated vehicles category. Male participants were significantly more likely to be drivers of articulated vehicles (AOR; 95%CI). Most of the drivers of articulated vehicles were Black African (81.43%), and Black Africans were significantly more likely to be drivers of articulated vehicles (AOR; 95%CI). The majority of drivers of articulated vehicles were overweight or obese, these categories combined making up 82.86% of the body mass index category. There was a significant difference in years of working at the Ngqura Container Terminal between drivers of articulated vehicles and non-drivers of articulated vehicles, as illustrated by a p value <0.0005. Among the seat-related items, none were found to be significantly more likely to cause lower back pain for drivers of articulated vehicles. In the distribution of drivers and non-drivers of articulated vehicles by difficulty of activities, several activities were found to be significant. Conclusion: In this study race, gender, years of driving vehicles or operating machines, income and obesity played a large role in lower back pain. The risk factors for lower back pain were being male of Coloured or Black African race, working at the Ngqura Container Terminal for a period of 5 to more than 10 years, earning between R15 000 and R30 000, and being obese.
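
A hedged sketch of the adjusted-odds-ratio computation described in the methods (the study itself used SPSS; statsmodels is shown here for illustration). The toy data and variable coding are assumptions, not the survey data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({                       # toy stand-in for the survey data
    "lbp":    [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],   # lower back pain (outcome)
    "driver": [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],   # articulated-vehicle driver
    "bmi":    [31, 24, 25, 33, 30, 27, 35, 23, 29, 28],
    "years":  [8, 2, 4, 10, 9, 5, 12, 3, 7, 6],
})
X = sm.add_constant(df[["driver", "bmi", "years"]])
fit = sm.Logit(df["lbp"], X).fit(disp=0)  # multivariate logistic regression

aor = np.exp(fit.params)                  # adjusted odds ratios
ci = np.exp(fit.conf_int())               # 95% confidence intervals
print(pd.concat([aor.rename("AOR"), ci], axis=1))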

12:00
Approaches to assessing hand and wrist ergonomics in the workplace - a comparative case study
PRESENTER: Lubos Kotek

ABSTRACT. Physical stress occurs in every activity of the work process. If this load is excessive, it poses a serious problem for the body. As a consequence of the strain, painful problems occur which initially reduce the worker's comfort; over a long period of time, they cause a decrease in productivity and thus financial consequences for the employer. Therefore, it is important to take preventive measures and avoid damage to health. The chosen methods are a key component in the successful implementation of ergonomic assessment and design of work systems. For upper-limb ergonomics assessment, the most commonly used methods are index methods evaluated by researchers based on camera footage, simulation of work activities in specialized software, automatic assessment with spatial data capture by optical means, the strain gauge method, and automatically integrated electromyography. In recent years, methods for assessing activities in virtual reality (such a method was developed at our workplace) have also started to be used. This article aims to compare the most commonly used methods for hand ergonomics assessment.

12:15
A novel tool for preliminary risk assessment of climate change on workers’ health and safety in outdoor worksites

ABSTRACT. There has been considerable research on the public health and environmental aspects of Climate Change, but the literature on its potential impacts on the health and safety of outdoor workers has received limited attention. Outdoor workers, including, for example, agricultural, construction, and transportation workers, and other workers exposed to outdoor weather conditions, are at increased risk of heat stress and other heat-related ailments, extreme weather, and occupational injuries due to Climate Change-related issues. Climate Change is increasing environmental temperatures and extreme weather events, affecting air pollution and the distribution of pesticides and pathogens. The implementation of enhanced occupational health and safety measures that can cope with the effects of Climate Change on workers is a key step towards the adaptation perspective that must be embraced to ensure a safer and more sustainable future for workers. In this paper, a new tool named Climate Change - House of Safety (CC-HoS) is designed to address new risks and to carry out risk assessment effectively, considering specifically the risks related to Climate Change. The CC-HoS, derived from the House of Safety (HoS), investigates the direct impacts (i.e., warming, extreme weather, ...) and indirect impacts (i.e., air pollution, UV exposure, vector-borne disease, ...) of Climate Change on workers' health and safety in outdoor worksites. This tool identifies and assesses risks through the Risk Priority Number (RPN), in terms of Severity, Detectability, and Occurrence criteria, while determining the most suitable safety devices and preventive/protective measures to manage the identified risks. The proposed approach is applied to a company operating in the agricultural sector. The effectiveness and usefulness of the tool for selecting the most effective technical solutions to mitigate risks related to Climate Change are presented in the case study.
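
A worked example of the RPN screening the tool relies on, using the standard FMEA-style product of Severity, Occurrence and Detectability; the hazard list and 1-10 scores are illustrative assumptions, not the CC-HoS case-study values.

hazards = {
    "heat stress":       {"S": 8, "O": 7, "D": 3},   # invented scores on 1-10 scales
    "UV exposure":       {"S": 6, "O": 8, "D": 4},
    "vector-borne risk": {"S": 7, "O": 4, "D": 6},
}
# rank hazards by RPN = Severity x Occurrence x Detectability
for name, s in sorted(hazards.items(),
                      key=lambda kv: -(kv[1]["S"] * kv[1]["O"] * kv[1]["D"])):
    rpn = s["S"] * s["O"] * s["D"]
    print(f"{name}: RPN = {rpn}")                    # higher RPN -> treat first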

11:30-12:45 Session 16E: Autonomous Driving Safety I
Location: Room 100/4013
11:30
Components and their failure rates in autonomous driving

ABSTRACT. Autonomous driving has been among the most actively researched topics over the past decades. Today, automotive vehicles are already equipped with driving assistance systems with partial autonomous driving capabilities. Thus, the need for quality assessment of automated driving functions becomes increasingly vital. The hardware and software used must undergo rigorous assessments with regard to reliability and safety. This must be done under careful consideration of driving scenarios and environmental conditions. The safety of the intended functionality (SOTIF) concept, developed under the corresponding ISO 21448 standard for road vehicles, lies at the center of these considerations. SOTIF deals with the question of how a target function is to be specified, developed, verified and validated so that it can be considered sufficiently safe. As a starting point, we suggest considering the individual failure probabilities of each of the components comprising the autonomous driving system. Based on the failure probabilities of each component, it is possible to make assumptions about the failure probability of the system as a whole and even identify possible deficiencies. In our contribution, we aim to identify the typical components needed for an autonomous vehicle and further provide a comprehensive overview of failure probabilities for said components. Certainly, it would go beyond the scope of this work to create a statistically firm data basis by individually testing all components until failure, especially considering that the failure probability of each component varies over time and with environmental conditions. Instead, the relevant factors with regard to the typical failure modes are identified and relevant data is accumulated from publications which reflect the current state of the art.
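
As a back-of-the-envelope illustration of how component failure probabilities roll up to a system-level figure, the sketch below assumes a series architecture, independent failures and exponential lifetimes; the rates are invented, not the accumulated literature data.

import math

failure_rates = {"camera": 1e-5, "lidar": 2e-5,
                 "radar": 1e-5, "compute": 5e-6}   # per hour, invented
t = 8.0                                            # mission duration in hours

# component failure probabilities, assuming exponential lifetimes
p = {c: 1 - math.exp(-lam * t) for c, lam in failure_rates.items()}

# series system: it fails if any component fails (independence assumed)
p_sys = 1 - math.prod(1 - pi for pi in p.values())
print(f"P(system failure over {t} h) = {p_sys:.2e}")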

11:40
The many faces of safety cases
PRESENTER: Thor Myklebust

ABSTRACT. This paper discusses the input documents and project decisions that are important when developing a safety case. The discussion is based on interviews with seventeen companies, all engaged in building safety cases for commercial products. The majority of the companies are Norwegian and Swedish; however, we have also interviewed companies from Denmark, the UK, the USA and Turkey. We discuss issues such as when in the project to start developing a safety case, what the important inputs are, and what the roles of the required standards are. Some issues are not included, e.g. AI systems; the main reason is that none of the companies we interviewed develop AI systems. We also discuss important issues such as the purpose of the safety case, safety case maintenance and the role of reuse when developing a safety case. We further discuss the relationship between a safety case and a trust case, and how a safety case can be used in communication and to build trust in a system. Our further work will focus on two important areas: traceability between the system and the safety case, which is important in order to keep the safety case up to date during system changes, and the possibility of expanding the "case" idea to bridge the communication gap between software developers and customers or users, e.g. by developing a "usability case" or a "maintenance case". This is an extension of the observation that a safety case is used as a means of communication between safety experts and software developers.

11:50
Ensuring Safety in Highly Automated Vehicles: A Review of Testing and Validation Methods for Robustness and Reliability

ABSTRACT. The future of mobility is set to be reformed, as the rapidly increasing use of driver assistance systems and highly automated vehicles (HAVs) shows their great potential. The use of deep neural networks in autonomous driving systems has led to significant progress in this area. However, the increase in accidents involving highly automated vehicles highlights the need for effective testing and validation methods to increase the overall safety of these vehicles. With many technology companies and manufacturers aiming to put Level 4 and 5 vehicles into operation soon, the safety of HAVs remains a major concern. Rigorous testing and validation against potential failures and misbehaviour are required to ensure the reliability and robustness of these systems. This paper provides an overview of the state of the art in testing and evaluation methods for machine learning-based HAVs. A literature review on these topics is provided to give valuable insights to researchers, practitioners and policymakers. As such, the review describes different types of validation, verification and testing methods, including real-world testing, simulation testing, hardware-in-the-loop testing, adversarial robustness, and methods used for explainability and interpretability in AI. The advantages and limitations are discussed and current challenges are highlighted. Finally, open research questions and future directions in the field are identified.

12:00
Dynamically resolving and abstracting Markov models for system resilience analysis
PRESENTER: Ivo Häring

ABSTRACT. For the modeling of quasi-static systems with minor failures for failure prediction and maintenance, Markov models have proven very successful. Finite discrete-state models can be considered best practice in this domain, often even assumed to be homogeneous. The question arises whether Markov models are also capable of modeling the resilience of systems subject to major disruptions, where large fractions of the system and its functionality fail. To this end, analytical propositions are made that define model extensions. An initial scalable system is defined, including expected refinements and abstractions. In further phases, major disruptions occur. The disruptions can cause branching points opening routes to model extensions or abstractions. Also independently of disruptions, new states and transitions are introduced or merged to adapt model granularity. Overall system behavior can be interpreted in terms of system improvement with or without new system states or functionalities and corresponding transitions: reaching the ex-ante system state as before the disruption, reaching a deteriorated system state, or finally various degraded and failed overall system states. Definitions such as states, absorbing states and critical transitions are reinterpreted or extended to allow for dynamically resolving or abstracting the Markov model. The main results are extended definitions and derivations when compared to traditional Markov models. Based on the analytical expressions, an example is provided where the formalism can be applied with advantage to autonomous driving safety assessment by considering increasing or decreasing levels of resolution of subsystems or subfunctions.
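
A toy numerical illustration of the dynamic-resolution idea, assuming nothing about the authors' formalism: a small discrete-state Markov chain is propagated, then its degraded state is split into two finer states with an assumed 60/40 mass split.

import numpy as np

# states: 0 = nominal, 1 = degraded, 2 = failed (absorbing)
P = np.array([[0.95, 0.04, 0.01],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])

p = np.array([1.0, 0.0, 0.0])
for _ in range(20):                   # propagate 20 steps
    p = p @ P
print("before refinement:", p)

# refinement after a disruption: split state 1 into 1a (recoverable) and
# 1b (deteriorated), redistributing its mass and transitions (assumed values)
P2 = np.array([[0.95, 0.024, 0.016, 0.01],
               [0.20, 0.70,  0.05,  0.05],
               [0.02, 0.05,  0.73,  0.20],
               [0.00, 0.00,  0.00,  1.00]])
p2 = np.array([p[0], 0.6 * p[1], 0.4 * p[1], p[2]])
print("after refinement :", p2 @ P2)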

12:10
On the foundation of autonomous mobility: Establishing fundamental principles for a digital driver
PRESENTER: Bård Myhre

ABSTRACT. Autonomous mobility is regarded by many as the ultimate vision for future, green transportation, and driverless operations are therefore pursued for several transportation modes, such as road, rail and water. The scientific and technological communities within each of these domains do, however, employ different definitions of autonomy or automation: road transportation mainly adheres to the six "SAE Levels of Driving Automation" [1], rail automation is characterized using five "Grades of Automation" [2], while the maritime domain seems to converge on five "levels of autonomy for navigation functions" [3] or "degrees of automation" [4]. While these definitions to a certain extent describe the functionality of the vehicle, train or vessel, they do not provide any assistance or direction when it comes to designing autonomous systems. Furthermore, they do not touch upon aspects such as hand-over between humans and automation, nor do they provide a conceptual framework for handling the shift between remote human control and autonomous control.

Against this background, this paper aims to address the shortcomings of existing definitions of autonomy and automatic operations by introducing the concept of a "digital driver" (alternatively, a "digital seaman" or "digital train driver"). The concept of a "digital driver" allows for a directed assessment of the actual operating functions, by effectively separating the "driving function" from the "mechanical functions" in the same way a human driver is separated from the car that she or he drives. More importantly, it invites us to establish defining principles for the "digital driver", providing a constructive foundation for considering technological capabilities, regulatory requirements, and human-machine interaction in a concrete and non-abstract way.

The paper will first give an introduction to existing definitions of autonomy, before introducing the idea of a "digital driver" as a conceptual basis for autonomous mobility across all domains. Then, the paper will suggest some fundamental principles regarding the "digital driver" and describe why these principles are selected and their implications. This will include considerations on how a digital driver can be located both locally and remotely, how to handle interactions between humans and digital drivers (including interactions between different digital drivers), and how to conceptualize fallback functions within the paradigm of digital drivers. Finally, the paper will elaborate on the technological, commercial and regulative implications of employing these principles.

References [1] SAE International (2018) Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles (Surface Vehicle Recommended Practice: Superseding J3016 Sept 2016) [2] IEC 62290-1:2014. Railway applications - Urban guided transport management and command/control systems - Part 1: System principles and fundamental concepts [3] DNV GL (2018). Class guideline DNVGL-CG-0264: Autonomous and remotely operated ships [4] Bureau Veritas (2019). Guidance Note NI 641: Guidelines for Autonomous Shipping

11:30-12:45 Session 16F: S.07: Computational and Simulation-based Risk Assessment II
Location: Room 100/5017
11:30
Influence of transformation capacity expansion of a substation on the distribution network resilience: a study of a substation in the metropolitan region of Recife-Brazil.
PRESENTER: Thais Lucas

ABSTRACT. In the face of climate change caused by global warming, the energy transition has become a relevant aspect of energy planning, aiming at less dependence on fossil fuels. The Brazilian energy mix is already predominantly renewable, and further investments in renewable energy expansion are expected by the end of the decade. The electrical system must therefore be structured to operate safely, efficiently and reliably, ensuring the continuity of electrical and energy supply and safeguarding security of supply. To this end, the expansion and implementation of substations is essential, since they contain the equipment responsible for the continuity of the power flow and for the voltage transformation of the electricity distributed to the population. In addressing these requirements in system planning, the concept of resilience is highly appropriate. Dealing specifically with substations, which are responsible for the transformation, protection, control and switching of electrical energy up to supply to the final consumer, is a way of mitigating problems related to the discontinuity of energy supply; electrical studies are therefore needed to investigate the resilience of the substation and of the power distribution network in which it is embedded. In this context, this article investigates, through a detailed literature review, how the concepts of resilience and reliability are approached in the case of substations. Finally, a methodology to assess reliability in this context is proposed and applied to a real substation, considering disruption scenarios and analyzing how they impact operation.
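
One common way to quantify resilience of the kind such studies survey is the normalized area under a performance curve Q(t) across a disruption; the sketch below uses a synthetic curve, not measured substation data.

import numpy as np

t = np.linspace(0, 48, 97)                 # hours
Q = np.ones_like(t)                        # delivered/demanded power ratio
Q[(t >= 10) & (t < 14)] = 0.4              # disruption drops performance
recovering = (t >= 14) & (t < 24)
Q[recovering] = 0.4 + 0.06 * (t[recovering] - 14)   # linear recovery ramp

R = np.trapz(Q, t) / (t[-1] - t[0])        # 1.0 would mean fully resilient
print(f"resilience index = {R:.3f}")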

11:45
Deep Behavioural Replication of Markov models for Autonomous Cars using Neural Networks

ABSTRACT. With the advancement of autonomous technologies, autonomous vehicles are on the brink of a revolution. With the increased autonomy of vehicles and their dependence on self-driving functions, the necessity to assess the reliability of the individual functions and of the overarching vehicle also increases. Classical reliability and safety assessment for such systems includes Failure Modes, Effects and Diagnostic Analysis (FMEDA), classical Fault Tree Analysis (FTA) and classical Markov models, which are best-practice tools recommended by IEC 61508, ISO 26262, and SOTIF ISO 21448. All these tools assume in their classical arrangement that future states depend only on the current state, i.e. the Markov memoryless property. More realistic approaches that allow for rule-based transitions, in which the memoryless property no longer applies, include Monte Carlo simulation of Markov chains. These, however, are significantly more computationally expensive.

In terms of their potential applicability as an in-the-loop component for assessing the safety of autonomous driving, Markov models essentially become the bottleneck of the overall toolchain. For a component-level architecture of an autonomous vehicle with an extremely high number of states, the computational overhead becomes enormous. In this context, neural networks are excellent tools for learning the behavior and patterns of multidimensional data. Once trained on sparsely generated data from the Markov model being replaced, they essentially mimic the Markov model and can be used instead. It can be argued that deep learning methods are themselves computationally intensive, which is indeed true, but their computational overhead is mainly associated with the training phase before deployment. The fast prediction phase makes them an excellent in-the-loop component, for instance to assess system behavior in case of partial degradation.

To this end, in the current work we show an approach to replicate the behavior of non-homogeneous Markov models applied in the field of autonomous vehicles using deep learning methods. The results compare the estimations and the drift in the results, assuming the Markov model to be the ground truth. It is shown that the deep learning model is capable of learning and generalizing the behavior of the Markov model. The learned network is also tested using test data.
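
A minimal sketch of the replication idea under toy assumptions: a neural network (here scikit-learn's MLPRegressor, not necessarily the authors' architecture) is fitted to map (time, initial state) to the state distribution of a non-homogeneous Markov chain.

import numpy as np
from sklearn.neural_network import MLPRegressor

def P(k):                                  # time-dependent transition matrix
    a = 0.05 + 0.001 * k                   # degradation rate grows with time
    return np.array([[1 - a, a, 0.0],
                     [0.0, 1 - 2 * a, 2 * a],
                     [0.0, 0.0, 1.0]])

# training data: exact distributions from the Markov model on a sparse grid
X, y = [], []
for s0 in range(3):
    p = np.eye(3)[s0]
    for k in range(200):
        if k % 5 == 0:
            X.append([k / 200.0, s0]); y.append(p.copy())
        p = p @ P(k)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(np.array(X), np.array(y))

pred = net.predict([[0.5, 0]])[0]          # fast surrogate query at k = 100
print("NN surrogate:", np.round(pred, 3))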

12:00
Airfox UPRT Flight Simulation with Wake Vortex Encounter Events
PRESENTER: Jonathan Pugh

ABSTRACT. A multidisciplinary team of academics, flight safety experts, pilots, flight simulation engineers and human factors specialists adapted the Boeing 737 Next Gen simulator Airfox UPRT at AMST, Ranshofen, Austria, to be capable of "injecting" a variable wake vortex encounter (WVE) in cruise flight and measuring its effect on a type-rated operating crew. As part of the EU SAFEMODE project (2019-2022), this was used to carry out a validation study for the use of new Air Traffic Control (ATC) cruise wake alerting procedures. Developed at DMU, the flight simulation model for the extended flight envelope allowed continuation of flight simulations in case of the onset of high angles of attack and stall conditions following a WVE. Along with facilitating the validation of new ATC procedures, the flight data also provided insights into pilot upset prevention and recovery techniques (UPRT) in the era of recurrent academic and flight training in UPRT.

12:15
Reliability analysis of European power system assets

ABSTRACT. Operating a safe and reliable power system is imperative for the uninterrupted supply of consumers. A large interconnected system can be severely destabilized by failures of interconnector lines and large generating units. Although work has been done in the field of power system component reliability, most of the literature focuses on North American generating units. Therefore, identifying the failure behavior of critical European assets can be invaluable for modeling and research purposes. In this work, we analyze outages of generation and transmission system assets obtained from the ENTSO-E transparency platform. We utilize statistical analyses and advanced clustering methods to examine the failure behavior and identify influencing factors. Our results show that the reliability of generating units is primarily impacted by the technology type and the size of the generator. Nuclear units have the lowest forced outage factor of 1.2%, while the highest forced outage factor is observed in fossil units, with 3.4%. This is likely due to the high level of scrutiny present in nuclear safety, as nuclear units have the highest planned outage factor of 18.9%. Regarding transmission system assets, we find that AC interconnectors are remarkably more reliable than DC interconnectors. The most reliable assets appear to be transformer units, closely followed by internal transmission lines. The forced outage factor of transformers is 0.036%, an order of magnitude lower than the factor for AC interconnectors, which is 0.11%. Additionally, we identify the Sweden-Germany interconnection as the most frequently disrupted interconnection in the European system, experiencing up to 26 planned disruptions per year. The results obtained from this work can provide key insights for the operation of the European transmission system, helping system operators and researchers identify vulnerabilities.
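
For clarity on the metric reported throughout, here is a small sketch of how forced and planned outage factors can be computed from outage records; the records are invented, not ENTSO-E data.

period_hours = 8760.0                       # one unit-year
outages = [                                 # (type, duration in hours)
    ("forced", 30.0), ("planned", 1500.0), ("forced", 75.0),
]
# outage factor = outage hours of a given type / period hours
fof = sum(d for typ, d in outages if typ == "forced") / period_hours
pof = sum(d for typ, d in outages if typ == "planned") / period_hours
print(f"forced outage factor = {fof:.2%}, planned = {pof:.2%}")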

11:30-12:45 Session 16G: S.27: Advances in Maritime Autonomous Surface Ships (MASS) II
11:30
Alarm and hand-over concepts for human remote operators of autonomous ships

ABSTRACT. Maritime autonomous ship systems are increasingly in the focus of maritime research institutions, especially in China and Norway. A lot of effort is put into the development of technical systems based on artificial intelligence and machine learning. Still, for a long time we will need to rely on highly automated systems onboard keeping the human operator in the loop, albeit remotely. For long open-ocean transits it is likely that one operator will oversee several ships, and with mature automation the chance is that operators will seldom need to intervene, thus losing skills and "ship sense". The situation for the human operator in the remote operation centre will likely involve several of the ironies of automation described by Bainbridge as early as 1983 (deskilling, out-of-the-loop syndrome, automation surprise, etc.). The safety and reliability of an autonomous ship system will rely on this teaming between humans and automation. This concept paper summarises some of the Human Factors issues facing designers of the remote workplace in the Remote Operation Centre (ROC), where human operators after a long period of idleness are suddenly summoned to their workstation by an alarm from one of their autonomous ships. How can we make this a good workplace?

11:45
Identifying Test Scenarios for Simulated Safety Demonstration using STPA and CAST
PRESENTER: Raffael Wallner

ABSTRACT. Assuring safety for new technologies like a Maritime Autonomous Surface Ship (MASS) or an Uncrewed Surface Vessel (USV) is challenging due to their complexity and varying operational environments. Safety demonstrations in simulations may be used to verify operational safety, but it is impossible to test all possible scenarios. The paper proposes an approach to identify critical scenarios for scenario-based safety demonstrations based on System Theoretic Process Analysis (STPA). STPA studies the whole system including interactions between components in the hazard analysis and is, therefore, well-suited for systems like MASS or USV, involving interactions of multiple components, sub-systems, the environment, and humans. The presented approach identifies critical scenarios using STPA and generates simulation scenarios from the identified critical, as well as presumably safe, scenario spaces. In case of incidents or unexpected critical scenarios that have been uncovered during the simulated tests, a Causal Analysis using System Theory (CAST) is conducted. Thus, it is possible to improve safety in new design iterations based on the results of the evaluation. The proposed approach is demonstrated in a simplified example of a USV during remote operations.

12:00
Situated and event-discrete decision-making support system applied to remotely operated vehicles

ABSTRACT. The monitoring of human-machine interaction, the planning and prediction of possible behaviors, and the detection of missing actions or other errors in advance all contribute to the safety of human-machine systems. The development of supervision support algorithms may focus on decision making and on the related detection of critical decision-making situations. With knowledge of the consequences, the effects of possible human errors can be evaluated in advance. Based on this online-generated knowledge about possibly upcoming consequences, the human-machine interaction can be influenced, for example by additional warnings. This is of special relevance for remotely operated vehicles. Among other topics, the publicly funded project FernBin addresses the supervision of the captain's actions in inland shipping. The focus is on modeling possible human-guided driving maneuvers several action steps ahead, detecting human errors, and detecting non-optimal behaviors by defining optimal ones. The final system supervises the captain's behavior and supports her or him in critical situations based on the automated decision-making support system. The technical core of this contribution is Situation-Operator-Modeling (SOM), an event-discrete approach used to model the captain-vessel interaction of a remotely guided vessel as a graph-based model. Using this approach, sequences of possibly connected actions can be generated, describing the human interaction options and therefore possible future behaviors; this allows the detection of not-allowed actions, of omitted but required actions, and of intended as well as unintended upcoming situations. The approach is applied to experimentally generated real situations within the context of the FernBin project in combination with the research vessel "Ernst Kramer".
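
A generic sketch of the graph-based enumeration idea (not the SOM formalism itself): possible action sequences are generated a few steps ahead from the current situation and flagged if they contain a forbidden action. The action graph is a made-up inland-navigation example.

actions = {                       # situation -> {action: next situation}
    "approach":  {"reduce_speed": "slow", "hold_course": "approach"},
    "slow":      {"signal": "signalled", "overtake": "risky_pass"},
    "signalled": {"pass_port": "clear", "overtake": "risky_pass"},
}
forbidden = {"overtake"}          # e.g., not allowed in this river section

def sequences(situation, depth):
    # enumerate all action sequences up to the given look-ahead depth
    if depth == 0 or situation not in actions:
        yield []
        return
    for act, nxt in actions[situation].items():
        for tail in sequences(nxt, depth - 1):
            yield [act] + tail

for seq in sequences("approach", 3):
    flag = "WARN" if forbidden & set(seq) else "ok  "
    print(flag, " -> ".join(seq))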

12:15
Integration of human factors-related knowledge into decision support systems applied to assisted and automated operating vehicles using examples for inland vessels

ABSTRACT. Within human-machine systems, human behavior contributes substantially to the safety of the overall system. Detecting upcoming critical situations in the human-machine interaction, or knowing critical actions in advance, combined with situated observation of human behavior, will allow the design of future assistance systems realizing a new generation of human-machine systems that permit a fluid transition from assistance and intervention up to taking over command. For most professionally operated complex systems, such as power plants, aircraft, or vessels, the workflow of human operation is highly regulated and can therefore be assumed to be formalized. Upcoming automation systems which include human assistance and monitoring will allow detailed sensor-based situation awareness in the sense of knowing the consequences of possible action alternatives. Using existing individualized knowledge about preferences and experiences from former interactions, as well as human reliability measures for principal behaviors from the literature, a new quality of assistance can be generated, focusing on reliability and safety as objectives. In this contribution a Situation-Operator-Modeling (SOM) approach is used to describe the captain-vessel interaction and to represent the captain's behavior as a graph-based model. Examples from the FernBin and SafeBin projects are considered. SOM-based action spaces, consisting of possible captain behaviors leading to a meaningful desired final situation, are analyzed online and evaluated with respect to unsafe and unreliable action components and/or sequences, so that from the manifold of possible sequences the best options can be identified and suggested, while critical and harmful ones can be flagged by warnings etc. The reliability-based analysis of the captain's actions will enable safer driving behavior and a reduction of accidents and dangerous situations due to suitable warning and interaction strategies of the assistance system. Based on experimentally obtained examples, the contribution explains the modeling and especially the evaluation of the action components with respect to literature knowledge and experienced, individualized trust values for the evaluation of options.

12:30
Considerable Risk Sources and Evaluation Factors for Artificial Intelligence in Maritime Autonomous Systems
PRESENTER: Changui Lee

ABSTRACT. Alongside MASS implementation, Artificial Intelligence (AI) is becoming a prominent issue. As part of this, it is necessary to identify possible risk sources and prepare a reasonable evaluation mechanism. International standards regarding AI, machine learning and risk assessment already exist and can be interpreted to fit the maritime sector. This article aims to identify risk sources for AI in the maritime sector, based on the document on AI concepts and terminology (ISO/IEC 22989), such as level of automation, lack of transparency and explainability, complexity of environment, system life cycle issues, system hardware issues, and technology readiness. It also proposes evaluation factors to be applied practically to those risk sources (robustness, reliability, resilience, controllability, explainability, predictability, transparency, fairness, jurisdictional issues, precision, recall, accuracy, and F1 score), drawn from risk management and AI and Machine Learning (ML) related international standards. The article also reviews two MASS-related guidelines from Det Norske Veritas (DNV) and the American Bureau of Shipping (ABS) to establish how and which risk sources are currently contemplated. The proposed combination of risk sources and evaluation factors can be applied to evaluate AI practically, after adjustment to the specific application context. All kinds of MASS risk stakeholders, such as risk and test managers, equipment makers, ship owners, and classification societies, are potential users of these factors and methods.
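
The classification-oriented evaluation factors named above are standard; as a quick reminder, the sketch below shows how precision, recall, accuracy and F1 relate, with invented confusion-matrix counts.

tp, fp, fn, tn = 42, 8, 6, 44     # confusion-matrix counts (illustrative)

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
accuracy  = (tp + tn) / (tp + fp + fn + tn)
f1        = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.3f} recall={recall:.3f} "
      f"accuracy={accuracy:.3f} F1={f1:.3f}")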

11:30-12:45 Session 16H: Chemical and Process Industry

This session discusses risk and risk modelling issues in the Chemical and Process industries.

11:30
Mitigation of Risks of Corrosion and Delamination by Surface Pre-treatment
PRESENTER: Dana Prochazkova

ABSTRACT. For the needs of practice, the surfaces of metal equipment and objects need to be safe, so that they do not endanger the people who use them and can perform the tasks for which they are intended. Metal surfaces are significantly damaged by corrosion and delamination, i.e. the detachment of surface layers from the inner layers. Both of these phenomena significantly reduce the service life of equipment and objects. The extent of the damage depends both on the method of use and on the aggressiveness of the environment in which the metal equipment or objects are located; therefore, pre-treatment of surfaces is carried out. The submitted article examines the effect of pre-treatment of the surface of metal samples on reducing the risk of corrosion and delamination using a coating of powdered plastics. The article describes selected methods of powder coating using three pre-treatment methods, namely blasting, phosphating, and zircon-based nanopassivation. The aim of the experiments is to find out which surface pre-treatment leads to better resulting surface properties, i.e. higher stability, abrasion resistance and durability in terms of: mechanical tests; and resistance to corrosion and subsequent delamination of the surface or coating. In the experiment, all samples with the surface pre-treatments used were exposed in the salt chamber for the same exposure time, and after the exposure the mechanical properties, corrosion and delamination of the coating were evaluated according to the relevant standards. The results show interesting findings for certain types of surface pre-treatment, and it is not clear which pre-treatment is the best overall. Analysis of the experiments showed that some pre-treatments reduce the risks of corrosion and delamination of coatings more than others: some proved very good in terms of mechanical properties and others very good in terms of corrosion and delamination of the coating. From a practical point of view, the pre-treatment of coatings with powdered plastics should be carried out considering the application and the environment in which they will be used.

11:45
THE ITALIAN INSPECTORATE SYSTEM OF SEVESO – INDUSTRIAL EMISSION DIRECTIVE INSTALLATIONS: COMMON POINTS AND IMPORTANCE OF COLLABORATION

ABSTRACT. This paper, starting from an overview of the legislative and regulatory framework implementing the Seveso Directive and the Industrial Emission Directive, and presenting the industrial installations in Italy covered by these directives, aims to highlight comparisons, overlaps and points in common between them. First, the main improvements and innovations adopted through the national decrees compared to the Seveso and Industrial Emission Directives are described. Then, a detailed description of the typology and number of industrial installations, based on the latest available data, is presented, along with the number of inspections carried out by the inspectors and the type and number of cases of non-compliance detected. The paper focuses on common elements among installations under the Seveso Directive and the Industrial Emission Directive, such as inspection systems, human and economic resources involved, performance indicators and environmental objectives to be met, in order to understand how control is guaranteed and how the main Italian inspectorate system works in these industrial sites. Furthermore, some technical aspects are analyzed from both the Seveso and Industrial Emission Directive points of view; in particular, technical issues related to storage tanks in oil refinery installations are examined, such as floating roof sinking, waterproofing of the containment basins, double bottoms of the tanks and possible leakage from the bottom of the tank. These evaluations have been performed with regard to the Safety Management System and the Best Available Techniques applied, in order to highlight the close cooperation and relationship needed, in both technical and managerial terms.

12:00
Response to Risks in fuel, oil, and chemicals storage facilities aiming at improving Reliability – A Case Study
PRESENTER: Márcio Mendes

ABSTRACT. Currently, most remote and isolated communities depend on the reliability of fuel, oil, and chemical storage facilities. The size and complexity of storage facility plants and the nature of the products handled mean that analysis and control of the risks involved are required. Statistics show that process accidents and the losses from major accidents in the oil and gas processing and storage industry have not decreased over the years. Current risk approaches for storage tanks emphasize improving the reliability of the design rather than maintaining safe operation. At the last European Safety and Reliability Conference, held in 2022 in Dublin, the author presented a method for improving reliability by conducting risk assessment in the daily operation of chemical, fuel, and oil storage facilities based on a combination of PFMEA and BBN. The method allows sensitivity analysis and prioritization of preventive and corrective measures to minimize the probability of failure and maintain safe operation. This study complements the one presented at ESREL 2022 and focuses specifically on the method for implementing the actions that address the high-scoring risks. As a result, the study shows how to overcome stakeholders' resistance to change in order to address the most significant risks in the process and improve the storage system's reliability. The conclusion is that response actions can be implemented effectively based on the proposed implementation method. The contribution is significant since the proposed method allows process optimization and risk reduction in the storage of chemical products and permits decision-makers to assign funds to critical activities, implementing actions that can impact the safety of the process and system reliability. The present study augments the knowledge of process, maintenance, and safety engineers/managers and helps process improvement. It can impact the company's PFMEA and management-of-change processes and help in understanding performance and safety during fuel storage operations. Although conducted in a specific fuel storage facility, the study can be generalized to other industries and fields of work whose safety is affected by change issues resulting in waste, rework, and unnecessary energy consumption. The study can change the practice and thinking of professionals dealing with PFMEA in companies' operations.

12:15
A short-cut tool to manage NaTech risk in chemical industry due to flood

ABSTRACT. Industrial sites are frequently built close to large rivers and coasts in order to facilitate the supply of raw and production materials and the exchange of goods. However, these locations expose the plants to the risk of being involved in floods or coastal surges. The hazard can be considerably worsened when the establishment is a Seveso industry, i.e. an establishment classified as at major accident hazard according to Italian Decree Law no. 105/2015, which transposes Directive 2012/18/EU, also known as the Seveso III Directive. Accidental scenarios due to the release of hazardous materials triggered by natural phenomena are named NaTech (Natural Technological) events. In Europe, during the last decades, the frequency and severity of extreme natural phenomena involving industrial plants at major accident hazard and giving rise to NaTech events have shown a growing trend (Italian Natech Working Group, 2016). Furthermore, due to its geological conformation, Italy is particularly sensitive to these natural phenomena, and hundreds of Seveso plants are exposed to flood risk. This work aims to present a short-cut tool for the dynamic assessment of NaTech risk triggered by flood waves and floods, in order to support emergency managers in choosing the best strategies for actions mitigating the consequences of the event. It consists of a Bayesian network in which the frequency and magnitude of the events, together with the vulnerable elements, defined as discrete variables, make it possible to quantify a NaTech risk index which is updated in real time according to local conditions. The approach was applied to an Italian case study.
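
A minimal two-node illustration of the Bayesian updating behind such a risk index, with invented probabilities (the paper's actual network, variables and conditional probability tables are not reproduced here): the release probability is recomputed as the local flood alert level changes.

p_flood = {"low": 0.01, "medium": 0.10, "high": 0.40}   # P(flood | alert level)
p_release_given = {True: 0.05, False: 0.0001}           # P(release | flood?)

def risk_index(alert):
    # marginalize over the flood node: P(release) = sum_f P(release|f) P(f)
    pf = p_flood[alert]
    return pf * p_release_given[True] + (1 - pf) * p_release_given[False]

for alert in ("low", "medium", "high"):                  # real-time update
    print(f"alert={alert}: P(release) = {risk_index(alert):.5f}")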

References Italian Natech Working Group. Metodologie per la gestione di eventi Natech. In Proceedings of the Valutazione e Gestione del Rischio Negli Insediamenti Civili ed Industriali 2016, Rome, Italy, 13–15 September 2016. (In Italian).

11:30-12:45 Session 16I: Uncertainty Analysis
11:30
Stochastic Model Updating and Model Class Selection for Quantification of Different Types of Uncertainties
PRESENTER: Takeshi Kitahara

ABSTRACT. Stochastic model updating has been increasingly employed in various engineering applications to quantify parameter uncertainty from multiple measurement datasets. It assigns a joint probability distribution to the model parameters and infers its hyper-parameters by minimizing the stochastic discrepancy between the measurement datasets and the corresponding model outputs. The second author and his co-workers have recently developed a Bayesian updating framework in which the probability distribution is approximated by staircase density functions and the stochastic discrepancy between the measurement datasets and model outputs is quantified using the Bhattacharyya distance. This framework does not need any prior knowledge about distribution formats; hence, it is regarded as a distribution-free approach. On the other hand, measurement uncertainty should also be accounted for in stochastic model updating, since measurement is typically performed under hard-to-control randomness. However, it is difficult for stochastic model updating to distinguish different types of uncertainty in the measurement datasets, and measurement uncertainty becomes embedded in parameter uncertainty. To address this issue, we propose to employ the Bayesian model class selection framework, in which different types of probabilistic models are used to represent different types of uncertainties and the most appropriate model is determined based on the associated evidence. In this sense, the proposed framework does not require prior knowledge about sources of uncertainty in the measurement datasets. The proposed framework is demonstrated using a simple numerical example. We consider three types of synthetic datasets: (i) datasets with measurement uncertainty, where the true system response is contaminated by additive Gaussian noise; (ii) datasets with parameter uncertainty, generated by assigning the true parameter distribution; (iii) datasets with both measurement and parameter uncertainty, in which the multiple system responses generated by assigning the parameter distribution are contaminated by additive Gaussian noise.
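
A sketch of how the Bhattacharyya distance between a measurement dataset and model outputs can be estimated via common histogram bins, as used to drive the updating; both sample sets here are synthetic.

import numpy as np

rng = np.random.default_rng(0)
measured  = rng.normal(1.00, 0.10, 500)    # stand-in measurement dataset
simulated = rng.normal(1.05, 0.12, 500)    # stand-in model outputs

bins = np.linspace(min(measured.min(), simulated.min()),
                   max(measured.max(), simulated.max()), 31)
p, _ = np.histogram(measured, bins=bins)
q, _ = np.histogram(simulated, bins=bins)
p = p / p.sum(); q = q / q.sum()           # bin probability masses

bc = np.sum(np.sqrt(p * q))                # Bhattacharyya coefficient
print("Bhattacharyya distance:", -np.log(bc))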

11:45
An Investigation into the Transition criteria of the Transitional Markov Chain Monte Carlo method for Bayesian inference
PRESENTER: Adolphus Lye

ABSTRACT. One of the advanced Monte Carlo techniques employed to perform Bayesian inference of epistemic model parameters is the Transitional Markov Chain Monte Carlo sampler. A key characteristic of its sampling approach is the use of "transitional" distributions that allow samples to converge iteratively from the prior to the final posterior; the selection of the transition step size is therefore of critical importance. The selection criterion proposed by Ching and Chen (2007) is that the optimal transition step size is one that realizes a 100% coefficient of variation in the statistical weights of the samples in a given iteration. The work presented here considers an alternative selection criterion for the transition step size, based on the effective sample size as a metric: the optimal step size is one which achieves an effective sample size equal to half the total sample size. To provide a comparative study, the standard Transitional Markov Chain Monte Carlo sampler, along with a modified sampler imbued with the alternative selection criterion, is implemented to infer the Coulomb friction parameter of a single-degree-of-freedom building structure whose dynamics obey a non-linear differential equation. From there, the sampling performance is compared on the basis of the evolution of the tempering parameter and the standard error of the estimates.
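
A sketch of the two step-size rules being compared, under the assumption that per-sample log-likelihoods are available: bisection finds the tempering increment at which either the coefficient of variation of the weights reaches 100% (standard rule) or the effective sample size drops to N/2 (alternative rule). The log-likelihood values are synthetic.

import numpy as np

rng = np.random.default_rng(1)
loglik = rng.normal(-10.0, 2.0, 1000)              # synthetic log-likelihoods

def stats(d_beta):
    w = np.exp(d_beta * (loglik - loglik.max()))   # stabilized plausibility weights
    cov = w.std() / w.mean()                       # coefficient of variation
    ess = w.sum() ** 2 / (w ** 2).sum()            # effective sample size
    return cov, ess

def step_size(rule, lo=1e-8, hi=1.0):
    for _ in range(60):                            # bisection on the increment
        mid = 0.5 * (lo + hi)
        cov, ess = stats(mid)
        too_big = cov > 1.0 if rule == "cov" else ess < len(loglik) / 2
        if too_big:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print("CoV = 100% rule:", step_size("cov"))
print("ESS = N/2 rule :", step_size("ess"))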

12:00
Identification of the probability density function of a dimension from samples with measurement errors

ABSTRACT. Various scientific and engineering applications rely on the collection of data, which is subsequently processed and utilized for decision making. In this context, the accuracy of measurements plays a crucial role in ensuring the quality and reliability of the entire process. Unfortunately, measurements are often prone to inherent errors that can significantly impact the precision of the results. This paper introduces a method to effectively account for and, to some extent, correct such measurement errors, acknowledging that they cannot be entirely avoided.

The approach adopted in this study is probabilistic in nature: Probability Density Functions (PDFs) are utilized to describe both the collected data and the measurement errors. Specifically, the focus of this paper is on identifying the PDF associated with the true value of the collected data, thereby rectifying the effects of the measurement errors. An additive error is considered here, so the PDF of the collected data is given by the well-known formula for the sum of two independent random variables. The PDF of the measurement error is assumed to be known.

The proposed method takes the collected data as its input, which consists of a set of realizations encompassing the measurement errors. The PDF associated with these data can be identified (using e.g. maximum likelihood estimation, kernel density estimation, etc.).

The core of this paper is the identification of the PDF associated with the true value. This PDF lies within an integral, and its identification is therefore a challenging task; the operation is here referred to as a deconvolution, by analogy with signal processing. It is not possible to apply traditional integration strategies, such as Monte Carlo simulation or Gaussian quadrature: the integral would be transformed into a sum in which each term, involving the PDF of the true value, is unknown. An alternative strategy is proposed here, based on a local approximation of the PDF, to circumvent this issue.
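
For reference, the additive-error relation being inverted is the convolution f_Y(y) = \int f_X(x) f_E(y - x) dx. The sketch below is only a numerical forward check that a candidate true-value PDF f_X, convolved with the known error PDF f_E, yields a valid PDF for the collected data; all PDFs are synthetic Gaussians, not the paper's case.

import numpy as np

dx = 0.01
x = np.arange(-5, 5, dx)
# candidate true-value PDF f_X and known error PDF f_E (both Gaussian here)
f_X = np.exp(-0.5 * ((x - 1.0) / 0.3) ** 2) / (0.3 * np.sqrt(2 * np.pi))
f_E = np.exp(-0.5 * (x / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))

f_Y = np.convolve(f_X, f_E, mode="same") * dx   # discretized convolution
print("f_Y integrates to", (f_Y.sum() * dx).round(3))   # should be ~1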

12:45-13:45 Lunch Break
14:35-15:50 Session 18A: S.36: Artificial intelligence-based reliability and maintenance solutions for complex systems I
Location: Room 100/3023
14:35
An active learning reliability analysis framework based on multi-fidelity surrogate model

ABSTRACT. A limit state function (LSF) can be used to define structural reliability, but complex structures frequently correspond to LSFs with high computational costs, and their reliability analysis requires numerous LSF calls. In this context, a surrogate model can effectively approximate the true LSF at a lower computational cost, and active learning can further be introduced to reach the required modeling accuracy with fewer LSF calls by adaptively selecting learning samples through a machine learning algorithm. Both respond to the challenge of balancing accuracy and efficiency in structural reliability analysis (SRA) and are rapidly becoming essential methods for the efficient evaluation of structural reliability. Accordingly, active learning surrogate model-based SRA is used in a wide range of fields, including aeronautics and aerospace, the automotive industry, civil engineering, critical infrastructures, land transportation, maritime and offshore technology, the nuclear industry, the railway industry and water transportation.

Nevertheless, the same analysis can correspond to multiple models with different paradigms (e.g., experiment, theory, simulation, big data), and different paradigms – or even the same paradigm – usually correspond to different fidelities. Detailed and coarse paradigms are generally treated as high-fidelity (HF) models with low model uncertainty and low-fidelity (LF) models with low computational cost, respectively. A similar challenge therefore also exists in surrogate modeling, i.e., obtaining the quantity of interest (QOI) at an acceptable cost level by selecting models of appropriate fidelity. As an effective way to deal with this problem, the multi-fidelity (MF) surrogate model has received widespread attention in the performance evaluation of complex structures, fusing models of different accuracies to reduce the computational demand and effectively balance the prediction performance and modeling cost of the surrogate model.

To address both of these common challenges, this paper proposes an active learning multi-fidelity surrogate modeling framework for SRA: firstly, a multi-fidelity Kriging (MFK) or multi-fidelity Gaussian process (MFGP) model is built based on surrogate modeling and multi-fidelity theory; secondly, the learning functions EIF (expected improvement function), EFF (expected feasibility function), U and H, together with their stopping criteria, are applied to implement active learning; finally, Monte Carlo simulation (MCS) is used to evaluate the reliability. The framework incorporates information of different fidelities in an online, data-driven manner to trade off high prediction accuracy against low computational cost by combining HF and LF models. In three numerical examples of different dimensions, two multi-fidelity surrogate models with four learning functions are tested and compared with the corresponding single-fidelity (SF) models, validating the effectiveness of the proposed framework. For the engineering example of an aero-engine gear in a contact fatigue test, the HF and LF LSFs are constructed from the standard and simplified formulations, respectively; the proposed framework is applied to its SRA and its superiority is demonstrated. All results show that the MF model based on this framework reduces computational costs more efficiently than the SF model without compromising accuracy.
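As a point of reference for the active learning loop described above, here is a minimal single-fidelity sketch (the well-known AK-MCS pattern with the U learning function); the multi-fidelity extension would replace the ordinary Gaussian process with an MFK/MFGP model. The toy limit state function, kernel choice and stopping threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def lsf(x):  # toy limit state function standing in for an expensive model
    return x[:, 0] ** 3 + x[:, 1] + 6.0

X_mc = rng.normal(size=(50_000, 2))   # Monte Carlo population
X_doe = rng.normal(size=(12, 2))      # small initial design of experiments
y_doe = lsf(X_doe)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
for _ in range(50):
    gp.fit(X_doe, y_doe)
    mu, sigma = gp.predict(X_mc, return_std=True)
    u = np.abs(mu) / np.maximum(sigma, 1e-12)     # U learning function
    if u.min() >= 2.0:                            # common stopping criterion
        break
    x_new = X_mc[np.argmin(u)]                    # most ambiguous sample
    X_doe = np.vstack([X_doe, x_new])
    y_doe = np.append(y_doe, lsf(x_new[None, :]))

gp.fit(X_doe, y_doe)                              # final refit
mu, _ = gp.predict(X_mc, return_std=True)
pf = np.mean(mu < 0)                              # MCS failure probability
print(f"estimated Pf = {pf:.3e} after {len(X_doe)} LSF calls")
```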

14:47
Deep Reinforcement Learning for Space Power Source Regulation

ABSTRACT. As one of the key subsystems of space equipment, the space power supply system must provide continuous and stable electric energy during orbital operation and perform the bus regulation function. How to adapt to the time-varying network state of the power supply system under function switching, use limited energy resources to complete the space mission, and effectively reduce the risk of cascading failures among the internal components of the dependent network is an important issue in the design of the space power distribution network. For the space power supply control system represented by the S4R type, this paper proposes a dynamic analysis model of network cascading faults based on multiple charge-discharge adjustment tests, built on a complex-network-theoretic evaluation of the structural reliability of the main error amplification system. Furthermore, it models actual operating characteristics such as photovoltaic conversion and power regulation in the dynamic analysis of cascading faults, and analyzes the impact of the real-time change process on the overall reliability of the system. The depth and breadth of cascading faults in the space power grid vary greatly under different initial sudden faults, which further increases the cascading fault risk of a few nodes and links in the distribution and power supply system. To balance the risk across modules, the trained algorithm must identify and make long-term optimal distribution decisions, continuously re-adapting and re-optimizing the time-varying power distribution and regulation status. This research analyzes the self-discharge problem of on-orbit batteries under various charging combination modes in a solar cell–power controller–battery power supply system with fully regulated bus technology. The regulation problem of the space power supply system is modeled as a Markov decision process, and a power regulation algorithm based on deep reinforcement learning is proposed to achieve intelligent monitoring and diagnosis of power supply network faults through different modes of bus regulation and filtering technology, reducing the overall cascading fault risk of the space power distribution and supply network while maintaining a reasonable utilization rate of stored power.
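To make the MDP formulation concrete, below is a toy environment sketch in the spirit of the abstract; every state variable, dynamic and reward weight is an illustrative placeholder, not the authors' S4R model. A DRL agent (e.g. DQN or PPO from a standard library) would then be trained against it.

```python
import numpy as np

class BusRegulationEnv:
    """Toy MDP for spacecraft bus regulation. State: (solar input, battery
    state of charge, load demand); action: fraction of solar power routed
    to battery charging. All dynamics below are illustrative."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.t, self.soc = 0, 0.5

    def _sun(self):
        return max(np.sin(2 * np.pi * self.t / 96), 0.0)  # orbit day/night

    def reset(self):
        self.t, self.soc = 0, 0.5
        return np.array([self._sun(), self.soc, 0.45])

    def step(self, action):
        action = float(np.clip(action, 0.0, 1.0))
        sun, load = self._sun(), 0.4 + 0.1 * self.rng.random()
        to_bus = (1.0 - action) * sun
        deficit = max(load - to_bus, 0.0)       # battery must cover this
        drawn = min(deficit, self.soc / 0.05)   # limited by stored energy
        self.soc = float(np.clip(self.soc + 0.05 * (action * sun - drawn),
                                 0.0, 1.0))
        # penalise unserved load (bus instability) and deep discharge,
        # the latter acting as a proxy for cascading-failure risk
        reward = -10.0 * (deficit - drawn) - 5.0 * (self.soc < 0.1)
        self.t += 1
        return np.array([sun, self.soc, load]), reward, self.t >= 960
```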

14:59
Reliability modeling and optimization for satellite DC-DC converter under complex failure mechanism

ABSTRACT. DC-DC converters are used in electronic systems to implement functions such as voltage isolation, conversion and power transfer, and are widely used in fields such as lighting, national defense, aerospace, rail transportation and new energy. Statistical analyses of satellite failures show that the probability of the power system operating stably for more than 5 years is less than 20%, while the probability of failure in the first year exceeds 50%. This shows that satellite power supply reliability has become an important factor in the failure or delay of space missions. The working profile (including the environmental profile) is a comprehensive sequential description of device or product functions, events and environment, which directly affects product reliability and lifetime. Fluctuations in the DC-DC converter's working conditions (ambient temperature, load, etc.) affect the working stress of its components, resulting in significant differences in component life and reliability under different working conditions. In addition, during long-term operation of the DC-DC converter, parameter degradation caused by damage to component materials also affects the overall reliability. For a long time, DC-DC converter reliability assessment has been based mainly on historical fault data, component logic function block diagrams and empirical formulas, ignoring the impact of the working profile and degradation effects; none of the existing reliability assessment methods is fully applicable to switching power supplies under complex working profile conditions. This article analyzes the complex failure mechanisms of the DC-DC converter. Its reliability is difficult to assess because of two key issues – the impact of multiple performance degradations and the uncertainty of the mission profile – and these two types of problems are strongly coupled, which challenges reliability evaluation and optimization. Therefore, this paper studies DC-DC converter reliability evaluation and design optimization considering the working profile. On the one hand, comprehensively considering the working profile, failures of key components and performance degradation, a reliability model under the complex failure mechanisms of the DC-DC converter is established to enable accurate reliability evaluation. At the same time, the reliability optimization design over the entire life cycle of the DC-DC converter is carried out, striking a favorable trade-off between cost and reliability.
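A minimal sketch of how a profile-dependent reliability model might be evaluated by Monte Carlo is given below, combining a temperature/load-accelerated random failure mode with a lognormal wear-out (degradation) mode; all rates, the Arrhenius constant and the profile itself are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative working profile: (ambient temperature in K, load factor,
# phase duration in hours) for each sequential mission phase
PROFILE = [(263.0, 0.4, 2000.0), (300.0, 0.8, 6000.0), (320.0, 1.0, 800.0)]
EA_OVER_K = 4000.0   # assumed activation energy over Boltzmann constant (K)
BASE_RATE = 2e-6     # assumed random failure rate at 300 K, full load (1/h)

def phase_rate(temp_k, load):
    # Arrhenius temperature acceleration times a power-law load term
    return BASE_RATE * np.exp(-EA_OVER_K * (1 / temp_k - 1 / 300.0)) * load ** 2

def mission_ttf(n=100_000):
    """Monte Carlo time to failure over the working profile: piecewise-
    constant random failure rates competing with a lognormal wear-out
    (parameter degradation) failure mode."""
    ttf, t0 = np.full(n, np.inf), 0.0
    for temp_k, load, dur in PROFILE:
        cand = t0 + rng.exponential(1.0 / phase_rate(temp_k, load), size=n)
        ttf = np.minimum(ttf, np.where(cand <= t0 + dur, cand, np.inf))
        t0 += dur
    wearout = rng.lognormal(mean=np.log(9000.0), sigma=0.3, size=n)
    return np.minimum(ttf, wearout)

print("R(5000 h) ≈", np.mean(mission_ttf() > 5000.0))
```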

15:11
Probabilistic fatigue life prediction of an aero-engine turbine disc considering the random variable amplitude loading

ABSTRACT. Fatigue life prediction has always been a focus for aero-engine components. In recent years, probabilistic statistics combined with physics of failure has gradually become the principal method for fatigue life prediction of turbine components. Compared with deterministic methods, probabilistic methods quantify the multi-source uncertainties of turbine components – including physical variability, load scatter, statistical uncertainty and model uncertainty – and can provide more accurate predictions to support the safe operation and maintenance strategy of an aero engine. However, actual fatigue experimental data for most turbine components are scarce due to their high cost, and even the intermediate data used for fatigue life prediction are insufficient. In general, load uncertainty can be represented by load parameters with a given distribution, determined from a few test data or field experience. With these load variables, digital simulation can be conducted and the fatigue life distribution obtained through sampling and calculation, where the loading level of each sample is treated as invariable. This simplification greatly improves computing efficiency and the final result covers the correct data points, but the scatter of fatigue life may deviate substantially from engineering practice. The main reason for this error is that the physical properties of a turbine component are fixed before it enters service, whereas the load level of each loading cycle fluctuates randomly during operation. To close the gap in load characteristics between each individual actual turbine component and each selected sample in probabilistic life prediction, this paper proposes a probabilistic fatigue life prediction method considering random variable-amplitude loading. In this method, physical uncertainty and load uncertainty are represented by strain or stress responses with different statistical characteristics, determined by digital simulation. With the response scatter attributed to its different sources, the physical properties of each individual are sampled and the individual's load history is then generated randomly. By assessing the fatigue life of each simulated sample under its random load history and fitting the life distribution to the resulting life data, the probabilistic life prediction of a turbine component can be achieved accurately. A case study on the probabilistic life prediction of a turbine disc is presented, and the result is compared with the evaluation based on intermediate data from actual engineering, showing that the results are quite similar. A comparison between the proposed method and the traditional probabilistic prediction method also shows that the proposed method better captures the life scatter of the turbine disc. The proposed method can provide accurate fatigue life predictions by digital simulation while partially relieving the data demands on turbine components, which has potential value for engineering application.
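The double sampling loop described above (physical properties per individual, then a random load history per individual) can be sketched as follows; the Basquin-type S-N curve, Miner's damage rule and all parameter values are illustrative assumptions, not the authors' turbine-disc model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Basquin-type S-N curve: N_f(s) = C / s**M, so damage per cycle = s**M / C
M_EXP = 8.0      # assumed S-N exponent
C_MED = 2e26     # assumed median S-N coefficient (consistent MPa units)

def life_samples(n_individuals=500, cycles_per_block=1000):
    lives = np.empty(n_individuals)
    for i in range(n_individuals):
        # physical variability: per-individual scatter on the S-N curve
        c_i = C_MED * rng.lognormal(0.0, 0.4)
        damage, blocks = 0.0, 0
        while damage < 1.0 and blocks < 10_000:
            # random variable-amplitude loading, redrawn every block
            s = rng.normal(450.0, 40.0, cycles_per_block).clip(min=1.0)
            damage += np.sum(s ** M_EXP) / c_i   # Miner's rule accumulation
            blocks += 1
        lives[i] = blocks * cycles_per_block
    return lives

lives = life_samples()
print("median life: %.0f cycles, B1 life: %.0f cycles"
      % (np.median(lives), np.quantile(lives, 0.01)))
```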

15:23
Deep Neural Network Approach and Model-Based Design for Battery Capacity Assessment

ABSTRACT. The Battery Management System (BMS) is an important link between the power battery and the new energy vehicle. Via the BMS, the state of the whole battery system can be estimated by monitoring the state parameters of the battery cells, such as voltage, current and temperature. Based on the calculated parameters, control and adjustment strategies are carried out to manage the charging and discharging of the power battery system. One of the core functions of the BMS is estimation of the battery state of charge (SOC), which provides effective support to the BMS in terms of battery life, safety, reliability and utilization. Since the battery SOC can hardly be measured directly, data-driven methods such as neural networks are usually adopted to estimate it. A model-based neural network requires a detailed battery model, which demands a large amount of computation, and its accuracy is not high enough. Therefore, this paper adopts a data-driven approach, using a deep neural network to evaluate the battery capacity. The data-driven approach trains the deep learning model on historical operational data and does not need to consider battery modeling. The deep learning model can be trained offline and used for online prediction with fast response times, and can be applied to various complex systems. Using MATLAB and Simulink, deep learning models can be incorporated into model-based design. The first step is pre-processing of the obtained data in three respects: operational data collected through sensors, experimental data, and simulation data obtained through fault and degradation simulation. The second step is feature extraction and derivation: the input data are obtained, the deep learning model is established, a customized deep neural network is used for training, comparison and validation, and the model hyperparameters are set. The third step is to train the deep neural network model. In the fourth step, simulations are performed: the relevant modules are dragged into Simulink and the deep neural network models trained in MATLAB are linked by configuring the model files. The models are then integrated and deployed. Many factors affect the accuracy of SOC estimation, making dynamic high-precision estimation a challenging issue. Open-circuit voltage and ampere-hour integration are currently the mainstream, while high-precision algorithms (the Kalman filter and neural network methods) are the future development direction; among these, the neural network algorithm needs a large amount of data for training and learning and can correct parameters in real time, which can significantly improve accuracy and yield more precise battery capacity assessments.
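The abstract's workflow is built in MATLAB/Simulink; purely for illustration, a minimal equivalent of the offline training step is sketched below in Python/PyTorch, with synthetic data standing in for the pre-processed sensor features. The network size and the synthetic SOC relation are assumptions.

```python
import torch
from torch import nn

torch.manual_seed(0)

# synthetic stand-ins for pre-processed features: voltage, current, temperature
X = torch.rand(4096, 3) * torch.tensor([1.0, 10.0, 40.0])
soc = (0.9 * X[:, 0] - 0.01 * X[:, 1] + 0.002 * X[:, 2]).clamp(0, 1)

model = nn.Sequential(              # small deep network for SOC regression
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid()  # SOC is bounded in [0, 1]
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):            # offline training loop
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), soc)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.5f}")
```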

15:35
A Comparative Analysis of Failure, Reliability, and Maintenance Features of Onshore and Offshore Wind Turbines
PRESENTER: He Li

ABSTRACT. This paper reveals differences between the failure, reliability and maintenance features of onshore and offshore wind turbines with the assistance of newly released datasets. Initially, the new datasets recording operation and maintenance activities of wind farms are introduced. Subsequently, the failure, reliability and maintenance properties of onshore and offshore wind turbines are characterized and compared, including (i) failure properties such as failure modes, failure causes, failure frequency and failure criticality; (ii) reliability features such as the failure rate and mean time to failure of components and the entire system; and (iii) maintenance actions, including maintenance measures and times related to maintenance and logistics. The comparative results provide a thorough and deep understanding of the operation and maintenance of both onshore and offshore wind turbines; in particular, the identified differences can guide the design and operation of new wind farms, for instance floating ones.

14:35-15:50 Session 18B: Land Transportation
14:35
Known unknowns in the road-based transport of dangerous goods - Consequences for the risk analysis of tunnel systems.

ABSTRACT. Regulations concerning the road-based transport, storage, handling and consumer use of dangerous goods are put in place to protect individuals, property and the environment from accidents or harm. In Norway, the Norwegian Directorate for Civil Protection is responsible for the quality of education and certification of safety advisers and for the certificates that allow individual drivers to transport dangerous goods. It is often assumed that rules and regulations in the heavy goods vehicle (HGV) sector are usually followed (Njå et al., 2012), yet recent studies show that this is not the case (Kuran and Njå 2016, Kuran 2018). This can have unforeseen consequences for tunnel safety and put people, property and the environment at risk.

The paper is based on longitudinal observational fieldwork in the HGV sector from 2016 to 2020 (Kuran 2021a). Through the lens of systems theory, it challenges the assumption in the risk management of tunnel safety that rules and regulations pertaining to dangerous goods are not commonly broken or bent, and discusses both why this occurs and what the consequences for the safety management of tunnels might be. The concept of Adaptive Nonconform Behaviour (ANB) is used to explore why strict regulations might be routinely broken in the HGV sector. ANB cuts across all levels in the system and covers both outright violations of safety-related rules and regulations and activities that deviate from established good praxis. ANB can include strategic adaptations to external and internal socioeconomic pressures; actors in the industry claim ANB is a prominent characteristic of day-to-day activities (Kuran 2021b). The concept is situated in the sociotechnical system of the HGV sector using a systems-theoretical approach to feedback and constraint in formalized hierarchies (Rasmussen 1997, Leveson 2011), while also exploring the informal feedback loops that exist (Kuran 2017).

The empirics of the paper draw on ethnographic methodology (Kuran 2021a), where the presence of the researcher in the field as both observer and actor, together with informants over time, allows for a gradual informal exchange of knowledge and access to the day-to-day work of truck drivers, and also allows the researcher to discuss various concepts and facts honestly with informants, both when rules and regulations are followed and when they are not.

The paper shows that pieces of dangerous goods are often transported secretively with mixed cargo; that drivers sometimes do not know the nature of the cargo they are transporting; that some transport buyers speculate in cheaper transport of dangerous goods by camouflaging the packaging; and that inspections often do not find hidden pieces of dangerous goods in routine controls. These findings are discussed in relation to the risk analysis of tunnel systems.

14:50
Traffic safety risks among adolescent ATV users in Norway
PRESENTER: Thomas Wold

ABSTRACT. ATVs (All-terrain vehicles) are commonly used by adolescents in rural Norway, but there has been little research on users’ driving behaviour and accident rate. This project seeks to investigate common traits of their ATV use and how social norms affect risk taking behaviour.

Research on ATV use from several countries shows a high accident rate with severe injuries and fatalities, both for work-related use and leisure use, and the majority of victims are male (Lin & Blessing, 2018; Lower, Peachey, & Fragar, 2022). Studies show that young drivers tend to overestimate their driving skills (Cestac, Paran, & Delhomme, 2014), and to underestimate the risk of ATV use (Adams, Aitken, Mullins, Miller, & Graham, 2013). Teenagers, particularly males, are more prone to risk-taking behaviour than adults (Denning & Jennissen, 2016), and peer pressure and group norms are important factors (Nilsson, 2016).

The data in this study consist of focus group interviews with young ATV users and their parents, held in separate sessions at five locations in small towns and rural areas with a high rate of ATV use. The interviews dealt with their opinions about safety, driving behaviour, modifying the ATVs, and social aspects of ATV use. Preliminary analysis suggests that it is common to modify the ATV to make it go faster than the law permits. The informants considered this safer, as it helps avoid dangerous overtaking by cars when driving on public roads. Their fathers seem to be more aware of this than their mothers, but do not see it as a problem.

The purpose of the project is to contribute to the development of safety courses and other safety measures for ATV users and to give advice on public regulations.

References:
Adams, L. E., Aitken, M. E., Mullins, S. H., Miller, B. K., & Graham, J. (2013). Barriers and facilitators to all-terrain vehicle helmet use. J Trauma Acute Care Surg, 75(4 Suppl 3), S296-300. doi:10.1097/TA.0b013e318292421f
Cestac, J., Paran, F., & Delhomme, P. (2014). Drive as I say, not as I drive: Influence of injunctive and descriptive norms on speeding intentions among young drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 23, 44-56. doi:10.1016/j.trf.2013.12.006
Denning, G. M., & Jennissen, C. A. (2016). All-terrain vehicle fatalities on paved roads, unpaved roads, and off-road: Evidence for informed roadway safety warnings and legislation. Traffic Inj Prev, 17(4), 406-412. doi:10.1080/15389588.2015.1057280
Lin, P. T., & Blessing, M. M. (2018). The characteristics of all-terrain vehicle (ATV)-related deaths: A forensic autopsy data-based study. Forensic Sci Med Pathol, 14(4), 509-514. doi:10.1007/s12024-018-0014-7
Lower, T., Peachey, K. L., & Fragar, L. (2022). A descriptive review of quad-related deaths in Australia (2011-20). Aust N Z J Public Health, 46(2), 216-222. doi:10.1111/1753-6405.13193
Nilsson, K. (2016). Parents' Attitudes to Risk and Injury to Children and Young People on Farms. PLoS One, 11(6), e0158368. doi:10.1371/journal.pone.0158368

15:05
Driver acceptance of truck platooning: state-of-the-art and insights from a field trial on rural roads
PRESENTER: Maren Eitrheim

ABSTRACT. Truck platooning denotes virtually linking two or more trucks by use of communication and sensor technologies. With increasing automation, platoon driver roles and tasks are likely to change. Some drivers may be encouraged by prospects of teamwork and more flexible work schedules. Others may be concerned about safety and monotony, and fear loss of independence. Acceptance from drivers will depend on the perceived benefits and constraints of truck platooning in the context of their work. However, it is difficult to assess impacts of new technology without first-hand experience. The current study investigated acceptance of platooning in a field trial on rural roads. Three professional drivers operated a three-truck platoon along a 380 km route in Northern Norway, subject to large variations in road conditions. The trucks had automated longitudinal control. Although increasing usefulness and satisfaction were reported during the trial, participants appeared undecided or slightly negative towards truck platooning in interviews and in post hoc ratings. The participants stated that platooning may be advantageous on highways while requiring substantial effort from drivers to work on rural roads.

15:20
Bowtie analysis in a public transport sector during the Covid-19 pandemic: a case study.

ABSTRACT. Buses represent the road mode in public transport systems and fulfil an essential social function, providing collective transport for people to carry out their daily activities of work, leisure, consumption and other tasks of modern life. Public transport also has an economic function, contributing to the circulation and consumption of goods and services. In times of a pandemic, public transport further stands out as a critical infrastructure of urban centres, as it contributes to the maintenance of essential activities, especially by providing transportation to medical units for health professionals working on the front lines of combating the pandemic. This paper presents a practical application of the bowtie methodology, analysing the risks involved in public passenger transport operations during the Covid-19 pandemic. Applying the bowtie methodology provides a better understanding of the risks and consequences associated with regulatory activity and greater assertiveness for decision-makers in balancing the regulatory ecosystem (granting authority, companies and users). In addition, this study contributes to better resilience of public passenger transport operations in the face of large-scale/catastrophic events such as the health crisis experienced during the Covid-19 pandemic (2020-2022).

15:35
Analysis of Road Transport Networks Travelled by both Internal Combustion and Electric Vehicles and Subject to Traffic Congestion due to Incident Scenarios

ABSTRACT. In this work, we use a Finite State Machine (FSM) and the Cell Transmission Model (CTM) for the analysis of a road transportation network travelled by both Internal Combustion Vehicles (ICVs) and Electric Vehicles (EVs). The application to a realistic network shows that the FSM captures traffic volume changes when accidents occur better than the CTM, at the price of a larger computational cost. The CTM, instead, averages vehicle motion, thus somewhat overlooking the traffic congestion and travel time delays that would result in extra energy consumption and EV charging demand.

14:35-15:50 Session 18C: Safety and Reliability in Oil and Gas Industry II
14:35
Synergetic implementation of LOPA and RBI methodologies for pressure relief device inspection interval optimization.

ABSTRACT. Pressure relief devices (PRD) have an important role in process safety, risk, and integrity management of refinery installations, as they are barriers against the risk of loss of containment of hazardous fluids.

The design stage of refinery facilities incorporates methodologies such as HAZOP to identify and analyze operational risks, and barriers are assigned to achieve tolerable risk levels. In HAZOP analysis, a PRD represents a Risk Reduction Factor (RRF) that allows a semi-quantitative assessment of overpressure scenarios that could result in loss-of-containment events involving hazardous fluids.

In the operation and maintenance stages, the PRDs' risk reduction factors change as failure modes and degradation mechanisms take place and these barriers degrade; it therefore becomes mandatory to establish effective inspection and maintenance intervals to preserve the reliability of their safety function.

API 581 presents a risk-based methodology for the inspection and maintenance strategy of PRDs in refinery units. This strategy is often challenged by asset conditions and business needs, making it necessary to implement optimization processes that satisfy both the economic and the dynamic safety requirements of the business in its actual context.

In this work, a preventive maintenance interval optimization analysis was performed for a PRD whose maintenance could not be executed at its originally planned interval because of the failure of its isolation valve. The deferral analysis for the maintenance of this PRD was initially performed using the HAZOP methodology; however, the method for updating the initial Risk Reduction Factors was not standardized and was adjusted qualitatively by the analysis team. As a result, the risk of deferring the maintenance was initially considered intolerable, and the unit was planning to shut down to execute the preventive maintenance. Due to the hazards related to the service in which the PRD is installed, more than 15 days of production were at stake, at a profit loss of 500 kUSD per day. A second analysis was then developed as a methodological synergy between API 581 RBI and HAZOP, calculating a more precise adjustment of the Risk Reduction Factor and resulting in an optimized maintenance interval that allowed the unit to operate safely until the next shutdown window while satisfying operational risk thresholds and financial goals.

This methodological synergy opened the path to optimizing the maintenance and inspection intervals of the remaining PRDs at the refinery, which would otherwise have been performed separately either by HAZOP or by API 581 methods – a silo effect that could lead to imprecise conclusions, as these methodologies were performed by separate departments of the refinery.

Methods available from the design stage, such as HAZOP, prove to be crucial tools in the maintenance optimization of safety devices such as PRDs, as they allow dynamic updating and computation of the initial assumptions behind the barriers' RRFs. The HAZOP and RBI methodologies, together with their interdisciplinary synergy, allowed rigorous safety analysis and documentation in response to the dynamic challenges imposed at the operational stage.
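For readers unfamiliar with the quantities involved, the link between a PRD's test interval and its risk reduction is often approximated with the standard first-order formula for periodically proof-tested devices, PFD_avg ≈ λτ/2 (with RRF = 1/PFD). The sketch below inverts it to find the largest interval meeting a risk target; the rate and target values are illustrative, and this is not the paper's full API 581 calculation.

```python
LAMBDA_DU = 2e-6        # assumed dangerous-undetected failure rate (1/h)
RISK_TARGET = 1e-2      # assumed tolerable average PFD for this PRD

def pfd_avg(test_interval_h, rate=LAMBDA_DU):
    """Average probability of failure on demand of a periodically
    proof-tested device (first-order approximation, lambda*tau << 1)."""
    return rate * test_interval_h / 2.0

# largest inspection interval still meeting the risk threshold
tau_max = 2.0 * RISK_TARGET / LAMBDA_DU
print(f"max interval: {tau_max / 8760:.1f} years, "
      f"PFD at that interval: {pfd_avg(tau_max):.2e}, "
      f"RRF: {1 / pfd_avg(tau_max):.0f}")
```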

14:50
Community Detection Algorithm for Natural Gas Pipeline Network Based on Transmission Characteristics
PRESENTER: Yu Li

ABSTRACT. Community detection in natural gas pipeline networks is beneficial for optimizing operation and dividing regions. However, traditional community identification methods only consider the topological structure and cannot reflect the characteristics of a natural gas pipeline network. To solve this problem, a gas source tracking algorithm based on the principle of equal proportion is proposed to locate, at each download point, the distribution of the natural gas injected at each upload point of the pipeline segment. Based on this algorithm, the transmission correlation strength is defined, the transmission modularity is constructed and its physical meaning is explained. By replacing the traditional modularity with the transmission modularity, the FGC (Fast Greedy Community) algorithm is improved into a new community detection algorithm for natural gas pipeline networks based on transmission characteristics. Calculation results for both a theoretical model and a real Chinese natural gas pipeline network show that the proposed algorithm performs better than the traditional FGC algorithm and can adaptively identify accurate and reasonable gas transmission communities for different networks and different natural gas transmission conditions.
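The baseline the paper improves on, greedy (Clauset-Newman-Moore) modularity maximisation, is available off the shelf; the sketch below runs it on a toy network, using edge weights as a stand-in for the transmission correlation strength (the paper's transmission modularity itself is not implemented here). The graph and weights are illustrative.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# toy pipeline network: nodes are stations; edge weights stand in for the
# transmission correlation strength (illustrative values)
edges = [("A", "B", 5.0), ("B", "C", 4.5), ("C", "A", 4.0),
         ("C", "D", 0.5), ("D", "E", 6.0), ("E", "F", 5.5), ("F", "D", 5.0)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# greedy modularity maximisation; passing the transmission strengths as
# weights approximates replacing topological modularity with a
# transmission-aware one
communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```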

15:05
Worldwide improvement in offshore risk levels – will it extend to emerging green energies?

ABSTRACT. The Risk Level Project for the Norwegian petroleum offshore and onshore industry has published annual reports since 2001 (Ref. 1), reporting on the risk level with a focus on trends and relative values. It is worthwhile to perform an extended analysis of the long-term trends in the risk picture on the Norwegian Continental Shelf, including statistics dating back to the late 1960s (Ref. 2). A broad perspective is taken, considering quantitative as well as qualitative information, in order to make the most relevant predictions about the occurrence of fatalities in the Norwegian national sector in the coming years. This is also related to the current trends in developing new offshore facilities in the Norwegian sector, where activity is expected to reach record levels in the coming few years before probably falling rapidly.

At the same time, both the International Association of Oil & Gas Producers and the International Regulators' Forum publish annual statistical overviews of fatal accidents and some other statistics, making it possible to analyse long-term trends from a worldwide perspective. Both of these sources have presented falling trends over a long period, yet with occasional temporary increases. These trends are also discussed in relation to technical and operational trends in the worldwide perspective.

The components of the fatality risk picture are occupational accidents, major accidents on the installation, and transportation accidents during transfer between shore and offshore installations. While some of these accidents were relatively common in the first 15-20 years, there has been a significant reduction in the number of fatal accidents, and there are now often a few years between fatal accidents in the North Sea and associated waters – even occupational accidents, which are the least rare occurrences.

These components of risk are considered applicable to emerging offshore energy industries, such as offshore wind, sun and waves, as well as offshore mining. The risk exposure of these industries is discussed on the basis of experience from the offshore petroleum industry, with observations about the likely risk exposure and the remedial actions that should be taken.

References:
1. PSA, 2022. What is trends in risk level in the petroleum activity (RNNP)? https://www.ptil.no/en/technical-competence/rnnp/about-rnnp/
2. Vinnem & Røed, 2020. Offshore Risk Assessment, 4th Edition. Springer.

15:20
Risk analysis of emergency response to community gas pipeline leaks using AcciMap and the STAMP model
PRESENTER: Jiahao Liu

ABSTRACT. The frequency of community gas pipeline leakage incidents remains high due to the complex human and built environment of urban communities, as well as the use of the pipelines themselves. For example, in Beijing, according to the gas company's statistics, 2018 saw the highest number of community gas pipeline vandalism incidents in the last six years, with a total of 370 incidents, accounting for 52% of the city's gas pipeline vandalism incidents, of which over 90% took the form of pipeline leaks. When a leak occurs in a gas pipeline, the gas company and the relevant departments need to carry out an emergency response. As the response process involves personnel, emergency resources, repair equipment and coordination across social organisations, these uncertainties introduce risks into the emergency response process. Furthermore, there have been cases where risks in the emergency response process have caused pipeline leaks to escalate into accidents resulting in injuries and fatalities. In terms of risk analysis, the AcciMap and STAMP models are currently the most representative systematic analysis methods, and their feasibility in a wide range of industries has been demonstrated in the research literature. However, despite their advantages, research applying them to risk analysis during emergency response has so far been limited. The aim of this study is to use the AcciMap and STAMP models to identify risk factors during the emergency response to community gas pipeline leaks, to compare the two models' results for gas pipeline leak emergency response, and to measure their effectiveness in terms of the number of risk factors identified.

15:35
A STAMP-based Approach to Reduce Oil Pipelines Risks Underwater
PRESENTER: Ali Alhasani

ABSTRACT. The oil and gas industry has faced significant disruption in underwater pipelines due to various accidents, including leakages, ruptures and explosions. These occurrences have caused substantial monetary loss and ecological harm. The sector is increasingly resorting to maritime autonomous systems to monitor pipe integrity and to assist in the pipe-laying phase of the oil and gas infrastructure, but the novel hazards that these technologies bring must be controlled or managed. This paper adopts a Systems-Theoretic Accident Model and Processes (STAMP) approach to identify the risks of using a maritime autonomous system to support oil and gas operations. The benefits of using the STAMP approach for risk assessment of underwater oil pipelines include identifying causes related to human performance, component malfunction and organizational factors.

14:35-15:50 Session 18D: Occupational Safety II
Location: Room 2A/2065
14:35
Innovations for improved emergency preparedness in the Norwegian aquaculture industry

ABSTRACT. The Norwegian aquaculture industry is a leading producer and exporter of Atlantic salmon (Salmo salar). Open net pens accessed by boats are the most common production technology. Risk in sea-based aquaculture has been described according to five risk dimensions: risks to material assets, to personnel, to fish welfare and health, to the environment, and to food safety (Yang, Utne, and Holmen 2020). Aquaculture workers must handle all these risks in their everyday work.

Even though emergency preparedness is an important part of safety at the fish farms, few studies have addressed the status and potential for improvements. In this article, new innovations are presented - based on knowledge about emergency status and needs as well as a description of key stakeholders. The following questions are answered: - Who are key stakeholders for emergency preparedness in aquaculture? - What is the emergency preparedness status and needs for selected aquaculture companies? - How can new innovations help improve emergency preparedness in aquaculture?

The methods used in this study are interviews, document studies and workshops with representatives of the aquaculture industry.

Key groups of stakeholders include the fish farmers, the service branch, government actors and insurance companies. The government stakeholders comprise three subgroups: public emergency preparedness resources that contribute in case of an accident, ministries, and local municipalities and provinces.

In case of an incident, the public emergency preparedness resources described above have the following order of prioritization: 1) life and health, 2) non-replaceable natural resources, 3) industry values. Industry values in this context entail the fish, its health and welfare, as well as preventing escapes to the environment.

Regarding the status of emergency preparedness, interviews showed that fish farmers base their emergency preparedness plans on systematic risk mapping, legislation as well as their own and others' experiences. Since industry values are not a priority for public resources, the companies must rely on private resources for their overall emergency preparedness.

Based on the stakeholder mapping and the status of emergency preparedness, this paper describes new innovations that may increase the efficiency of operative emergency preparedness. The innovations are: "Operative emergency preparedness support team", "Algae forecasting", "Emergency preparedness vessel(s)" and "Training". The operative emergency preparedness support team can assist the fish farmers in case of an emergency, while algae forecasting can provide them with information about algae blooms that are harmful to the fish. Custom-designed emergency preparedness vessels may assist the fish farmers with trained personnel and emergency equipment for different scenarios. Training involves emergency preparedness drills using simulator technology.

14:50
An approach for awareness and assessment of risks in outdoor sports activities

ABSTRACT. Over the last few years, the practice of outdoor sports has become increasingly widespread, transforming itself from a niche activity for a few, generally expert, enthusiasts into a real mass phenomenon. This has inevitably led to an increase in the number and severity of injuries, because of the increased number of people exposed to the dangers, but also because of the new type of people exposed: people with less knowledge of the dangers, less awareness of the associated risks, and sometimes less physical preparation for the specific sporting activity.

On the other hand, the positive social and economic impact favoured by the diffusion of these activities is undeniable, in terms of psychophysical benefits for the sportsmen, jobs linked to the assistance and support services offered to the sportsmen themselves and, not least, the tourism generated in the places visited.

However, like all such phenomena, it must be analysed and managed, also considering the new technologies available (GPS, internet, etc.), which can of course support operators and sportsmen but which, if unsuitable, ineffective or incorrectly used, can even amplify the existing risks.

This work describes the analysis carried out, extended to the entire field of outdoor tourist activities, with particular attention to the mountain environment, aimed at studying its criticalities. It also briefly introduces the tools currently under study for some outdoor activities practised on "no wild" land (where risk is totally or partially managed). Inail has conducted the work together with university researchers, trade associations and outdoor specialists.

First of all, the analysis revealed the need for comparison and – where possible – alignment between the terminologies of the various sports disciplines (climbing, canyoning, mountain biking, etc.) with respect to the various categories of hazards generally present (natural hazards, hazards associated with the characteristics of the route, hazards associated with any equipment used, etc.).

A standardised risk analysis method was then adapted to identify the hazards, hazardous situations and potential harm associated with the various outdoor sports disciplines and different natural environments.

Finally, experimentation with an artificial intelligence image-learning system to detect hazards and dangerous situations is currently underway, as is the definition of protocols for the self-assessment of the physical abilities and level of training of the tourist, sportsman or outdoor operator, to be related to the difficulty of the routes and the hazards potentially present.

The ultimate goal of this project is the development of tools to support outdoor sportsmen in assessing the risks to which they will be exposed, defining the level of risk they are willing to accept (consciously, and compatibly with their desire for "adventure"), identifying safety measures, and understanding when the help of experienced personnel (specialized guides) is appropriate. All this starts from the information available through conventional means (guides, cartography) or modern ones (websites), and considers one's own psychophysical limits and the risks to which rescue teams could be exposed should their intervention be necessary.

15:05
Have older workers been overlooked by health and safety standards?
PRESENTER: Paola Cocca

ABSTRACT. The ageing workforce can be defined as “the increase in the number of older people in the workforce” (ISO 25550, 2022). Most developed countries are currently affected by the workforce ageing phenomenon due to the general ageing of the population and the higher average retirement age of workers (Calzavara et al., 2020). In addition, populations in developing countries are expected to age at three times the speed of those in developed countries over the next few decades (United Nations, 2020). Older workers represent a special group with characteristics that require specific attention from the Occupational Health and Safety (OHS) point of view (Varianou-Mikellidou et al., 2019). International standardization might play an important role in ensuring the practical, efficient and ethical implementation of solutions that contribute to a healthy and safe ageing workforce (Wissemann et al., 2022). Despite the recent establishment of the ISO Technical Committee 314 on Ageing Societies, there appears to be a general scarcity of recommendations specifically targeted at the ageing workforce in health and safety standards. The objective of this paper is to review international standards in order to identify and summarise the main guidelines related to OHS challenges for older workers. The study concludes with recommendations on how elements of international standards, such as ISO 25550, can be used in an Occupational Health and Safety Management System. This "state of the art" picture may help recognize and spread existing recommendations in workplaces, as well as pinpoint the main gaps that still need to be filled.

References
Calzavara, M., Battini, D., Bogataj, D., Sgarbossa, F., Zennaro, I. (2020). “Ageing workforce management in manufacturing systems: state of the art and future research agenda”, International Journal of Production Research, Volume 58, Issue 3, Pages 729-747.
ISO (International Organization for Standardization), 2022. ISO 25550. Ageing societies - General requirements and guidelines for an age-inclusive workforce.
United Nations, Department of Economic and Social Affairs, Population Division (2020). World Population Ageing 2019 (ST/ESA/SER.A/444).
Varianou-Mikellidou, C., Boustras, G., Dimopoulos, C., Wybo, J., Guldenmund, F.W., Nicolaidou, O., Anyfantis, I. (2019). “Occupational health and safety management in the context of an ageing workforce”, Safety Science, Volume 116, Pages 231-244.
Wissemann, A.K., Pit, S.W., Serafin, P., Gebhardt, H. (2022). “Strategic Guidance and Technological Solutions for Human Resources Management to Sustain an Aging Workforce: Review of International Standards, Research, and Use Cases”, JMIR Human Factors, Volume 9, Issue 3, e27250.

14:35-15:50 Session 18E: S.12: Digital Twins for hybrid Prognostics & Health Management I

Digital twins – in short, digital replicas of physical objects – can be an enabling technology for hybrid PHM in industry, by continually updating physical models as new data are acquired. The objective of the session is to make this statement more precise by showcasing examples of successes and presenting remaining challenges.

We expect this special session to result in coherent and focused contributions on the topic of hybrid PHM driven by Digital Twins. By carefully selecting complementary contributions from academia and industry, the session will be a forum to exchange knowledge between these two worlds.

Location: Room 100/4013
14:35
Categorization of aircraft missions for exploitation by a mathematical model of digital twin.

ABSTRACT. Prognostics & Health Monitoring (PHM) of aircraft engines consists in identifying characteristics to assess their condition. These algorithms are generally separated into two parts: an on-board component that builds indicators from the measurements collected during each flight, and another, on ground computers, that processes these measurements with other contextual elements to estimate trends or drifts in engine behavior. These drifts are analyzed by experts or artificial intelligence algorithms to anticipate risks of degradation. Initially, PHM directly used summary data produced during each flight in the form of snapshots. This first static analysis made it possible to identify damage present on the engine or a performance drift. By adding contextual data, such as meteorological and pollution data, the damage estimators could be considerably improved. Very recently, we have started to use recurrent temporal models that evaluate a latent state updated after each flight. The addition of this temporal component, which considers the history of the engine's successive missions, has improved the quality of our predictions. The dynamic models appear more efficient than the previous static models, even when potential counters that capitalize, for example, the time spent beyond certain load levels are computed on board the aircraft.

One crucial element was still missing from these models: a description of the missions themselves. Each flight is different, and we have therefore implemented a detailed method of categorizing flights with a metric allowing them to be compared pairwise. This method first decomposes the rotational speed of the fan, which in our case of turbojets is a relevant indicator of thrust. Once the flight has been segmented from this control signal, each flight segment is categorized, and the complete flight can thus be described as a sequence of labels. To build a metric between flights, we took care to use a topographic categorization procedure based on self-organizing maps (SOM) to classify the segments. This type of categorization automatically yields a distance measure between segments, which makes it possible to use an edit distance as a similarity measure between flights. This metric measures the minimum cost of transforming one flight into another by exchanging, adding or removing labels. We thus categorize the missions and enter the flight class as a new contextual input of the recurrent model.

An advantage of this method is that it applies automatically to a very large database of past flights and is fast enough. When missions are unusual, for example in the case of helicopter or military aircraft tracking, it is not possible to obtain instant flight summaries easily. Our method makes it possible to identify the categories of the most frequent flight segments and thus to reconstruct such snapshots from temporal data. This allows us to better monitor the evolution of the state of these engines, which are much more difficult to follow than those of airliners.
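The edit distance between label sequences mentioned above can be sketched as follows, with the substitution cost derived from distances between (hypothetical) SOM map coordinates of the labels; all labels, coordinates and costs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def edit_distance(a, b, sub_cost):
    """Minimum cost of transforming label sequence a into b by
    substitutions, insertions and deletions (Levenshtein-style dynamic
    programming). Unit insert/delete costs are an illustrative choice."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1))
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i - 1, j] + 1,                      # deletion
                          d[i, j - 1] + 1,                      # insertion
                          d[i - 1, j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[m, n]

# toy substitution cost: scaled distance between hypothetical SOM map
# coordinates of the segment labels
som_xy = {0: (0, 0), 1: (0, 1), 2: (2, 1), 3: (3, 3)}
cost = lambda x, y: 0.5 * np.hypot(*(np.subtract(som_xy[x], som_xy[y])))

flight_a = [0, 1, 2, 2, 3]   # e.g. taxi, climb, cruise, cruise, descent
flight_b = [0, 1, 2, 3]
print(edit_distance(flight_a, flight_b, cost))
```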

14:50
Learning linearized degradation of health indicators using deep Koopman operator approach

ABSTRACT. Predicting the evolution of the system condition in time, i.e. the remaining useful lifetime (RUL), makes it possible to forestall critical faults and improve system reliability and safety. Going one step beyond prediction would enable controlling the operating parameters in order to prolong the RUL. To effectively solve the resulting optimal control problem, linearized representations of the process dynamics are required. In this paper, we propose the use of the Deep Koopman algorithm to approximate the eigenfunctions of the Koopman operator and learn the latent space in which the health parameters of the system degrade in a linear manner. To force the algorithm to find a linear representation of the hidden health indicators, we introduce an additional supervised loss function taking into account the RUL, which is known during training. The proposed approach enables prediction of the RUL and does not require a large number of training failure trajectories. It has successfully demonstrated its ability to find a linearized representation of the degradation process of various industrial systems, such as milling machines and aircraft engines, and can be applied not only to health monitoring of industrial systems but also to solving optimal control problems.

References
1. Lusch, B., J. Nathan Kutz, and Steven L. Brunton (2018). Deep Learning for Universal Linear Embeddings of Nonlinear Dynamics. Nature Communications 9, no. 1. https://doi.org/10.1038/s41467-018-07210-0
2. Xinghui Li. "2010 PHM Society Conference Data Challenge." doi: 10.21227/jdxd-yy51
3. Saxena, A., K. Goebel, D. Simon, and N. Eklund (2008). Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation. In 2008 International Conference on Prognostics and Health Management, 1-9.
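A minimal sketch of the ingredients named in the abstract (an encoder/decoder pair, a shared linear Koopman matrix in latent space, and an RUL-supervised loss term) might look as follows; the network sizes, unit loss weights and synthetic batch are illustrative assumptions, not the authors' architecture.

```python
import torch
from torch import nn

LATENT = 8

class DeepKoopman(nn.Module):
    """Encoder maps measurements to a latent space where dynamics are
    forced to be linear (one shared matrix K), plus a head supervising
    the latent state with the known RUL during training."""
    def __init__(self, n_in):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 32), nn.Tanh(),
                                 nn.Linear(32, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 32), nn.Tanh(),
                                 nn.Linear(32, n_in))
        self.K = nn.Linear(LATENT, LATENT, bias=False)  # Koopman operator
        self.rul_head = nn.Linear(LATENT, 1)

    def loss(self, x_t, x_next, rul_t):
        z_t, z_next = self.enc(x_t), self.enc(x_next)
        recon = nn.functional.mse_loss(self.dec(z_t), x_t)
        linear = nn.functional.mse_loss(self.K(z_t), z_next)  # linearity
        rul = nn.functional.mse_loss(self.rul_head(z_t).squeeze(-1), rul_t)
        return recon + linear + rul        # loss weights omitted for brevity

# usage with synthetic batches: features, next-step features, known RUL
model = DeepKoopman(n_in=14)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, x_next = torch.randn(64, 14), torch.randn(64, 14)
rul_t = torch.rand(64) * 100
for _ in range(10):
    opt.zero_grad()
    model.loss(x_t, x_next, rul_t).backward()
    opt.step()
```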

15:05
Optimizing ultrasonic inspection regimes of railway rails

ABSTRACT. Ultrasonic inspection of railway rails is considered an important safety barrier. Fatigue cracks develop due to the cyclic loads of passing trains, and the objective is to detect the cracks before they develop into rail breakages. Understanding the speed of crack propagation is crucial for defining the inspection regime. Trains equipped with ultrasonic instruments can run at a speed of approximately 50 km/h; as the train runs, suspects are identified with their position along the track. The ultrasonic scans are recorded, and critical suspects are flagged for manual follow-up. The manual inspection uses a hand-held trolley, also with ultrasonic instruments, for a more precise classification. In Norway, a classification regime consisting of the categories 2b, 2a, 1 and 0 is used. The main strategy is to monitor 2a and 2b defects, whereas 1 defects have to be fixed within a month and 0 defects have to be fixed immediately. The inspection and follow-up regime is now under revision. A Markov model has been developed for modelling the transitions between the defect states. There are several challenges with the Markov model. First of all, since we assume that fatigue is the main failure mechanism, it is not realistic to assume that transition times follow the exponential distribution. Secondly, the times between runs of the inspection car are almost deterministic, which requires special treatment when solving the Chapman-Kolmogorov differential equations. Finally, the follow-up activities are also deterministic, and phase-type models are used to handle the transitions representing the results of the follow-up activities. In this paper we investigate how the modelling can be simplified without compromising the results too much, and we present a risk-based model for the total optimization of the strategy. Statistics from the Norwegian rail network are used for estimating the transition rates. Cost figures from the track access agreement and actual costs for one track are used to demonstrate the approach.
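As a baseline, the plain Markov version of the defect-state model (the one the abstract argues must be refined for fatigue and for deterministic inspection epochs) solves the Chapman-Kolmogorov equations via the matrix exponential of the generator; the transition rates below are illustrative, not the Norwegian estimates.

```python
import numpy as np
from scipy.linalg import expm

# defect states: 0:"2b", 1:"2a", 2:"1", 3:"0", 4:rail breakage
# illustrative transition rates (per year) between successive states
rates = [0.8, 0.5, 0.9, 2.0]
Q = np.zeros((5, 5))
for i, r in enumerate(rates):
    Q[i, i + 1] = r
    Q[i, i] = -r                   # generator rows sum to zero

def state_probs(p0, t_years):
    """Chapman-Kolmogorov solution P(t) = p0 @ expm(Q t) between
    (nearly deterministic) inspection epochs."""
    return p0 @ expm(Q * t_years)

p0 = np.array([1.0, 0, 0, 0, 0])   # a freshly classified 2b defect
for t in (0.5, 1.0, 2.0):
    print(t, "yr:", np.round(state_probs(p0, t), 3))
```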

15:20
A Study on Gradient-based Meta-learning for Robust Deep Digital Twins
PRESENTER: Raffael Theiler

ABSTRACT. Deep-learning-based digital twins (DDTs) are a promising tool for data-driven system health management because they can be trained directly on operational data. A major challenge for efficient training, however, is that industrial datasets remain unlabeled. This is remedied by simulators that can generate specific run-to-failure trajectories of assets as training data, but extensive simulations are limited by their computational cost. Therefore, it remains difficult to train DDTs that generalize over a wide range of operational conditions. In this research, we propose a novel meta-learning framework that can efficiently generalize an arbitrary DDT using the output of a differentiable simulator. While previous generalization approaches are based on randomly sampled data augmentations, we exploit the differentiability of the full pipeline to actively optimize the training data sampling by means of condition parameter gradients. We use these gradients as an accurate tool to control the sampling distribution of the simulator, improving the representativeness, robustness and training speed of the DDT. Moreover, this meta-learning approach leads to a higher quality of generalization and makes the DDT more robust to perturbations in the condition parameters.


14:35-15:50 Session 18F: S.37: Reliability and Durability Aspects of Circular Economy
Location: Room 100/5017
14:35
Challenges in the real world evaluation of traction batteries at the end of their first life
PRESENTER: Alexander Popp

ABSTRACT. A. Popp, H. Fechtner, and B. Schmuelling T. Scholz, S. Kremzow-Tennie, and F. Pautzke

Lithium-ion batteries (LIBs) are used in a variety of applications, e.g. as traction batteries for electric vehicles. As their number increases, more batteries become available within the circular economy [1]. Depending on the vehicle condition, an initial assessment of the battery – in particular its capacity and internal resistance, which are mostly used to evaluate the state of health (SoH) – is only feasible for the original equipment manufacturers [2]. Without a digital battery pass or access to the historical data of the battery management system (BMS), municipal recycling companies, for example, have no knowledge of the system condition of the battery in question [2,3]. LIBs in an unknown condition pose chemical, thermal and electrical hazards, thus requiring a proper analysis at the end of their first life [4]. Typically, standardized tests (e.g. to ISO or IEC standards) are performed before the traction batteries are delivered to the vehicle manufacturers (beginning of life, BoL), with tests designed to be as realistic as possible [5,6]. The situation differs when evaluating batteries at the end of their life (EoL) or from vehicles that have been involved in an accident or have been salvaged and arrive at the recycling companies. In this paper, possible approaches to these battery systems as well as procedures for SoH determination based on known standards are shown. These laboratory tests enable an initial assessment of the battery's system status [7]. Measurements on artificially aged cells offer a largely reliable and safe approach to assessing the SoH, since their history is known. Batteries aged in real applications pose further challenges for laboratory-level assessment of their SoH, due to the potential hazards mentioned. The requirements for the test benches used (measurement technology, safety devices) as well as the test processes are analyzed in this paper. Furthermore, alternative approaches are shown which, in combination with the laboratory tests, can provide an initial assessment of the condition in real use – ranging from reading out the BMS via the on-board diagnostics interface of the vehicle to removing the traction battery and testing its SoH with additional test equipment.

[1] IEA. Global EV Outlook 2022. https://www.iea.org/reports/global-ev-outlook-2022
[2] European Commission. Bielewski, M., Blagoeva, D., et al. Analysis of sustainability criteria for lithium-ion batteries including related standards and regulations. 2021. https://data.europa.eu/doi/10.2760/811476
[3] thebatterypass.eu. https://thebatterypass.eu/wp-content/uploads/2022_Battery-Pass_Project-Overview.pdf, 2022
[4] Vicars, R. and Heckscher, K. Managing Risks in Lithium-Ion Battery Applications. ESREL2020 PSAM15
[5] International Organization for Standardization. ISO 12405-4:2018. Electrically propelled road vehicles — Test specification for lithium-ion traction battery packs and systems — Part 4: Performance testing
[6] International Electrotechnical Commission. IEC 62660-1:2018. Secondary lithium-ion cells for the propulsion of electric road vehicles — Part 1: Performance testing
[7] Muhammad, M., Ahmeid, M., et al. Assessment of spent EV batteries for second-life application. 2019 IEEE 4th International Future Energy Electronics Conference (IFEEC), Singapore, 2019
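For orientation only, the following minimal sketch shows the capacity- and resistance-based SoH indicators mentioned in the abstract; the nominal values, thresholds and function names are illustrative assumptions, not values from the paper.

def soh_capacity(measured_ah, nominal_ah):
    # SoH from remaining capacity relative to the nameplate capacity.
    return measured_ah / nominal_ah

def soh_resistance(r_meas_mohm, r_bol_mohm, r_eol_mohm):
    # SoH from internal-resistance growth between begin and end of life.
    return (r_eol_mohm - r_meas_mohm) / (r_eol_mohm - r_bol_mohm)

# Example: a nominal 60 Ah pack now delivering 48 Ah gives SoH = 0.80,
# around the threshold often cited for automotive first-life retirement.
print(soh_capacity(48.0, 60.0), soh_resistance(3.2, 2.0, 4.0))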

14:50
Applicable Method for Identifying and Managing Waste
PRESENTER: Abbas Barabadi

ABSTRACT. The main objective of the MURI, MURA, and MUDA (3MU) identification process, based on the Lean methodology (a continuous improvement process that seeks to identify and eliminate waste to increase efficiency and productivity), is to recruit a workforce to address the challenges of waste elimination. The proposed methodology involves defining value, identifying waste, identifying the origins of waste, and prioritizing them so that the necessary measures can be taken. The methodology was implemented for the XX alumina complex, specifically targeting the waste caused by tank leakage in the caustic soda transfer and receiving unit. Three strategies were presented against the main origins, one of which was the rapid improvement strategy, which aims to achieve rapid improvements with minimal expenditure and time. The implementation of this approach in the XX alumina complex showed that, to achieve rapid improvements and address the origins of waste in the caustic soda tank, the complex must start by learning to perform accurate risk analysis.

15:05
A Bass Diffusion-Inspired Methodology to Predict Device Activity

ABSTRACT. Understanding device activity over the lifetime of consumer electronic products is critical in two ways. First, it determines the extent to which a product maximises utilization during its usage phase, which is a critical pillar of circular products. Second, it allows for better estimations of the usage-phase carbon footprint, which is essential in the Life Cycle Assessment (LCA) of consumer electronics. This manuscript proposes a methodology for cold-start forecasting of device activity data via a Bass diffusion-inspired model to predict Monthly Active Devices (MAD) over the lifetime of a product.
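As a hedged sketch of how a Bass diffusion curve can seed such a cold-start forecast, the snippet below computes cumulative adoption and a naive monthly-activation proxy; the parameters p, q and market size M are assumed values, not those of the manuscript.

import numpy as np

def bass_cumulative(t, p=0.03, q=0.38, M=1_000_000):
    # Cumulative adopters at time t (months) under the Bass model.
    e = np.exp(-(p + q) * t)
    return M * (1.0 - e) / (1.0 + (q / p) * e)

months = np.arange(0, 60)
installed = bass_cumulative(months)
new_devices = np.diff(installed, prepend=0.0)   # naive monthly activations

A real MAD forecast would additionally net out churned or retired devices; the Bass curve only supplies the adoption side.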

15:20
The Need for a Durability Index Framework for Electrical and Electronic Equipment to Support a Circular Economy

ABSTRACT. Enabling a circular economy aims to reduce the amount of global waste generated from electrical and electronic equipment, mitigate the associated risk to the ecosystem and human health, and address concerns over limited material resources. Durability is a critical concern because keeping products in use for a longer time should reduce resource consumption and waste. Assessing the durability of products and sharing these assessments with the public form a strategy that not only encourages and enables consumers to purchase more durable products but also gives manufacturers an incentive to compete and improve the durability of their products. Although there are some recent initiatives for indexing product durability, there is not yet a standard method for measuring and indexing durability. This extended abstract discusses how indexing product durability can support the shift to a circular economy and overviews the most relevant efforts regarding measuring and indexing durability and relevant product attributes.

14:35-15:50 Session 18G: S.27: Advances in Maritime Autonomous Surface Ships (MASS) III
14:35
Investigation of Statutory and Class Society-Based Requirements for Electronic Lookout
PRESENTER: Victor Bolbot

ABSTRACT. Novel advanced systems employing information and communication technology are emerging. An example of such a system is the electronic lookout (e-lookout), which performs the function of the visual lookout carried out by humans on ships. In this paper, we investigated what types of requirements may arise for the e-lookout based on an analysis of statutory documents and existing class society guidelines for autonomous ships. To this end, first, we identified e-lookout functions based on a functional breakdown, considering both existing maritime function classifications and expert opinion. Second, we investigated the class society guidelines for autonomous ships concerning the e-lookout and the applicability of existing regulatory requirements for the conventional human lookout, including those specified by STCW, COLREGS, and SOLAS. Considering the existing regulatory requirements for lookout, we proposed alternative equivalent requirements for the e-lookout. Specifically, based on the analysis, we specified seventeen novel requirements for functionality, reliability, availability, maintainability, and safety. It is expected that the analysis implemented and the methodology presented will support the development of an appropriate regulatory framework for the e-lookout and autonomous ships.

14:50
Evaluating the existing watchkeeping regulations as a baseline for developing functional requirements and performance criteria for uncrewed vessels
PRESENTER: Børge Kjeldstad

ABSTRACT. The development and deployment of uncrewed surface vessels and vessels with some degree of autonomy is increasing rapidly. Use cases cover the offshore industry, aquaculture, seabed mapping, water column monitoring, public transport, cargo freight, security, and more. The expected business opportunities and societal benefits are reduced crew and vessel costs, reduced energy consumption, less HSE exposure for employees, and a potential mitigation of the challenge of fewer people being willing to work at sea. Yet, regulations for such vessels do not exist. This lack of regulations causes challenges both for developers of the vessels and for the authorities who must approve them. Costs increase, time to market increases, the risk picture is unclear, and the advantages these vessels offer to the maritime sector and stakeholders in the ocean space are not delivered as expeditiously as possible. The objective of this paper is therefore to evaluate how the existing watchkeeping regulations may be used as a baseline for developing functional requirements and performance criteria for uncrewed and potentially autonomous vessels. The focus is on the conventional lookout and navigation crew functions with their sub-tasks and duties. These functions are selected because they are assumed to be the most challenging to perform from a remote control center or autonomously. Methodologically, this paper uses a literature review and expert judgement to assess whether there is a gap between existing regulations and the needs of uncrewed vessels, and whether new regulations are required. The work in this paper is partly related to the Sundbåten autonomous passenger ferry project in Kristiansund, Norway, involving both industry and academic partners.

15:05
Autoencoder-Based Anomaly Detection for Safe Autonomous Ship Operations
PRESENTER: Brian Murray

ABSTRACT. The development of autonomous ships is advancing, but ensuring their safe operation remains a challenge. To aid safe operations, autonomous ships are expected to be monitored by humans in a remote operation center. A key challenge is ensuring that human operators remain alert and ready to take control of the system when necessary. Maritime traffic poses a potential hazard to autonomous vessels, and systems that help the operator identify abnormal ship behavior in time should be in place. This study develops deep learning models that automatically detect anomalous ship behavior to aid human operators. A case study related to the remote operation center in Horten, Norway is conducted, where four different autoencoder architectures are trained on Automatic Identification System data to detect maritime traffic anomalies in the Oslo fjord. The models are trained in an unsupervised manner, such that they are able to automatically identify anomalies without the need for manual labelling. The results indicate that a recurrent autoencoder is the most promising architecture for decision support of remote operators, as it is able to identify a variety of anomalies with fewer false positives.
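A minimal sketch of a recurrent autoencoder of the kind described, assuming AIS kinematic features (e.g., position and speed over ground); the architecture sizes are illustrative, not those of the study.

import torch
import torch.nn as nn

class RecurrentAE(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                   # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # broadcast latent
        y, _ = self.decoder(z)
        return self.out(y)

model = RecurrentAE()
recon_err = lambda x: ((model(x) - x) ** 2).mean(dim=(1, 2))

x = torch.randn(8, 20, 4)   # batch of 8 tracks, 20 time steps each
print(recon_err(x).shape)   # per-track anomaly scores: torch.Size([8])
# Tracks whose error exceeds a threshold fitted on normal traffic
# (e.g. a high percentile of training errors) are flagged as anomalous.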

15:20
Do Redundant Systems Make a Remote Control MASS Safer?
PRESENTER: Gengquan Wei

ABSTRACT. Recently, Maritime Autonomous Surface Ships (MASS) have attracted considerable attention; they are environmentally friendly and are expected to improve the efficiency and safety of maritime transportation. Remote-controlled ships are regarded as the first step towards fully autonomous ships. For a remotely controlled ship, the human and the autonomy system (machine) are both active in the control loop. In principle, the redundant design of controllers (i.e., human operators on shore and an autonomy system on board) should contribute to the safety of the ship, but how the redundant system contributes to safety in critical environments is unknown. Thus, this paper explores the performance of the onboard autonomy system in a remotely controlled ship under various communication conditions, yielding guidance for the design of the control architecture and human-machine interactions. To investigate the performance of the redundant system (i.e., the onboard autonomy system) under different communication conditions, a series of simulations is designed in the ROS platform. Two groups are introduced: the standard group (SG) and the control group (CG). The ship in SG is directly controlled by the shore base, while an autonomy system (AS) onboard is added to the ship in CG, which takes over the ship when necessary. Ships in both groups are controlled to track the same path by the same PID controller onshore, which generates a series of control commands as human operators would. Hence, the ship in SG directly executes the delayed commands from the shore base regardless of the communication delays, while the ship in CG first judges the safety of the ship and then picks the command from the shore base or the AS onboard. The tracking errors are used to indicate the performance of the redundant system. The primary results show that (1) the communication delay influences the performance of the controller in both groups, and the performance degrades as the delay increases; (2) the redundant system (i.e., CG) performs better than the system in SG when the communication delay exceeds 700 ms; (3) the redundant system (i.e., CG) might perform worse than SG in some cases. The experimental results reveal that additional redundant systems might not always improve performance, and the design of the AS onboard is critical for safety. To validate the findings and derive guidance for human-machine interactions in MASS, more experiments (e.g., different paths, different controller structures, shared control modes, etc.) will be conducted in the future.
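The toy simulation below mimics the delay experiment in spirit only: a shore-side PD path controller whose commands reach the vessel several steps late, with the integrated tracking error reported per delay. The plant model, gains and numbers are assumptions, not the paper's ROS setup.

def tracking_error(delay_steps, kp=1.2, kd=0.4, n=600, dt=0.1):
    y, v, prev_err, hist, iae = 0.0, 0.0, 0.0, [], 0.0
    for k in range(n):
        err = 1.0 - y                                 # step change in track
        u = kp * err + kd * (err - prev_err) / dt     # shore-side PD command
        prev_err = err
        hist.append(u)
        u_rx = hist[k - delay_steps] if k >= delay_steps else 0.0
        v += (u_rx - 0.5 * v) * dt                    # first-order vessel
        y += v * dt
        iae += abs(1.0 - y) * dt                      # integrated abs. error
    return iae

for ms in (0, 300, 700, 1000):
    print(f"{ms:>5} ms delay -> IAE {tracking_error(ms // 100):.2f}")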

15:35
Simulation-based Data Generation for Maritime Autonomous Surface Ships Machinery

ABSTRACT. To ensure the safe and efficient operation of machinery systems for Maritime Autonomous Surface Ships (MASS), intelligent systems to monitor and make decisions on plant management need to be developed. The latter are typically based on appropriate datasets, the generation of which is associated with immense challenges due to the lack of historical data (Saxena et al., 2008). Nonetheless, trustworthy simulation tools can be employed to generate the required datasets, hence mitigating this challenge. This study aims to develop trustworthy digital twins of a marine engine for generating simulation-based datasets, which are required to support MASS machinery health assessment and prognosis. A physical digital twin of the thermodynamic type is first developed in the commercial software GT-SUITE and integrated with physical submodels representing the degradation behaviour of the considered engine's critical components, specifically exhaust valve recession, based on an empirical knowledge-based approach reported in (Lewis and Dwyer-Joyce, 2001). Following the physical model's validation against available measured data, datasets are generated for the whole engine operating envelope and several degradation severities. These datasets were subsequently employed to develop data-driven models of low computational cost, which then provide an extensive amount of data characterising the engine behaviour. The derived results demonstrate the accuracy of the developed physical digital twin in healthy and anomalous conditions, as well as the suitability of the simulation-based data generation methodology for developing intelligent prognosis models. The physical digital twin predicts the engine performance parameters with a maximum 3% error in healthy conditions and consistent variation trends in anomalous conditions. The data-driven digital twin learns the patterns of the performance parameters and increases the quantity of data available to support MASS machinery prognostics. This study contributes to the development of intelligent systems for next-generation autonomous machinery, overcoming the challenge posed by the lack of appropriate datasets.

Lewis, R. & Dwyer-Joyce, R. 2001. Design tools for prediction of valve recession and solving valve/seat failure problems. SAE Transactions, 1868-1877.
Saxena, A., Goebel, K., Simon, D. & Eklund, N. 2008. Damage propagation modeling for aircraft engine run-to-failure simulation. 2008 International Conference on Prognostics and Health Management, IEEE, 1-9.
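A hedged sketch of the surrogate ("data-driven digital twin") step: fit a cheap regressor on simulator outputs so that large datasets can be generated at low cost. The synthetic stand-in data and feature names below are assumptions, not GT-SUITE outputs.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for simulator runs: (load, speed, valve recession) -> exhaust temp.
X = rng.uniform([0.2, 60.0, 0.0], [1.0, 100.0, 2.0], size=(500, 3))
egt = 350 + 180 * X[:, 0] + 0.8 * X[:, 1] + 25 * X[:, 2] + rng.normal(0, 3, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, egt, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out simulations:", surrogate.score(X_te, y_te))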

14:35-15:50 Session 18H: S.38: Quantitative Methods for Security Risk Modelling and Prediction

This Special Session is aimed at gathering expert researchers, academics and practicing engineers to present their recent findings and methodological developments related to the use of advanced simulation in risk assessment and the application to CRA.

14:35
Creating a testbed for cyber security assessment of Industrial 4.0 factory infrastructure

ABSTRACT. Addressing cybersecurity in Industry 4.0 is challenging, as it requires a holistic view of the perspectives of people, process and technology [1]. To understand how industrial control systems (ICS) are affected by cyberattacks, we must first understand how systems behave during nominal operation. The emergence of Industry 4.0, and the upcoming Industry 5.0 (1), results in industries deliberately connecting both new and legacy operational technology (OT) to the internet, i.e., information technology (IT). This interconnection between IT and OT is motivated by gaining data insights to increase efficiency and economic gain, as well as the opportunity to centralize both security monitoring and control over the factory floor [2]. This convergence of IT and OT environments makes OT systems susceptible to external attacks. To obtain realistic insights into the introduced vulnerabilities, real systems should be exposed to cyber security event conditions in hardware-in-the-loop environments. For this purpose, we use an Industry 4.0 training factory from the German company fischertechnik. The ICS testbed is built around a training and learning environment that demonstrates Industry 4.0 applications. The Industry 4.0 factory environment is controlled by a real SIMATIC S7-1500 programmable logic controller (PLC) made by SIEMENS. The components building up this out-of-the-box setup consist of different factory modules that replicate real components. In this paper we present how we have established an ICS testbed, including the challenges experienced in aligning best practices, architecture designs and guidelines for network communication, and in integrating different agents (beats) for collecting data. We will also discuss the use of a SIEM solution, the Elastic Stack, for data collection to provide us with insights for further exploration of methods for anomaly detection and knowledge building.

1) https://research-and-innovation.ec.europa.eu/research-area/industrial-research-and-innovation/industry-50_en

[1] Malatras, A., Skouloudi, C. & Koukounas, A. (2019, May 20). Industry 4.0 - cybersecurity challenges and recommendations (Report/Study). Retrieved July 5, 2021, from https://www.enisa.europa.eu/publications/industry-4-0-cybersecurity-challenges-and-recommendations

[2] Cisco. (2018). IT/OT convergence: Moving digital manufacturing forward, 9. Retrieved August 9, 2021, from https://www.cisco.com/c/dam/en_us/solutions/industries/manufacturing/ITOT-convergence-whitepaper.pdf

14:50
Cyber Security Anomaly Detection In An Industry 4.0 Testbed – Results and Experiences

ABSTRACT. This study investigates Industry 4.0 cybersecurity challenges and how the interconnection of information technology (IT) and operational technology (OT) affects the vulnerability of industrial control systems (ICS) to cyber-attacks. An ICS testbed, connected to an IT system for data processing and anomaly detection, was designed to examine monitoring and detecting cybersecurity threats using the Elastic Stack. The testbed comprises an OT environment featuring a FischerTechnik Industry 4.0 Training Factory controlled by a Siemens S7-1500 programmable logic controller (PLC). It also employs Elastic, a search-powered solution, for data collection and processing. Elastic "beats" (agents) were used for data collection, including Heartbeat, Machinebeat, Filebeat, and Packetbeat. The research employed the Microsoft Threat Modelling Tool to identify threats and vulnerabilities, generating a prioritised threat list. Based on this list, a security event was developed. We found that Elastic Beats and Security Information Event Management (SIEM) struggled to operate effectively in an ICS environment, with issues reading OT data protocols such as OPC-UA and Siemens S7. In this paper, we examine the significance of choosing appropriate OT data to establish a baseline for cybersecurity and its potential impact. Additionally, we discuss challenges related to competence building in ICS security, TIA Portal functionality, PLC functionality, and OT data handling.

15:05
Enhancing Operational Reliability in Dredging Perception Systems through a Hybrid Redundancy Sensor Strategy
PRESENTER: Bin Wang

ABSTRACT. A decision-making system provides system state conditions and general operation instructions to operators based on all available sensing data. This can effectively assist operators in deciding how to proceed, but a highly reliable perception system is then crucial.

Fig. 1 Schematic diagram. Fig. 1(A) depicts the operational reliability of the perception system through a time-varying curve. Fig. 1(B) depicts how, in the event of a sudden failure resulting in operational reliability R = 0, represented as the “0 state”, the system swiftly recovers its reliability by employing a DT model. To improve system reliability, the DT values and measurement values are fused in state (1, 0), as illustrated by the red and black curves, respectively; the fused values are shown in blue (C). The DT model is constructed using sensor data obtained from the perception system (D). As shown conceptually in Fig. 1, we divide the service life into three states according to the principle of operational reliability. In the first state, the sensing equipment is newly installed and calibrated, and all performance indicators are in the optimal state, defined as the reliability state R = 1. In the second state, in which the operational reliability state is R = (1, 0), the performance of the sensing instruments gradually declines over time. The third state, labeled the “0 state”, indicates operational reliability R = 0. This occurs when one or more key sensing instruments of the system fail suddenly or reach their specified service life, causing the system to fail to perform its assigned tasks (Fig. 1B); this leads to failure of the perception system. To represent this process, we propose a data-driven approach to establish a digital twin (DT) model of the perception system (Fig. 1D), whose reliability can grow with the service time of the perception system (Fig. 1C). By implementing this model during the “1 state” or “(1, 0) state”, we can effectively slow down the decline in operational reliability and prolong the system's working life in the second stage, while also preventing the system from entering the third state (of failure) during the service period. To validate the effectiveness of our approach, we present a case study of a dredging perception system.
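As a rough illustration of the "(1, 0)-state" fusion idea (not the authors' algorithm), the snippet below combines a DT prediction with a degrading sensor reading by inverse-variance weighting; all variance values are assumptions.

def fuse(dt_val, dt_var, meas_val, meas_var):
    w_dt = meas_var / (dt_var + meas_var)         # trust the DT more as the
    return w_dt * dt_val + (1 - w_dt) * meas_val  # sensor variance grows

print(fuse(10.2, 0.5, 10.0, 0.1))   # early life: sensor dominates (≈ 10.03)
print(fuse(10.2, 0.5, 10.0, 2.0))   # degraded sensor: DT dominates (≈ 10.16)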

15:20
A Generalised Linear Model for the Risk Assessment of Civil Aircraft Bomb Sabotage Attack

ABSTRACT. The risk of a civil aircraft being exploded by terror or criminal groups remains one of the largest concerns in aviation security today. Significant efforts have been made by the industry to mitigate this risk through the introduction of advanced methods of passenger, baggage and cargo screening. Nevertheless, several successful and failed attacks on civil aircraft were registered in recent years. In this paper we argue that it is possible to predict the risk of a bomb attack based on historical data. We show that security and geo-political data can inform a Generalized Linear Model that estimates the likelihood of a bombing incident on a civil aircraft in a given country.
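A hedged sketch of such a model, assuming a Poisson GLM on synthetic country-year covariates; the variable names and data are hypothetical, not the paper's dataset.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))   # e.g. conflict index, screening level, traffic
lam = np.exp(-2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2])
y = rng.poisson(lam)            # yearly incident counts per country-year

glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(glm.summary())            # coefficients give multiplicative risk factors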

15:35
Application of Data-Driven Bayesian Belief Network for the Analysis of Factors Contributing to Risk of Civil Aircraft Shooting Down Over Conflict Zones

ABSTRACT. Aviation security incidents such as the shooting down of civilian aircraft over conflict zones remain one of the most significant challenges for the civil aviation industry. While industrial regulations do not provide a standardized objective risk assessment methodology, a significant array of publicly accessible data is available for analysis with the help of machine learning algorithms. This study demonstrates the possibility of utilizing data-driven Bayesian Belief Networks to develop a probabilistic model and identify the factors influencing aviation security events.
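As a minimal, hedged stand-in for learning one node of such a data-driven BBN, the snippet below estimates a conditional probability table from a labelled incident table; all column names and values are hypothetical.

import pandas as pd

df = pd.DataFrame({
    "conflict_zone": [1, 1, 1, 0, 0, 1, 0, 1],
    "notam_issued":  [0, 1, 0, 0, 1, 1, 0, 0],
    "event":         [1, 0, 1, 0, 0, 0, 0, 1],
})
cpt = df.groupby(["conflict_zone", "notam_issued"])["event"].mean()
print(cpt)   # conditional probability table feeding one BBN node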

15:50
Critical Convergence for enhanced safety: A Literature Review on Integrated Cybersecurity Strategies for Information Technology and Operational Technology Systems within Critical Infrastructure

ABSTRACT. Cyberattacks targeting critical infrastructure highlight that both information technology (IT) systems and industrial control systems (ICS) are vulnerable to cyber security events, and that cyberattacks targeting IT can have effects on ICS and vice versa. These events indicate a need for an improved understanding of the similarities and differences between IT security and ICS/operational technology (OT) security. This paper explores the technological aspects of tools, methods, and approaches used to secure IT and OT systems, and the crisis decision-making processes related to management, strategy, organization, and governance. This exploratory study follows a PRISMA-based literature review that gathers academic articles from the Web of Science database. We discuss fifteen papers on the similarities between IT and OT systems in terms of security needs, and on the significant differences between the two that must be considered. The paper explores the trade-offs in applying IT-focused cyber security tools and approaches to ICS and OT. Results are discussed in terms of two main research questions: (RQA) what are the similarities and differences between IT and OT security, and (RQB) how can these disparities be effectively addressed to protect these systems from cyberattacks? We conclude by outlining future research directions aimed at expanding on the findings of research questions A and B.

14:35-15:50 Session 18I: S.20: Natural Language Processing, Knowledge Graphs and Ontologies for RAMS
14:35
Human Factor Detection in Aviation Accidents Using NLP
PRESENTER: Plínio Ramos

ABSTRACT. Aviation accidents are likely to result in the loss of several lives, and even though the number of yearly accidents has decreased over the past decades, human factors have taken the lead as the main latent cause of incidents overall. Moreover, the aviation industry collects vast amounts of data from various sources and in multiple formats, including written accident investigation reports that retain knowledge that can be explored to support decision-making. For this reason, the application of natural language processing (NLP) seems attractive to support experts in performing risk analysis, allowing them to propose effective preventive measures. In this paper we describe a methodology to assist in the identification of human factors leading to aviation accidents. The methodology involves training a state-of-the-art NLP classifier and describes an approach to label the accident dataset, fundamental for the development of the classifier, with less effort, that is, without having to manually analyze each accident description. To achieve that, we adopted contextual embeddings, differing from the studies found in the systematic literature review, and applied topic modeling to identify human factors categories, then labeled the accidents according to the identified categories. The trained classifier would certainly be useful for the identification of human factors causing accidents. In addition, the prediction provided by the model can also support experts in identifying errors made in previous analyses of the causes of accidents, allowing the database to be corrected. Thus, we believe that such contributions would be beneficial in practice for experts to identify common causes of human failure and gain useful insights to propose preventive measures and training plans to reduce the risk of human failure.
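A hedged sketch of the weak-labelling idea (contextual embeddings plus topic clustering); the embedding model name and the snippets are illustrative assumptions, not the paper's dataset or exact pipeline.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

snippets = [
    "pilot fatigue after an extended duty period",
    "crew miscommunication during the go-around",
    "misread altimeter setting on final approach",
    "maintenance step skipped under time pressure",
]
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(snippets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
for text, lab in zip(snippets, labels):
    print(lab, text)   # cluster ids act as weak labels for the classifier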

14:50
NLP Advances in Risk Analysis Context: Application of Quantum Computing
PRESENTER: July Bias Macedo

ABSTRACT. The practice of Risk Analysis (RA) is crucial to effectively guide investments for the prevention and mitigation of potential risk events. Shortfalls in RA, such as in accuracy, decision-making and communication, can have negative effects on the economy, society, the environment and business image. The risk models developed have been changing rapidly; one reason is the advance in computing performance and the ability to record, store and process massive amounts of data. Another reason is the breakthroughs in the field of artificial intelligence that have enabled the efficient extraction of information from complex, high-dimensional and unstructured datasets. Thus, NLP has been successfully applied in different fields such as healthcare, marketing, education, and industry. NLP covers a wide variety of topics from computer science to linguistics and includes any manipulation of natural language that allows computers to generate statements and/or words written in human languages. In this context, Quantum Computing (QC) and Machine Learning (ML) represent two of the most significant fields of computational science that have emerged over the last half century. Although fully scalable QC has not yet been achieved, the availability of noisy intermediate-scale QC devices means that near-term computation on quantum devices using quantum algorithms has become a reality. QC can tackle (i) classically intractable problems and (ii) problems which, while tractable, are classically infeasible (due to resource constraints); these characteristics motivate the use of QC in ML. In addition, since NLP problems require significant computational resources to infer meaning from text, it is worth developing methods to examine such problems using QC. Thus, this paper proposes a Quantum Natural Language Processing (QNLP)-based methodology to predict the severity of aviation accidents using the accident narratives. Here we use a database consisting of accident investigation reports performed by the National Transportation Safety Board (NTSB) from 1982 to 2022.

15:05
Using Natural Language Processing to Generate Risk Assessment Checklists from Workplace Descriptions
PRESENTER: Adnane Jadid

ABSTRACT. In Germany, risk assessments are estimated to be missing for up to every second workplace (Arbeitsschutzkonferenz 2014), even though they are mandatory by law. In light of the growing introduction of innovative technologies (Barth, Eickholt et al. 2017) and the overall lack of adequately trained personnel (IFA 2021), this problem will likely only get worse. As checklists for hazard identification and risk assessment are the most common tools for occupational safety practitioners, we set our focus on improving their use. Although checklists are a useful assessment tool, their availability and applicability leave much to be desired. On the one hand, often only generic and static checklists for broad categories of workplaces are available. On the other hand, since the available checklists are developed to cover as much as possible, some of their items might not be applicable at all. Therefore, for complex workplaces (fitting more than one workplace category), several static checklists have to be merged and adjusted, while for unique workplaces (with no immediate category fit), a checklist would need to be compiled from all the potentially applicable legislation, guidelines, and regulations. As such, due to the large volume of material and the lack of a systematic approach, assuring completeness and consistency of the final checklist requires a large effort. We propose to use Natural Language Processing (NLP, see e.g. Chowdhary (2020)) to generate tailored checklists from textual workplace descriptions. The algorithm is based on the work of Martinc, Škrlj et al. (2022) and Westhoven and Jadid (In Press) and compares all the available checklists and the workplace description to identify matches, dependencies and contradictions between the clauses, yielding a list of necessary workplace checklist items together with an additional list of potentially fitting items. As the final decision to include each item is left to the user, the algorithm also collects feedback to improve the quality of future proposals. In this paper, we show how to generate a custom-tailored checklist, which incorporates all the definitely and potentially suitable items while excluding the unnecessary ones, with the use of NLP techniques.

References Arbeitsschutzkonferenz (2014). Grundauswertung der Beschäftigtenbefragung 2015 und 2011 - beschäftigtenproportional gewichtet.

Barth, C., et al. (2017). Bedarf an Fachkräften für Arbeitssicherheit in Deutschland. Dortmund.

Chowdhary, K. (2020). "Natural language processing." Fundamentals of artificial intelligence: 603-649.

IFA (2021). Arbeitswelten. Menschenwelten. Prioritäten für den Arbeitsschutz von morgen. Berlin.

Martinc, M., et al. (2022). "TNT-KID: Transformer-based neural tagger for keyword identification." Natural Language Engineering 28(4): 409-448.

Westhoven, M. and A. Jadid (In Press). Supporting Work Place Risk Assessments by Means of Natural Language Processing. 69th GfA Frühjahrskongress. Hannover, Germany, Gesellschaft für Arbeitswissenschaft.
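As a simplified stand-in for the transformer-based matching cited above, the sketch below ranks checklist items against a workplace description by TF-IDF cosine similarity; the items and description are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = ["guard rotating machine parts", "label chemical containers",
         "check ladder stability"]
description = "small workshop with a lathe and storage of cleaning chemicals"

vec = TfidfVectorizer().fit(items + [description])
scores = cosine_similarity(vec.transform([description]), vec.transform(items))[0]
for item, s in sorted(zip(items, scores), key=lambda t: -t[1]):
    print(f"{s:.2f}  {item}")   # highest-scoring items are proposed first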

15:20
Extending safety control structures: a knowledge graph for STAMP
PRESENTER: Francesco Simone

ABSTRACT. The increasing interactions among technical components and human agents in modern industrial systems pose new challenges for safety management, demanding new approaches that complement techno-centric investigations with socially oriented analyses. In these scenarios, a detailed accident analysis that looks beyond immediate failures and extends towards physical, cyber and socio-technical aspects becomes crucial. Systems-Theoretic Accident Model and Processes (STAMP) was developed as an accident model that uses systems theory to arrange a causality model focusing on system hazards. However, if applied at a larger socio-technical scale, the model can become hard to manage, making safety evaluations burdensome. The inner nature of a STAMP model, which maps connections (feedbacks and control actions) among system elements, matches the principles of a graph representation, made up of vertices and edges mapping connections. This correspondence may then enable the exploitation of a STAMP-driven graph to guide safety assessments by systematic graph analyses. This paper explores the possibility of deriving a knowledge graph from a STAMP safety control structure and using it as a key element for subsequent hazard analyses. The study is instantiated on a case study related to the inspection (based upon the Seveso III directive) of a Seveso company. The analysis is meant to highlight the safety requirements needed to adapt the inspection procedure to possible future changes, as promoted by the energy transition. The obtained results show the feasibility of relying on such tools to empower (or possibly update) a Safety Management System using systemic units of analysis.
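A minimal sketch of mapping a STAMP control structure to a typed directed graph, with one systematic check (control actions lacking a feedback path); the node names are illustrative, not the paper's case study.

import networkx as nx

g = nx.DiGraph()
g.add_edge("Regulator", "Plant operator", kind="control action")
g.add_edge("Plant operator", "Process", kind="control action")
g.add_edge("Process", "Plant operator", kind="feedback")
g.add_edge("Plant operator", "Regulator", kind="feedback")

# Systematic check: control actions lacking a matching feedback path.
missing = [(u, v) for u, v, d in g.edges(data=True)
           if d["kind"] == "control action" and not g.has_edge(v, u)]
print(missing)   # empty here; any hit flags a gap in the control structure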

15:35
A Method based on Natural Language Processing for Periodically Estimating Variations of Performance of Safety Barriers in Hydrocarbon Production Assets

ABSTRACT. This work develops a methodology for estimating the performance of the safety barriers of hydrocarbon production assets using reports of Process Safety Events (PSEs). We address the challenge of dealing with assets that evolve, e.g., due to degradation of components or maintenance interventions, so that the performance of the safety barriers also varies in time. The proposed methodology combines a taxonomy of the words used in the reports, a method of Natural Language Processing (NLP) for the extraction of the keywords, which takes into account the time of occurrence of the PSEs, and a technique for estimating barrier performance from the number and severity of PSEs involving the barrier. The proposed methodology is validated using a repository of reports of PSEs in hydrocarbon production plants.
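As a toy version of the final estimation step (not the paper's model), the snippet below discounts a barrier's performance score by the number and severity of PSEs involving it in a period; the weights and decay form are assumptions.

SEVERITY_W = {"low": 0.2, "medium": 0.5, "high": 1.0}

def barrier_performance(event_severities, alpha=0.15):
    # event_severities: severity labels extracted from the period's PSE reports.
    burden = sum(SEVERITY_W[s] for s in event_severities)
    return max(0.0, 1.0 - alpha * burden)   # 1.0 = nominal performance

print(barrier_performance(["low", "medium"]))        # 0.895
print(barrier_performance(["high", "high", "low"]))  # 0.67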

15:50-16:20Coffee Break
16:20-17:35 Session 19A: S.23: Dependent failure behaviour in risk/reliability modelling, maintenance and Prognostics and Health Management (PHM) I

This special session aims to gather researchers to discuss recent methodological advances in the study of dependencies modelling and its application in risk/reliability, maintenance and PHM. Innovative applications that addresses the issues of dependencies in engineering practices are also welcomed. We expect contributions on the following topics: Dependent failure modelling, dependent competing failure processes, degradation-shock-threshold models, copula, frailty models, multiple dependent degradation processes, Bayesian network, common cause failure, cascading failure, load-sharing, remaining useful life prediction considering dependencies, condition-based and predictive maintenance considering dependencies.

Location: Room 100/3023
16:20
Data-Driven Approaches for Operation and Maintenance Decision on Power Generation
PRESENTER: Herry Nugraha

ABSTRACT. The paper proposes a decision-making system based on Proactive Operation & Maintenance (POM) data-driven approaches, supported by Condition-Based Maintenance (CBM) and Statistical Data Analysis (SDA). The CBM approach comprises both online and offline data acquisition methods. To develop the system, the online CBM data acquisition involves the analysis of real-time sensor data at the power plant assets, which are categorized based on their numerical variables and then mapped onto the asset register. Offline CBM data acquisition, on the other hand, is conducted by performing in-situ measurements at the power plant sites. Statistical analyses of failure data derived from the Computerized Maintenance Management System (CMMS) are also utilised to support the decision system. A POM approach supported by data-driven methods and set-point values of the controllable parameters is developed. In this decision-making system, the POM approach, through the combination of CBM and SDA, is structured around proactive online recommendations and a comprehensive analysis space, followed by expert judgment. This study presents a novel strategy for the creation and implementation of a proactive decision-making system in the context of power plant operation and maintenance. It presents a decision-making paradigm that encompasses CBM and SDA, which can raise confidence in expert judgment, as well as in the types of recommendations that can be made. The results show that the performance of the power plants increased.

16:35
Statistical analysis of offshore wind turbine errors
PRESENTER: Wanwan Zhang

ABSTRACT. This paper aims to explore the characteristics of offshore wind turbine errors. Four hypotheses are put forward and rigorously tested through a variety of statistics. The Cox model is selected to model the dependency of the failure process on covariates, such as weather and production. Considering the time variation and correlation of covariates, three forms of covariate matrices are designed and their coefficients estimated. Results show that wind strongly increases the baseline hazard and that the impact increases with accumulation time. Contrary to expectations, temperature and production conditions mildly decrease the baseline hazard. The coefficients of the principal components (PCs) show that only two PCs act like wind and temperature, while PC0, which contains the most information, does not contribute. Goodness-of-fit tests are passed and the results are fully discussed. They show that the three covariates can explain the fluctuation of error counts and that the application of the Cox model is successful.
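A hedged sketch of such a covariate analysis with the lifelines library, fitted on synthetic data; the column names and generating model are assumptions, not the paper's SCADA records.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
wind = rng.weibull(2.0, n) * 8.0
temp = rng.normal(10.0, 5.0, n)
hazard = 0.05 * np.exp(0.15 * wind - 0.03 * temp)   # assumed generating model
df = pd.DataFrame({"T": rng.exponential(1.0 / hazard), "E": 1,
                   "wind": wind, "temp": temp})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()   # a positive wind coefficient raises the baseline hazard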

16:50
An asset management framework for wind turbine blades considering reliability of monitoring system

ABSTRACT. In the context of the rapid development of wind energy, developing robust asset management modelling tools to minimize wind turbine operation and maintenance costs and assure reliability and sustainability is of paramount importance. In this study, a wind turbine (WT) blade asset management (AM) Petri net (PN) model incorporating risk-based maintenance and structural health monitoring (SHM) processes is presented. Firstly, PN modules cover the entirety of the blade AM process, describing degradation, inspection, condition monitoring, and maintenance processes. The PN model is used to predict the future blade condition for a given AM strategy and provide information to support AM decision-making for blades during WT operation. Secondly, the reliability of the monitoring system is considered by calculating the expected information gain/loss of the sensor network based on a Bayesian inverse approach. The effect of the monitoring system's accuracy on the monitored outputs can thus be quantified. Finally, results related to monitoring outcomes and their detection times are given in detail.

17:05
Deep Learning Models Applied to Intelligent Diagnosis of Rotating Machines
PRESENTER: Lavinia Araujo

ABSTRACT. In an era of digitalization, much of everything that exists is being monitored [1]. The advancement of technology has enabled the development of hardware that allows the creation of increasingly intelligent machines. In this scenario, artificial intelligence (AI) algorithms occupy a prominent space, due to their ability to evaluate large amounts of data, adjusting to the specificities of the evaluated problem [1]. Within this industrial digitalization process, an interesting application of AI algorithms is intelligent diagnosis applied to rotating machines [2], since this type of equipment is present in several industrial processes and is especially susceptible to failures due to constant mechanical stress and the environments to which it is subjected [3]. In this way, algorithms capable of assisting in the understanding of failure modes under specific conditions can be useful tools in maintenance management and help avoid costly breakdowns. This propitious scenario has boosted the field of Prognostics and Health Management (PHM) [4]. Various machine and deep learning models used in the detection of anomalies and failures are found in the literature [2, 5]. Each author builds a different architecture based on the main deep learning models, changing functions, parameters and normalizations, and uses different databases, which makes a fair comparison between the models impossible [2]. Hence, this work proposes a brief review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to identify works in the literature that perform intelligent diagnosis on available datasets of rotating machines using deep learning algorithms. After this review, this work also presents new results from the use of models such as the multilayer perceptron (MLP), auto-encoder (AE), convolutional neural network (CNN) and recurrent neural network (RNN), making direct comparisons of the results obtained with the results found in the literature. The datasets used as input were pre-processed using the Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT) and Continuous Wavelet Transform (CWT). To support the discussion of the results, confusion matrices and accuracy and loss graphs were generated for all combinations of models and input types.

References
[1] Rezaeianjouybari, B. and Shang, Y. Deep Learning for Prognostics and Health Management: State of the Art, Challenges, and Opportunities. Measurement, 163, 107929, 2020.
[2] Zhao, Z., Li, T., Wu, J., Sun, C., Wang, S., Yan, R. and Chen, X. Deep Learning Algorithms for Rotating Machinery Intelligent Diagnosis: An Open Source Benchmark Study. ISA Transactions, 107, 224-255, 2020.
[3] Heng, A. et al. Rotating machinery prognostics: State of the art, challenges and opportunities. Mechanical Systems and Signal Processing, 23, 3, 724-739, 2009.
[4] Fink, O., Wang, Q., Svensén, M., Dersin, P., Lee, W.-J. and Ducoffe, M. Potential, Challenges and Future Directions for Deep Learning in Prognostics and Health Management Applications. Engineering Applications of Artificial Intelligence, 92, 103678, 2020.
[5] Zhang, B. et al. Intelligent Bearing Fault Diagnosis Based on Open Set Convolutional Neural Network. Mathematics, 10, 21, 3953, 2022.
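For illustration, a minimal sketch of the FFT preprocessing step named above: turning a raw vibration window into a normalised magnitude spectrum that a classifier could consume. The sampling rate and test tone are assumptions.

import numpy as np

def fft_features(window, fs=12_000.0):
    # Raw vibration window -> normalised magnitude spectrum for a classifier.
    spec = np.abs(np.fft.rfft(window * np.hanning(window.size)))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    return freqs, spec / spec.max()

sig = np.sin(2 * np.pi * 157.0 * np.arange(2048) / 12_000.0)  # fake fault tone
freqs, spec = fft_features(sig)
print(freqs[spec.argmax()])   # peak recovered near 157 Hz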

17:20
Thermal Influence on Plastic Optical Fiber: A Reliability Diagnose

ABSTRACT. In recent years, the phenomenon called the Fourth Industrial Revolution has promoted an intense digitalization of data and the automation of several processes in real time, bringing concepts such as the internet of things, smart cities and cloud computing [1]. In this scenario, optical fibers have become key elements as optical sensors or, mainly, in composing optical networks, due to the possibility of transmitting data at high frequency over long distances without large losses [1]. In the specific case of optical fiber sensors, other important points have also contributed to their popularization. Firstly, more than a decade ago, optical fiber already appeared as the main technology in telecommunications, making it present everywhere and allowing the reuse of part of this structure for sensing [2]. Moreover, the same features behind the success of optical fiber networks over traditional electrical networks, such as immunity to electromagnetic interference, a high degree of flexibility and passivity, also boost interest in optical fiber sensor development [3]. This last feature, for example, allows the application of fiber-based devices in aggressive environments where electrical devices face operational limitations, such as explosive atmospheres, corrosive media or submerged conditions [4]. In the face of this increased interest in optical fiber devices, different types of optical fibers were developed for different applications, such as the one composed of poly(methyl methacrylate), or simply PMMA, also called plastic optical fiber (POF) [4]. This type of optical fiber is very popular due to its high refractive index and great mechanical resistance relative to other types of optical fibers, but it has an important limitation: a narrow operating temperature range [4]. This work therefore presents a reliability analysis based on a brief study of how thermal effects may affect the operation of POF-based devices, and aims to identify failure modes through accelerated life tests. The study was subdivided into two steps: firstly, the optical power level was measured during the entire heating process to failure; secondly, microscopy analyses were performed to locate possible mechanical changes in the POF structure. Two different accelerated life test methodologies were performed: tests at a constant temperature and tests with uniform temperature increments.

References:
[1] Pratigya, S. et al. Influencing the trend of Industry 4.0 using Optical Fiber Technology. In: 2021 International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA). IEEE, 2021, p. 1-6.
[2] Aono, Y., Ip, E. and Ji, P. "More Than Communications: Environment Monitoring Using Existing Optical Fiber Network Infrastructure," in Optical Fiber Communication Conference (OFC) 2020, OSA Technical Digest (Optica Publishing Group, 2020), paper W3G.1.
[3] Haus, J. Optical Sensors: Basics and Applications, 1st ed., Weinheim: WILEY-VCH Verlag GmbH & Co. KGaA, 2010.
[4] Broadcom Corporation (2016). HFBR-RXXYYYZ Series (POF) and HFBR-EXXYYYZ Series (POF) - Plastic Optical Fiber Cable and Accessories for Versatile Link - Datasheet. Broadcom Ltd.
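As a hedged aside on how such accelerated tests are commonly translated to use conditions, the snippet below computes an Arrhenius acceleration factor; the activation energy is an assumed generic value, not a measured POF property.

import math

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7, k_ev=8.617e-5):
    tu, ts = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / k_ev) * (1.0 / tu - 1.0 / ts))

# One hour at 85 °C is 'worth' roughly this many hours at 25 °C:
print(arrhenius_af(25.0, 85.0))   # ≈ 96 for the assumed 0.7 eV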

16:20-17:35 Session 19B: S.17: Human Factors in Natural Hazard Preparedness

S.17: Human Factors in Natural Hazard Preparedness

Chair:
16:20
Healthcare Workers' Perceptions of Disaster Risk Management in Saudi Arabia Hospitals
PRESENTER: Shahad Alshehri

ABSTRACT. This study examines the effectiveness of Disaster Risk Management (DRM) strategies in public hospitals across Saudi Arabia, focusing on the perspectives of healthcare workers (HCWs), such as doctors, nurses, and administrators. Despite Saudi Arabia's significant DRM efforts, an empirical assessment of their effectiveness, particularly from the frontline workers, is still needed to identify potential areas of improvement. The inherent subjectivity and potential biases in risk perception notwithstanding, these findings could guide future policy improvements. A cross-sectional study was conducted using a descriptive, quantitative, non-probability sampling method among HCWs in 22 public hospitals across four regions of Saudi Arabia: Eastern, Western, Southern, and Central. These hospitals were selected based on their size, range of services, and geographical diversity. The HCWs were asked to evaluate the four DRM phases—Mitigation, Preparedness, Response, and Recovery—through questionnaires administered via Qualtrics and distributed via email and WhatsApp. Data analysis was performed using SPSS. The results indicated that the majority of HCWs perceive the DRM strategies as efficient, with disaster mitigation and response strategies being viewed as more effective than preparedness and recovery strategies. The study also revealed significant regional differences in perceived effectiveness of DRM strategies, with HCWs in the central region perceiving their strategies as more effective, particularly in disaster mitigation and preparedness, than their counterparts in the other regions. This study offers novel insights into HCWs' perspectives on DRM in Saudi public hospitals. Despite the overall positive perception of DRM strategies, the regional disparities underscore the need for harmonized and improved disaster risk management practices. Future research should focus on understanding the factors behind these regional differences to develop interventions that strengthen disaster preparedness and response strategies nationwide.

16:35
Designing Floods Risk Messages to Motivate Adaptive Behaviours
PRESENTER: Ian Dawson

ABSTRACT. Effective risk communication is a vital part of natural hazard preparedness and risk management. Previous studies show that when individuals are faced with a forthcoming natural hazard (e.g., flood), the content and presentation of warning messages within risk communication systems can either strengthen or weaken the receivers’ intention to perform the desired preparatory behaviours. Furthermore, evidence suggests that existing risk communication approaches have often been ineffective at cultivating disaster risk awareness or motivating adaptive behaviours. More specifically, there has been a lack of research on how variations in the content of action guidance (detailed v. vague) and the framing (negative v. positive) of warning messages influence the recipient’s intention to prepare for natural hazards, particularly for floods in developing countries. To address this issue, we conducted a study with Jordanian participants (N = 378) that measured the influence of perceived risk, level of message detail, and message framing on the willingness to prepare for a flood. The results revealed a significant increase in the participant’s intention to prepare for a flood when the warning message (i) included detailed (cf. vague) guidance on flood preparation actions and (ii) described the outcome of preparation actions in a negative (cf. positive) frame. Moreover, it was identified that participants’ risk perceptions increased when the messages were detailed (cf. vague), and that there was a positive relationship between the perceived risk of a flood and the willingness to prepare for it. The results of this study enable us to provide disaster risk management authorities (e.g., Jordanian General Directorate of Civil Defence [GDCD]) with important insights into how flood risk communications could become more effective in influencing the willingness of citizens to prepare against future floods.

16:50
The Influence of Self-Efficacy, Sense of Community and Past Experience on Flood Risk Awareness and Preparedness
PRESENTER: Ian Dawson

ABSTRACT. Developing countries are frequently affected by severe floods and their level of flood risk preparedness is often minimal. This situation is not helped by a lack of research exploring the factors that influence flood risk awareness and preparedness among citizens in developing countries. To address this issue, we conducted a study that examined the relationship between self-efficacy, sense of community, experience, and flood risk preparedness in the developing country of Jordan. Our questionnaire was completed by 300 adult residents in the four Jordanian cities of Amman, Madaba, Ma’an and Balqa, each of which had been severely impacted by flood disasters in 2018 and 2019. Multiple regression analysis identified a significant positive relationship between flood risk preparedness and self-efficacy, sense of community, and experience. The strongest of these relationships was with self-efficacy, which had a correlation of r = 0.481, p < 0.01. This particular relationship may have existed because individuals with higher self-efficacy are often those who are better empowered to instigate a greater quality and quantity of actions against disasters. Also, as indicated by previous research conducted in other contexts, individuals with higher self-efficacy may have a greater ability to self-regulate their behaviours, have more confidence to participate in riskier situations and, therefore, may be better equipped to handle the negative emotions that might arise during floods. Our results also indicated that many of our Jordanian respondents did not take flood risk warnings seriously and often ignored governmental risk communications, possibly because trust in governmental entities and the perceived effectiveness of risk warning and communication systems are relatively low in Jordan. Our findings suggest that flood risk preparedness in Jordan could be improved by increasing self-efficacy and risk awareness. This might be achieved via a variety of communication channels and training approaches, as well as through the development of local and national flood emergency plans that can be established and implemented by individuals and regional communities. Our recommendations for further research include quantitative or qualitative studies to better understand the connection between flood risk preparedness and training, and determining how to improve disaster risk warning systems for individuals and communities in developing countries.

17:05
Household disaster preparedness in Istanbul

ABSTRACT. Disaster preparedness systems can be considered hierarchical, as high-level strategies are developed by governmental bodies and the relevant tools are developed by administrative units. In this multi-layered structure, instructions, manuals, training activities, subsidy projects and opportunities encourage communities to increase their coping capacity against natural hazards. Among these factors, household priorities, such as the household economy, health problems, or a person in need of care, play crucial roles in mitigation activities in terms of diversity and cost. Consequently, it is worth noting that the willingness to reduce risks and to prepare for disasters in households does not solely depend on individual features and perceived levels of risk. After the destructive earthquakes of 1999 in Turkey, remarkable changes in regulations, organizations and public awareness have been made to shift disaster management towards risk reduction. At institutional and professional levels, new regulations and implementation tools have been issued since the 2000s. At community level, an earthquake insurance system has been launched, regeneration projects have been developed and training activities have been disseminated nationwide. This study presents findings of three comprehensive surveys of Istanbul, conducted in 2008, 2013 and 2019, to discuss the earthquake mitigation activities of citizens. The changes in these activities are evaluated against the changes in the disaster risk reduction system in Turkey since the 2000s.

17:20
Opportunities and barriers for effective communication of natural hazards: Cross-cultural experiences

ABSTRACT. All over the world, recent risk events related to climate change have demonstrated the vulnerability of communication and warning networks. During the Ahrweiler flood in Germany in July 2021, a lack of risk perception and a barrier in communication may have resulted in delayed evacuation and in the loss of more than 140 lives. Similarly, the 2021 heat waves and wildfires in North America and Europe have raised the question of whether establishing a better risk communication network in the preparation phase would help to reduce losses. Even though the theoretical background for preparing for unanticipated communication barriers has been established in the IRGC framework for systemic risks for years, practical examples of a method to overcome such regional and local barriers are still very scarce, especially in multi-risk situations. To identify and overcome unanticipated communication barriers, we developed a virtual stakeholder process and tested it in different countries. In all four countries, stakeholders worked together on recommendations to prepare for unanticipated barriers in natural hazard communication. The project in Peru (German funding) had a special focus on multi-risks (flash floods in combination with tsunamis and the loss of critical infrastructure). The structured virtual method of talking with stakeholders about the uncertainties of the scientific models offers the possibility to work on barriers and communication recommendations that are highly specific to countries and their characteristic risks. This appears to be much more helpful than generic rules, which have been developed from stakeholder processes in different geographical, cultural and political contexts and imposed on situations in other countries where they are not appropriate. The stakeholders are confident that these recommendations may help to overcome the risk communication barriers in their respective countries.

16:20-17:35 Session 19C: S.10: Advances in Reliability Engineering and Risk Management in Oil and Gas Industries I

This session focuses on the reliability of machinery exposed to the quite harsh conditions of deep-water oil wells. The Oil and Gas industry has been digitalized, allowing for data availability and integrated databases to improve well design, technical specification, maintenance, and operational decisions. This Special Session comprises papers in these fields (to name a few): autonomous and remote offshore activities using digital twins for production management, development of robots for unmanned operations, prognostics and health management for predictive maintenance and real-time integrity management, and reliability of electrified fields.

16:20
RELIABILITY CRITERIA ESTIMATION OF O&G INDUSTRY EQUIPMENT IN THE CONCEPT SELECTION PROCESS
PRESENTER: July Macedo

ABSTRACT. The design and development of new products are complex processes, since these products must satisfy different criteria, such as cost, development time, lifetime and reliability, among others. After defining potential alternatives, frequently called “concepts”, that fulfill most of the criteria, these concepts need to be compared, ranked and/or selected. However, this process is not a trivial activity - the criteria are not met in the same way by the different concepts, and a systematic comparison is needed to select the most suitable alternative. Moreover, the selection of concepts becomes even more challenging at an early stage of the development of novel products, when a limited amount of data is available for the concepts under analysis, making it difficult to quantify some criteria. For instance, the reliability of the concepts evaluated can be a relevant criterion. However, little or no reliability data is available at the concept selection stage. Indeed, many studies do not detail the reliability criterion when estimating it and, in some cases, consider it only in a qualitative way. However, it is possible to obtain a prior distribution of the reliability of each concept with data from expert opinion and/or data from similar technologies. In this sense, this study aims to define the reliability criteria based on Bayesian prior distributions in the concept selection process. Thus, the proposed methodology includes reliability in the selection process in a quantitative manner and also incorporates the related uncertainty. We apply multicriteria decision-making methods, such as the Pugh Matrix and the Weighted Rating Method (WRM), to encompass the different criteria. We present a case study in which three different concepts of oil well equipment are compared. Besides the reliability criteria, costs, flexibility and development are evaluated as well. The results can help the parties involved in the process to base decisions on more robust reliability criteria, enabling the selection of more credible equipment and contributing to the efficiency of the industry's end activities.

16:35
Proposal of a test protocol for reliability assessment of the new all-electric intelligent completion interface
PRESENTER: Eduardo Menezes

ABSTRACT. One of the major breakthroughs in the O&G industry is the complete electrification of the completion. In this regard, a whole new group of equipment has been developed for implementing electric completion. One of the most critical is the subsea interface (SI), whose role is to provide the subsea valves with the necessary power and the communication layer to the topside instrumentation and control. In order to evaluate the reliability of the SI, a complete test protocol must be designed, encompassing mechanical, thermal, electronic and electrical testing, since the equipment involves different mechanical support parts, power electronics and communication devices. In this work, a test protocol for the SI is elaborated according to the most recommended guidelines, academic research and reliability considerations. The protocol allows gathering the data used in a reliability model developed to estimate the SI's reliability characteristics.

16:50
Methodology for extracting reliability parameters from the qualification standard tests ISO-23936

ABSTRACT. In the O&G production chain, elastomers are widely used in several applications, often as critical components. In this context, service providers and equipment manufacturers are required to standard-proof their products and systems with respect to elastomeric properties, especially tensile strength and elongation at break. This process is mandatorily governed by the qualification standard ISO 23936, which establishes the baseline procedures for the testing of elastomers. The standard determines the pressure, temperature, loads and number of specimens required for the qualification process of the elastomer and, as a result, presents a procedure for lifetime estimation under the tested conditions. However, ISO 23936 does not present guidelines to extract a reliability measure from the samples tested under many different conditions. In this work, a methodology for using the data coming from the ISO 23936 tests to obtain reliability estimates for elastomers is presented, and an O&G industry case study is performed. Additionally, the traditional ISO 23936 lifetime approach is discussed alongside reliability methods, emphasizing the difference between them.

17:05
Automated Well Control Improves Reliability and Reduces Risk in Well Construction

ABSTRACT. Drilling is a process which has traditionally been manually controlled. In all well operations, the major accident hazard of losing well control, resulting in an uncontrolled flow of reservoir fluids to surface and a subsequent fuel-fed fire, or blowout, can occur. The process of controlling this hazard has also been manual and is therefore subject to significant human factors issues. Automating the well control function is perceived as a significant improvement in reliability and reduces risk for drilling operations.

Well control is a safety critical function in upstream operations. Each year we have multiple blowouts and several fatality events due to a loss of well control. Traditionally, well control has been entirely reliant on a human reliably and accurately detecting an influx and shutting-in the well. However, the human condition means the Driller can be distracted, or unexpectedly influenced by extraneous factors. Over-reliance, then, on humans in well control can be dangerous, because of the inherent and constant exposure to human factors risk. Tasks that require sustained periods of cognitive awareness and reaction are best performed by an automated process.

An Automated Well Control system has been designed to fully automate influx detection and shut-in sequences. Once the system detects an influx, it performs a series of operations by taking control of the drilling rig equipment. The drill string is spaced out, the top drive and mud pumps are stopped, and the BOP is closed.

A comparative study has been performed which determines the reduction in exposure to human factors by automating the process of well control. The results indicate a reduction of 94%, which is an improvement of two orders of magnitude.

Multiple rig trials on test rigs, using both traditional and cyber rigs, have demonstrated the effectiveness of the standard system, proving its functionality under different operational requirements. A technology qualification exercise was conducted, and the system has received a Technology Qualification Certificate for cyber and traditional rigs.

Existing systems in use on rigs, such as Managed Pressure Drilling (MPD) and Early Kick Detection Systems (EKDS), can also benefit from linking directly with the Automated Well Control system to facilitate a fast and effective shut-in. The Automated Well Control and MPD systems have been combined to create the first integrated package to deliver both pressure control and well control in an efficient and less error-prone manner. A full rig trial was successfully performed to demonstrate and verify the integration and functionality of both systems.

The technology has been awarded a patent by the UK Patent Office.

The presentation will describe the system design, functionality, and rig trial results. It will also summarise several other areas of qualification: the total cost of risk for well control, human factors exposure reduction, technology qualification, industry documentation compliance, rig trials and results.

17:20
Novel Implementation of an FPGA-based Real Time Impedance Spectroscopy for Highly Integrated Reliable Safety Systems
PRESENTER: Markus Walter

ABSTRACT. The power supply of safety-critical and highly reliable applications such as subsea production systems [1], [2] and medical technologies [3] requires a reliable and safe energy supply over the whole service life.

To guarantee functionality, highly reliable and accurate methods for diagnosing the energy supply during operation, especially in safety-critical systems, are essential. Electrochemical Impedance Spectroscopy (EIS) is a well-known method to estimate the State of Health (SOH) [4] and State of Charge (SOC) [5] of lithium batteries. An accurate estimation of the battery's current state can be made because the impedance measurement reflects chemical degradation and reaction potential [4], [5], [6]. Other analysis methods do not provide such in-depth insight [7].

EIS is a non-destructive, online-feasible and accurate method that can be integrated into power electronics [6]. However, additional measurement equipment and a microprocessor are required to calculate the Fourier transformation, which is undesirable considering reliability, system cost, system size and hardware complexity.

To solve these challenges, a novel FPGA-based real-time EIS algorithm is proposed. By calculating the Fourier transformation in parallel with the current injection, additional measurement equipment becomes obsolete. This significantly reduces the system size and increases the system reliability.

A case study using an RC circuit is performed to show the functionality of the real-time EIS implementation. The benefits and limitations of the EIS implementation as well as future use cases are discussed.
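Although the paper targets an FPGA, the core computation can be illustrated in a few lines. The following sketch (Python, with invented signal parameters; not the authors' implementation) estimates the complex impedance at one injected frequency from sampled current and voltage using a single-bin DFT, the operation the FPGA pipeline computes in parallel with the current injection.

import numpy as np

fs = 10_000.0                 # sampling rate [Hz] (assumed)
f0 = 100.0                    # injected excitation frequency [Hz] (assumed)
t = np.arange(2000) / fs      # 2000 samples = 20 full cycles at f0

# Stand-in measurements: injected current and the cell's voltage response.
i_meas = 0.5 * np.sin(2 * np.pi * f0 * t)                 # current [A]
v_meas = 0.04 * np.sin(2 * np.pi * f0 * t - 0.3) + 3.7    # voltage [V]

# Single-bin DFT at f0; removing the mean drops the DC bias voltage.
ref = np.exp(-2j * np.pi * f0 * t)
V = np.sum((v_meas - v_meas.mean()) * ref)
I = np.sum((i_meas - i_meas.mean()) * ref)

Z = V / I   # complex impedance at f0: expect |Z| = 80 mOhm, phase = -17.2 deg
print(f"|Z| = {abs(Z) * 1e3:.1f} mOhm, phase = {np.angle(Z, deg=True):.1f} deg")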

References
[1] T. Winter and M. Glaser, “Condition Monitoring of Next Generation Digitized Electric Subsea Actuators,” OTC-31123-MS, 2021, doi: 10.4043/31123-MS.
[2] T. Winter et al., “Empirical Reliability Analysis of a Safety-related Battery,” pp. 3622–3629, 2019, doi: 10.3850/978-981-11-2724-3_0913-cd.
[3] J. Schick, M. Glaser, and S. Huber, “Design, realization and verification of a novel knee joint actuator for robotic exoskeletons,” IEEE International Conference on Mechatronics (ICM), 2019.
[4] K. Mc Carthy, H. Gullapalli, and T. Kennedy, “Online state of health estimation of Li-ion polymer batteries using real time impedance measurements,” Applied Energy, vol. 307, p. 118210, 2022, doi: 10.1016/j.apenergy.2021.118210.
[5] Z. Wang, G. Feng, D. Zhen, F. Gu, and A. Ball, “A review on online state of charge and state of health estimation for lithium-ion batteries in electric vehicles,” Energy Reports, vol. 7, pp. 5141–5161, 2021, doi: 10.1016/j.egyr.2021.08.113.
[6] W. Choi, H.-C. Shin, J. M. Kim, J.-Y. Choi, and W.-S. Yoon, “Modeling and Applications of Electrochemical Impedance Spectroscopy (EIS) for Lithium-ion Batteries,” J. Electrochem. Sci. Technol., vol. 11, no. 1, pp. 1–13, 2020, doi: 10.33961/jecst.2019.00528.
[7] J. Dombrowski, “Review on Methods of State-of-Charge Estimation with Viewpoint to the Modern LiFePO4-Li4Ti5O12 Lithium-Ion Systems,” The 35th International Telecommunication Energy Conference, 2013.

17:35
Risk Assessment in the Implementation of Advanced Work Packaging (AWP) in the Oil and Gas Industry

ABSTRACT. This study aims to propose a risk assessment for the implementation of AWP as a method of monitoring capital projects of industrial assets by oil and gas operators, in a scenario where industrial construction companies execute these projects. With increasingly challenging schedules and complex environments, capital projects are relentlessly seeking cost optimization, risk reduction, better use of resources, and greater predictability. It is no different with projects in the oil and gas industry, especially for operators that rely on a chain of suppliers for the executive construction, assembly, and commissioning projects of their production units. Traditional methodologies for managing and monitoring these projects have already brought many advances. However, gaps remain that translate into delays and cost overruns which, when monetized, represent considerable losses. Advanced Work Packaging (AWP) was born from the union of studies focused on workforce productivity in heavy industry projects, associated with structured and multidisciplinary planning and management. The lack of risk assessment in the AWP implementation may lead to project failure. As a methodological approach, the authors conducted a case study and used a matrix to identify the opportunities, risks, and impacts of implementing and using AWP to monitor projects in a specific oil and gas operator. PFMEA is used to identify the main risks in the implementation of AWP. Experienced field specialists prepared the matrix. As a result, the study provides vital information that can contribute effectively to project managers involved in monitoring capital projects of industrial assets by oil and gas operators and help meet time, cost, and quality requirements. The contribution is significant since AWP advocates starting the project with the end in mind, allowing a holistic view of the components that integrate and materialize the project. In this context, AWP offers methods to monitor and execute capital projects focused on constructing, commissioning, and delivering complex industrial assets. Oil and gas operators need to monitor and technically supervise the execution of these projects. AWP offers a system that, in addition to providing a standard interface for dialogue between interested parties, focuses efforts on managing work packages, each addressing a key aspect necessary for the execution of a project stage, and which, integrated, give life to the asset. AWP can impact the project's success and help in understanding performance and safety during its lifecycle. Although conducted in a specific oil and gas operator, where industrial construction companies execute these projects, the study can be generalized to other companies affected by risk issues. The risks lead to waste, rework, and unnecessary energy consumption resulting from a lack of monitoring and execution of capital projects focused on constructing, commissioning and delivering complex industrial assets. The study can change the practice and thinking of professionals dealing with project risk assessment.

16:20-17:35 Session 19D: S.11: Dynamic risk assessment and emergency management for complex human-machine systems I
Location: Room 2A/2065
16:20
A methodology for updating emergency schemes by combining dynamic Bayesian networks with graphical evaluation and review technique
PRESENTER: Xuan Liu

ABSTRACT. The blowout risk in offshore drilling operations is characterized by uncertainty and complexity. Blowout accidents usually result in significant casualties, property losses, and even environmental disasters. To alleviate the consequences of accidents and evaluate the emergency risk, we propose to integrate dynamic Bayesian networks (DBN) and the graphical evaluation and review technique (GERT) to develop a risk assessment model. In the proposed methodology, we establish a topological network with the DBN to describe the failure coupling of nodes in the emergency schemes. Subsequently, the dynamic changes in the failure probability of different nodes can be obtained through failure probability analysis. To optimize emergency schemes, GERT is integrated into the sensitivity analysis to evaluate the risk of nodes in the emergency schemes. The duration of emergency operations can be optimized based on the results. The offshore capping stack, an effective deepwater blowout emergency technique, is used to demonstrate the applicability of the methodology. The results show that the proposed model is beneficial for determining emergency operations in offshore oil and gas activities.
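To make the probabilistic reasoning concrete, here is a toy Python illustration (not the paper's model; all numbers are invented) of the kind of update a Bayesian network performs for an emergency-scheme node: the marginal failure probability of a capping-stack deployment step is computed from a parent node, and diagnostic inference updates the parent's probability once a failure is observed.

p_parent_fail = 0.05                  # P(vessel positioning fails) (assumed)
p_fail_given = {True: 0.60,           # P(deployment fails | positioning fails)
                False: 0.02}          # P(deployment fails | positioning ok)

# Predictive inference: marginal failure probability of the deployment node.
p_node_fail = (p_parent_fail * p_fail_given[True]
               + (1 - p_parent_fail) * p_fail_given[False])

# Diagnostic inference: belief about the parent after observing a failure.
p_parent_given_fail = p_parent_fail * p_fail_given[True] / p_node_fail

print(f"P(deployment fails)               = {p_node_fail:.3f}")
print(f"P(positioning failed | dep. fail) = {p_parent_given_fail:.3f}")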

16:35
A brief review of systematic risk analysis techniques of lithium-ion batteries
PRESENTER: Qiaoqiao Yang

ABSTRACT. With increasing demand and a widening range of applications, the lithium-ion battery (LIB) is becoming indispensable equipment in daily life. However, accidents related to LIB-powered facilities have been reported continually. In this paper, we study LIB risk analysis techniques and battery-related emergency response. Fault tree (FT) analysis, failure mode and effects analysis (FMEA), Bayesian networks (BN) and systems-theoretic process analysis (STPA) are introduced, and the applications of the abovementioned techniques are reviewed. The advantages and disadvantages of these techniques are discussed, with suggestions for method selection. Further, the discussion of battery fire-extinguishing procedures emphasizes the standardization of emergency response and battery manufacturing. This work aims to inspect LIB risk from a systematic perspective, which can be instructive for battery system safety from the design stage to emergency disposal.

16:50
Operator Decision Strategies in Nuclear Control Centres: A Domain-Specific Information Flow Map

ABSTRACT. Nuclear power plant control room operators respond to accidents by following detailed emergency procedures. At the same time, they ensure the procedures match the accident situation, especially under unanticipated conditions due to multiple failures, automation disturbances, or unreliable indications. Operator adaptation thus allows the system to respond to unanticipated conditions. The engineering problem with adaptation is how to design and evaluate technology, training and work processes that support the decision strategies people may use when acting adaptively. Designs that do not consider flexible strategy choice will not support operator cognitive demands for the complex tasks that are not automated. Human reliability analyses that do not identify the strategies used in unanticipated conditions will not correctly assess the risks. Operators' strategies can be discovered through cognitive task analysis (CTA), for instance by performing a formative strategy analysis. Yet to date, formative strategy analyses have been relatively neglected and are not often applied. Without CTA, the task analysis will provide normative support for “the one right way to accomplish the task”, instead of tailored support for several viable and effective strategies for context-specific, unanticipated situations. This paper proposes an information flow map for emergency operation in nuclear control centres. This may facilitate the performance of CTA and thereby the ability to design systems capable of supporting operators' adaptation to unanticipated situations.

17:05
Dynamic risk assessment of train brake system failure considering the component degradation
PRESENTER: Xiaoliang Yin

ABSTRACT. The train brake system plays a vital role in train operation safety. In this paper, a hybrid model is proposed to evaluate the failure risk of the train brake system. The hybrid model, which combines fault trees with Bayesian networks, has a good logical structure and probabilistic reasoning ability for dynamic risk assessment. The fault tree model is used to identify the risk influencing factors in the brake system, while the dynamic nature of failures is captured by the dynamic Bayesian network. In particular, we evaluate the degradation associated with four common failures: insufficient braking, brake test failure, brake release failure and wheel lock. The risk influencing factors of the brake system and their relevance are also identified. A model based on the fault tree and dynamic Bayesian network for the train brake system is developed. The proposed model can capture the spatial variability of parameters and simulate the evolution of brake faults in time and space. This information is used to perform sensitivity analysis and diagnostic inference on the model. A case study is performed to demonstrate the proposed hybrid method.

17:20
State of health estimation of lithium-ion batteries based on incremental capacity curves
PRESENTER: Mengyao Geng

ABSTRACT. State of health (SOH) is adopted as a key predictor in battery management systems to ensure the safety and reliability of electric vehicles. In this paper, based on incremental capacity (IC) curves and a long short-term memory (LSTM) network with Bayesian optimization, we propose a method for SOH estimation of lithium-ion batteries. Firstly, IC curves are obtained and health features are extracted from partial IC curves. Secondly, an LSTM model is established to capture the mapping relationships between the health features and SOH. Thirdly, Bayesian optimization is applied to automatically select the hyper-parameters of the LSTM. Finally, the effectiveness and superiority of the proposed method are validated on real lithium-ion battery aging datasets from the CALCE Prognostics Data Repository.
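A minimal PyTorch sketch of this pipeline follows (architecture sizes, feature count and learning rate are our assumptions, not the authors'): IC-curve health features over cycles are mapped to an SOH estimate by an LSTM, and a Bayesian-optimization loop, only indicated in the comments, would tune the hyper-parameters.

import torch
import torch.nn as nn

class SOHEstimator(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, cycles, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # SOH estimate at the last cycle

model = SOHEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 8 cells, 50 cycles, 4 IC-curve features each (random stand-ins).
x = torch.randn(8, 50, 4)
y = torch.rand(8, 1)                      # "true" SOH labels in [0, 1]

loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
# Bayesian optimization would wrap this training step, proposing values of
# `hidden` and `lr` and keeping the configuration with the best validation loss.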

17:35
Application of Bayesian Networks for real time cyber security crisis classification in passenger ships

ABSTRACT. Similar to other industries, shipping is becoming highly dependent on Information Technology (IT) and Operational Technology (OT). Although such technology is beneficial and gradually advances the way shipping operations are conducted, its utilization involves existing and emerging risks to systems and processes, critical for the safety and security onboard vessels. These risks may arise from vulnerabilities linked to deficiencies in the design, operation, integration, connection and maintenance of the IT and OT systems, that could be exploited by an external or internal threat agent. This paper presents a cyber-risk assessment model based on Bayesian networks (BN) for real-time crisis classification of cybersecurity incidents related to detected vulnerabilities in the IT and OT systems on passenger ships. The model is part of a crisis classification module addressing various categories of security threats, which is under development for the EU-funded research project ISOLA. ISOLA aims at introducing an intelligent security superintendence ecosystem to supplement and enhance the existing ship security processes as well as the protective measures applied onboard passenger ships. Among other services, ISOLA provides functions for continuous security monitoring, including cybersecurity functions. The BN model receives specific IT and OT vulnerability data generated by a specialized ISOLA service and employs Bayesian probabilistic techniques to evaluate any identified vulnerability. According to the results, the model performs real-time crisis classification of the cybersecurity-related incident, utilising a six-level ascending scale for crisis taxonomy, and generates a relevant warning to alert the crew and facilitate early detection of potential or actual safety- and security-threatening occurrences. In the ISOLA ecosystem, the model, as part of the overall crisis classification module, is utilized by a dedicated decision-making module to support the situational awareness of the Master and crew and aid their decision-making process, especially during time-sensitive and stressful circumstances.

16:20-17:35 Session 19E: S.12: Digital Twins for hybrid Prognostics & Health Management II
Location: Room 100/4013
16:20
Learning dynamics of spring-mass models with physics-informed graph neural networks
PRESENTER: Vinay Sharma

ABSTRACT. Recently, graph neural networks (GNNs) have attracted a lot of interest in the scientific machine-learning community. GNNs have demonstrated the ability to learn the interactions between the nodes of a connected graph. Therefore, they have been increasingly applied to simulate dynamic physical systems, where, in general, the state of a component of a system evolves as a function of interactions with its neighboring components. For simulations, generalization and extrapolation are essential for predicting the long-time rollout of the system evolution trajectory. However, the generalization of current methods has not been rigorously analyzed. In addition, a clear, interpretable metric for the physical consistency of the learned solution needs to be defined.

In this research, we first propose a physically constrained framework for learning the dynamics of systems of discrete masses coupled with springs using graph neural networks. These spring-mass systems are of interest for reduced-order modeling of various physical systems such as vehicle dynamics models, tire-road interaction models, human gait and stride models, etc. We extract the stiffness matrix of the learned system trained on simulated trajectory data and apply general constraints on the structure of the stiffness matrix, namely the conditions of isotropy and positive semi-definiteness. The extracted stiffness matrix at different instances during trajectory rollout serves as an indicator of the physical consistency of the learned dynamics.

Secondly, we extend our framework to the task of detecting and localizing faults in dynamic systems by taking the measured past trajectory of the system as input. The extracted stiffness matrix of the learned system trained on past trajectory data provides a unique advantage for localizing the degraded springs or interactions between masses.

We evaluate the proposed framework on a system of coupled masses and springs modeled as graphs. The nodes of the graphs represent the masses and encode their current and past states. The edges connecting the nodes represent the springs and encode the distance vector between the connected nodes. The interactions between the nodes are encoded in messages passed along the edges. The stiffness matrix is extracted from the Jacobian of learned nodal interactions with respect to node positions. In order to test the generalizability, the proposed method has been evaluated on multiple configurations with out-of-training-range parameters like the number of masses, rigidity, and rest lengths of springs. The extracted stiffness matrix for each configuration is then compared with the stiffness matrix obtained from finite element simulations.

While in the presented work simulations of a simple system of masses connected with springs are used for validation and demonstration of the proposed framework, the framework can easily be extended to any complex system involving springs and lumped masses.
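As a concrete picture of the consistency check described above, the following sketch (plain NumPy with finite differences standing in for the Jacobian of the learned GNN interactions; spring constants are invented) extracts the stiffness matrix of a 1-D spring-mass chain and verifies symmetry and positive semi-definiteness.

import numpy as np

k = np.array([5.0, 3.0, 4.0])      # spring constants of a 4-mass chain (assumed)

def forces(x):
    """Elastic forces on each mass for positions x (rest lengths zero)."""
    f = np.zeros_like(x)
    for i, ki in enumerate(k):
        ext = x[i + 1] - x[i]      # spring extension
        f[i] += ki * ext
        f[i + 1] -= ki * ext
    return f

def stiffness(x, eps=1e-6):
    """K[i, j] = -d f_i / d x_j via central finite differences."""
    n = len(x)
    K = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        K[:, j] = -(forces(x + dx) - forces(x - dx)) / (2 * eps)
    return K

K = stiffness(np.array([0.0, 1.0, 2.0, 3.0]))
assert np.allclose(K, K.T), "stiffness matrix must be symmetric"
assert np.all(np.linalg.eigvalsh(K) >= -1e-6), "must be positive semi-definite"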

16:35
Sensitivity of Stochastic Model Updating tools including Staircase Random Variables using different Cost Functions
PRESENTER: Thomas Potthast

ABSTRACT. Monitoring processes have become increasingly important in structural dynamics applications. In this regard, model updating techniques can quantify and reduce the discrepancy between numerical model predictions and actual system behavior using available measured data. This contribution focuses on stochastic model updating approaches in structural engineering, where uncertainties in either some or all of the input model parameters are explicitly treated as irreducible. A comparison between model classes in an approximate Bayesian computation framework using staircase random variables and the Bhattacharyya distance is provided. Different configurations of staircase random variables are shown considering various cost functions. In addition, numerical examples are presented to illustrate the scope, advantages, limitations, and open challenges of the various model classes. Overall, this work suggests that different cost functions in staircase random variables are potentially useful tools for problem classes in a variety of civil engineering applications.

16:50
Generating Controlled Physics-Informed Time-to-failure Trajectories for Prognostics in Unseen Operational Conditions
PRESENTER: Jian Zhou

ABSTRACT. The performance of deep learning (DL)-based methods for predicting remaining useful life (RUL) may be limited in practice due to the scarcity of representative time-to-failure (TTF) data. To overcome this challenge, generating physically plausible synthetic data is a promising approach. In this study, a novel hybrid framework is proposed that combines a controlled physics-informed data generation approach with a DL-based prediction model for prognostics. The framework introduces a new controlled physics-informed generative adversarial network (CPI-GAN) that generates diverse and physically interpretable synthetic degradation trajectories. The generator includes five basic physics constraints that serve as controllable settings. The regularization term, which is a physics-informed loss function with a penalty, ensures that the synthetic data’s changing health state trend complies with the underlying physical laws. The synthetic data is then fed to the DL-based prediction model to estimate RUL. The framework's effectiveness is evaluated using the New Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS), a turbofan engine prognostics dataset with limited TTF trajectories. The experimental results demonstrate that the proposed framework can generate synthetic TTF trajectories that are consistent with underlying degradation trends and significantly improve RUL prediction accuracy.
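The regularization idea can be sketched as follows (a simplification under our own assumptions, not the CPI-GAN loss itself): synthetic health-state trajectories are penalized whenever their per-step change leaves a controllable degradation-rate band, and the penalty is added to the generator's adversarial loss.

import torch

def rate_band_penalty(h, dmin=-0.02, dmax=0.0, weight=10.0):
    """h: (batch, time) synthetic health states; band limits are assumed."""
    dh = h[:, 1:] - h[:, :-1]            # per-step change of health state
    too_fast = torch.relu(dmin - dh)     # degrading faster than allowed
    rising = torch.relu(dh - dmax)       # health improving, violating physics
    return weight * (too_fast.pow(2) + rising.pow(2)).mean()

h_fake = torch.rand(16, 100, requires_grad=True)  # stand-in generator output
adv_loss = torch.tensor(0.7)                      # stand-in adversarial term
gen_loss = adv_loss + rate_band_penalty(h_fake)
gen_loss.backward()                               # gradients flow to the generator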

17:05
Unsupervised Physics-Informed Health Indicator Estimation for Complex Systems

ABSTRACT. Developing Health Indicators (HI) is an important aspect of prognostics and health management for complex systems. Accurate identification of the HI can lead to better performance of prognostic models, as demonstrated by previous research. However, the existing methodologies for determining the HI in complex systems are mostly semi-supervised and rely on assumptions that may not hold true in real-world applications. These methods typically involve using a reference set of healthy sensor readings, or utilizing run-to-failure data, to infer the HI. However, unsupervised inference of the HI from sensor readings, which is a difficult task in scenarios where diverse operating conditions mask the effect of degradation on sensor readings, has not been explored in previous literature. In this paper, we propose a novel physics-informed unsupervised model for determining the HI. Unlike previous methods, which are constrained by assumptions, our method uses prior knowledge about degradation to infer the HI, thereby eliminating the need for a reference set of healthy sensor readings. The proposed unsupervised model is an autoencoder that incorporates constraints on its latent space to ensure it is consistent with knowledge about degradation. The effectiveness of the proposed model is evaluated on two common prognostic case studies, namely turbofan engines (CMAPSS) and bearings (PRONOSTIA). The evaluation is based on sensitivity to data availability and the quality of the resulting HI, such as its trendability and monotonicity.
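A minimal sketch of such a constrained autoencoder is given below (PyTorch; layer sizes, sensor count and penalty weight are our assumptions, and the paper's constraints may differ): the 1-D latent code serves as the health indicator, and a penalty discourages the HI from increasing over operating time, encoding the prior knowledge that degradation is irreversible.

import torch
import torch.nn as nn

class HIAutoencoder(nn.Module):
    def __init__(self, n_sensors=14):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_sensors, 16), nn.ReLU(),
                                 nn.Linear(16, 1))
        self.dec = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                 nn.Linear(16, n_sensors))

    def forward(self, x):
        hi = self.enc(x)                  # (time, 1) latent health indicator
        return hi, self.dec(hi)

model = HIAutoencoder()
x = torch.randn(200, 14)                  # one unit's sensor history (stand-in)
hi, recon = model(x)

rec_loss = nn.functional.mse_loss(recon, x)
mono_loss = torch.relu(hi[1:] - hi[:-1]).mean()   # HI should never increase
loss = rec_loss + 5.0 * mono_loss
loss.backward()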

16:20-17:35 Session 19F: Security I
Location: Room 100/5017
16:20
An Adaptive Requirements Analysis Tool to Identify CSIRT and IDS Services for Energy Industry Stakeholders

ABSTRACT. The Intrusion Detection Prevention System - Weighted Sum Model (IDPS-WSM) is a sub-module of the CERT Requirements Metamodel (CR2M). The CR2M is a modular and incremental requirements analysis tool for energy and water industry stakeholders and other critical infrastructures (CRITIS). The CR2M addresses the specific needs and requirements analysis of, e.g., distribution system operators to design targeted, integrative security processes. It uses its methodology to identify the specific requirements of CRITIS for an Operational Technology CSIRT solution and intrusion detection and prevention system (IDPS) services. Based on technical and organization-specific characteristics, strategic decision-makers are provided with different solution approaches as a basis for decision support. The IDPS-WSM sub-module includes a dedicated utility analysis, which can be used to examine and select possible IDS solutions based on an evaluation matrix. The IDPS-WSM sub-module is the result of empirical and industrial research and serves as a decision support tool for fulfilling the legal requirements for the use of systems for automatic attack prediction and intrusion detection and prevention.
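The weighted-sum step itself is simple; the sketch below (Python; criteria, weights and scores are invented for illustration) shows how such an evaluation matrix ranks candidate IDS solutions by weighted utility.

criteria = {"detection coverage": 0.35, "OT compatibility": 0.25,
            "cost": 0.20, "operating effort": 0.20}

# Scores on a 1..5 scale per candidate solution and criterion (assumed).
solutions = {
    "external managed IDS": {"detection coverage": 4, "OT compatibility": 3,
                             "cost": 4, "operating effort": 5},
    "in-house IDS": {"detection coverage": 5, "OT compatibility": 4,
                     "cost": 2, "operating effort": 2},
}

ranking = sorted(((sum(w * scores[c] for c, w in criteria.items()), name)
                  for name, scores in solutions.items()), reverse=True)
for utility, name in ranking:
    print(f"{name}: weighted utility = {utility:.2f}")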

Essential security requirements for detection and prevention are defined, for example, by ISO/IEC 27001 and ISO/IEC 27019 or NIST SP 800-53 Rev. 5, as well as by the NIST Cyber Security Framework, in terms of a holistic approach. The focus of this consideration is on security requirements in the areas of resilience, attack prediction, incident response and disaster recovery management. CRITIS are thus called upon to define technical and organizational processes that serve as detective, corrective and proactive measures to maintain the operational capability of OT systems. In principle, differentiated internal and external technical as well as organizational early detection systems and instances can be used here, which detect existing vulnerabilities or behavior-based system anomalies at an early stage and eliminate them before they are exploited.

From these results, the necessity of designing needs-based solutions as a success factor of a sustainable IDPS can be derived, which helps CRITIS determine and fulfill their binding requirements for automated early attack detection in a resource-saving and efficient manner. IDPS-WSM is able, for example, to compare existing binding legal requirements with the current technical, personnel and financial resources and to derive an objective and fact-based solution that corresponds to the corporate reality of CRITIS and at the same time represents a sensible and sustainable solution. In this context, a decision for or against an IDPS solution is made based on facts, on objective and real organizational characteristics, and on the complexity and heterogeneity of the process networks. A key advantage here is the economic prosperity that can be generated using a shared service. IDPS-WSM can play an important role, as it supports CRITIS and strategic decision-makers in avoiding hasty and ill-considered decisions, while at the same time offering the opportunity to engage more thoroughly with one's own organizational and system properties, to identify and record requirements in a transparent and comprehensible manner, and ultimately to make a sustainable and efficient investment decision.

16:35
Security management of chemical facilities based on quantitative vulnerability assessment

ABSTRACT. Security aspects of the process industry have become a matter of concern in the last twenty years. Indeed, plants that process and store significant quantities of hazardous substances may be an attractive target for malicious attacks, resulting in severe fires, explosions, and toxic dispersion scenarios. Additionally, the impact of such events may escalate towards neighboring units, triggering domino effects. The most consolidated techniques for Security Risk Assessment (SRA) aim at evaluating the effectiveness of security countermeasures or physical protection systems (PPS) and only provide qualitative or semi-quantitative indications. However, as the credibility of security threats increases, the development of quantitative metrics is essential to enhance protection against external attacks and to ensure a correct allocation of security-related investments. Many factors influence SRA. Firstly, the performance of safety barriers, e.g., firefighting, interlocks, alarms, etc., should also be included in security analyses, as their intervention might prevent or mitigate intentional attack scenarios. This implies that the coupled performance of PPS and safety barriers is to be assessed. Secondly, equipment fragility should be evaluated, i.e., the structural integrity of equipment in response to different types of attack vectors. Finally, the interaction among different units should be considered in order to characterize potential domino effects. Considering the number of elements that factor into SRA, approaches based on advanced probabilistic models such as Bayesian Networks (BN) can be beneficial. Bayesian Networks are a graphical method of reasoning under uncertainty using probabilities, in which variables are represented as nodes and the relationships between nodes are represented as arcs. The advantage of BN is the use of Bayes' theorem to update probabilities in case of specific evidence, e.g., an attack vector or intrusion path. Moreover, special nodes in BN can help to define risk. In this work, a special node called a utility node was used for the evaluation of the economic security risk, i.e., the potential economic loss from a successful attack and related domino effects. Potential domino effects were accounted for by carrying out a simplified consequence assessment using integral models for physical effects. A case study, based on the analysis of an industrial site storing and handling hazardous materials, was defined, and the specific BN was built accounting for all factors influencing SRA. The results may constitute a verification of the PPS and safety barriers in place in a given facility, allowing for a quantitative evaluation of the credibility of attack success and of the economic risk, and leading to the identification of the most critical security-related escalation scenarios.
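Numerically, the utility-node idea reduces to an expected-loss computation over attack outcomes, as in this toy Python illustration (all probabilities and losses are invented): the physical protection system and the safety barriers gate the progression from an attempted attack to a mitigated event or a domino escalation.

p_attack = 0.10          # P(attack attempted in reference period) (assumed)
p_pps_fail = 0.20        # P(PPS fails to stop the intrusion) (assumed)
p_barrier_fail = 0.30    # P(safety barriers fail to mitigate) (assumed)
loss = {"stopped": 0.0, "mitigated": 2e6, "domino": 5e7}   # monetary units

p_stopped = p_attack * (1 - p_pps_fail)
p_mitigated = p_attack * p_pps_fail * (1 - p_barrier_fail)
p_domino = p_attack * p_pps_fail * p_barrier_fail

expected_loss = (p_stopped * loss["stopped"]
                 + p_mitigated * loss["mitigated"]
                 + p_domino * loss["domino"])
print(f"expected economic security risk = {expected_loss:,.0f} per period")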

16:50
Safety and security integrated analysis approaches considering new updates for maritime systems
PRESENTER: Rogério Ramos

ABSTRACT. As modern ships have their operation dependent on information systems, a cyber-attack can impact safety objectives. Cyber security analysis (related to protection against malicious attacks) has become part of modern system design, as has traditional safety analysis (related to the prevention of accidents). However, both disciplines are usually performed in distinct or weakly linked processes, which rely on exhaustive tasks and on the expertise of the analysts to succeed. Combined analysis approaches are required to provide a straightforward identification of safety and security issues, since it is possible that a countermeasure to minimize a safety issue exposes a security vulnerability, and that a deployment to fix a security issue brings new safety hazards. The purpose of this paper is to present the characteristics of selected safety and security integrated analysis approaches applied to a case study with technologies deployed in maritime systems. The results suggest that the approaches can be useful to capture vulnerabilities, hazards and conflicts that could not be detected by a separate or empirical analysis. Another conclusion is that selecting a suitable approach should be part of a safety and security analysis process. In future work, these steps can be extended to other types of critical systems.

17:05
Drawing on the success of developing a safety culture to improve the security culture of companies that use Operational Technology

ABSTRACT. Companies that use operational technology (OT), such as those operating critical infrastructure, are becoming increasingly digitalised. This digitalisation, however, has also widened the attack surface, making cybersecurity a necessity. One approach to enhance a company's security and reduce the human risk factor is the development of a security culture. While these companies have been cultivating their safety culture for decades and the concepts of security and safety culture share many commonalities, there has been limited research into their relationship. We have conducted a critical analysis of the safety and security culture literature, as well as 31 interviews with security professionals from various UK industries on the topic of security culture development. Our findings demonstrate that both cultures share almost entirely overlapping enabling factors, such as senior management support. Additionally, there is universal recognition that these companies have successfully developed a strong safety culture. Accordingly, the experience and knowledge gained from developing a safety culture can be leveraged by security practitioners. For example, demonstrating the impact security can have on functional safety can positively influence OT personnel's security perceptions. Additionally, using established safety communication channels and techniques can also strengthen the security culture. Finally, a discussion is provided on whether security culture can reach the prominence of safety culture, with resource availability and differences in how safety and security risks are perceived being two major obstacles. As security culture is in the early stages of maturity, future research could investigate ways to integrate both cultures, especially in operational environments where safety is of paramount importance.

16:20-17:35 Session 19G: S.27: Advances in Maritime Autonomous Surface Ships (MASS) IV
16:20
Towards the Development of a Dynamic Reliability Tool for Autonomous Ships: A Bayesian Network Approach

ABSTRACT. Autonomous ship developments have been driven by recent advances in smart and digital technologies. As autonomous systems will be responsible for the MASS's operation, their reliability is of paramount importance. This study aims to develop a Bayesian network (BN) for monitoring the time variation of reliability at the subsystem and component levels. The case study of a cargo vessel for short sea shipping operations is employed, and its power plant is investigated. The BN is developed based on the power plant's critical components, whilst defining the interconnections between these components. Pertinent data for the component failure rates are derived from multiple sources, including reliability databases and scientific papers. The derived results demonstrate that the ship's main engine is the most critical subsystem. This study serves as a foundation for the development of a dynamic reliability tool for autonomous ships which can incorporate sensor measurements to update component reliability in real time.

16:35
Review of Human Error Assessment Methods Suitable for the Design of Maritime Remote Control Rooms and Processes
PRESENTER: Danilo Abreu

ABSTRACT. Shipping is facing numerous innovations nowadays that, if pursued, could significantly change the way ships are designed, operated and navigated. One of these innovations is the remote steering and control of ships. In this new context, decisions are made outside the controlled vessel, from a remote control centre and with limited awareness of the vessel and surrounding conditions. To ensure sufficient operator performance, remote control centres must be designed with a human-centred approach. To this end, appropriate human reliability assessment techniques must be used.

Currently, a number of techniques exist for assessing human reliability, both in the scientific literature and in industry standards. However, most of them were developed or tested in the nuclear, aviation, and healthcare industries. Unfortunately, a maritime-specific technique for human reliability assessment is lacking. Instead, modified versions of nuclear- and aviation-based methods and tools are applied. However, the validity of such an approach and of the results obtained can easily be questioned.

Therefore, the objectives of this paper are threefold: 1) to review existing methods suitable for designing remote control centres and processes; 2) to shortlist the methods found applicable in the maritime context; 3) to elaborate on overall method requirements and future research directions.

16:50
From Aviation to Maritime: An approach to define target safety levels for the safety assurance of autonomous ship systems
PRESENTER: Meriam Chaal

ABSTRACT. The safety assurance of autonomous ship systems is anticipated to present various challenges in the near future, necessitating the establishment of unambiguous procedures and references to facilitate the risk-based design of future ship systems. The International Maritime Organization's (IMO) guidelines, as outlined in the current version of the Formal Safety Assessment (FSA) and the Goal-Based Standards, and the rules from classification societies, lack detail for the risk-based design of autonomous ship systems. In the meantime, the aviation industry's regulations include more structured techniques for aircraft systems engineering, including a risk matrix that is employed as a benchmark to set system safety objectives throughout the various stages of system design. Consequently, this research suggests a methodology to establish target safety levels for the safety assurance of future ship systems, guided by aviation standards, to support the development of risk-based procedures and regulations that ensure the design of safe autonomous ship systems.

17:05
Analysing the Need for Safety Crew Onboard Autonomous Passenger Ships – A Case Study on Urban Passenger Transport in Norwegian Waters.

ABSTRACT. Today, the number of crew required to operate small, conventional passenger ships is often equal to the number of safety crew required to ensure passenger safety in emergency situations. This paper investigates whether it is possible to realize autonomous passenger ships and still maintain passenger safety as the number of safety crew is reduced towards zero. An important role of the safety crew is to manage emergency situations and thereby comply with the crew safety instructions for such situations. The safety instructions of two use cases have been analysed in terms of which tasks can be automated, reducing the need for onboard crew. The analysis resulted in a classification into safety tasks that can be automated and those that appear more difficult and need to be managed either by onboard safety crew or by a remote control centre operator. We argue that, given the current technology gaps and expected short-term developments, there will still be a need for safety crew onboard autonomous passenger ships. We propose a definition of a safety responsible officer. Requirements are also derived for new developments of safety equipment that will be needed with a reduced safety crew. The results of the study provide input to ongoing regulatory discussions where the distribution of tasks between automation systems and humans on the ship and in remote control centres is relevant.

17:20
Investigating the Impact of Day-Night Conditions and Time Progression on the Fatigue of Maritime Autonomous Surface Ship Remote Operators: Implications for Remote Control Centre Design
PRESENTER: Zhihong Li

ABSTRACT. Maritime Autonomous Surface Ships (MASS) have attracted significant interest in recent years due to their potential to confer economic, safety, and environmental benefits. However, a key aspect of the MASS system that remains unclear is the scientific and reasonable design of Remote Control Centres (RCCs), which are responsible for remotely controlling and monitoring MASS operations. RCC design can significantly impact the performance of remote operators. Therefore, this study aims to investigate the impact of day-night conditions and time progression on the workload and fatigue level of MASS remote operators, in order to provide guidelines for RCC design. A remote-control simulation platform was utilised to conduct two rounds of 4-hour daytime and night-time remote control experiments. Physiological data were collected in real time using various measurement instruments, and the Karolinska Sleepiness Scale (KSS), reaction time (RT), and NASA Task Load Index (NASA-TLX) were assessed every 25 minutes. The findings suggest that fatigue levels in the night-time condition are higher than in the daytime condition, and that sleepiness significantly increased over time, reaching a peak at around 1.5 hours (daytime) and 2 hours (night-time) before maintaining a steady level. This means that day-night conditions and time progression can significantly affect remote operators' performance, potentially leading to decreased performance and an increased risk of misoperation. The study highlights the need for effective work schedules and interventions to improve remote operators' performance in MASS operations. This research takes a first step in the investigation of the remote control centre operator and could provide valuable insights into the design of RCCs, which will improve the performance of RCC operations and ensure the safety of MASSs.

16:20-17:35 Session 19H: S.08: Assistive Robots in Healthcare
16:20
What will it take to hire a robot? Views from health care personnel and managers in a rehabilitation hospital

ABSTRACT. Currently, policy makers try to cope with a continuous decrease in health care personnel and an increase in health care expenses (Hjemås, Zhiyang, Kornstad & Stølen, 2019). Globally, WHO estimates a projected shortfall of 10 million health workers by 2030 (WHO, 2016). An additional challenge for the health care sector is a high turnover rate. Among nurses, this has been associated with high workload, time pressure, inconvenient working hours, and low pay (Beyrer, 2017). Furthermore, health care personnel daily perform time-consuming tasks that they do not define as typical health care tasks (Bergsagel, 2019).

In the current work we explore the views of healthcare workers and managers towards including a human-like robot as an assistant in their daily routines. How employees perceive the technology can, in turn, influence their behavior in ways that are highly instrumental for the innovation efforts. We investigate initial reactions, preferences, and expectations, exploring whether a robot can be used to assist health care personnel at this hospital in their daily tasks.

In this paper we ask: What will it take to hire a robot? We explore this question using mixed methods: 1) Eight interviews with health care staff after they had performed tasks with and without assistance from a robot. 2) Group interviews, one with managers of the hospital and one with staff from two different departments after seeing the robot and watching a video of how the robot can be used to assist health personnel. 3) Questionnaires to personnel on their attitudes, trust, and general acceptance of the robot.

Preliminary findings indicate that the healthcare personnel and managers are positive towards a robot assistant, focusing mostly on the benefits of introducing a robot to assist at the rehabilitation hospital. However, it was emphasized that the robot needs to be reliable. If it stopped working, or if using the robot turned out to be cumbersome and time-consuming, the staff would get frustrated and rather perform the tasks themselves. Data are currently being analyzed, and the paper will discuss the findings in relation to theories and previous studies on trust, technology acceptance and usability. We expect that our findings will be useful for ongoing efforts to identify user needs across diverse stakeholders and work contexts to relieve health care staff (through technological interventions) in their daily work.

References:

Bergsagel, I. (2019). 6 av 10 sykepleiere bruker daglig tid på oppgaver de mener andre burde utføre. Sykepleien, Sykepleien.no. Retrieved from https://sykepleien.no/2019/02/6-av-10-sykepleiere-bruker-daglig-tid-pa-oppgaver-de-mener-andre-burde-utfore

Hjemås, G., Zhiyang, J., Kornstad, T., & Stølen, N.M. (2019). Arbeidsmarkedet mot 2035. Statistisk Sentralbyrå, Norway.

Skjøstad, O., Hjemås, G., Beyrer, S. (2017). https://www.ssb.no/helse/artikler-og-publikasjoner/1-av-5-nyutdanna-sykepleiere-jobber-ikke-i-helsetjenesten. Accessed 2022.09.28.

WHO (2016). Global strategy on human resources for health: Workforce 2030. WHO Document Production Services, Geneva, Switzerland

16:50
Workload of rehabilitation healthcare personnel when assisted by a robot
PRESENTER: Maren Eitrheim

ABSTRACT. The current study investigated how a human-like robot, EVE, can be applied in healthcare services to support personnel in their daily tasks. The focus of the current study was staff workload, since high workload may have adverse impacts on patient safety, job satisfaction and turnover. We conducted a small-scale study with eight participants in a rehabilitation hospital in Norway. The participants were asked to perform professional tasks and pick-up tasks in a simulated setting. In one condition they were assisted by EVE for the pick-up tasks, and in the other condition they performed the pick-up tasks themselves. The findings indicated that when assisted by EVE, the healthcare personnel experienced reduced workload and improved performance. They also reported less time pressure and a possibility to perform tasks with less interruptions and better quality when assisted by the robot. When the healthcare personnel did not have support from the robot, they spent a considerable amount of time outside the patient room, as they needed to fetch the necessary equipment themselves. In the current study, minimal interaction between healthcare workers and the robot was required. Future studies may expand interaction tasks or make them more complex to investigate safe interaction and user acceptance including both staff and patients.

17:05
User-centered Evaluation Framework for Telerobot Interface and Interaction factors – a case study on medical device manufacturing

ABSTRACT. The use of telerobots for surgery, military missions or rescue activities is increasingly commonplace, and so are guidelines to achieve an ergonomic design and optimal system-operator performance. However, in medical device manufacturing, in particular for fine manipulation and highly precise operations, the existing human-machine interface and interaction (HMII) design standards are insufficient to ensure a cost-effective ergonomic teleoperation solution that complies with the learnability, usability, dependability and efficiency required for this case study. We analyze the most relevant human-system interface and interaction requirements for telerobotic systems applied to medical device manufacturing, and expand on the current standards with knowledge from recent research results in this field. We further our contribution by proposing a use-case-based telerobotic system architecture and an experimental setup for the human-centered evaluation of the telerobotic system's HMII design.

16:20-17:35 Session 19I: S.04: Bayesian Networks for Oil & Gas Risk Assessment
16:20
PetroBayes’ Modules for Reliability Assessment for Oil and Gas Industry
PRESENTER: Thais Lucas

ABSTRACT. PetroBayes is a user-friendly software package that performs Bayesian reliability estimation. The software comprises three main modules that provide the reliability measures of interest for the system under analysis. The first, the Bayesian module, enables the user to assess the variability distribution of non-homogeneous failure data. In this module, one can obtain a prior distribution for the reliability measure of interest based on generic data; the posterior distribution results from the update procedure with specific information on the system of interest. The Statistical module can fit data to distributions (e.g., the duration of maintenance actions) and perform statistical tests (e.g., goodness of fit). The Availability module can be fed with data from the previous modules to build a continuous time Markov process (CTMP) and estimate the system's availability. In a straightforward workflow, the failure rate can be derived from the Bayesian module and the repair rate from the Statistical module. Note that these rates need not be constant (i.e., exponentially distributed), thus allowing a more robust assessment. All results can be displayed to the user and provided in written reports and images. The software can be hosted on a remote server, minimizing the usage of the user's own computational resources. We illustrate the use of the software with generic databases.
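The core of the Availability module's computation can be illustrated as follows (our reconstruction in Python, not PetroBayes source code; rates are invented): for a single repairable unit modeled as a two-state continuous time Markov process, the stationary distribution of the generator matrix gives the steady-state availability.

import numpy as np

lam = 1.0 / 8760.0    # failure rate [1/h], e.g. a posterior mean (assumed)
mu = 1.0 / 24.0       # repair rate [1/h], from fitted repair times (assumed)

# Generator matrix over states (0 = up, 1 = down); rows sum to zero.
Q = np.array([[-lam, lam],
              [mu, -mu]])

# Solve pi Q = 0 subject to sum(pi) = 1 for the stationary distribution.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"steady-state availability = {pi[0]:.6f}")    # equals mu / (lam + mu)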

16:50
Key Performance Indicators of aging safety barriers in oil and gas plants

ABSTRACT. Aging of safety barriers can degrade their performance and increase risk in Oil and Gas (O&G) facilities. Condition-informed risk assessment can be used to assess the risk of a facility given the actual performance of its safety barriers and, eventually, to prescribe maintenance activities. In this work, we propose a novel definition of Key Performance Indicators (KPIs) of safety barriers that allows accounting for their aging. When sufficient barrier failure data are available, a Q-Weibull model is used to quantify a corrective factor that multiplies the no-aging basis KPI value. When barrier failure data are scarce, which is often the case in practice, the corrective factor is quantified by an expert-based Weibull-like distribution that anchors some or all of the stages of barrier life (such as the infant mortality and wear-out phases) to the few, limited data available. The safety barrier of Design Integrity (DI), typically employed in upstream O&G platforms, is numerically elaborated as a practical example.
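As a worked illustration of the corrective-factor idea (the functional form and all parameter values below are our assumptions; the paper fits a Q-Weibull or an expert-based Weibull-like distribution), the age-dependent Weibull hazard is divided by a constant-rate baseline to scale the no-aging KPI.

beta, eta = 2.5, 20.0        # Weibull shape and scale [years] (assumed)
kpi_no_aging = 1.0e-3        # no-aging basis KPI of the barrier (assumed)

def weibull_hazard(t):
    return (beta / eta) * (t / eta) ** (beta - 1)

t = 15.0                               # barrier age [years]
baseline_hazard = 1.0 / eta            # constant-rate reference (assumed)
corrective_factor = weibull_hazard(t) / baseline_hazard
print(f"corrective factor at {t:g} y = {corrective_factor:.2f}")
print(f"aging-corrected KPI = {kpi_no_aging * corrective_factor:.2e}")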

17:05
A Bayesian Population Variability-based Methodology for Reliability Assessment in the Oil and Gas Industry
PRESENTER: Thais Lucas

ABSTRACT. Scarcity of historical failure data is very common in many industries, especially Oil and Gas (O&G). In this context, Bayesian analysis is paramount for obtaining reliable estimates for the system of interest. To perform this analysis, we propose a two-step Bayesian population variability analysis. Such an approach allows assessing the variability of reliability measures across a population of similar systems. The first step is prior estimation: it involves gathering available data from similar systems (generic data) and constructing the prior distribution that represents the population variability. This prior information consists of data from systems that exhibit similar, yet not identical, reliability behavior. In the second step, one proceeds to posterior estimation, where the prior distribution is updated with the available evidence from the system of interest. Markov Chain Monte Carlo-based methods are required to obtain the posterior estimates. In this work, we illustrate the approach in two cases: systems with (i) non-constant failure rates and (ii) interval-censored failure data. Finally, the model is validated on simulated data, showing its usefulness for the O&G industry in better addressing the reliability measures of its systems.
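
The two steps can be sketched as follows (illustrative code with assumed data, not the authors' implementation): a lognormal population-variability distribution is fitted to generic failure-rate estimates, then updated with system-specific Poisson evidence via a random-walk Metropolis sampler.

    import numpy as np

    rng = np.random.default_rng(42)

    # Step 1 (prior estimation): generic failure-rate estimates from similar
    # systems [1/h]; a lognormal population-variability prior is fitted.
    generic = np.array([1.2e-5, 3.0e-5, 8.0e-6, 5.5e-5, 2.1e-5])
    mu_pv, sigma_pv = np.log(generic).mean(), np.log(generic).std(ddof=1)

    # Step 2 (posterior estimation): system-specific evidence, here n failures
    # in T hours, combined with the prior via random-walk Metropolis on log(lam).
    n_fail, T = 2, 5.0e4

    def log_post(log_lam):
        lam = np.exp(log_lam)
        log_prior = -0.5 * ((log_lam - mu_pv) / sigma_pv) ** 2  # lognormal PV prior
        log_like = n_fail * log_lam - lam * T                   # Poisson likelihood
        return log_prior + log_like

    samples, x = [], mu_pv
    for _ in range(50_000):
        prop = x + 0.3 * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        samples.append(np.exp(x))

    post = np.array(samples[5_000:])  # drop burn-in
    print(f"posterior mean lambda = {post.mean():.2e}/h "
          f"(90% interval {np.quantile(post, 0.05):.2e} - {np.quantile(post, 0.95):.2e})")

The one-dimensional constant-rate case shown here keeps the sampler trivial; the non-constant-rate and interval-censored cases in the paper require only a different likelihood term.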

17:20
WellRisk: A Proposition of a Data-Driven Well Integrity Management System
PRESENTER: Isamu Junior

ABSTRACT. In the petroleum industry, well integrity and risk management are critical concerns that must be balanced against the need for efficiency and cost reduction. The challenges of well integrity are becoming increasingly complex due to energy crises, cost pressures, and environmental considerations. To address these challenges, this paper presents a new data-driven approach to well integrity management. The proposed solution uses quantitative risk analysis that includes three figures of merit: well integrity level, blowout risk level, and incremental cumulative risk. These figures of merit are used to assess the risk of well failure and well leak, and to support decision-making on how long one can tolerate a well barrier failure.
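
As a hypothetical illustration of the third figure of merit (the paper's exact WellRisk formulation is not given here), incremental cumulative risk can be read as the extra blowout probability accumulated while operating with a failed barrier, which directly bounds the tolerable exposure window:

    f0 = 1.0e-5        # assumed intact-well blowout frequency [1/year]
    f1 = 4.0e-4        # assumed blowout frequency with one barrier failed [1/year]
    icr_limit = 1.0e-5 # assumed acceptance criterion for incremental risk [-]

    def incremental_cumulative_risk(days):
        # Extra blowout probability accumulated while operating degraded.
        return (f1 - f0) * days / 365.0

    t_max_days = icr_limit / (f1 - f0) * 365.0
    print(f"tolerable window before intervention: {t_max_days:.0f} days")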

The approach is implemented in a system designed to integrate with existing data systems and to automatically update the risk of all managed wells, providing real-time insights. The methodology is supported by advanced reliability models for the operational phase and quantitative risk analysis models. The paper explores each of the figures of merit and explains how their acceptance criteria guide the decision process, supporting the prioritization of resources.

Three case studies based on classical well integrity issues are presented to demonstrate the potential of the data-driven approach: the failure of a well barrier element, the impact of inspections and testing on the integrity of temporarily abandoned wells, and the comparison between the risk of operating in a degraded situation and the intervention risk.

The approach's main goals are to improve risk management and to reduce risk exposure by enabling better risk-informed decisions. Moreover, the case studies have shown that it is possible to be more efficient, save costs, and increase production volume using the data-driven methodology, thus gaining substantial financial benefits at comparable risk levels.

The paper demonstrates the potential of a data-driven well integrity management system to improve decision-making and enhance well integrity in the petroleum industry. The approach offers a powerful tool to mitigate risk and to increase efficiency, making it a valuable asset for petroleum operators around the world.