ESREL2023: EUROPEAN SAFETY AND RELIABILITY CONFERENCE 2023
PROGRAM FOR MONDAY, SEPTEMBER 4TH


10:00-10:40 Session 2: Plenary session - Professor Anne Barros - Resilience analysis and optimization for interconnected or distributed systems: use cases and methodological contributions from the chair RRSC

Prof. Anne Barros (Head of the Safety and Risks Research Group, CentraleSupélec, University of Paris-Saclay, France)

Resilience analysis and optimization for interconnected or distributed systems: use cases and methodological contributions from the chair RRSC

10:40-11:10 Coffee Break
11:10-12:30 Session 3A: Maintenance Modelling and Applications I

Maintenance, Modelling and Applications session

Location: Room 100/3023
11:10
Optimizing Preventive Maintenance Policies: A Hydroelectric Power Plant Case Study

ABSTRACT. Optimizing preventive maintenance (PM) policies consists of determining the optimal times for carrying out maintenance actions to minimize the process's total cost per time interval. The longer the time interval between preventive maintenance, the lower the cost per corresponding time interval. On the other hand, longer intervals between PMs increase the expected number of failures and, consequently, the need for corrective actions that increase the overall maintenance cost. Preventive maintenance actions are associated with a level of severity, which can be defined as a weighted average of the number of tasks performed, the execution time, and the number of items replaced in the maintenance plan. Therefore, the objective of this work is to propose a method to determine the preventive maintenance intervals and the severity of these maintenance actions to minimize the process's total cost per time interval. In general, severity is treated as a variable dependent on preventive maintenance time, which means that the longer the time interval between PMs, the greater the severity of the maintenance action should be, in theory, and consequently, the greater its cost. In the optimization process proposed in this work, however, severity is treated as an independent variable of PM intervals, representing a contribution of this paper and providing greater flexibility in the creation of maintenance plans. In order to validate the proposed method, it is applied to the preventive maintenance policies of a hydroelectric power plant in southern Brazil. Results show that optimizing maintenance intervals can significantly reduce the total cost of maintenance.
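
For illustration, a minimal sketch of the kind of optimization the abstract describes, with the PM interval and the severity treated as independent decision variables. The Weibull failure model, the severity-to-cost mapping, and all parameter values are assumptions for this sketch, not the paper's data.

```python
import numpy as np

# Hypothetical illustration: choose the PM interval T and the severity s
# independently to minimize the total maintenance cost per unit time.
# Assumes a Weibull failure process whose expected number of failures over T
# shrinks as severity grows; all numbers are invented placeholders.
BETA, ETA = 2.5, 1000.0        # Weibull shape and scale (assumed)
C_FAILURE = 50_000.0           # cost of one corrective action (assumed)

def pm_cost(severity):
    """PM cost grows with severity (weighted tasks, time, parts replaced)."""
    return 2_000.0 + 8_000.0 * severity

def expected_failures(T, severity):
    """Power-law failure accumulation; higher severity restores more life."""
    return (T / ETA) ** BETA * (1.0 - 0.6 * severity)

def cost_rate(T, severity):
    return (pm_cost(severity) + C_FAILURE * expected_failures(T, severity)) / T

# Grid search over intervals and severities treated as independent variables.
grid = [(T, s) for T in np.linspace(100, 2000, 96) for s in np.linspace(0, 1, 21)]
T_opt, s_opt = min(grid, key=lambda p: cost_rate(*p))
print(f"optimal interval ~{T_opt:.0f} h, severity ~{s_opt:.2f}, "
      f"cost rate ~{cost_rate(T_opt, s_opt):.2f}/h")
```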

11:25
Predictive maintenance planning using renewal reward processes and probabilistic RUL prognostics - analyzing the influence of accuracy and sharpness of prognostics
PRESENTER: Mihaela Mitici

ABSTRACT. We pose the maintenance planning for systems using probabilistic Remaining Useful Life (RUL) prognostics as a renewal reward process. Data-driven probabilistic RUL prognostics are obtained using a Convolutional Neural Network with Monte Carlo dropout. The maintenance planning model is illustrated for aircraft turbofan engines. The results show that in the initial monitoring phase, the accuracy and sharpness of the RUL prognostics are relatively low. The maintenance of the engines is therefore scheduled far in the future. As the usage of the engine increases, the accuracy of the prognostics improves, while the sharpness remains relatively low. As soon as the estimated probability distribution of the RUL is skewed towards 0, the maintenance planning model consistently indicates it is optimal to replace the engines immediately, i.e., ``now''. This shows that probabilistic RUL prognostics support effective maintenance planning of the engines, despite being imperfect with respect to accuracy and sharpness.
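
For illustration, a minimal sketch of the Monte Carlo dropout step named in the abstract: dropout is kept active at inference so that repeated forward passes yield a sample-based RUL distribution. The architecture, sensor count, and window size are invented for the sketch, not the authors' network.

```python
import torch
import torch.nn as nn

# Tiny 1-D CNN with dropout; repeated stochastic forward passes give a
# probabilistic RUL estimate (mean) and a sharpness proxy (std).
class RulCnn(nn.Module):
    def __init__(self, n_sensors=14, window=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Dropout(0.2),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Dropout(0.2),
            nn.Linear(16 * window, 1),
        )

    def forward(self, x):              # x: (batch, n_sensors, window)
        return self.net(x).squeeze(-1)

def mc_dropout_rul(model, x, n_samples=200):
    model.train()                      # keep dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)

x = torch.randn(1, 14, 30)             # one condition-monitoring window (dummy data)
mean_rul, std_rul = mc_dropout_rul(RulCnn(), x)
```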

11:40
Joint optimization of condition-based operation and maintenance for continuous process manufacturing systems under imperfect maintenance
PRESENTER: Zhaoxiang Chen

ABSTRACT. For continuous process manufacturing systems (CPMSs), where the production process cannot be stopped, two popular performance evaluation metrics are production efficiency and stability. With the development of sensor and communication technologies, condition-based decisions can effectively coordinate the operation and maintenance (O&M) management of CPMSs to improve the production completion rate. However, most papers have studied condition-based operation (CBO) and condition-based maintenance (CBM) separately, which prevents obtaining jointly optimal solutions. In addition, the effect of imperfect maintenance on production efficiency and stability has also been ignored. Therefore, this work develops an optimal condition-based operation and maintenance (CBOM) policy for CPMSs. CPMSs are required to complete a series of specified production missions within a finite horizon, and the optional maintenance actions include doing nothing, imperfect maintenance, and replacement. The optimization objective is to determine the optimal joint O&M policy by maximizing the average production completion rate. In the CBOM policy, the production completion rate of CPMSs under different missions is evaluated by a stochastic flow manufacturing network (SFMN). Since the CPMS has the Markov property, we use the Markov decision process (MDP) framework to solve the CBOM optimization problem. The main contributions of this work are: (1) compared with existing studies, a more rational CBOM policy is proposed, which focuses on maximizing the production efficiency and stability of CPMSs; (2) the impact of imperfect maintenance on production efficiency and stability is considered in the CBOM policy, which gives the proposed policy better applicability. Finally, the proposed approach is demonstrated on a hot rolling manufacturing system, and a sensitivity analysis of the relevant parameters is also performed. The results show that CBOM can improve the average production completion rate while reducing the number of maintenance actions performed.
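
As an illustration of the MDP framework mentioned in the abstract, a value iteration sketch over a small degradation chain with the three action types named there (do nothing, imperfect maintenance, replacement). The states, transition matrices, completion rates, and action penalties are invented for the sketch, not the paper's model.

```python
import numpy as np

# Illustrative value iteration for a joint O&M MDP. State 0 is as good as
# new, state 4 is the worst degradation level; everything below is invented.
N_STATES = 5
P = {
    0: np.array([[.7,.3,0,0,0],[0,.6,.4,0,0],[0,0,.6,.4,0],
                 [0,0,0,.5,.5],[0,0,0,0,1.]]),           # do nothing
    1: np.array([[1,0,0,0,0],[.6,.4,0,0,0],[.3,.5,.2,0,0],
                 [.1,.5,.4,0,0],[0,.4,.4,.2,0]]),        # imperfect maintenance
    2: np.tile(np.eye(1, N_STATES, 0), (N_STATES, 1)),   # replacement -> state 0
}
completion = np.array([1.0, .9, .75, .5, .1])   # production completion rate per state
cost = {0: 0.0, 1: 0.15, 2: 0.5}                # O&M action penalties
GAMMA = 0.95

V = np.zeros(N_STATES)
for _ in range(500):
    Q = np.array([completion - cost[a] + GAMMA * P[a] @ V for a in P])
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)          # optimal action per degradation state
print("state -> action:", dict(enumerate(policy)))
```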

11:55
Condition-based maintenance model for a single component subject to fixed maintenance opportunities and lead time

ABSTRACT. In the context of unmanned and minimally manned remote offshore facilities, maintenance windows are fixed in advance, and maintenance actions must be planned to account for the lead times of the required components. In this paper, it is assumed that the component's health condition can be monitored continuously, and that the component can be maintained in an upcoming window only if a maintenance order is placed sufficiently early to cover the corresponding lead time. The objective is to find the optimum condition threshold for ordering maintenance. An analytical solution is proposed and verified by a numerical study.
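
A minimal Monte Carlo sketch of the setting described above: the component degrades continuously, can only be maintained in fixed windows, and must be ordered at least one lead time before the window. The gamma degradation process, window length, lead time, and costs are illustrative assumptions, not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fixed windows every WINDOW hours; an order placed at least LEAD hours
# before a window allows preventive maintenance there. All values assumed.
WINDOW, LEAD, FAIL_LEVEL = 720.0, 240.0, 100.0
C_PM, C_FAIL = 1.0, 10.0

def cost_rate(order_level, n_runs=300, dt=1.0):
    total_cost, total_time = 0.0, 0.0
    for _ in range(n_runs):
        x, t, ordered = 0.0, 0.0, None
        while True:
            x += rng.gamma(shape=0.1, scale=1.0) * dt   # gamma-process increment
            t += dt
            if ordered is None and x >= order_level:
                ordered = t                              # place maintenance order
            if x >= FAIL_LEVEL:                          # failure before a window
                total_cost += C_FAIL; break
            at_window = (t % WINDOW) < dt
            if at_window and ordered is not None and t - ordered >= LEAD:
                total_cost += C_PM; break                # maintained in the window
        total_time += t
    return total_cost / total_time

levels = np.linspace(40, 90, 6)
best = min(levels, key=cost_rate)
print(f"best ordering threshold ~{best:.0f}")
```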

12:10
Predictive Strategy and Technology for Operation & Maintenance Decision Making

ABSTRACT. In an organization such as a state electricity company, business objectives or visions must be based on the wishes of stakeholders, especially the government. These business objectives are contained in the company's long-term plan, which is published every five years. Achieving the business objectives for the next five years requires strategic goals and strategic enablers. There are four strategic goals, namely green, innovative, customer-focused, and lean. One of the objectives of the lean goal is to increase operational efficiency, partly through prognostic health management in digital power plants as a performance indicator. In previous research on prognostic health management, the asset condition criteria were based on the asset health index (AHI): the smaller the AHI, the closer the asset to the danger criterion. However, assets with the same AHI do not necessarily have the same remaining uptime, so recommendations generated from AHI-based criteria have little effect on improving asset condition and company performance. AHI alone is therefore not suitable as the asset condition criterion. In this research, prognostics (prediction) require strategy and technology, with three predictive strategies: predictions obtained from the input parameters of online performance monitoring (Asset Performance Management, APM), online/offline condition-based monitoring (APM), and computerized maintenance management systems (Enterprise Asset Management, EAM). The asset condition criteria were developed based on the remaining uptime of the asset. The criterion is good when the remaining uptime is greater than or equal to the time to the next periodic maintenance (the asset can be repaired during periodic maintenance). The criterion is alert when the remaining uptime is greater than or equal to one month (a Maintenance Outage, MO, can be planned). The criterion is danger when the remaining uptime is less than one month (an MO cannot be planned). The resulting recommendations order preventive maintenance so that the remaining uptime matches the periodic maintenance schedule, or is at least one month so that an MO can be planned; this prevents the forced derating or forced outages that would disturb the power system and decrease company performance. In addition, with a longer remaining uptime, capital investment for purchasing spare parts can proceed normally rather than as an emergency, reducing material costs.
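
A minimal sketch of the remaining-uptime condition criteria described above. The thresholds follow the abstract; the date handling is an illustrative assumption.

```python
from datetime import date

def asset_condition(remaining_uptime_days: int, days_to_periodic_maintenance: int) -> str:
    """Classify asset condition from predicted remaining uptime."""
    if remaining_uptime_days >= days_to_periodic_maintenance:
        return "good"    # repairable during the next periodic maintenance
    if remaining_uptime_days >= 30:
        return "alert"   # enough time to plan a Maintenance Outage (MO)
    return "danger"      # less than one month: an MO cannot be planned

today, next_pm = date(2023, 9, 4), date(2024, 1, 15)
print(asset_condition(90, (next_pm - today).days))   # -> "alert"
```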

11:10-12:30 Session 3B: Human Factors and Human Reliability I

Human Factors and Human Reliability I

11:10
Advanced Situation Awareness: Perspectives Toward Enhancing Future Officers' Psychological Resilience Under Perceived Stress Contexts

ABSTRACT. With military personnel being engaged in challenging and formidable situations, several emotional and cognitive determinants can be discerned. The Armed Forces conduct a variety of missions and tasks carried out by the postmodern military in a constantly changing operational environment (Colvin, 2014). This places high demands on soldiers to operate in unknown environments under conditions of high operational tempo, where traditional military training is no longer sufficient. Any combat environment presupposes emotional, psychological, physical and cognitive strength, which in turn causes a considerable amount of stress even to seasoned soldiers (Williams-Bell et al., 2022). Furthermore, perceived stress can reduce the effectiveness of soldiers (Bekesiene et al., 2022). Therefore, a postmodern warrior must understand the strategic environment and be prepared to act not only within the framework of his national identity. In addition, soldiers must be physically, emotionally, and mentally strong, motivated, and ready to perform under difficult conditions. Dealing with modern threats, they must also acquire highly cultivated resilience skills that will enable them to remain alive in the direst circumstances and adroitly conduct military operations under unpredictable conditions (Johansen, 2015). Unconventional future conflicts are likely to assume an unorthodox stance on how to utilize human capital in warfare alongside technological breakthroughs. The development and efficient use of human capital has become a part of military preparedness. Moreover, soldiers undergo major changes in military operations, and certain transformations in military identity can influence areas of military performance. Before introducing new selection and education procedures, it is very important to measure both military identity and expertise. This may be an expedient contribution to the development of future military officers. There is therefore a need for more extensive research on the development and application of human capital in military institutions. This study aimed to ascertain the extent to which military identity predicts military performance in the Lithuanian military. The study involved cadets from the Lithuanian Military Academy and investigated whether military achievements and attitudes, evaluated by military skills, general military competence, and organisational commitment, can be predicted on the basis of military identity. The study hypotheses are tested using the statistical software package SPSS v29 and Hayes's (2022) PROCESS macro (version 3.5). This study provides a better understanding of the relationship by showing that education and psychological cadet training have unique value. It finds a strongly positive overall effect of psychological resilience on the perceived military performance of the cadets. Furthermore, the modelling results enhance our understanding of the role of military identity and are likely to be useful for the professional development of future military officers.

11:25
Remote supervision of autonomous ships: principles for interaction display graphics
PRESENTER: Alf Ove Braseth

ABSTRACT. In the maritime industry, there is currently a focus on developing technologies for efficient and sustainable “greener” transport. One promising concept supporting this development is to supervise a fleet of highly automated ships remotely from a shore-based center. It is expected that such partly autonomous ships can be operated with a reduced crew and speed for a reduced cost and fuel consumption. There is, however, a need to perform research into the operational concept, that is, how to operate such ships while maintaining a high safety level. This paper explores the topic within a research project funded by the Research Council of Norway. It builds on previously published work (Braseth et al. 2022, Kaarstad et al. 2021). The project performs research into how to present information about a fleet of autonomous ships to land-based operators through display interfaces. From this, we ask: how should human-machine display interfaces be designed for safe and efficient monitoring of, and intervention in, several autonomous ships? The research question is explored through empirical simulation studies, as well as results from a recent workshop where the participants were naval officers with extensive maritime experience. The research contribution is a set of design principles, which are presented through the concept of the three levels of Situation Awareness (Endsley 2013). Findings from interaction design and human perception limitations (Ware 2008, Healey & Enns 2012) have been used to shape the visual form of display graphics. The proposed principles can be used to identify information content and the appropriate visual presentation of graphics for displays. Examples of how to apply the suggested principles for design are presented through graphical prototypes. We suggest using them in future user studies to learn how they can be further improved. As the proposed principles do not target a specific type of display, further work should also modify the design principles to be useful for actual operation, including overview displays that present the bigger maritime picture.

Braseth A. O., Kaarstad M., Høstmark J. B., Strømmen G. (2022). Supervising Autonomous Ships - A Simulator Study with Navigators and Vessel Traffic Supervisors. Proc. ESREL. doi.org/10.3850/978-981-18-5183-4_R12-10-278-cd

Endsley M. R. (2013). Situation awareness. In J. D. Lee & A. Kirlik (Eds.), Oxford Library of Psychology. The Oxford Handbook of Cognitive Engineering (pp. 88-108). Oxford University Press.

Healey C. G., Enns J. T. (2012). Attention and Visual Memory in Visualization and Computer Graphics. IEEE Trans. on Visualization and Computer Graphics, Vol. 18, No. 7, pp. 1170-1188. doi: 10.1109/TVCG.2011.127

Kaarstad M., Braseth A. O., Strange E., Høstmark J. B. (2021). Towards Safe and Efficient Operation of Autonomous Ships from a Land Based Center. Proc. ESREL. doi: 10.3850/978-981-18-2016-8_513-cd

Ware C. (2008). Visual Thinking for Design. Third Edition. Elsevier, Morgan Kaufmann Publishers, USA.

11:40
Analysis of risk factors in motorcycle riding and distribution of attention using eye-tracking, interview and video – Preliminary study

ABSTRACT. In 2022, fatal motorcycle accidents accounted for over 20% of all traffic deaths in Norway; 21 motorcyclists lost their lives, the highest number since 2016 (Statistics Norway, 2023).

Around 52% of these fatal accidents were single-motorcycle accidents, and there is a need to understand why motorcycle accidents are increasing in Norway.

Nord University and SINTEF Community, in collaboration with Trygg Trafikk, Norway's largest traffic safety organization, have carried out a preliminary research study examining motorcycle accident risk factors and the distribution of motorcyclists' attention using eye-tracking, interviews and video. The study was financed by the Norwegian Ministry of Transport and investigated possible causal relationships with regard to single-motorcycle accidents and multiple-vehicle collisions.

The main research question of the present study was:

What are the most critical factors for riding a motorcycle safely? - A focus on single-motorcycle accidents and multiple-vehicle collisions

Nine motorcyclists with different knowledge and experience levels in riding a motorcycle participated in the present study. The Tobii Eye-tracker system was used to record and reveal the ability of motorcyclists to orient themselves when riding the same route with roundabouts, intersections and roads with different speed limits. Their riding tours were recorded, and the eye fixation points, fixation point durations and eye movements were analysed. All the motorcyclists were interviewed after riding that specific route to understand the individual differences in their subjective experiences of riding that specific route: self-reported behavior related to planning, attention-seeking, speed selection and road positioning. Experts in traffic safety and in education for motorcycle instructors and examiners compared the self-reported strategies and tactics of the participants with the analyses of their observable behaviour in the videos.

The results highlighted key risk factors in both single-motorcycle accidents and multiple vehicle accidents.

Three categories were found to contribute to risky situations at intersections.

- Theoretical knowledge related to multiple-vehicle accidents.

- Preparedness when approaching and riding through intersections.

- Preventive behaviour when approaching and riding through intersections.

When riding a motorcycle in curves, three categories were found to contribute to risky situations.

- Attention when riding on damaged road pavement (asphalt cracks and unevenness).

- Unclear strategies for speed adaptation in curves.

- Riding a motorcycle through a left-bending curve.

These six categories are detailed and presented in the paper.

The study provides findings that strengthen the knowledge base for those who plan and carry out the education of motorcycle instructors and motorcycle riders. The findings will be of high interest for road safety decision-makers and for launching new awareness and information campaigns. They will also form the basis for new hypotheses in future analyses of motorcycle accidents, to find explanatory factors in the interaction between the different types of riders, types of motorcycles, and the road conditions and surroundings, in line with the Vision Zero strategy (NTP 2022-2033).

References

National Transport Plan (2022-2033). Ministry of Transport.

Statistics Norway (2023, January 5). Fatal motorcycle accidents 2022.

11:55
Cognitive Workload when Novices and Experts Supervise Autonomous Ships – Findings from Empirical Studies

ABSTRACT. In the maritime industry, there is currently a drive towards more environmentally friendly operations and reduced costs while maintaining a high level of safety. It is expected that the next major change in the industry will be autonomous or partly autonomous ships that can sail with lower fuel consumption and reduced operating costs, as well as increased safety.

There is, however, a need to perform research into aspects related to operating autonomous or partly autonomous ships to build a strong foundation for operational concepts. A research project financed by the Research Council of Norway aims to develop and test interaction solutions for a land-based operation center, ensuring safe and efficient supervision of autonomous ships. The research is performed through a series of empirical studies.

In a land-based concept, the tasks of the navigators will change from controlling one ship at sea to supervising one or more autonomous ships from a land-based control center. One important research topic is to investigate how operator workload is affected in different situations. In this paper, we explore the following research questions: 1) How is workload experienced when supervising one vs three autonomous ships? 2) How is workload experienced by novices (gamers) and experts (navigators) while supervising autonomous ships? 3) How is workload experienced by experts while supervising three autonomous ships with different interaction design solutions? The questions are explored through empirical studies.

Two maritime simulation exercises with novices and experts as participants were conducted. Short, realistic videos were developed prior to the study, in which autonomous cargo ships made a crossing in the Oslo fjord. While watching the videos, the participants were asked to act as expert commentators and express observations, potential actions and calls they would make along the way, and whether they felt the situation required them to take manual control of one of the ships. Data were collected through video and audio recordings, qualitative interviews, as well as self-reported workload (NASA-TLX) and situational understanding.

The findings indicate that workload is higher when supervising three ships compared to one ship. The findings also suggest that different display design concepts affect navigators' situation understanding, and that some interaction design solutions are particularly challenging for novices. Findings from the study can be used to further guide interaction design development for supervising autonomous ships, and as a first step to explore the competencies needed by future navigators.

12:10
A Study on Quantification of Operator Manual Action for Fire PSA
PRESENTER: Sun Yeong Choi

ABSTRACT. The purpose of this paper is to describe the fire HRA (Human Reliability Analysis) method for domestic fire PSA (Probabilistic Safety Assessment) at full power operation and considerations for quantifying OMAs (Operator Manual Actions) using the fire HRA method. OMAs are actions performed by operators to manipulate components and equipment from outside the MCR (Main Control Room) to achieve and maintain post-fire hot shutdown; per NUREG-1852, they do not include “repairs”. NEI 00-01 classifies cables/components impacted by MSO (Multiple Spurious Operation) as either required for or important to safe shutdown, and establishes OMAs as one of the measures to mitigate the effects of MSO on important-to-safe-shutdown cables/components in fire area assessment. More broadly, the NRC defines post-fire OMAs as actions performed by plant personnel on plant equipment outside the MCR to recover from a fire. Currently, domestic NPPs have selected OMAs to mitigate MSO by considering feasibility and reliability factors in a deterministic approach based on NUREG-1852. In this study, the existing fire HRA method is reviewed in order to quantify OMAs and model them in the fire PSA. To this end, complementary factors of the fire HRA method for OMA quantification were derived, such as wearing SCBA (Self-Contained Breathing Apparatus) outside the MCR and the need to establish detailed timelines to model the relation between MCRA (Main Control Room Abandonment) and OMAs.

11:10-12:30 Session 3C: Energy Transition to Net-Zero Workshop on Reliability, Risk and Resilience - Part I

This workshop will provide a platform for risk, reliability and safety researchers and industry professionals to present and discuss the latest developments in the modelling and analysis of energy transition risks. The focus will be on how these models and analyses can be used to inform decision making to manage the risk that new energy solutions fail to meet the energy demand, thus compromising the energy transition to zero-carbon. The following topics are discussed: New Energy Carriers; Renewable and New Technologies; Climate Change Effects and Extreme Weather Conditions

11:10
Estimation of inherent risks for five hydrogen transport scenarios produced in the ocean
PRESENTER: Kwangu Kang

ABSTRACT. Hydrogen has a very low density at room temperature and pressure, so in order to transport it in large quantities, it is generally pressurized, liquefied, or converted into ammonia, LOHC, etc. Representative methods of transporting hydrogen by sea include ship transport and pipeline transport. The purpose of this study is to establish five representative scenarios for transporting hydrogen produced at sea to land, and to quantitatively compare the inherent risks of each scenario. Of the five scenarios, four correspond to ship transport and one to pipeline transport. The four ship transport scenarios are based on pressurized hydrogen, liquefied hydrogen, ammonia, and LOHC, while the pipeline transport scenario uses only pressurized hydrogen. The LOHCs used in this study are toluene and methylcyclohexane. The method used to quantitatively derive inherent risk is F&EI (Dow Fire and Explosion Index), the most representative of the relative ranking risk indices. F&EI is a methodology that can quantify the inherent risks of major processes and is very useful for comparing inherent risks at the concept design stage. In F&EI, the inherent risk quantified for each major unit is calculated by multiplying the General Process Hazards Factor, the Special Process Hazards Factor, and the Material Factor. In this study, the inherent risk of each scenario was quantified by summing the risk indices of the major units within the scenario, and the scenarios were compared with each other. A disadvantage of this methodology is that it is difficult to adequately reflect the effects of toxicity. In the pressurized hydrogen-ship transport scenario, the risk of the 700 bar storage tank was calculated to be very high; this unit shows the highest risk among all scenarios. In the pressurized hydrogen-pipeline transport scenario, the overall risk was estimated to be low because it is the simplest configuration among the five scenarios. The liquefied hydrogen-ship transport scenario includes a solid oxide fuel cell process that can supplement intermittent power, and this process significantly increases the risk. In addition, the high fire and explosion risk of liquefied hydrogen results in a high risk for liquefied hydrogen storage tanks. Therefore, transporting liquefied hydrogen by ship shows a higher risk than transporting pressurized hydrogen by ship. The scenario of transporting LOHC by ship showed the highest risk among the five scenarios: LOHC is both toxic and flammable, and the amount of LOHC to be transported is very large compared to pressurized/liquefied hydrogen. In the ammonia-ship transport scenario, the toxicity was high, but the fire risk was significantly lower, resulting in the lowest overall risk.
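
For illustration, a back-of-the-envelope sketch of the F&EI aggregation described above: a per-unit index obtained by multiplying the General Process Hazards Factor, the Special Process Hazards Factor, and the Material Factor, summed over the units of a scenario. All unit names and factor values below are invented placeholders, not the study's data.

```python
# Each unit: (name, general process hazards factor, special process hazards
# factor, material factor). Values are illustrative only.
scenarios = {
    "pressurized H2 - ship": [
        ("700 bar storage tank", 2.2, 1.9, 21),
        ("loading arm",          1.6, 1.3, 21),
    ],
    "pressurized H2 - pipeline": [
        ("subsea pipeline",      1.4, 1.2, 21),
    ],
}

def scenario_risk(units):
    return sum(gph * sph * mf for _, gph, sph, mf in units)

for name, units in scenarios.items():
    print(f"{name}: total F&EI ~{scenario_risk(units):.0f}")
```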

11:25
Numerical modelling of liquid hydrogen tanks performance during fire engulfment
PRESENTER: Alice Schiaroli

ABSTRACT. The incumbent need to tackle global warming draws attention to potential zero-emission energy solutions. The transportation sector has proved to be one of the most impactful in terms of greenhouse gas production [1]. Therefore, several decarbonization strategies that rely on the use of hydrogen as a fuel have been proposed. Liquefaction is one of the most appealing emerging alternatives for the storage and transport of large amounts of hydrogen [2]. Due to the relative newness of liquid hydrogen (LH2) in the transportation sector, the risks linked to hydrogen mobility have not yet been deeply investigated. The lack of experimental data makes it difficult to perform a reliable risk assessment [3], leaving safety questions still unanswered. From this perspective, the failure of an LH2 storage tank and the consequent release of the fuel in external fire conditions represent the worst-case accident scenario and must be avoided. At present, experimental data regarding the performance of thermally insulated LH2 tanks exposed to fires are limited to the results of a restricted number of tests. The present work investigates the hydrogen behaviour (e.g. pressure build-up, temperature gradient) when an LH2 tank is completely engulfed in a fire by carrying out a CFD analysis. The numerical approach is validated by verifying the agreement of the results with available experimental data. The outcomes of this study are expected to provide useful support for predicting the response and performance of LH2 tanks in extreme accident conditions. The results can support the analysis of consequences of failure as part of a risk assessment and potentially provide critical insights for the definition of safety codes and standards and the deployment of effective emergency plans.

References
[1] H. Ritchie, M. Roser, and P. Rosado, “CO2 and Greenhouse Gas Emissions - by sector,” Our World in Data, 2020.
[2] F. Ustolin, N. Paltrinieri, and G. Landucci, “An innovative and comprehensive approach for the consequence analysis of liquid hydrogen vessel explosions,” J. Loss Prev. Process Ind., vol. 68, p. 104323, 2020, doi: 10.1016/j.jlp.2020.104323.
[3] C. Correa-Jullian and K. M. Groth, “Data requirements for improving the Quantitative Risk Assessment of liquid hydrogen storage systems,” Int. J. Hydrogen Energy, vol. 47, no. 6, pp. 4222-4235, 2022, doi: 10.1016/j.ijhydene.2021.10.266.

11:40
Green Hydrogen Production and Storage: A Review of Safety Standards and Guidelines with a focus on Safety Instrumented System Applications
PRESENTER: Tzu Yang Loh

ABSTRACT. In response to global climate change and the energy crisis, the need for decarbonization is becoming increasingly important for many countries, including Singapore. The transition to a zero-carbon society relies on reducing greenhouse gas emissions. Hydrogen, a clean energy source with relatively high specific energy, has received significant attention in recent years. However, ensuring the safety of hydrogen production and storage is crucial for its widespread adoption. This paper conducts a literature review of hydrogen safety standards and notable incidents, focusing on applying safety instrumented systems to avoid a catastrophic hydrogen disaster. Based on the findings, the paper will discuss the challenges and make recommendations to enhance hydrogen safety, including compliance with international safety standards, the establishment of risk assessment programs, and regulations and guidelines for safe hydrogen handling and storage. Hydrogen can become an alternative fuel for sustainable development with appropriate safety measures.

11:55
Analyzing Hydrogen-Related Undesired Events: A Systematic Database for Safety Assessment

ABSTRACT. The global energy landscape must undergo a radical change to reduce the human impact on the environment and mitigate the increasingly pressing issue of global warming. From this perspective, hydrogen can channel a large amount of renewable energy from the production sites to the end users. It can be a vector for transporting and storing clean energy, thus representing a missing link for the energy transition. Nevertheless, the extreme combustion properties and the capability of permeating and embrittling most containment systems produce significant safety concerns. In such a context, the knowledge of past undesired events and a deep understanding of their root causes are fundamental to avoid the occurrence of similar accidents in the future. Hence, safety reporting systems are necessary to collect and systematically analyze all available information on hydrogen-related incidents, accidents, and near-misses, thus maximizing the lessons learned from previous events. Databases such as HIAD 2.0 and H2Tools are dedicated to hydrogen-related undesired events and are already publicly available. They provide meaningful information for classical statistical analyses, and, in some cases, they offer in-depth investigations of both primary and secondary causes. Nevertheless, the main limitations of the existing reporting systems are represented by the scarcity of quantitative information, the limited selection of features, and ambiguous glossaries. These drawbacks make it difficult to apply advanced data-driven analyses based on Machine Learning to the existing databases. In this paper, the undesired events involving equipment and facilities for hydrogen production, transport, storage, and utilization were selected from the HIAD 2.0 and MHIDAS databases. All the records compliant with the defined inclusion criteria were collected in a structured database, namely Hydrogen-related Incident Reports and Analyses (HIRA). The definition and selection of the features are based on a critical comparison of the strengths and weaknesses of the primary databases, and an analysis of the literature regarding hydrogen safety. Subsequently, text mining tools were used to analyze the event descriptions in natural language, extract all the relevant quantitative information, and sort them systematically and coherently in the database. Finally, the newly developed HIRA database was analyzed through a Business Intelligence (BI) approach. Data-driven analyses of the HIRA database could help to identify valuable information about hydrogen-related undesired events, promote a safety culture, improve accident management, and stimulate an increasingly widespread rollout of hydrogen technologies.
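
For illustration, a minimal sketch of the kind of text mining described above, pulling quantitative fields out of a free-text event description with regular expressions. The patterns and the example record are invented; they are not entries from HIAD 2.0, MHIDAS, or HIRA.

```python
import re

# Invented example of an event description in natural language.
description = ("Release of approx. 12 kg of hydrogen from a 350 bar storage "
               "vessel during refuelling; ignition after 3 s.")

# Hypothetical extraction patterns for quantitative features.
patterns = {
    "released_mass_kg":   r"([\d.]+)\s*kg",
    "pressure_bar":       r"([\d.]+)\s*bar",
    "time_to_ignition_s": r"([\d.]+)\s*s\b",
}

record = {field: (m.group(1) if (m := re.search(rx, description)) else None)
          for field, rx in patterns.items()}
print(record)  # {'released_mass_kg': '12', 'pressure_bar': '350', 'time_to_ignition_s': '3'}
```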

12:10
Evaluation of the Factors Determining Hydrogen Embrittlement in Pipeline Steels: an Artificial Intelligence Approach

ABSTRACT. The need to decarbonize the energy sector demands extensive renewable energy production and its widespread utilization. This is stimulating the global rise of energy production from solar, wind, hydropower, and other renewables, its cost competitiveness, and socio-political acceptance. However, the intermittent supply and the management of surplus energy represent significant setbacks for a renewable-based global energy landscape. In this scenario, hydrogen is emerging as an effective alternative energy carrier for energy management and distribution. Hydrogen has inherent environmental benefits and has the potential to decarbonize industrial applications that require high-grade heat. In addition, it allows centralized clean energy production and distribution to remote end-use sites. For a smooth transition to hydrogen technologies, it is important to guarantee an inherently safe distribution system. In Europe, hydrogen could be transported through the existing widespread pipeline network. Nevertheless, most high-strength pipeline steels were not designed for hydrogen service and are prone to hydrogen-induced degradation, which could result in sudden component failures and undesired releases with severe consequences. Hydrogen atoms can penetrate the metal lattice of pipeline steels, deteriorate their mechanical properties, and induce cracking in otherwise high-performance materials. Hydrogen embrittlement depends on the interplay of three factors: mechanical loading conditions, operating environment, and material properties. The evaluation of the synergistic interaction of these parameters has implications in safety science. A better knowledge of the susceptibility factors for hydrogen embrittlement could facilitate risk-informed inspection and maintenance planning of equipment operating in gaseous hydrogen environments. Hence, there is a need for a systematic database and tools capable of identifying the embrittlement susceptibility of materials. This study introduces a machine learning approach to evaluate the role of several environmental, material, and mechanical factors in the occurrence of hydrogen-induced damages. Several reference materials have been assessed for embrittlement under different parametric conditions. An extensive database has been created, and a decision tree model has been trained to determine the hydrogen embrittlement of materials. The main advantages of this model are its “white box” nature and simple interpretability. From this perspective, this artificial intelligence approach could be a tool for ensuring the safe application of hydrogen systems and allow advancements in inspection planning and predictive maintenance.
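
For illustration, a minimal "white box" decision tree sketch in the spirit of the abstract. The features, the assumed ground-truth rule, and the synthetic dataset are invented; a real study would train on a curated experimental database.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(400, 1100, n),    # yield strength [MPa] (assumed feature)
    rng.uniform(1, 300, n),       # H2 partial pressure [bar] (assumed feature)
    rng.uniform(-40, 80, n),      # temperature [degC] (assumed feature)
])
# Assumed ground truth for the sketch: strong steels under high H2 pressure embrittle.
y = ((X[:, 0] > 800) & (X[:, 1] > 50)).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["yield_MPa", "pH2_bar", "temp_C"]))
```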

11:10-12:30 Session 3D: Accident and Incident Modelling I

This session presents papers that discuss reviews of accident investigations and accident modelling studies.

Location: Room 2A/2065
11:10
Assessment of Coal Handling Facility using the Swiss Cheese Model: A Case Study of Fire Incident in Coal-Fired Power Plant
PRESENTER: Hery Affandi

ABSTRACT. Fire incidents in a coal-fired power plant (CFPP) can be defined as any undesired fire event with catastrophic consequences, particularly in a coal handling facility (CHF). The CHF is a critical part of the fuel management system in a CFPP. In a specific case, low-rank coal dust particles are sufficient to create an explosion hazard if they accumulate in large quantities. Such fire incidents can cause loss of human life and equipment damage, leading to extended downtime, prolonged recovery, high maintenance costs, lost revenue, and reputational damage. Past self-combustion events of dust particles and the extinguishing of fire incidents have provided the awareness needed to define contributing variables, as well as risk assessment experience for finding alternative mitigations. This paper describes an integrated effort to define and measure organizational factors related to power plant safety, particularly the CHF, using the Swiss Cheese Model (SCM) as an assessment method. The model was used to investigate the accident and to prevent future accidents through lessons learned. A survey consisting of statements about the plant and its operations was conducted. The evaluation began by reviewing existing conditions. The process consists of assessing loss prevention (through three barrier parameters) and loss reduction (through one barrier parameter). Each barrier was evaluated against compliance-defined criteria to mitigate hazard loss events. The assessment shows that the condition of the equipment was unhealthy, with an ineffective program and unclear standard procedures. Staff competency and condition were also investigated; the assessment found an unbalanced workload and poor communication. An evaluation of emergency preparedness was also carried out in case a loss event occurred. According to the SCM, the existing conditions show a high probability of hazard, which could cause loss events. Finally, several recommendations were given for each barrier parameter to mitigate and prevent fire incidents in the CFPP. Compliance with the defined criteria is expected to decrease the occurrence of hazards in the future.

11:25
Involvement of the Central Administrative Authorities of the Czech Republic in crisis management exercises

ABSTRACT. Security must be perceived as a public good, the level of which is the responsibility of public administration bodies, specifically crisis management bodies. The basis for successful crisis management is preparedness for dealing with emergencies or crisis situations. The deteriorating global security situation (the war in Ukraine, the COVID-19 pandemic, climate change and its effects) is affecting individual states and their governing bodies, and has shown the inadequate preparedness of crisis management authorities. In response to this fact, the article analyses the implementation of, and the involvement of the central administrative authorities of the Czech Republic in, exercises of crisis management bodies at the national level. On the basis of a questionnaire survey conducted with the individual ministries and central administrative authorities, the article demonstrates that exercise topics are stereotyped and do not respond to the most significant potential risks, and it identifies the main reasons for the authorities' minimal involvement in exercises, both as organisers and as participants. The results of the analysis will serve as input data for a case study defining the requirements for each phase of an exercise: preparation, implementation and evaluation.

11:40
Ship allision risk analysis for the 28-year-old Nordhordland bridge in Norway

ABSTRACT. The Nordhordland bridge is a 1246 m long floating bridge in Norway, completed in 1994. Since its opening, the bridge has suffered two ship allisions leading to minor damage. With new and improved tools and knowledge about ship allision risk, ship impact analysis, and changes in maritime ship traffic, the original design requirements of the bridge are revisited to investigate if the structure meets the original and current design code requirements. A simplified frequency analysis for ship allisions against the bridge is performed based on past events as well as by using the software IWRAP to determine the design impact load for the bridge. Furthermore, a structural impact analysis is performed using the software LS-DYNA to investigate if the bridge can survive the previously determined impact load level. It is found that the pontoons are the weak spot when it comes to ship impacts and do not have sufficient capacity to meet today's requirements.
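
For illustration, a crude Poisson sketch of the "past events" side of the frequency analysis mentioned above, using only the two figures quoted in the abstract (two allisions over the 28-year service life); IWRAP-style traffic-based modelling is far more refined than this.

```python
import math

allisions, years = 2, 28
lam = allisions / years                       # crude event rate [1/yr]
p_one_or_more_per_year = 1 - math.exp(-lam)   # annual exceedance probability
p_none_next_decade = math.exp(-lam * 10)      # chance of a quiet next decade
print(f"rate ~{lam:.3f}/yr, annual exceedance ~{p_one_or_more_per_year:.3f}, "
      f"P(no allision in 10 yr) ~{p_none_next_decade:.2f}")
```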

11:55
The proportion and the impact of human factor in the causes of laboratory accidents
PRESENTER: Jinchao Zhang

ABSTRACT. Laboratory safety is a relatively new topic in safety research, and when operating a risky experiment, human factors are a significant issue. However, compared with the research on human factors in industrial safety, there is limited research on human factors in laboratory safety. Particular efforts should therefore be made to show how important the human factor is. To address this problem, this study collects information on 90 laboratory accidents that happened in recent years and analyses them statistically. Using the information on these 90 accidents, the proportions of human factors as direct and indirect causes are analysed, both for all accidents and for different kinds of laboratory accidents. In addition, the impact of human factors in laboratory accidents involving casualties is analysed using an ANOVA approach. The results show that human factors have the highest proportion among the causes of the collected laboratory accidents, and that their impact on accidents with casualties is significantly higher than that of other accident causes.
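
For illustration, a minimal sketch of the ANOVA comparison described above: casualty counts per accident grouped by primary cause. The numbers are invented placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical casualty counts per accident, grouped by primary cause.
human_factor = [3, 2, 4, 5, 3, 4, 2, 6]
equipment    = [1, 0, 2, 1, 1, 0, 2, 1]
environment  = [0, 1, 1, 0, 2, 1, 0, 1]

f_stat, p_value = stats.f_oneway(human_factor, equipment, environment)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p: group means differ
```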

12:10
Normal maritime accidents in the Navy – analyzing the collisions of US Navy J. S. McCain and US Navy Fitzgerald

ABSTRACT. This paper explores two maritime accidents in 2017, based on the NTSB accident reports issued in 2019 and 2020. The two collisions involved modern destroyers, the US Navy's John S McCain and Fitzgerald. Based on a system approach, we have analysed the design and Human Factors issues described in the NTSB accident reports. We have explored safety incidents and root causes from a system perspective, i.e. Man (Human Factors issues), Organizational issues, and Technology issues, abbreviated MTO. In addition, we have explored how the blunt end (command and control in the Navy) and the sharp end (the crew on the bridge) have been evaluated by key actors such as the Navy, the courts, and the investigation authority, the NTSB. We have based our evaluation on an accident investigation model as used by the safety investigation authority. In addition, we have performed a limited survey of maritime accident investigations focusing on the technical equipment on the ship bridge and an exploration of best practices in bridge design. Design challenges have been part of our analysis. We have tried to find the root causes leading to human errors, to include the sensemaking of the involved actors at the sharp end, and to identify the difference between work as imagined (procedures) and work as done (as documented in the accident investigation reports). The two accidents took place on bridges that were dependent on the use of modern technology. Both accidents happened during night time (when sensemaking is impacted by the circadian rhythm) and at high speed (around 20 knots), with poor interaction with surrounding traffic: the AIS (automatic identification system) was turned off, making communication with other ships challenging. Further factors were the Navy's ineffective oversight of crew training and fatigue mitigation, loss of situational awareness, and ineffective communication and cooperation on the bridge. In addition, our limited survey of maritime accidents highlighted the poor quality of situational awareness on the bridge, too many alarms, insufficient training, insufficient passage planning, poor workload assessment, and poor (safety) management. The accident reports raise the issue of usability and user involvement from design through acceptance of the bridge systems, and raise the question: are the systems so poorly made that they are a challenge to use? Based on the issues highlighted in the NTSB reports and our review of maritime accidents, it seems that the operation of the destroyers created the environment for a Normal Accident, an accident waiting to happen. However, in the John S McCain and Fitzgerald cases, the actors at the sharp end were blamed and punished for the accidents, indicating the need for a more "just culture" in the naval environment.

References

NTSB (2020). Collision between US Navy Destroyer Fitzgerald and Philippine-Flag Container Ship ACX Crystal, June 17, 2017.

NTSB (2019). Collision between US Navy Destroyer John S McCain and Tanker Alnic MC, August 21, 2017.

11:10-12:30 Session 3E: Risk of Natural Hazards
Location: Room 100/4013
11:10
Natural disasters management support models: a hybrid approach focused on humanitarian logistics
PRESENTER: Marcelo Alencar

ABSTRACT. Natural disasters worldwide have highlighted the need for special logistical treatment, called humanitarian logistics. It is known, however, that there are significant challenges in implementing systematized logistics processes, especially those related to the infrastructure and location of humanitarian assistance centers and the coordination of emergency processes, including the location of temporary shelters. This paper proposes a hybrid approach combining a multicriteria decision model based on multi-attribute utility theory (MAUT) with an agent-based simulation model to assist the management of emergency coping strategies for disaster risks caused by urban flooding, focusing on the principles of humanitarian logistics for prioritizing the spatial location of temporary shelters. By addressing objectives that require a global and comprehensive view, multicriteria methods are effective in risk management due to their main characteristic of recognizing subjectivity as an intrinsic part of decision problems. Four criteria were defined to evaluate the order of priority for deploying temporary emergency shelters. Agent-based models, in turn, simulate complex and heterogeneous systems, such as infrastructures, and can be applied in many areas; complex situations such as flooding, which require contingency planning over large areas and the management of logistical activities, are difficult tasks. As a result, an ordering of locations to be considered as community or collective temporary shelters was established, together with the computational vision and the logistic operating mode needed to run these shelters and save lives. This constitutes a helpful decision support tool for the selection and location of temporary shelters, capable of assisting in the construction of an emergency plan in response to floods at the strategic or operational level of logistical decisions.
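
For illustration, a sketch of an additive MAUT ranking of candidate shelter sites, U(a) = sum_i w_i * u_i(a). The criteria, weights, and utility scores are assumptions for the sketch, not the model's elicited values.

```python
import numpy as np

criteria = ["capacity", "road access", "flood safety", "proximity to victims"]
weights = np.array([0.30, 0.20, 0.35, 0.15])   # assumed DM preferences

# Normalized single-attribute utilities u_i in [0, 1] per candidate site.
sites = {
    "school A":    np.array([0.9, 0.6, 0.8, 0.7]),
    "gymnasium B": np.array([0.7, 0.9, 0.6, 0.9]),
    "church C":    np.array([0.5, 0.8, 0.9, 0.6]),
}

ranking = sorted(sites, key=lambda s: -(weights @ sites[s]))
for s in ranking:
    print(f"{s}: U = {weights @ sites[s]:.2f}")
```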

11:25
FLAME FRONT SPREAD MODEL OF FOREST FIRE IN STEEP CANYON TERRAIN
PRESENTER: Tzu Yang Loh

ABSTRACT. Against the backdrop of global climate change, unusually intense and widespread forest fires are becoming more common, leading to significant human fatalities, as well as socioeconomic and ecological losses. In Europe alone, the damage inflicted by wildfires in 2022 is estimated to be at least €2 billion. To prevent and suppress forest fires effectively, it is crucial to understand their propagation behavior, especially in steep-sloped canyons where the risk of rapid spread is high. However, there has been limited research on this topic. In this study, we conduct a mathematical analysis of the existing gentle slope canyon model to improve our understanding of steep-slope canyon fires' behavior and enhance the accuracy of fire spread models. Our model is validated through publicly available experimental data, which demonstrates its accuracy. The derived fire spread model has practical implications for forest fire protection measures and fire suppression strategies in canyon terrain, helping to reduce the risk and impact of forest fires in the future.

1. Torn, M. S., & Fried, J. S. (1992). Predicting the impacts of global warming on wildland fire. Climatic change, 21(3), 257-274.

2. Boer, M. M., Nolan, R. H., Resco De Dios, V., Clarke, H., Price, O. F., & Bradstock, R. A. (2017). Changing Weather Extremes Call for Early Warning of Potential for Catastrophic Fire, Earth’s Future, 5, 1196–1202.

3. Boer, M. M., Resco de Dios, V., & Bradstock, R. A. (2020). Unprecedented burn area of Australian mega forest fires. Nature Climate Change, 10(3), 171-172.

4. Tidey, A. (2023). ‘Just not blond’: Wildfire damages cost €2 billion last year, says EU Commissioner. Euronews. https://www.euronews.com/my-europe/2023/01/11/wildfire-damages-cost-2-billion-last-year-says-eu-commissioner

5. Viegas, D. X., & Pita, L. P. (2004). Fire spread in canyons. International Journal of wildland fire, 13(3), 253-274.

11:40
A risk-based multicriteria approach for assessing and monitoring flood disasters under heavy precipitation
PRESENTER: Lucas da Silva

ABSTRACT. Public administration, whether at the local or national level, faces new challenges in adapting human life to alarming trends such as an increase in the extent and frequency of natural disasters, threats to food and water supply, inadequate energy distribution, and migration crises. Given this context, the worsening climate crisis forces policymakers to adopt a new perspective to combat its damaging impacts on urban functioning, especially concerning the quality of life during hydrological events. The problem is multifaceted, and usually conflicting objectives impose hard dilemmas on decision-makers (DMs), since heavy precipitation can cause fatalities, displacements, contamination of water bodies, economic losses, and more. This paper proposes a novel multicriteria decision model for assessing and monitoring flood disasters, using the DM's subjective preferences to establish value judgements under risky situations. A numerical application in a Brazilian municipality is performed with the aid of a Decision Support System (DSS) in order to validate the new approach. By integrating statistical, graphical, and tabular information, the model can be replicated in other urban areas where its assumptions hold. Moreover, the model results can be used by DMs not only for taking preventive actions against floods, but also for enhancing early warning systems to reduce disasters.

11:55
Disaster management performance under behavior changes: exploring scenarios using agent-based modeling

ABSTRACT. Disaster risk management consists of implementing disaster risk reduction policies and strategies to prevent and mitigate potential risks by strengthening resilience and reducing disaster losses. Such actions encompass planning, coordination, and execution in response and recovery. Flood forecasts and warnings aim to reduce flood-related property damage and the loss of human life. Several mathematical models in the literature can improve the accuracy of flood forecasting and duration estimates. Delivering better forecast information is necessary, but it will not be enough to prevent damage and fatalities, because the effectiveness of disaster management measures depends on how different people react to flood warnings. Thus, this paper proposes an agent-based simulation model that acts as a "virtual laboratory" to explore the impact that variations in human behavior in response to flood warnings have on the efficiency of disaster management. To understand how evacuation processes are affected under various flood warning scenarios, we consider agents' behavior in the face of risk and flood warnings. The agent-based evacuation simulation was run for four scenarios to explore the variation of parameters such as population size, agents' resistance to evacuation, agents' locomotion speed (considering different ages and sizes), and flood speeds. The results can assist researchers, monitoring agencies, public managers, and other decision-makers in planning more efficient and effective flood risk management actions.
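
For illustration, a toy agent-based evacuation sketch in the spirit of the "virtual laboratory" described above: agents on a one-dimensional corridor flee an advancing flood front, and the evacuated share is compared across warning-compliance levels. All behavioural parameters are invented, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_scenario(n_agents, p_comply, flood_speed, dt=1.0, horizon=1200):
    """Share of agents reaching the shelter at x = 1000 m before the flood."""
    x = rng.uniform(50, 600, n_agents)                     # initial positions [m]
    speed = rng.normal(1.3, 0.4, n_agents).clip(0.4, 2.5)  # walking speed [m/s]
    complies = rng.random(n_agents) < p_comply             # reacts to the warning
    delay = rng.uniform(0, 300, n_agents)                  # reaction delay [s]
    caught = np.zeros(n_agents, dtype=bool)
    flood = 0.0
    for t in np.arange(0, horizon, dt):
        moving = complies & (t >= delay) & ~caught & (x < 1000)
        x[moving] += speed[moving] * dt                    # agents move to shelter
        flood += flood_speed * dt                          # flood front advances
        caught |= (x <= flood) & (x < 1000)                # overtaken before shelter
    return 1 - caught.mean()

for p in (0.5, 0.7, 0.9):
    print(f"compliance {p:.0%}: evacuated ~{run_scenario(2000, p, 0.8):.1%}")
```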

12:10
Flood risk assessment for pressure equipment

ABSTRACT. In Europe, economic losses due to floods have steadily increased in recent years. Floods are natural phenomena that cannot be prevented, but increasing human settlements and economic assets in floodplains, the reduction of natural water retention by land use, and climate change all contribute to increasing the likelihood of adverse impacts of flood events. In light of the recent hydrogeological instability phenomena throughout the European territory, with particular reference to the exceptional events that occurred in the Italian regions of Lombardy and Sicily, we conduct an in-depth study of the management of flood risk in workplaces with pressure equipment. In the presence of pressure equipment, a flood can lead to the release of dangerous substances and concomitant events such as explosions, toxic dispersions, and surface pollution of water bodies and aquifers. For a correct assessment of flood risk, we consider three factors: H (Hazard), the probability of occurrence of a flood event in a fixed time interval and in a certain area; V (Vulnerability), the probability of equipment damage as a function of maximum water speed (s) and maximum water height (h); and E (Exposure), the extent and severity of the damage to the receptors (people, goods, infrastructure, services) potentially affected by the flood event. The purpose of this work is to propose an index method for a preliminary flood risk assessment for pressure equipment (steam generators, reactors, pressure vessels, piping, etc.) present in industrial plants. Once the level of risk is defined, if it is not acceptable, the main corrective actions are proposed.
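
For illustration, a minimal sketch of the proposed index logic, combining the three factors multiplicatively and mapping the result to acceptability classes. The scoring scales and thresholds are invented placeholders, not the method's calibrated values.

```python
def flood_risk(hazard: float, vulnerability: float, exposure: float) -> str:
    """Preliminary flood risk index R = H x V x E, each factor scored in (0, 1]."""
    r = hazard * vulnerability * exposure
    if r < 0.05:
        return f"R={r:.3f}: acceptable"
    if r < 0.20:
        return f"R={r:.3f}: tolerable, corrective actions advised"
    return f"R={r:.3f}: not acceptable, corrective actions required"

# Hypothetical vessel in a 1-in-50-yr flood area, vulnerable to ~1 m of water
# at ~1 m/s, with nearby workers and an aquifer raising the exposure score.
print(flood_risk(hazard=0.4, vulnerability=0.6, exposure=0.7))
```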

11:10-12:30 Session 3F: Risk Assessment I
Location: Room 100/5017
11:10
JURIDICAL SIDE OF ALARP: THE MONTE BIANCO TUNNEL
PRESENTER: Emin Alakbarli

ABSTRACT. When the ALARP ("as low as reasonably practicable") principle is considered in judgments, it always entails a proportionate cost-risk analysis of protection measures: minimum risk means maximizing the level of safety subject to a given equitable profit, and maximum profit subject to a minimum sufficient level of safety. In 1949, Lord Asquith's definition of "reasonably practicable" in his judgment in Edwards v. National Coal Board, as well as the judgment as a whole, became the legal basis of the requirement for risk assessments. Since then, ALARP has been officially endorsed, and safety measures have been implemented by governments and enterprises in order to mitigate and manage risks. This study analyzes the failures in the Monte Bianco tunnel accident, which occurred on March 24, 1999, and which played a central role in generating the current Directive 2004/54/EC on minimum safety requirements for tunnels, from a logical perspective, in order to develop a higher level of safety based on past experience. The article reveals the consequences of ignoring the value of ALARP, conducts an error analysis using Forensic Engineering, and discusses, from different perspectives, the mistakes involved and their legal dimensions.

11:25
Research on Construction method of Change Risk Database in Chemical Industry
PRESENTER: Yuan Zhang

ABSTRACT. Non-standard management of change and inadequate risk identification may lead to catastrophic accidents. In view of this, this paper proposes a method for constructing a change risk database based on multiple safety evaluation theories. Through the analysis of typical change cases and accident cases in the domestic and foreign chemical industry, subject words that characterize the risk features of various changes were determined, and the framework of a multi-level, multi-chain change risk database was established for different devices. Fault tree analysis, the analytic hierarchy process, and set pair analysis are used to determine the key event-inducing factors and control measures, complete the construction of the device change risk database, and provide basic data support for comprehensive risk identification of the change process and for subsequent research on intelligent risk reasoning and push technology.

11:40
Dangerous Goods in Maritime Transport: Assessment of Container Scanning as Means of Risk Mitigation
PRESENTER: Arto Niemi

ABSTRACT. Maritime accidents caused by misdeclared dangerous goods have resulted in significant losses over recent years. We study whether the number of these accidents could be reduced by scanning cargo containers in a port before they are loaded onto a ship. A combination of methods was used to address this question. We present a summary of findings from our review of accidents caused by dangerous goods. We used this review as the basis for a risk assessment consisting of risk identification and a failure mode and effects analysis. The operational implications of a scanner were further assessed using a single-server queue model. This study considers a novel muon scanner technology that could mitigate the risk of accidental radiation exposure. The exact operational parameters of these scanners are not public, so we performed a sensitivity analysis with different scanning parameters. Our results and the expert interviews we conducted show that scanning the containers can reduce the risk. However, this practice may create new operational challenges regarding the management of detected misdeclared containers.
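
Since the scanner is assessed with a single-server queue model, here is a minimal sketch using the classical M/M/1 steady-state formulas, swept over assumed scan rates (the true scanner parameters are not public, hence the sensitivity loop; M/M/1, i.e., exponential arrivals and service, is this sketch's assumption, not necessarily the paper's):

    def mm1(arrival_rate, scan_rate):
        """Steady-state M/M/1 metrics for a container scanner (rates per hour)."""
        rho = arrival_rate / scan_rate
        assert rho < 1, "queue is unstable: scanner slower than arrivals"
        Lq = rho**2 / (1 - rho)        # mean number of containers waiting
        Wq = Lq / arrival_rate         # mean waiting time [h]
        return rho, Lq, Wq

    # Sensitivity over assumed scan rates
    for mu in (40, 60, 80):
        print(mu, mm1(arrival_rate=35, scan_rate=mu))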

11:55
Asymmetries in the electrical power supply dominating PSA results for nuclear power plants

ABSTRACT. In the recent past, several events reported from nuclear power plants indicated that failures caused by asymmetries in the electric power supply of a single component can trigger correlated failures of systems of different redundancies important to safety. Examples of such events occurred in the nuclear power plants Forsmark (Sweden, 2006 and 2013), Grohnde (Germany, 2011) and Byron (Illinois, United States of America, 2012). In order to investigate the risks from these incidents, an existing RiskSpectrum® Level 1 PSA plant model of a generic PWR has been extended. For this extension, a new approach for including correlated failures caused by asymmetries in the electric power supply has been developed by GRS. Asymmetries can occur in different scenarios, e.g., as static overcurrent in the emergency power supply, as static asymmetries in the external power supply, or as transient asymmetries in the electric power supply. Only the latter scenario has been considered in the PSA plant model so far. Five different approaches have been developed by GRS to model the failures caused by asymmetries. One approach, implemented earlier, only considers single failures caused by asymmetries. The four new approaches take into account correlated failures and either use operating experience directly or include the operating experience by means of a hierarchical model. Two of the new approaches use the common cause failure groups of RiskSpectrum®. The other two new approaches are more complex and require different subsequent steps and different computer programs. Several modelling assumptions have been made in these approaches. They have been scrutinised in a sensitivity analysis with regard to their potential for deviations, their sensitivity on the results, and their background knowledge. The evaluation of the five different approaches shows the following results. First, the four new approaches modelling the correlated failures led to similar core damage frequencies. Second, their results are significantly higher than those of the earlier single-failure approach (up to a factor of 10²). This clear difference is caused by the high number of correlated failures of different redundancies due to the asymmetries. And third, the core damage frequency caused by correlated failures due to asymmetries is clearly higher than the overall core damage frequency from internal initiating events. This outcome can be explained to some extent by the focus of the analysis on the single scenario. Finally, the results, including those of the sensitivity studies, allow the following conclusions:
- The consideration of correlated failures due to asymmetries in PSA models seems to be highly relevant.
- The choice among the four different approaches for modelling correlated failures has only a minor effect on the PSA results (up to a factor of four). Therefore, it is not necessary to apply one of the complex approaches.
- More scenarios with asymmetries have to be considered in order to realistically reflect their effects. For this purpose, the background knowledge on the different scenarios has to be increased.

12:10
Risk Assessment of Domino effects, Approaches and Methods Analysis in European Union and Slovak Republic

ABSTRACT. When a major industrial accident occurs in installations covered by European Directive 2012/18/EU (Seveso III), there is a probability of a specific consequence (phenomenon) called the "domino effect". Directive 2012/18/EU itself defines this phenomenon as "the risk of a major accident or its consequences could be exacerbated because of the geographical location and proximity of lower-tier and upper-tier establishments or groups of establishments and their stocks of dangerous substances" (Directive 2012/18/EU of the EP and of the Council). In 2015, most of the EU member states implemented the new requirements of the Seveso III Directive in their legal environments, bringing changes in the classification of hazardous substances, critical infrastructure protection, and civil protection. From the point of view of industrial accident prevention, domino effects and methodologies for identifying and assessing them are the main challenges for the Slovak Republic. This article presents the currently used methodologies for assessing companies with a potential for domino-effect escalation. A methodology specific to the Slovak Republic, which took into account its needs as well as the advantages and shortcomings of already used methods, was created in 2015. It is not precisely prescribed which procedures are to be used for identifying and assessing domino effects; therefore, every member country has defined its own methodology. The presented procedure differs from the others in its simplicity, understandability, and clear approach in the form of classical questionnaires. This methodology is not exceptional in comparison with other similar methodologies; however, it is specific: it is simplified and understandable, and it addresses only primary domino effects, because it has to take into consideration the real situation in this area in Slovakia. First of all, it is necessary to understand that the initial conditions for the Seveso establishments in Slovakia were different from the conditions in western and southern Europe. The transition of ownership relations from state-owned companies to private ownership did not have sufficient support from the public or from the competent bodies of state and public administration.

11:10-12:30 Session 3G: Maritime and Offshore Technology I
Chair:
11:10
Risk assessment of cryogenic fuels in marine transportation

ABSTRACT. The marine industry has been forced to move towards sustainable fuels. Cryogenic gases such as liquefied natural gas (LNG) and liquefied ammonia (LNH3) can be the solution for fuel storage and transportation, even for remote reservoirs. In addition, liquefied hydrogen (LH2) seems to be a long-term solution, and several studies address this new opportunity. To prepare for intensive use of these three fuels, a detailed comparison from the technical, economic, and environmental points of view is strongly needed. Nevertheless, a full understanding of the complex phenomena characterizing the accidental release of LNG, LH2, and LNH3 in a harbour environment has not yet been achieved. In this paper, these three alternative fuels are compared from a safety perspective by performing a risk assessment of bunkering in all possible modes, such as truck-to-ship, ship-to-ship, and tank-to-ship. In addition, the sources of the various uncertainties in the estimation of individual risk are presented and discussed.

11:25
Adopt or adapt? Seafaring communities of practices faced with increased automation

ABSTRACT. This study explores how we can understand seafarers' continuous development of local work practice in the face of new technology and discusses potential safety implications. Maritime transportation is undergoing rapid development in maritime autonomous surface vessels (MASS) and remote control, resulting in increasingly "smart" ships. Maritime professionals and communities will remain crucial in the safe operation of these systems; however, seafarers must learn new roles and work practices simultaneously with major changes in the sociotechnical systems. It is necessary to consider the impact of new technology from a social perspective, as emerging safe work practice is a collective accomplishment rooted in the context of interaction, situated in a system of ongoing practices, and adapted or adopted through participation in a community. The paper is based on a qualitative study that includes interviews with crew and participant observation on six car ferries using state-of-the-art automated systems and battery-electric propulsion. The findings show that seafarers adapt their work and learning practices through their physical and virtual communities of practice. The automated technology was applied in ways that diverged from the "imagined" use and can be seen as practical drift. We discuss how these adaptations developed and their potential safety effects, as well as how we can understand seafarers' social system in light of the increasing technological development in maritime transportation.

11:40
Agent-based modelling and analysis of search and rescue (SAR) operations in the Barents Sea
PRESENTER: Behrooz Ashrafi

ABSTRACT. Recently, there has been a significant increase in maritime activity in the Barents Sea, which is expected to grow in the coming years. This has intensified the potential for maritime accidents. For instance, 580 accidents were reported between 2007 and 2018 in Arctic waters, the capsizing of the "Onega" being one of the most recent cases in the Barents Sea, where two of the nineteen crew members were rescued and the rest were presumed missing and dead. The remoteness of the Arctic offshore environment, combined with harsh meteorological and oceanographic conditions, makes SAR operations in such regions challenging. In particular, SAR operations may be hindered by high waves and strong winds, heavy snow showers, heavy fog, and polar lows. Moreover, the general scarcity of port infrastructure along the Arctic coastline may negatively affect SAR operations. Different studies have been conducted on maritime SAR modelling, such as assessing the reliability of SAR systems using Bayesian belief networks, evaluating accessibility and response time at sea by employing GIS-based cost-distance techniques, and resource scheduling and task allocation using a genetic simulated annealing algorithm. Gaussian mixture models, Fourier transforms, and agent-based models have also been used for search optimization. However, the modelling and analysis of SAR operations in Arctic waters is under-researched. This study develops an agent-based modelling (ABM) framework to model and analyze SAR operations in the Barents Sea, while considering the uncertainties related to metocean parameters and the constraints related to the SAR infrastructure. To this aim, the proposed ABM framework will simulate different scenarios of SAR operations (e.g., different seasons, different vessel sizes, different locations, etc.) and evaluate the performance of the SAR system in such scenarios subject to uncertainties in the meteorological and oceanographic conditions. The performance of the SAR system in this model is measured as the total rescue time, from the start of the operation until all the people on the distressed ship are rescued. The agents represent the different components of the SAR operations (e.g., helicopters and vessels) and the vessel in distress, and they are able to make decisions based on the information available to them (e.g., location of the accident, weather conditions, availability of resources, etc.). The framework also takes into account the constraints related to the SAR infrastructure (e.g., the limited number of ports and helicopter bases in the region). The results of the simulations will be analyzed to identify potential bottlenecks in the SAR system and to propose strategies to improve the efficiency and effectiveness of SAR operations in the Barents Sea.
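
A deliberately simplified sketch of how an ABM-style rescue-time estimate could be computed: a helicopter shuttles survivors while a vessel steams to the scene, and a weather factor degrades both speeds. All agents, rules, and numbers are illustrative assumptions, not the framework proposed in the paper.

    def sar_time(n_pob=19, dist_km=150.0, heli_speed=250.0, heli_cap=6,
                 vessel_speed=30.0, weather=1.0):
        """Total rescue time [h] for one scenario. weather in (0,1] scales speeds
        to mimic harsh metocean conditions slowing both assets."""
        heli_round = 2 * dist_km / (heli_speed * weather)   # out-and-back flight time
        vessel_eta = dist_km / (vessel_speed * weather)
        t, rescued = 0.0, 0
        while rescued < n_pob:
            if t + heli_round / 2 <= vessel_eta:   # helicopter reaches the scene first
                t += heli_round
                rescued += heli_cap
            else:
                t = max(t, vessel_eta)             # vessel arrives and recovers the rest
                rescued = n_pob
        return t

    print(sar_time(weather=1.0), sar_time(weather=0.5))   # calm vs. degraded conditions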

11:55
Managing the Hazards of Ammonia in Seaports as a Potential Alternative Fuel for Green Shipping
PRESENTER: Karin Reinhold

ABSTRACT. The article discusses the importance of safety management systems for seaports, especially in the context of the use of ammonia as a potential alternative fuel for the shipping industry. The article highlights the need for constant monitoring to detect non-conformities and reduce accidents, emphasizing the hazards of simultaneous operations (SIMOPs) and the lack of knowledge on the safety, security, and environmental risks associated with ammonia storage and loading in port operations. The paper focuses on the safety management system of the Port of Sillamäe in Estonia, which has the largest ammonia storage in Europe and three berths for ammonia loading to ships. The study suggests that only refrigerated ammonia should be used for bunkering ships to minimize accident risks. No major accidents or severe injuries have been reported among personnel handling ammonia since the beginning of operations at the port. The article also discusses the production of green ammonia from renewable energy sources as a way to decarbonize ammonia production, while noting the need for high safety standards in its use as a hazardous chemical.

11:10-12:30 Session 3H: Mechanical and Structural Reliability
11:10
Fatigue damage prediction and reliability modeling of subsea wellhead system based on multi-factor coupling
PRESENTER: Shengnan Wu

ABSTRACT. The subsea wellhead system operates for long periods of time at depths of hundreds to thousands of meters below the ocean surface, serving as vital equipment that provides access for the casing hanger, seals the locking faces of the subsea BOP and riser, and supplies structural resistance and pressure-bearing interfaces during the drilling and production process. However, due to the complexity of the marine environment and operating conditions, this kind of structure is prone to failure caused by the cumulative effects of fatigue and degradation of components subject to loads from currents, waves, internal solitary waves, and soil, as well as large tensions and bending moments caused by platform movement. This paper presents an integrated approach to comprehensively predict the fatigue damage and reliability of subsea wellhead systems and to diagnose the underlying root causes during their service life. Multi-factor impacts on system performance are considered in the modeling. A multistate transition model of the wellhead components is proposed to analyze system degradation. A finite element model is established for fatigue damage prediction of the key components of the subsea wellhead, to explore how their mechanical properties change and which factors influence failure under different operational scenarios. Reliability evaluation of the critical components is performed and verified using a Monte Carlo simulation-based method, which also addresses the problem of insufficient data for subsea wellhead fatigue prediction. By embedding the multi-factor effects and multistate transitions into a dynamic Bayesian network (DBN), the system state can be updated effectively. The effects of multi-factor coupling, fatigue damage, degradation, and material aging on the subsea wellhead system are considered. An example of a subsea wellhead system demonstrates the application of the approach, through which the system reliability during its service life is predicted, and the most vulnerable components and the factors contributing most to system reliability, which deserve special attention, are identified.
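
The paper couples finite element models with a dynamic Bayesian network; as a generic stand-in, the following Monte Carlo sketch shows only the basic idea of sampling uncertain inputs and accumulating fatigue damage by Miner's rule against an S-N curve. All distributions and constants are illustrative, not calibrated to subsea wellheads.

    import numpy as np

    rng = np.random.default_rng(0)

    def fatigue_failure_prob(n_sim=100_000, years=20, cycles_per_year=2e6):
        """P(fatigue failure) via Miner's rule with an S-N curve N = C * S**(-m)."""
        C = rng.lognormal(mean=np.log(1e12), sigma=0.3, size=n_sim)  # S-N intercept scatter
        m = 3.0                                                      # S-N slope
        S = rng.weibull(2.0, size=n_sim) * 20.0     # representative stress range [MPa]
        damage_per_year = cycles_per_year * S**m / C                 # Miner damage rate
        return np.mean(damage_per_year * years >= 1.0)               # failure: damage >= 1

    print(fatigue_failure_prob())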

11:22
Probabilistic finite element-based reliability of corroded pipelines
PRESENTER: Abraham Mensah

ABSTRACT. The structural reliability of corroded pipelines subjected to internal pressure is generally assessed with explicit limit state functions. However, such closed-form burst pressure models lead to conservative reliability estimates, resulting in significant challenges in maintenance and risk management. This study presents a pathway for an implicit limit state approach that employs probabilistic numerical modelling, surrogate modelling, and a sample-based reliability method to provide computationally efficient probability-of-failure estimates for corroded pipelines. Machine learning approaches such as polynomial chaos-Kriging, support vector machine regression, and Kriging were employed to develop a surrogate model based on the design and response points from the generated design of experiments. The reliability estimates from this approach are compared with simulation-based reliability methods to evaluate the efficiency and computational cost of these approaches. It is observed from the sensitivity studies that the failure pressure of the corroded pipe depends more on the pipe's tensile strength properties than on the yield strength. It is worth noting that the corrosion defect length and depth have a greater influence on the failure pressure than the defect width. The insignificant contribution of pressure loading is ignored in the development of the surrogate model, which is consistent with Det Norske Veritas' explicit burst pressure formulation. The proposed approach improves the probability-of-failure estimates while reducing the simulation cost, thereby enhancing the opportunities for efficient risk considerations.
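
A minimal sketch of the surrogate-based reliability workflow the abstract describes: fit a cheap response surface to a small design of experiments on a stand-in burst-pressure model, then run Monte Carlo on the surrogate. Here an ordinary quadratic least-squares surface replaces the paper's Kriging/PC-Kriging/SVR surrogates, and the burst model, distributions, and operating pressure are placeholders.

    import numpy as np

    rng = np.random.default_rng(1)

    def burst(dt, uts):
        """Placeholder 'expensive' model: burst pressure [MPa] vs. defect depth
        ratio d/t and ultimate tensile strength [MPa] (not the paper's FE model)."""
        return 2.0 * uts * 0.02 * (1 - dt) / (1 - dt / 1.5)

    # Small design of experiments and a quadratic response-surface surrogate
    X = rng.uniform([0.1, 450], [0.8, 600], size=(30, 2))
    y = burst(X[:, 0], X[:, 1])
    A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                         X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Monte Carlo on the cheap surrogate: failure when burst pressure < operating pressure
    n = 200_000
    dt = rng.normal(0.4, 0.08, n).clip(0.05, 0.95)
    uts = rng.normal(520, 25, n)
    A_mc = np.column_stack([np.ones(n), dt, uts, dt**2, uts**2, dt * uts])
    pf = np.mean(A_mc @ beta < 15.0)   # illustrative operating pressure of 15 MPa
    print(f"probability of failure ~ {pf:.4f}")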

11:34
MITIGATING THE RISKS OF ENERGETIC FACILITIES BY CLEANING INTERNAL SURFACES
PRESENTER: Dana Prochazkova

ABSTRACT. An important asset of any state is its energy infrastructure, which consists of energy facilities of various types and their interconnections. In order for energy facilities such as boilers, turbines, engines, generators, heating and cooling systems, and many others to ensure the safe operation of critical energy infrastructure, specific maintenance must be carried out to mitigate specific risks. In this article, we focus on the problems of heating and cooling systems. Dirt and deposits on the inner surfaces of these facilities are a problem because they impede heat transfer, causing a reduction in efficiency, an increase in energy and pressure losses, a reduction in controllability, and an overall decrease in performance. We focus on mitigating internal risks such as corrosion, erosion, fouling, and mechanical damage to the monitored energy equipment, and we examine clogging in detail. Clogging is divided into several types: crystallization and precipitation, clotting, particle silting (sedimentation and alluvial particles), corrosion clogging, silting due to chemical reaction, biological clogging, frost silting, or a combination of the previous types. The safety, reliability, durability, and sustainability of the working parameters of industrial and energy facilities, and of the entire demanding systems they serve, therefore depend on the quality of maintenance of the internal surfaces of the monitored facility. Maintenance is carried out by cleaning the internal surfaces, using methods that are divided into mechanical and chemical. When applying online methods (dry cleaning and sound cleaning), there is no need to shut down the facility; for off-line methods (manual mechanical cleaning, light blasting, high-pressure water cleaning, projectile cleaning, and other special cleaning methods), the facility must be taken out of service. Maintenance by cleaning the internal surfaces of these facilities is generally difficult but can be done with the necessary information and appropriate methods. Industrial and energy equipment is always made of a number of different materials (steel, cast iron, brass, copper, plastics); therefore, it is necessary to select cleaning methods and cleaning agents that will not damage any of the individual materials of the facility, and a suitable method must be chosen for each material. To select a suitable method for cleaning energy facilities, especially heating systems, that will not damage the facility's materials, we conduct experiments in a special laboratory. During the experiments, we monitor both the condition of the material and the heat flow, heat transfer, and heating time. The aim of the experiments is to determine not only the appropriate cleaning method but also the cleaning procedure appropriate to local conditions, so that operation meets the requirements placed on it. We use a checklist to determine the risks of the monitored energy facilities and data from relevant standards for risk assessment. Experiments have shown that in some cases cleaning must be carried out repeatedly in order to achieve the required heating time. Therefore, maintenance programs have been proposed on a case-by-case basis to guarantee the safe operation of the facility in question under the given conditions.

11:46
Segmenting without Annotating: Crack Segmentation and Monitoring via Post-hoc Classifier Explanations
PRESENTER: Florent Forest

ABSTRACT. Monitoring cracks in walls, roads, and other types of infrastructure is essential to ensure the safety of a structure and plays an important role in structural health monitoring. Automatic visual inspection allows efficient, cost-effective, and safe health monitoring, especially in hard-to-reach locations. To this aim, data-driven approaches based on machine learning have demonstrated their effectiveness, at the expense of annotating large sets of images for supervised training. Once damage has been detected, one also needs to monitor the evolution of its severity in order to trigger a timely maintenance operation and avoid catastrophic consequences. This evaluation requires a precise segmentation of the damage. However, pixel-level annotation of images for segmentation is labor-intensive, whereas labeling images for a classification task is relatively cheap. To circumvent the cost of annotating images for segmentation, recent works inspired by explainable AI (XAI) have proposed using the post-hoc explanations of a classifier to obtain a segmentation of the input image. In this work, we study the application of XAI techniques to the detection and monitoring of cracks in masonry wall surfaces. We benchmark different post-hoc explainability methods in terms of segmentation quality and the accuracy of damage severity quantification (for example, the width of a crack), thus enabling timely decision-making.
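
A minimal sketch of the explanation-as-segmentation idea, using plain input-gradient saliency as a stand-in for the post-hoc methods benchmarked in the paper; clf is assumed to be any trained PyTorch crack/no-crack classifier, and the width proxy is a crude illustration of severity quantification.

    import torch

    def saliency_mask(clf, image, threshold=0.5):
        """Post-hoc 'segmentation': gradient of the crack logit w.r.t. input pixels.
        clf: trained classifier mapping (1, C, H, W) -> (1, 2) logits, crack class 1.
        image: (C, H, W) float tensor. Returns a boolean (H, W) mask; threshold is
        a fraction of the maximum saliency."""
        image = image.clone().requires_grad_(True)
        score = clf(image.unsqueeze(0))[0, 1]      # crack logit
        score.backward()
        sal = image.grad.abs().max(dim=0).values   # max over channels -> (H, W)
        sal = sal / (sal.max() + 1e-12)
        return sal > threshold

    def crack_width_px(mask):
        """Crude severity proxy: largest number of crack pixels in any single row."""
        return int(max((row.sum() for row in mask), default=0))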

11:58
A New Approach for Fault Diagnosis of Rolling Bearings Based on Adaptive Batch Normalization and Attention Mechanism

ABSTRACT. This paper proposes a single-branch transfer learning method with a noise-reduction attention mechanism for cross-domain fault diagnosis of rolling bearings. First, adaptive batch normalization is added to the model to ensure its domain adaptation capability. Furthermore, to improve the model's ability to suppress noise-related features in a noisy environment, the noise-reduction attention mechanism is introduced. With sufficient experimental verification, the results show that the proposed method achieves satisfactory performance.
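
A minimal PyTorch sketch of the standard adaptive batch normalization idea (re-estimating BN statistics on unlabeled target-domain data while keeping all learned weights fixed); the paper's noise-reduction attention mechanism is a separate component not shown here.

    import torch

    @torch.no_grad()
    def adapt_batchnorm(model, target_loader, momentum=None):
        """Adaptive batch normalization: recompute BN running statistics on the
        (unlabeled) target domain, leaving all learned weights untouched."""
        for m in model.modules():
            if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
                m.reset_running_stats()
                if momentum is not None:
                    m.momentum = momentum
        model.train()             # BN updates its running stats only in train mode
        for x in target_loader:   # forward passes only; no labels, no gradients
            model(x)
        model.eval()
        return model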

11:10-12:30 Session 3I: Mathematical Methods in Reliability and Safety I
11:10
Bayesian inference for the bounded transformed gamma process

ABSTRACT. Very recently, a new degradation model, named the bounded transformed gamma (BTG) process, has been proposed to describe bounded degradation phenomena, where the degradation level cannot exceed a given upper bound due to inherent features of the mechanism causing the degradation. In this paper, a Bayesian estimation approach is developed and illustrated for this stochastic process, on the basis of prior information on the upper bound and on other physical characteristics of the degradation phenomenon under observation. Several different prior distributions are proposed to model different degrees of analyst knowledge and to convey them into the inferential procedure. A Markov chain Monte Carlo technique is adopted to estimate the process parameters and some functions thereof, such as the mean degradation level and the residual reliability of a unit, as well as to predict the future degradation growth. Finally, the proposed approach is applied to a real dataset consisting of wear measurements of the liners of an 8-cylinder diesel engine for marine propulsion.
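
A minimal sketch of the Bayesian machinery involved, applied for simplicity to a plain (unbounded) gamma process with unit-time increments dX ~ Gamma(a, scale b): a random-walk Metropolis sampler over (a, b) with vague gamma priors. The BTG process itself adds the bound and transformation, which are not reproduced here.

    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(2)

    # Simulated degradation increments over unit time steps (stand-in for wear data)
    a_true, b_true = 1.5, 0.4
    dx = rng.gamma(a_true, b_true, size=50)

    def log_post(a, b):
        if a <= 0 or b <= 0:
            return -np.inf
        lp = gamma.logpdf(dx, a, scale=b).sum()                          # likelihood
        lp += gamma.logpdf(a, 2, scale=2) + gamma.logpdf(b, 2, scale=2)  # vague priors
        return lp

    # Random-walk Metropolis over (a, b)
    theta = np.array([1.0, 1.0])
    lp = log_post(*theta)
    samples = []
    for _ in range(20_000):
        prop = theta + rng.normal(0, 0.1, 2)
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    print(np.mean(samples[5000:], axis=0))   # posterior means after burn-in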

11:25
Analysis of the use of field data under variable conditions to develop lifetime models for electrical distribution devices.
PRESENTER: Roman Mukin

ABSTRACT. Lifetime models have predominantly been developed using constant but accelerated conditions to assess device lifetime and the acceleration factor under different conditions. Especially for highly reliable devices, as found in electrical distribution systems, this approach is expensive and time-consuming. On the other hand, online monitoring provides a large amount of data on the conditions and failures of a fleet of devices; however, constant conditions are generally not present. Therefore, developing efficient methods to estimate parameters from field data is of interest.

Proportional hazard (PH) and accelerated failure time (AFT) models are commonly used to describe the failure of devices under time-varying stress factors. This work analyses how these can be used efficiently to estimate reliability models' parameters, focusing on real-world electrical distribution devices.

The reliability function of a highly reliable device is challenging to acquire, as failure will generally only happen after a long time, and most of the time devices are not run until failure. In addition, the dependency of the failure rate on the environmental conditions in which the device is operating requires a series of experiments to infer the acceleration factors in the classical setting. Therefore, for such devices, accurate reliability curves or hazard rates are often not known, which limits the application of lifetime models, e.g., for maintenance or service planning. Up to now, mainly the "average" reliability of a type of device was used, meaning that the environmental conditions were often unknown; for this, field or fleet data was most often already used. Where even this was not possible, the reliabilities of whole classes of devices were studied. Overall, aggregating failure data over a diverse population leads to a spread of the reliability curve compared to one obtained for a specific device type or specific environmental conditions, hindering a precise prediction of failure. It is therefore of interest to find ways to use all available information to improve this. We explore this in this study for two different models, using simulated failure data based on real environmental conditions.

11:40
Remaining useful life estimation of gamma degrading units characterized by a bathtub-shaped degradation rate in the presence of random effect and measurement error

ABSTRACT. This paper proposes a new gamma process with a bathtub-shaped degradation rate function that accounts for the presence of random effects and measurement error. The main features of the model are illustrated, and maximum likelihood estimation of its parameters is addressed. The likelihood function is not available in closed form and has a complex structure; since its direct maximization poses serious computational issues, estimates are retrieved by using an ad hoc procedure that combines an expectation-maximization algorithm and a particle filter method. The same particle filter algorithm is also adopted to compute the probability distribution function of the remaining useful life, which constitutes the core prognostic tool in condition-based maintenance. The probability distribution function of the remaining useful life is formulated using a failure threshold model. As a motivating example, the proposed model is applied to a set of real degradation data of MOS field-effect transistors, which gives clear evidence that the empirical degradation rate is bathtub-shaped. The obtained results demonstrate the utility and affordability of the proposed model.


11:55
Application of Probabilistic Risk Analysis in the Overhaul of Aero-engines using a combination of Bayesian Networks and Fuzzy Set Theory - A Case Study

ABSTRACT. As technology advances over the years, its complexity increases proportionally, and these advances bring new risks. In aircraft maintenance activities, identifying and responding to risks is fundamental, since an engine failure during a flight may cause a forced landing and, tragically, deaths. This reality makes it essential to monitor, identify, and prioritize risk treatment during aero-engine maintenance. What makes prioritizing risk treatment both essential and complex is the high number of risks identified, which is the usual situation in most repair stations and was observed in a major aero-engine repair station. At the last European Safety and Reliability Conference, held in 2022 in Dublin, the author presented a method for probabilistic risk analysis in the overhaul of aero-engines using a combination of Bayesian networks and fuzzy set theory, aimed at meeting Brazilian National Civil Aviation Agency regulations and the requirements of AS9100 Rev. D. The method allowed sensitivity analysis and the prioritization of preventive and corrective measures to minimize the probability of failure and maintain a safe operation. This study complements the one presented at ESREL 2022 and focuses on demonstrating the application of the method. The objective is to apply the model using Bayesian networks and fuzzy set theory (FST) to prioritize actions regarding the risks that affect the operation of a repair station. As a result, the combination of Bayesian network modeling integrated with fuzzy set theory, referred to as a fuzzy Bayesian network (FBN), proved to be a more effective and precise method for combining risks generated from different sources. The contribution is significant, since the proposed method allows process optimization and risk reduction in the repair station and permits decision-makers to assign funds to critical activities, implementing actions that can impact the safety of the process and system reliability. The present study will augment the knowledge of process, maintenance, and safety engineers and managers and help in process improvement. It can impact the company's risk management processes and help in understanding performance and safety during engine overhaul. Although conducted in a specific repair station, the approach can be generalized to other industries and fields of work where safety is affected by risks resulting in waste, rework, and unnecessary energy consumption. The study can change the practice and thinking of professionals dealing with safety in companies' operations.

12:10
Analysis of Efficiency in Response Surface Designs Considering Orthogonality Deviations and Cost Models

ABSTRACT. In both science and engineering, experimentation is an essential part of the operational method, especially when it comes to functional or reliability testing. Statistical design of experiments refers to the process of planning tests so that appropriate data can be collected and analyzed using statistical methods, leading to valid and objective conclusions. The statistical approach to experimental design is necessary to derive reliable evidence from the data: quantifying the significant influences of factors on a variable of interest, that is, the significance as well as the main effects and interactions, as an equation that can be characterized in linear and/or polynomial terms. The state of the art already offers widely used test designs for this purpose, such as the well-known full factorial and the central composite design (CCD). These designs generally require the main factors to be set orthogonally in order to obtain uncorrelated and unmixed, or only partially aliased, (main) effects, and therefore definite conclusions. However, the orthogonality of test designs cannot always be strictly maintained, or is deliberately relaxed. This inevitably results in aliased effects and altered test power in effect estimation, depending on the choice of type-I error [1]. Consequently, in practice there are cases in which not all test runs are performed as planned, or runs deviate randomly from their nominal values. Nevertheless, these cases are often still conducted and evaluated or, as intended here, manipulated on purpose to find quantifiably more efficient test designs. This results in a trade-off between, on the one hand, the costs potentially saved by not avoiding these real cases of orthogonality deviations and, on the other hand, the opportunity costs corresponding to the benefit that would have been realized from the higher model accuracy. In this paper, this trade-off is quantified. First, relevant trends in the impact of orthogonality deviations on model quality are presented, as recently found [2]. These are then contrasted with cost models for full factorial test designs and CCDs that have been developed for the present study. Based on a market review, cost sources such as energy, time, hardware, test-setup quality, and measurement accuracy are combined in a standardized set. In conclusion, a price tag is generated for the same factors as already examined for the deviations from orthogonality, whereby percentage changes in testing costs can be compared with the relative changes in performance indicators of statistical testing, such as power and regression quality. Thus, a statistical as well as economic description of orthogonality deviations is achieved, whose optima are presented in conclusion. References: [1] D. C. Montgomery. Design and Analysis of Experiments, 10th ed., 2020, ISBN: 9781119722106. [2] M. Arndt, P. Mell and M. Dazer. Generic effects of deviations from test design orthogonality on test power and regression modelling of Central-Composite Designs, PSAM 16, 2022, Honolulu, Hawaii.
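
A minimal numeric sketch of the kind of orthogonality-deviation study referenced in [2]: perturb the factor levels of a 2^2 factorial design with random noise and track the relative D-efficiency of the information matrix (the cost models developed in the paper are not reproduced here).

    import numpy as np

    rng = np.random.default_rng(3)

    def d_efficiency(X):
        """D-criterion per run for a first-order model with interaction."""
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
        return np.linalg.det(A.T @ A) ** (1 / A.shape[1]) / len(X)

    X0 = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)  # orthogonal 2^2 design
    base = d_efficiency(X0)
    for sigma in (0.0, 0.05, 0.1, 0.2):   # random deviations of the set factor levels
        eff = np.mean([d_efficiency(X0 + rng.normal(0, sigma, X0.shape))
                       for _ in range(2000)])
        print(f"sigma={sigma:.2f}  relative D-efficiency={eff / base:.3f}")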

12:40-13:40Lunch Break
13:45-14:25 Session 4: Plenary talk: Stephen Porter - VWay, Silver Sponsor

Stephen has over 30 years' experience in software product development, from deeply embedded safety-critical systems to large-scale industrial operations. He is currently focused on enabling improvements in the safe and (cyber) secure operation of complex systems in zero-emission smart transportation, clean energy production, and 'patient-outcome driven' health solutions. Key to meeting this objective is providing the design, development, and deployment tools needed to orchestrate operations, methods, and processes amongst relevant stakeholders. STPA has emerged as the new paradigm that underpins these efforts, helping innovators deliver safer, cleaner, and more secure solutions. Stephen is a scale-up expert, having assisted many software product companies including Polarion (acquired by Siemens in 2016), Jama Software (acquired by Insight Partners in 2018) and, more recently, Intland Codebeamer SDC (Intland were acquired by PTC in 2022). In earlier years at Wind River (pre/post IPO), he led global activities in the instrumentation, communications, and controls sector, as well as having successfully founded, grown, and divested several private companies along the way.

14:30-16:15 Session 5A: Maintenance Modelling and applications II

Maintenance, Modelling and Applications session II

Location: Room 100/3023
14:30
A phase-type maintenance model considering condition-based inspections and delays before the repairs
PRESENTER: Tianqi Sun

ABSTRACT. Markov models are widely used in maintenance modelling and system performance analysis due to their computational efficiency and analytical tractability. However, these models are usually restricted by the use of exponential distributions, which form the basis of Markov modelling. Phase-type distributions provide a tool for approximating an adequate distribution, such as a Weibull or log-normal, by means of Markov processes. Our earlier work proposed a phase-type maintenance model considering both condition-based inspections and delays before the repairs, where extra matrices are defined in the modelling of repair delays to keep track of the probability masses to repair. The model provides quite good estimates but is complex and requires good knowledge for its implementation. This paper aims to eliminate the extra matrices and investigate the modelling of repair delays with phase-type distributions. An illustrative case of road bridges is presented to demonstrate the modelling process and the results.
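
A minimal sketch of the phase-type idea: moment-match an Erlang distribution (a simple phase-type) to a given repair-delay mean and squared coefficient of variation, and evaluate its CDF from the matrix exponential of the sub-generator. The delay parameters are illustrative, not taken from the bridge case study.

    import numpy as np
    from scipy.linalg import expm

    def erlang_phase_type(mean, cv2):
        """Moment-matched Erlang phase-type (initial vector alpha, sub-generator S)
        for a positive delay with given mean and squared coeff. of variation <= 1."""
        k = max(1, round(1 / cv2))                    # number of phases
        lam = k / mean                                # per-phase rate
        S = -lam * np.eye(k) + lam * np.eye(k, k=1)   # bidiagonal sub-generator
        alpha = np.zeros(k)
        alpha[0] = 1.0
        return alpha, S

    def ph_cdf(alpha, S, t):
        """P(T <= t) = 1 - alpha @ exp(S t) @ 1 for a phase-type distribution."""
        return 1.0 - alpha @ expm(S * t) @ np.ones(len(alpha))

    alpha, S = erlang_phase_type(mean=30.0, cv2=0.25)   # e.g., a 30-day repair delay
    print(ph_cdf(alpha, S, 30.0))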

14:45
Fault Prediction in a Smart Building Lighting System
PRESENTER: Anas Hossini

ABSTRACT. In order to meet the challenges (economic, energy, user comfort, security, etc.) in Smart Buildings (SB), maintenance is crucial. With advances in many fields, such as sensing technologies, new connectivity options, and improved IoT architectures, predictive maintenance has been proposed as a new paradigm in the field of operational safety, allowing maintenance operations to be performed after the prediction of certain failures or degradations. Several failure prediction models have been proposed for general use cases, such as in N. Muthumani (2010). When considering an SB, most of them are data-based approaches, such as in A.G. Susto et al. (2015) and Y. Bouabdallaoui et al. (2021). However, these approaches are not always applicable because they require a large amount of failure data, which is generally not available for SBs. Moreover, an SB can be seen as a system of systems where failures in one system can propagate and impact other systems, making maintenance decisions difficult. Designing a process that optimizes operating costs through maintenance is therefore a very complex task. Such a decision-making process requires the development of predictive models that can predict the failure of each SB subsystem and integrate their various interactions. Such failure prediction models must be able to evolve according to the maintenance actions performed over time. Thus, a hybrid method that considers both prediction and decision-making optimization is realistically better suited to the SB context. In this paper, we propose a predictive model that fits into this hybrid approach. This model is based on a Bayesian network that is scalable according to the operating condition of a system component and works with a small amount of data. Given the lack of failure data for these systems, we rely solely on manufacturers' data characterizing each component to build its failure probability distribution. As a case study, we consider the SB lighting system and its interactions with the energy system. These distributions are integrated into the Bayesian network, which calculates the lighting system's failure probability. This model then allows us to calculate the lighting system's reliability and availability. These metrics can be used to manage maintenance decisions, since if a maintenance action is performed, the Bayesian network is updated according to the operating status of each component.

References:

N. Muthumani (2010). A Survey on Failure Prediction Methods. ACM Computing Surveys 42(3). A. G. Susto, A. Schirru, S. Pampuri, S. McLoone and A. Beghi (2015). Machine Learning for Predictive Maintenance: A Multiple Classifiers Approach. IEEE Transactions on Industrial Informatics (Volume 11, Issue 3). Y. Bouabdallaoui, Z. Lahfaj, P. Yim, L. Ducoulombier and B. Bennadji (2021). Predictive Maintenance in Building Facilities: A Machine Learning-Based Approach. Sensors (ISSN 1424-8220).

15:00
An adaptive prescriptive maintenance policy for a gamma deteriorating unit
PRESENTER: Nicola Esposito

ABSTRACT. In this paper, we propose an adaptive prescriptive maintenance policy that generalizes the one proposed in [1]. As in [1], the policy consists of performing a single inspection, aimed at measuring the degradation level of the unit, at a predetermined inspection time. Based on the outcome of this inspection, it is decided whether to immediately replace the unit or to postpone its replacement to a later time. In case of postponement, the usage rate of the unit may be changed if deemed convenient. The main novelty with respect to [1] is that, in case the replacement is postponed, the value of the usage rate in the remainder of the maintenance cycle (i.e., the time elapsing between the inspection time and the replacement time) is determined based on the degradation level measured at the inspection time. The optimal maintenance policy is defined by maximizing the long-run average reward rate. After each replacement, the unit is considered as good as new. The lifetime of the unit is defined using a failure threshold model. It is assumed that failures are not self-announcing and that failed units can continue to operate, albeit with reduced performance and/or additional costs. Maintenance costs comprise the cost of preventive replacements, corrective replacements, inspections, logistic costs, downtime costs (which account for time spent in a failed state), and costs that account for the change of the unit's working rate. These latter costs also include the possible penalty incurred by failing to comply with contract clauses.

[1] Esposito N., Castanier B., and Giorgio M., 2022. A prescriptive maintenance policy for a gamma deteriorating unit. Proceedings of the 32nd European Safety and Reliability Conference (ESREL2022), Research Publishing Services, Singapore.
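
A minimal Monte Carlo sketch of evaluating a single-inspection policy of this general type by renewal-reward reasoning (expected cycle cost over expected cycle length). The postponement rule, cost figures, and gamma parameters below are illustrative placeholders, not the optimized policy of the paper.

    import numpy as np

    rng = np.random.default_rng(4)

    def cycle(tau=10.0, L=20.0, a=1.0, b=0.8,
              c_insp=1, c_prev=10, c_corr=25, c_down=0.5):
        """One renewal cycle: gamma deterioration X(t) ~ Gamma(a*t, b), failure when
        X crosses L (not self-announcing), a single inspection at time tau."""
        x = rng.gamma(a * tau, b)                  # level found at inspection
        if x >= L:                                 # already failed: corrective replacement
            return c_insp + c_corr + c_down * tau / 2, tau   # crude expected downtime
        delay = max(0.5, (L - x) / (a * b) * 0.8)  # illustrative prescriptive postponement
        x_end = x + rng.gamma(a * delay, b)
        cost = c_insp + (c_corr if x_end >= L else c_prev)
        return cost, tau + delay

    results = np.array([cycle() for _ in range(50_000)])
    print("long-run cost rate ~", results[:, 0].mean() / results[:, 1].mean())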

15:15
Integrated Planning of Usage-Based Maintenance and Load-Sharing under Resource Dependence

ABSTRACT. In many systems, functionally interchangeable units are used as a fleet to meet a common demand or production target. Examples include parallel machines in production facilities, generators in power plants, engines of a vessel, and fleets of ships, airplanes, or trucks. These units are typically maintained depending on their usage; therefore, the timing of their maintenance is directly affected by the policy that determines how the total demand is allocated to the units. We assume that there is a limit on how many units can undergo large-scale maintenance, such as overhauls, at the same time, because of the resources involved (e.g., a dry dock, hangar, or specialized workforce) and/or because the demand needs to be met at all times. In this study, the problem of integrated planning of usage-based maintenance and load-sharing (i.e., the allocation of the total demand to the different units) for multi-unit systems is analyzed analytically. The aim of the study is to determine (near-)optimal policies that minimize the total maintenance costs over the finite lifetime of the components (which is generally on the order of 10-40 years).

15:30
Dependencies and resource constraints in opportunistic maintenance modeling: a systematic literature review
PRESENTER: Lucas Equeter

ABSTRACT. Opportunistic maintenance (OM) reduces downtime and costs by performing several maintenance actions together, exploiting the dependencies between components. Indeed, dependencies (economic, stochastic, structural, etc.) between system components affect the benefits of OM, and the number of variables they create in dependence modeling induces strong but scattered hypotheses in the literature. Existing reviews either do not explore the variety of hypotheses, including dependencies [1], [2], or are not specific to OM modeling [3]-[5]. The present work reviews the current state of hypotheses related to dependencies and resource constraints, including human resources and workers' skills, in OM modeling and optimization, using a systematic literature review protocol [6]. The review is based on four research questions that guide the selection of publications pertinent to the topic. The questions concern how workers' skills, dependencies, and resource constraints are taken into account in OM modeling; how OM is defined in the corresponding literature; how economic dependency is modeled; and what the optimization objectives of the corresponding literature are. The results show the predominance of structural and stochastic dependence in the corpus, contrasting with the scarcity of workers' skills modeling. Current approaches therefore tend to lack a global view of possible hypotheses, which may deter industrial applications due to their limited assumptions. Further research could focus on more comprehensive models that better adjust to the variety of the industrial world.

[1] H. Ab-Samat and S. Kamaruddin, ‘Opportunistic maintenance (OM) as a new advancement in maintenance approaches: A review’, J. Qual. Maint. Eng., vol. 20, no. 2, pp. 98–121, 2014, doi: 10.1108/JQME-04-2013-0018. [2] R. Dekker, ‘Applications of maintenance optimization models: a review and analysis’, Reliab. Eng. Syst. Saf., vol. 51, no. 3, pp. 229–240, Mar. 1996, doi: 10.1016/0951-8320(95)00076-3. [3] H. Wang, ‘A survey of maintenance policies of deteriorating systems’, Eur. J. Oper. Res., vol. 139, no. 3, pp. 469–489, Jun. 2002, doi: 10.1016/S0377-2217(01)00197-7. [4] A. Van Horenbeek, J. Buré, D. Cattrysse, L. Pintelon, and P. Vansteenwegen, ‘Joint maintenance and inventory optimization systems: A review’, Int. J. Prod. Econ., vol. 143, no. 2, pp. 499–508, Jun. 2013, doi: 10.1016/j.ijpe.2012.04.001. [5] B. de Jonge and P. Scarf, ‘A review on maintenance optimization’, Eur. J. Oper. Res., vol. 285, no. 3, pp. 805–824, Sep. 2020, doi: 10.1016/j.ejor.2019.09.047. [6] B. Kitchenham, O. Pearl Brereton, D. Budgen, M. Turner, J. Bailey, and S. Linkman, ‘Systematic literature reviews in software engineering – A systematic literature review’, Inf. Softw. Technol., vol. 51, no. 1, pp. 7–15, 2009, doi: 10.1016/j.infsof.2008.09.009.

15:45
A new maintenance efficiency model and inference method for interval censored failure data

ABSTRACT. GRTgaz owns and operates the longest high-pressure natural gas transmission network in France. Its industrial assets include more than 32,500 km of pipeline and 26 compressor stations. The R&D center (RICE) of GRTgaz is developing tools to model these assets in order to optimize their management, particularly in terms of maintenance policies. These tools are based on reliability distributions that consider equipment aging and on maintenance models.

The intrinsic aging is modelled using probability distributions (e.g. Weibull) for the operating time of an unmaintained system. Maintenance effects are supposed to be imperfect, between no effect and renewal. Many imperfect maintenance models exist in the literature. Among them, GRTgaz selected the ARA virtual age models as relevant for its industrial applications [1].

Equipment reliability estimation is based on statistical methods for analyzing failure and maintenance data. However, these data are not always precisely known. Here, instead of observing a failure time with certainty, the only information available is that the failure occurred between two maintenance dates. This is a type of interval censoring. The treatment of censored data is classic when the data are realizations of independent and identically distributed random variables. It is less so for random point processes, and especially for virtual age models.

In our use case, preventive maintenances (PM) are planned at deterministic times. When a failure occurs, the associated corrective maintenance (CM) is performed only at the time of the next PM. Therefore, PM effects will be different depending on whether or not there has been a failure since the previous maintenance. Both maintenance effects are imperfect. A first contribution of this study is to propose a maintenance model for this situation, based on ARA assumptions.

In the GRTgaz case, failure times are not observed; failures are detected at the time of the next PM. Two situations are considered: either the number of failures in each interval is known, or we only know whether at least one failure has occurred.

A first experiment, presented at ESREL 2022 [2], proposed generating pseudo failure times to replace the censored times, in order to get back to the complete-data case. A better method is to compute the likelihood associated with both censored-observation situations. A second contribution of this study is to estimate the model parameters in each of these censoring cases. The quality of the estimates is assessed on simulated data and compared to the complete-data case. Thanks to these methods, we are able to evaluate the aging and maintenance efficiency of GRTgaz's industrial assets.

[1] L. Doyen, O. Gaudoin. Modelling and assessment of aging and efficiency of corrective and planned preventive maintenance. IEEE Transactions on Reliability, 60(4), 759-769, 2011.

[2] T. Cousino, L. Doyen, O. Gaudoin, F. Brissaud, L. Marle. Estimation of ageing and maintenance efficiency of industrial tools, considering interval censored data. Proceedings of the European Safety and Reliability Conference, 2022.
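
A minimal sketch of the kind of virtual-age mechanics involved, using an ARA-infinity effect with a Weibull baseline and Poisson failure counts that are only observed at each PM (interval censoring). The paper's model, which distinguishes PM effects with and without an intervening failure, is richer than this single-effect sketch.

    import numpy as np

    rng = np.random.default_rng(5)

    def simulate_ara(rho=0.6, pm_interval=1.0, horizon=20.0, beta=2.5, eta=3.0):
        """Virtual-age simulation: periodic PMs rejuvenate the unit; failure counts
        between PMs follow the Weibull cumulative hazard and are detected at PMs."""
        H = lambda v: (v / eta) ** beta                 # Weibull cumulative hazard
        v, t, counts = 0.0, 0.0, []
        while t < horizon:
            n = rng.poisson(H(v + pm_interval) - H(v))  # failures in this interval
            counts.append(n)                            # observed only at the next PM
            v = (1 - rho) * (v + pm_interval)           # imperfect PM reduces virtual age
            t += pm_interval
        return counts

    print(simulate_ara())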

16:00
MAINTENANCE METHOD EVALUATION OF CASING ALIGNMENT USING LASER MEASUREMENT TECHNIQUE: A CASE STUDY OF A 100 MW CLASS GAS TURBINE GENERATOR

ABSTRACT. In an asset management system, the management of maintenance activities includes both preventive and corrective maintenance methodologies, defined through maintenance specifications and schedules, procedures for maintenance execution and missed maintenance, and inspection measurements and results. During a major inspection overhaul, the casing alignment activity on rotating equipment, especially a gas turbine generator (GTG), is a maintenance activity grouped under life cycle delivery; it is a follow-up on asset performance and health monitoring. This activity involves aligning or leveling the turbine casing in the X and Y axes so that the GTG unit can operate reliably. During the life cycle of the asset, from the construction stage to operation, the structural characteristics of the GTG foundation change, resulting in a change in the position of the turbine casing, both the internal and the external casing. Changes in casing position and casing deformation can result in misalignment and vibration when the unit is operating. This work aims to provide an understanding of the effectiveness of using laser alignment on the GTG casing. The method takes measurement points on the bearings, diaphragm, compressor casing, compressor vane carrier (CVC), turbine casing, turbine vane carrier (TVC), combustor casing, inlet casing, exhaust casing, and all pedestal bearings. Casing alignment work is generally done by two methods, top-on and top-off: top-on is carried out with the upper casing attached to the lower casing, while top-off is carried out without the upper casing installed; both methods are performed without the rotor being installed in the turbine casing. Based on the case study of a 100 MW class GTG, the laser technique has been proven to speed up measurement time by three days compared with the piano wire method and to deliver accurate results, with measurements resolving 0.001 mm when carrying out internal alignment on the GTG. This method benefits unit owners through reduced maintenance time and increased unit performance.

16:15
Imperfect maintenance policy in a degrading system with two dependent components

ABSTRACT. Due to the rise of increasingly complex systems, research on degradation modelling no longer focuses only on univariate models but also considers multivariate models, which allows the maintenance of industrial systems to be evaluated in a more realistic way. Some models for multi-component degrading systems assume that the components degrade independently. Although such an assumption allows tractable mathematical models, it remains unrealistic for systems where stochastic dependence is indeed present. This work focuses on the study of a system consisting of two dependent components. The dependence is modelled with the so-called trivariate reduction method: two dependent degradation processes are created from three independent degradation processes. A maintenance policy considering imperfect repairs is implemented. Maintenance actions are performed at periodic times, and the maintenance effect is modelled using the ARD (Arithmetic Reduction of Degradation) model of infinite order. This repair model reduces the degradation accumulated by each component since its installation by a fixed percentage. An analytical cost model is developed, including a reward that decreases as the component degradation increases. The optimization of this maintenance strategy is performed considering the repair efficiency and the time between repairs as decision variables.
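
A minimal sketch of the trivariate reduction construction named above: two gamma degradation paths built from three independent gamma processes, with the shared process inducing positive correlation between the increments (all parameters are illustrative).

    import numpy as np

    rng = np.random.default_rng(6)

    def dependent_gamma_paths(n_steps=100, dt=1.0, a1=0.3, a2=0.4, a0=0.2, scale=1.0):
        """Trivariate reduction: X1 = Z1 + Z0 and X2 = Z2 + Z0, with three
        independent gamma processes sharing the same scale parameter."""
        z1 = rng.gamma(a1 * dt, scale, n_steps).cumsum()
        z2 = rng.gamma(a2 * dt, scale, n_steps).cumsum()
        z0 = rng.gamma(a0 * dt, scale, n_steps).cumsum()   # shared component
        return z1 + z0, z2 + z0

    x1, x2 = dependent_gamma_paths()
    # Increments are correlated through Z0; theoretically corr = a0/sqrt((a1+a0)*(a2+a0))
    print(np.corrcoef(np.diff(x1), np.diff(x2))[0, 1])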

14:30-16:15 Session 5B: Human Factors and Human Reliability II

Human Factors and Human Reliability II

14:30
THE EFFECTIVENESS OF ADAPTIVE AUTOMATION IN HUMAN-TECHNOLOGY INTERACTION
PRESENTER: Mina Saghafian

ABSTRACT. In this paper, we conduct a systematic literature review to investigate how adaptive automation has been used as an intervention in the design of human-technology interactive systems and what the results of these interventions are for human performance and overall system performance. Technology evolves rapidly, and our systems and organizations are not always prepared to deal with the unexpected challenges of new technologies. This is aligned with the notion of unruly technology, as noted by Dekker (2011). Our perspective has been to focus on the human agent in the system as part of the meaningful human control (MAS) project. This implies thinking about how humans will be accounted for in new technology evolutions, such that the technology will adjust to the human agent and facilitate and enhance human performance. Even with fully autonomous systems, there will still be a human involved in the system to make decisions in unforeseen situations. The division of tasks, decision authority, and the extent of automation are amongst the challenges introduced by new technological developments in safety-critical systems. Amongst the strategies mentioned in the research literature in recent years is adaptive automation. This literature review is an attempt to (1) define these interventions, (2) find out how they have been applied in the design of human-technology interactive systems, (3) summarize the results of applying these interventions, if any, and finally (4) highlight the future implications based on the gaps in the current literature. The outcomes of these interventions are considered in light of the desired outcomes of human factors applications, which include improved safety, performance, and satisfaction (Lee et al., 2017). This paper is part of a broader systematic literature review on successful design principles in automation, conducted in accordance with the PRISMA model. The search was conducted in five scientific databases and limited to articles from the last 10 years. After screening and selecting the articles based on a set of inclusion criteria, a total of 44 articles including the search term 'adaptive automation' were identified. After abstract screening for relevance, a total of 25 articles were selected for further analysis. These articles will be analysed using semantic thematic analysis (Braun & Clarke, 2006), and the four aforementioned questions will be answered. The results will clarify the state of the art of research on adaptive automation as a safety design intervention. We expect that the effects of these interventions will vary across studies and contexts, and that a standardized application of these interventions will require more empirical evidence and integration before best practices emerge and can be implemented into guidelines.

References

Braun, V. and Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101.

Dekker, S. (2011). Drift into Failure: From Hunting Broken Parts to Understanding Complex Systems. CRC Press, Taylor & Francis Group.

Lee, J.D., Wickens, C.D., Liu, Y., & Boyle, L.N. (2017). Designing for People: An Introduction to Human Factors Engineering, 3rd Edition. CreateSpace, Charleston, SC.

14:45
Evaluation of workload for operators in the aeronautical sector

ABSTRACT. Workload affects the physical and mental state of the individual and consequently their performance. The concept arises from the interaction between task requirements and human capacity to meet them. The objective of this study is to evaluate the mental and operational workload of operators in the aeronautical sector. The experiment involved the collaboration of four engineers (Structure, Stress, and Manufacturing Designer; with 9 to 17 years of experience in the area) and four operators (between 18 and 35 years of experience in the area) who work in aircraft production. Developed by Hart and Staveland in 1988, the NASA-TLX is a multidimensional rating procedure that provides a global workload score based on a weighted average of ratings on six subscales: Mental Demand, Physical Demand, Temporal Demand, Performance, Effort, and Frustration. This method was applied to evaluate four tasks with different levels of difficulty. For the group of engineers, the tasks were part design and nailing. For the group of production operators, the tasks were drilling and driving. As a result, in the first group the mental demand was the highest, as calculations and analyses are required. In the second group, the effort demand was the highest, showing that physical and mental demands need to be applied in equal measure.
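
For readers unfamiliar with the weighted NASA-TLX score mentioned above, here is a minimal illustration of the weighted average over the six subscales; the ratings and pairwise-comparison weights below are hypothetical, not data from the study.

```python
# Hypothetical ratings (0-100) and pairwise-comparison weights for one operator.
# In the weighted NASA-TLX, the six weights sum to 15 (one per pairwise comparison).
ratings = {"Mental Demand": 80, "Physical Demand": 35, "Temporal Demand": 60,
           "Performance": 40, "Effort": 70, "Frustration": 25}
weights = {"Mental Demand": 5, "Physical Demand": 1, "Temporal Demand": 3,
           "Performance": 2, "Effort": 4, "Frustration": 0}

assert sum(weights.values()) == 15      # 6 subscales -> C(6,2) = 15 comparisons

overall = sum(ratings[s] * weights[s] for s in ratings) / 15.0
print(f"overall weighted workload: {overall:.1f}")   # -> 65.0
```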

15:00
Assessment of Soldiers' Resilience to Cognitive Attacks of Russian Hybrid Warfare

ABSTRACT. Aspects of modern warfare have moved into the information technology space before kinetic military operations, in order to affect, disrupt, or influence an adversary's decision-makers, integrating military, non-kinetic, and information warfare tools into joint operations with the goal of achieving military advantage. According to researchers, the concept of information threats can be based on the concept of information confrontation: a war without declared front lines, where it is practically difficult or impossible to detect ongoing information operations, but where informational-technical and informational-psychological components of information threats can be distinguished (M. Kitsa et al., 2019). Thus, the concept of an information attack is to cause as much damage as possible to the opponent's infrastructure not by kinetic means but by informational subversive activities, provocations, disinformation, or media manipulation. Information warfare attacks are cheaper and less risky than using lethal weapons, and their damaging effect can be of strategic importance. Analysing the concept of information threats on the basis of the literature, we can observe that concepts such as hybrid threats, hybrid war, information war, asymmetric threats, non-conventional war, psychological operations (PSO), and information operations (INO) are elements of the war doctrine of this decade and a connecting part of the concepts of information threats. According to Y. Firinci (2020), as technological progress increases, so does its impact on the military. Thus, based on the identified means and methods of information warfare, it can be assumed that one of the most effective methods of information warfare is to influence human psychological and cognitive behaviour by means of INO and PSO. The goal of this research is to assess the response of a military unit to information threats and the ability of soldiers to identify information threat attacks. The study hypotheses are tested on a sample of 152 soldiers of a military unit. To evaluate the resistance of the military unit to information threats and to identify the best ways to strengthen the soldiers' resilience, this study examined: (1) the ability of military unit soldiers to identify information threat attacks; (2) the factors that determine whether a military unit is resilient to information threats; and (3) the information threats that have the greatest impact on soldiers. The SPSS v29.0 statistical software package was used, and the collected data were analysed by structural equation modelling. The results showed that respondents react differently to information threats depending on their experience and education. The modelling results suggest that properly prepared soldiers of a military unit are able to identify information threats and take countermeasures, that is, manage the effects of the information environment. Purposeful training of personnel and the application of preventive measures would increase the resistance of a military unit to information threats.

15:15
Is the education of driver instructors in Norway in line with technological development
PRESENTER: Jan Petter Wigum

ABSTRACT. The accident rate in Norway has been declining for many years. Improved training, better infrastructure, and targeted controls have had an impact on the number of accidents. This, together with technological developments that have made cars safer, has led to fewer people dying in road traffic accidents. Even so, road traffic is still considered a high-risk context.

Nowadays, cars with new advanced automated technology are developing rapidly; this technology can be divided into different levels, from level 0, where the driver performs all operations, to level 5, where the car is fully autonomous and the person in the car has changed status from driver to passenger (SAE 2021). Because research in human factors reveals that new technological solutions often displace human errors and change the risk factors rather than eliminating them (Sætren & Laumann, 2015), it needs to be examined which new skills and competences current and future drivers need and how they should be acquired (Sætren, Wigum et al. 2018).

To become an authorized Norwegian driver instructor, a two-year university degree is required. The education of driver instructors is thus of utmost importance for gaining sufficient competence in this industry. However, previous studies have shown that trained driver instructors lack knowledge about new automated technology and have only to a small extent tried out different technological systems during their education (Wigum & Sætren, 2022). The study will explore how the Nord University curriculum is interpreted when it comes to the car's technological equipment. The project further aims to examine how the teaching is organized in relation to current curricula and various legal aspects of using new technology (Helde 2019).

Thus, our research question is:

How is technological development a part of the driver instructor education?

The study will look at current driver instructor education with regard to the use of ADAS (Advanced Driver Assistance Systems) in Norway, and 4-6 informants with key roles in the education will be interviewed. Reflexive thematic analysis will be used for analysis.

REFERENCES

Helde, R. (2019). Juss i veitrafikk og trafikkopplæring (Law in road traffic and driver training; our translation). Bergen: Fagbokforlaget.

SAE (2021). Automated Driving Levels. SAE International Standard J3016. Last modified 30 April 2021.

Sætren, G.B. & Laumann, K. (2015). Effects of trust in high-risk organizations during technological changes. Cognition, Technology & Work, 17, 131–144.

Sætren, G.B., Wigum, J.P., Bogfjellmo, P.H., & Suzen, E. (2018). The future of driver training and driver instructor education in Norway with increasing technology in cars. In: Safety and Reliability – Safe Societies in a Changing World. Proceedings of ESREL 2018.

Wigum, J.P. & Sætren, G.B. (2022). Exploring how automated technology and advanced driver-assistance systems (ADAS) are taught in the Norwegian driver-training industry: A qualitative study. In: Safety and Reliability – Safe Societies in a Changing World. Proceedings of ESREL 2022.

15:30
Development of detailed questions for investigating the status of human and organizational factor (HOF) issue identifications from event analysis processes

ABSTRACT. One of the most effective and direct ways to strengthen the defense-in-depth concept of nuclear facilities, including nuclear power plants (NPPs), is to investigate and share learnings from the diverse incidents and accidents experienced during their operation [1]. The identification and analysis of human and organizational factors (HOFs) in incident and accident investigations are particularly important because of their impact on the operational safety of nuclear facilities. In this regard, in 2009 the OECD Nuclear Energy Agency (NEA) convened a workshop of subject matter experts in order to explore opportunities to improve the consideration of HOFs in event investigations in the nuclear industry [2]. One of the interesting results available from the proceedings of this workshop is a list of barriers and recommendations, in three categories, that focus on the effective consideration of HOFs in event investigation processes [2]. The risk of incidents and accidents in nuclear facilities is low and gradually declining; indeed, the number of reports submitted to the IAEA/NEA Incident Reporting System shows a steadily decreasing trend since 2014 [3]. Still, there is evidence that when operational events do occur and require investigation, the focus is mainly on the identification and analysis of technical and procedural factors. One explanation for this tendency is that it is still not clear to event investigation teams how they should identify and analyse HOFs in practice [4]. In order to address this issue, the NEA initiated a project named 'Good practices for investigators on identifying HOF issues from event analysis processes' in 2022. The aim of this project is to compile a catalogue of good practices that are useful for identifying HOFs during the event investigation process. To this end, a series of cooperative activities, including a comparison study, will be conducted with the collaboration of 17 countries. In this paper we introduce the first step taken in this project: the list of detailed questions outlined by the project collaborators for capturing practices related to the identification and analysis of HOFs in event investigation processes in different countries.
[1] https://www.iaea.org/newscenter/news/importance-of-sharing-safety-incident-outcomes-emphasized-in-panel-discussion-new-publication
[2] https://www.oecd-nea.org/jcms/pl_18948/proceedings-of-specialists-meeting-on-identifying-and-overcoming-barriers-to-effective-consideration-of-human-and-organizational-factors-in-event-analysis-and-root-cause-analysis
[3] https://www.oecd-nea.org/jcms/pl_53449/nuclear-power-plant-operating-experience-from-the-iaea/nea-incident-reporting-system-2015-2017
[4] Teperi, A.-M., Puro, V., and Ratilainen, H. (2017). Applying a new human factor tool in the nuclear energy industry. Safety Science, 95, 125-139.

15:45
On The Use Of Simulators To Gather Human Performance Data Of Remote Maritime Operations

ABSTRACT. The maritime industry is witnessing profound changes in the way its assets are operated. With the rapid advance of new-generation telecommunication technologies, it is practicable to design systems that are deployed at sea and operated remotely from a shore control center (SCC). This trend has several advantages, such as eliminating human exposure to harsh environments and reducing the costs of transporting workers to and from the workplace. The physical systems then tend to have a high level of automation, but the human operators are not removed from the loop; instead, they are moved to a different location and interact with the systems in a different manner. To achieve high standards of safety, recent research has focused on understanding the operators' tasks in SCCs and assessing the potential errors and the corresponding risks. However, as in other industries, obtaining adequate human performance data is a challenging task. One promising alternative is to use training simulators for this purpose. Aiming to contribute to the knowledge of these applications, this paper presents a discussion of the main challenges and promising solutions regarding the use of simulators to collect human reliability data.

16:00
Commentary driving: exploring a method for operative safety reflections

ABSTRACT. Commentary driving is a prevalent part of the toolbox in Norwegian driving instructor education. The fundamental idea is that a running commentary while operating the vehicle is a way to develop awareness of one's thinking, perceptions, and assessments of road traffic scenarios. It is a method to develop understanding, driving skills, and teaching ability. Commentary driving is applied to emphasize how traffic situations are interpreted and acted upon. It is used as a method for developing an analytical mindset in the operative, with stringent attention to the language and terminology used to describe concepts and phenomena. The premise for this paper is that although the method is widely applied, the learning potential of commentary driving is to an extent left unspecified. There is a need to further specify, describe, and apply the learning potential of commentary driving. The aim of this paper is to report our exploration and development of a methodological guideline that we name 'Operative Safety Reflections': a framework aiming to systematically bring the safety potential into the practical and applied field of commentary driving.

16:15
The design of effective safety training courses and differences in practice: an Italian study
PRESENTER: Gaia Vitrano

ABSTRACT. Awareness does not arise from passive learning of rules and procedures, but from proper education in a culture of safety. Safe behaviors are driven by the motivation and knowledge of the workers, and the participation and involvement of workers ensure optimal performance. This work, with reference to educational learning theories, investigates how to make workers' training more effective by studying the impact of educational factors on adult learning. New approaches involving the active participation of learners have been developed in recent years. A framework was built, starting from the literature, to connect different teaching methodologies and trainers' roles to educational factors. It identifies seven constructs for teaching methodologies, two for the trainer's role, and four for educational factors as drivers of effective training. A questionnaire was distributed to Italian trainers in companies offering safety training courses, and the results confirmed the framework's structure. However, contrary to the framework, role-playing and group-work teaching methodologies, which presume higher training effectiveness, were not prevalent among trainers, who preferred traditional methodologies such as frontal lessons, participatory lessons, and personal experiences. These findings suggest a need for improvement in safety training activities. This study, developed before the spread of Covid-19, provides a starting point for further analyses to evaluate how things have changed over time and to propose further improvements in the design of safety training activities.

14:30-16:15 Session 5C: Energy Transition to Net-Zero Workshop on Reliability, Risk and Resilience - Part II
14:30
Risk management at various stages of project management of a Hydrogen facility

ABSTRACT. Hydrogen is regarded as a potential future energy source. Various hydrogen facilities are being constructed worldwide, and research is advancing fast. Proper and systematic risk management procedures should be prioritized during project development to avoid future mishaps. The present study identifies risk management procedures at the various project stages of a hydrogen facility. Risk management issues should be considered at all levels of project management; risk management entails determining the scope, risk assessment, communication, risk treatment, monitoring, and review. Specific risk management strategies are identified for the distinct engineering stages of hydrogen facilities, from conceptual to operational, and the usefulness of the various methodologies at each project stage is described. Areas and procedures that require further investigation are also highlighted. This study examines existing methodological flaws and current advancements in establishing risk-informed decision-making and highlights the hurdles to initiating hydrogen-related activities. Work has progressed in several areas, including the examination of the consequences of unintentional incidents, the identification of risks, and the comparative investigation of several hydrogen concepts, such as grey, blue, and green. Future research should examine other green hydrogen approaches, such as solar or wind power, to determine their potential regarding safety, resilience, and sustainability. Inherent safety is regarded as the most proactive risk mitigation option; however, this method has received far too little attention in hydrogen safety. The application of the various methodologies at the different engineering phases should also be investigated. Even though significant work has been done on quantitative risk assessments of hydrogen plants, several specific elements should be investigated further. The knowledge gained from the oil, gas, and LNG (liquefied natural gas) sectors can be helpful in this respect. However, specialized research should be conducted to gather specific knowledge to aid decision-making.

14:45
Task Analysis and Human Error Identification to Improve the Liquid Hydrogen Bunkering Process in the Maritime Sector

ABSTRACT. Recently, international concern about global warming has been growing rapidly. Authorities and organizations are implementing strategic measures to mitigate climate change effects in different economic areas. Among the various energy solutions, hydrogen has been recognized as a valid alternative for pursuing ambitious climate policies. However, the hydrogen energy sector is still an emerging one, and the risks it may pose to specific targets may not be negligible. In the context of maritime shipping, liquid hydrogen (LH2) adoption is a challenging topic, since what little is known stems from a parallelism with the well-established use of liquefied natural gas (LNG). The unexplored risks and lack of operational experience associated with such infrastructures entail the need to investigate the LH2 value chain, focusing on the bunkering unit, given its crucial role in determining the feasibility of the designed system. In this regard, Human Reliability Analysis (HRA) has been applied to the ship-to-ship bunkering configuration with the aim of identifying the most critical stages of the bunkering process and analysing how the human contribution affects the operations. The findings show that the transfer unit proved to be the most significant in terms of time, and that human failures led to three main consequences: rapid phase transition (RPT), icing, and operational delay. This work contributes to laying the foundations for a safe and efficient implementation of H2 technologies in the maritime sector.

15:00
Calculation of the Damage Factor for the Hydrogen-Enhanced Fatigue in the RBI Framework

ABSTRACT. Hydrogen is a clean and sustainable energy carrier, which has the potential to reduce human impact on the environment, thus mitigating the issue of global warming. It has been widely indicated as a promising long-term solution for energy transport and storage, thanks to its near-zero environmental impact at the end-use site. Nevertheless, hydrogen is a highly flammable and explosive substance. Moreover, it can permeate and embrittle most metallic materials, resulting in sudden component failures in the hydrogen industry. In this perspective, inspection and maintenance activities play a prominent role in incident and accident prevention. The risk-based inspection (RBI) methodology is a highly beneficial approach for planning predictive maintenance activities in the chemical and petrochemical industries. However, it has never been adopted for hydrogen technologies or equipment operating solely in a gaseous hydrogen environment. RBI aims at prioritizing the inspection and maintenance of high-risk components to minimize the overall risk of the plant. The risk is given by the product of the probability of failure (PoF) and the consequence of failure. While the consequences of undesired hydrogen releases can be studied through experimental and numerical approaches, the determination of the probability of failure for hydrogen components is mainly based on event data such as incidents and accidents. The PoF may also be based on damage factors, which account for the damage mechanisms likely to occur, depending on the material, the operating conditions, and the history of the component in terms of the number and effectiveness of previous inspections. In the existing RBI standards, hydrogen-induced damage is mostly neglected or considered only for environments other than gaseous hydrogen. Hence, the application of the RBI methodology to hydrogen technologies entails a high level of uncertainty. This study proposes a methodology to determine the damage factor for hydrogen-enhanced fatigue crack growth. This damage mechanism is considered the most dangerous hydrogen-induced degradation in pipelines, since it is driven by the cyclic loading associated with pressure fluctuations within the pipe. The environmental severity is estimated based on the operating conditions (i.e., temperature, pressure, and hydrogen purity), while the material's susceptibility depends on its microstructure, chemical composition, strength level, and the presence of post-weld heat treatments. The working conditions are considered through the frequency and amplitude of the cyclic loading. This study could enable the application of the RBI methodology to hydrogen transportation systems, such as pipelines, and thus facilitate risk-informed decisions regarding the inspection and maintenance of industrial components used across the hydrogen value chain, stimulating an increasingly widespread rollout of hydrogen as a clean and safe energy carrier.
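
As a rough numerical illustration of how a damage factor enters the PoF, the sketch below scales a generic failure frequency by a damage factor and a management-systems factor, in the spirit of API RP 581; the damage-factor function and every number are invented placeholders, not the paper's model.

```python
# Minimal sketch: PoF = generic failure frequency x damage factor x management factor.
gff = 3.06e-5      # generic failure frequency per year (illustrative value)
f_ms = 1.0         # management-systems factor (illustrative)

def damage_factor_fatigue(severity_index, susceptibility, n_cycles_per_year, years):
    """Hypothetical hydrogen-enhanced fatigue damage factor: grows with environmental
    severity, material susceptibility, and accumulated load cycles."""
    return severity_index * susceptibility * n_cycles_per_year * years / 1.0e4

for t in (1, 5, 10, 20):
    df = damage_factor_fatigue(severity_index=2.5, susceptibility=1.8,
                               n_cycles_per_year=365, years=t)
    pof = gff * df * f_ms
    print(f"year {t:2d}: damage factor = {df:7.1f}, PoF = {pof:.2e} /yr")
```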

15:15
Resilience of Net-Zero Energy Systems and Infrastructure: Metrics and Measurement Methods

ABSTRACT. Net-zero energy systems and infrastructure are considered a vital part of the pathway towards a low-carbon, sustainable and nature-friendly future. Net-zero energy infrastructure provides the essential service of "clean" and "green" power supply to all other critical infrastructure sectors, such as telecommunications, water supply systems, transportation, government services, and public health. Disruptions and breakdowns of net-zero energy systems caused by natural disasters, technical failures or man-made accidents can affect large segments of the population and cause significant damage to the environment as well as large-scale economic and social harm. This paper provides a comprehensive overview of resilience definitions used across different energy-using sectors, followed by an in-depth analysis of resilience assessment and quantification in net-zero energy technologies (including renewable fuels for power generation, long-duration energy storage, and battery electric transportation systems). The current state of the art in resilience assessment methodologies is examined, and major gaps and potential research areas for future advancements are identified.

15:30
Comparative Accident Risk Assessment of Energy System Technologies for the Energy Transition in OECD Countries
PRESENTER: Matteo Spada

ABSTRACT. In our modern society, energy is one of the most important prerequisites for producing goods and services, enabling sustainable industrial, social, and economic development. However, the need to reduce greenhouse gas (GHG) emissions to limit the global rise in temperatures to 1.5°C above pre-industrial levels calls for a deep decarbonisation of the power sector [1]. From a sustainable development perspective, different technologies, such as solar photovoltaic (PV), wind, hydrogen (H2) as an energy carrier, geothermal, biomass, and carbon capture and sequestration (CCS), are thus required to avoid environmental problems caused by harmful emissions and other impacts.

In the broader context of the energy transition and the goal of decarbonizing electricity and heat production, it is of major interest to have a comparative perspective on accident risk for a broad range of energy technologies. This is useful for evaluating the safety performance of technologies, but it is also essential to support stakeholders in complex decision-making processes to plan, design and establish supply chains that are economic, efficient, reliable, safe, secure, resilient, and sustainable. Accidents in the energy sector can occur because of the exposure of people and their socio-economic activities to technological failures, human errors, natural events, and intentional attacks. In the past, comparative risk assessment of accidents in the energy sector has been based on historical data for fossil energy chains (i.e., coal, oil, natural gas) and hydropower, and only to some extent on new renewables, e.g., wind [2].

Based on these premises, in this study a comparative accident risk assessment based on historical observations for different energy technologies, e.g., fossil fuels (incl. CCS), hydropower, hydrogen, biomass and new renewables, is presented for the Organization for Economic Co-operation and Development (OECD) countries. In particular, the current analysis is based on the historical observations collected in PSI’s ENergy-related Severe Accident Database (ENSAD) in the period 1970-2020 [3]. In contrast, for nuclear, previous estimations based on a simplified level-3 Probabilistic Safety Assessment (PSA) are considered [2]. For each energy technology, risk indicators, e.g., fatality rate and maximum consequences, are estimated to allow for comparison. The current study provides a consistent methodological approach that comprehensively covers accident risks. Furthermore, it allows comparisons between different energy sectors, which can support decision-making processes by authorities, industry and other stakeholders.
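
As a toy illustration of the risk indicators named above, the sketch below computes a fatality rate (normalized per gigawatt-electric-year, GWeyr) and the maximum consequences per energy chain from invented accident records; these are not ENSAD data.

```python
# Invented accident records: chain -> list of fatalities per severe accident.
accidents = {
    "coal":        [12, 45, 8, 30, 5],
    "natural gas": [6, 2, 11],
    "wind":        [1, 2],
}
energy_gweyr = {"coal": 850.0, "natural gas": 620.0, "wind": 240.0}  # normalization

for chain, fatalities in accidents.items():
    rate = sum(fatalities) / energy_gweyr[chain]   # fatalities per GWeyr
    print(f"{chain:12s} fatality rate = {rate:6.3f} /GWeyr, "
          f"maximum consequences = {max(fatalities)} fatalities")
```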

References

[1] IPCC, 2018. Summary for Policymakers. In: Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels, Masson-Delmotte, V., P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J.B.R. Matthews, Y. Chen, X. Zhou, M.I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, and T. Waterfield (eds.). World Meteorological Organization, Geneva, Switzerland, 32 pp.

[2] Burgherr, P., Hirschberg, S., 2014. Comparative risk assessment of severe accidents in the energy sector. Energy Policy, 74(Suppl. 1). https://doi.org/10.1016/j.enpol.2014.01.035

[3] Kim, W., Burgherr, P., Spada, M., Lustenberger, P., Kalinina, A., Hirschberg, S., 2019. Energy-related Severe Accident Database (ENSAD): cloud-based geospatial platform. Big Earth Data, 1–27. doi: 10.1080/20964471.2019.1586276

15:45
Rule-based deep reinforcement learning for optimal control of electrical batteries in an energy community

ABSTRACT. This work investigates rule-based controllers (RBCs) and reinforcement learning (RL) agents for managing distributed electrical batteries in a net-zero energy community (NZEC), reducing electricity costs and emissions for the community. RBCs are driven by deterministic rules and hence may fail to adapt to new scenarios and uncertainties. RL agents, on the other hand, learn from direct interaction with uncertain environments and can better adapt to new conditions [1]. A novel RL approach is proposed, combining MaskPPO and a deep neural network, to avoid the exploration of unsafe or unprofitable actions and to enhance control efficacy through accurate predictions of future demand. These new approaches are demonstrated on the NeurIPS 2022 CityLearn challenge, where real-world data from a district in California are embedded within a simulator for distributed battery control [2]. Strengths and limitations of the different tools are discussed. For comparison's sake, an oracle-driven controller is also considered, as it gives a reference best-achievable optimum for the challenge problem, i.e., lower bounds on the cost and emissions reduction scores. Based on the results, RL agents generally offered robust control over the distributed batteries and often outperformed the rule-based controllers. Additionally, the combination of action masks and neural forecasters significantly improved the performance of the RL agents, bringing them very close to the scores achieved by the global optimum. A study of the model's robustness to seasonality changes concludes this work and further illustrates the generalization ability of RL-based controllers.
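
A minimal sketch of the action-masking mechanism referred to above: infeasible battery commands are excluded before the action choice. The discretization, the mask rule, and the stand-in action scores are all invented for illustration and are unrelated to the paper's MaskPPO implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized battery actions: fraction of capacity charged (+) or discharged (-).
actions = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])

def action_mask(soc):
    """Disallow discharging below empty or charging above full: the 'action mask'
    that keeps the agent from exploring infeasible battery commands."""
    return (soc + actions >= 0.0) & (soc + actions <= 1.0)

def masked_greedy(q_values, mask):
    """Pick the best action among feasible ones by setting masked scores to -inf."""
    return int(np.argmax(np.where(mask, q_values, -np.inf)))

soc = 0.9                                  # state of charge near full
q_values = rng.normal(size=actions.size)   # stand-in for the policy's action scores
a = masked_greedy(q_values, action_mask(soc))
print(f"SoC={soc}: feasible={action_mask(soc)}, chosen action={actions[a]:+.2f}")
```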

[1] Schulman, J., F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017). Proximal policy optimisation algorithms. arXiv preprint arXiv:1707.06347.

[2] Kingsley Nweye, Siva Sankaranarayanan, Zoltan Nagy (2023). MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities. Applied Energy, Volume 346, 121323, ISSN 0306-2619. https://doi.org/10.1016/j.apenergy.2023.121323

16:00
Requirements for quantitative risk assessment of hydrogen facilities: An Irish use case

ABSTRACT. The present work summarises the requirements for a Quantitative Risk Assessment (QRA) of a hydrogen facility. This is done within the framework of the recently announced National Hydrogen Strategy in the Republic of Ireland. The proposed framework for risk assessment is based on a probabilistic method, Bayesian networks. This framework is expected to provide permitting authorities with a decision support system and guidance for the QRA of hydrogen facilities.
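
As a toy illustration of the kind of probabilistic chaining a Bayesian network encodes for such a QRA, the sketch below combines a leak probability with conditional ignition outcomes; the structure and every number are invented, not taken from the paper.

```python
# Minimal two-node illustration: P(outcome) = P(leak) * P(ignition mode | leak).
p_leak = 1.0e-3          # hypothetical annual probability of a significant H2 leak
p_ign_given_leak = {"immediate": 0.06, "delayed": 0.04, "none": 0.90}

p_jetfire   = p_leak * p_ign_given_leak["immediate"]   # immediate ignition -> jet fire
p_explosion = p_leak * p_ign_given_leak["delayed"]     # delayed ignition -> explosion

print(f"P(jet fire)/yr  = {p_jetfire:.1e}")
print(f"P(explosion)/yr = {p_explosion:.1e}")
```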

14:30-16:15 Session 5D: S.35: Scenario Analysis for Decision-Support
Location: Room 2A/2065
14:30
On the Impact of Epistemic Uncertainty in Scenario Likelihood on Security Risk Analysis
PRESENTER: Dustin Witte

ABSTRACT. Physical protection against deliberate attacks is an essential part of critical infrastructure protection. However, attacks are difficult to predict, and evidence is rarely available. A key challenge in security analysis is therefore the high degree of complexity and uncertainty regarding the scenarios that may occur, including possible attack sequences. The objective evaluation of physical security requires a sophisticated risk analysis: threats must be identified, the effectiveness of security measures must be examined, and possible impacts must be evaluated. The quantification of risk is then subject to aleatoric and epistemic uncertainties. With the approach presented here, we intend to make the influence of these uncertainties visible. The approach captures uncertainty about threats through a wide range of possible scenarios. In each scenario, uncertainties regarding the effectiveness of security measures are considered in a vulnerability model that takes possible attack sequences into account. The vulnerabilities are then weighted by the likelihood of scenario occurrence. In a case study, we investigate the impact of epistemic uncertainties under the assumption of different levels of available information about possible attack scenarios and their likelihoods. The results show that the quantified risk differs across scenarios, which would likely have an impact on the design of security measures.
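
A minimal numerical sketch of the weighting step described above, with epistemic uncertainty expressed as interval bounds on the scenario likelihoods; all values are invented for illustration.

```python
import numpy as np

# Vulnerability per scenario (P(attack succeeds | scenario)) and epistemic
# intervals on the scenario likelihoods.
vulnerability = np.array([0.8, 0.4, 0.1])
p_low  = np.array([0.05, 0.10, 0.30])    # lower bounds on scenario likelihood
p_high = np.array([0.20, 0.40, 0.60])    # upper bounds

# Bounding the likelihood-weighted vulnerability over the interval box:
risk_low  = float(vulnerability @ p_low)
risk_high = float(vulnerability @ p_high)
print(f"weighted vulnerability lies in [{risk_low:.3f}, {risk_high:.3f}]")
```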

14:45
STRATEGIC DECISIONS UNDER UNCERTAINTY: A ROLE FOR REQUISITE RISK MODELS?

ABSTRACT. This talk reflects on several projects, past, present, and planned, that share a common methodological modelling challenge despite the diversity of the real-world problems addressed. The problems include making strategic choices about novel technological concept designs that are robust to future uncertainties; making strategic decisions to manage risk and enhance resilience in Arctic search and rescue; and informing effective strategies for responding to future malicious attacks in urban areas. All of these problems are characterized by different types of uncertainty, decisions to be taken in a context of multiple stakeholders, and choices that have long-term implications for a socio-technical system. Framing the problems as ones of making choices under uncertainty to manage risk, we examine an analytical approach that allows us to create requisite models, that is, models whose "form and content are sufficient to solve a particular problem" (Phillips, 1984). In turn, we argue that such models can enable us to create solutions that, in the words of the economist John Maynard Keynes, are "roughly right rather than precisely wrong".

A particular challenge is how to appropriately mix relevant methods to create a defensible methodology. We have faced, and are facing, this challenge in the context of the motivating problems, where we mix methods ranging from scenario planning (Cairns and Wright, 2017) to stochastic modelling (Aven and Jensen, 2013). On the face of it, these methods are usually associated with different types of uncertainties, supporting risk analyses that inform decisions at different organizational levels and planning horizons. However, by structuring models at an appropriate level of the unit of analysis for the problem, it can be feasible to provide a meaningful bridge between the deep uncertainties surfaced through foresighting and those uncertainties grounded in experience. We share examples of how we have achieved this and discuss the practical benefits achieved as well as the scientific challenges that remain.

15:00
Scenario-based Failure Analysis of Product Systems and their Environment

ABSTRACT. During the usage phase, a technical product system is in permanent interaction with its environment. This interaction can lead to failures that significantly endanger the safety of the user and negatively affect the quality and reliability of the product. Conventional methods of failure analysis focus on the technical product system; the interaction of the product with its environment in the usage phase is not sufficiently considered, resulting in undetected potential failures of the product that lead to complaints. For this purpose, a methodology for failure identification is developed, which is continuously improved through product usage scenarios. The use cases are modelled according to a systems engineering approach with four views. Linking the product system, physical effects, events, and environmental factors enables the analysis of fault chains. These four parameters are subject to great complexity and must be systematically analysed using databases and expert knowledge. The scenarios are continuously updated with field data and complaints. The new approach can identify potential failures in a more systematic and holistic way. Complaints provide direct input on the scenarios. Unknown, previously unrecognized events can be systematically identified through continuous improvement. The complexity of the relationship between the product system and its environmental factors can thus be adequately taken into account in product development.

15:15
Selecting combinations of reinforcement actions to improve the reliability of distribution grids in the face of external hazards

ABSTRACT. Distribution grids constitute critical infrastructure for delivering electricity to industries and residential customers. While society expects a fully reliable system providing uninterrupted supply, this is impossible in practice due to the inherent failure rates of the system's components, uncertainty regarding loading conditions, and exposure to multiple external hazards. Moreover, once the system has attained a high level of reliability, incremental improvements become more expensive. Thus, the main task of a power system is to deliver energy as economically as possible while fulfilling specified safety and reliability criteria.

Planning processes are essential to prepare the grids and guarantee a desired reliability level. In these processes, the reliability of single or multiple grids, the risks posed by potential hazards, and the costs and effectiveness of reinforcement actions are characterized. These characterizations must be integrated to develop systematic and cost-effective measures to protect the grids in the context of uncertainty in failure events and operation conditions. After these characterizations, the decision-maker selects combinations of reinforcement actions to ensure the required system reliability within a given confidence interval while minimizing the investment and operation costs.

Choosing between alternative combinations of reinforcement actions to improve or maintain the reliability of distribution grids can be tackled as a portfolio problem; in particular, Portfolio Decision Analysis (PDA) is a well-founded approach to support decision-makers when making multiple selections of alternatives, considering preferences, constraints, and uncertainties. PDA methods help build decision models that account for the impacts of hazards and reinforcement actions on the reliability of single or multiple grids, as measured by failure probabilities.

In this paper, we develop a systemic framework to support decision-makers at the distribution system operator level who seek to reinforce and protect multiple distribution grids. We address the problem of choosing between alternative combinations of reinforcement actions under budget constraints, minimum reliability standards, and different hazards. The proposed framework builds on the PDA approach, whereby the reinforcement problem is represented as an influence diagram in which different scenarios are considered simultaneously. The optimization problem is formulated as a mixed-integer linear program. Risk measures such as the conditional value at risk (CVaR) are incorporated to enhance the formulation, in order to curtail possible budget overruns and enforce the reliability constraints. We showcase the framework's usability with an illustrative case study in which two adjacent distribution grids consider several alternative combinations of reinforcement actions to mitigate the risks posed by adverse weather conditions.

The proposed approach is novel in that it combines the strengths of the PDA with existing reliability and weather models to account for interactions between hazards and reinforcement actions. It offers a systemic approach which is shown to be cost-efficient in exploiting synergies between alternative combinations of reinforcement actions. A further advantage of the framework is that it can readily be adapted to account for different kinds of hazards, including those caused by the malicious actions of adversaries, provided that relevant information about how reinforcement actions impact the reliability of the system can be quantified through the elicitation of expert judgements, for instance.
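
A stripped-down illustration of the portfolio idea, without a MILP solver, CVaR term, or reliability constraints: the sketch enumerates feasible combinations of reinforcement actions under a budget and selects the one minimizing expected cost over weather scenarios. All action names, costs, and effect multipliers are invented.

```python
from itertools import combinations
import numpy as np

# action -> (cost, multiplier on outage cost if the action is in the portfolio)
actions = {"underground A": (90, 0.55), "tree trimming": (15, 0.85),
           "spare transformers": (40, 0.70), "remote switching": (30, 0.75)}

base_outage = np.array([120.0, 60.0, 300.0])   # outage cost per weather scenario
p_scenario  = np.array([0.5, 0.4, 0.1])        # scenario probabilities
budget = 80

best = None
names = list(actions)
for r in range(len(names) + 1):
    for combo in combinations(names, r):
        cost = sum(actions[a][0] for a in combo)
        if cost > budget:                       # budget constraint
            continue
        mult = np.prod([actions[a][1] for a in combo]) if combo else 1.0
        total = float(p_scenario @ (base_outage * mult)) + cost
        if best is None or total < best[0]:
            best = (total, combo)

print(f"best portfolio: {best[1]}, total expected cost = {best[0]:.1f}")
```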

15:30
Risk Adjusting of Scoring-based Metrics in Physical Security Assessment
PRESENTER: Thomas Termin

ABSTRACT. Scoring-based systems are used worldwide to assess safety and security risks. Due to their ease of use, qualitative and semi-quantitative metrics are very popular. However, such scores may fail to accurately reflect the real risk, as shown, e.g., by Braband (2008) and Krisper (2021). In the worst case, this can lead to misguided investment in measures. To avoid this, an adjustment of the scoring to a quantitative metric is required. Using the examples of the semi-quantitative Harnser metric and the quantitative intervention capability metric (ICM) from physical security, this paper shows how to transfer a well-defined performance mechanism for quantitatively calculating physical vulnerability into consistent scores. To enable the transfer, the paper performs a metrical analysis. The results of the Harnser metric are extended by estimated probability intervals and compared to the results of the ICM. Different types of score linkage and scales are used. Subsequently, we analyse measures to align the results of the two metrics, such as modifying the assignment of scores to scale categories or adjusting the probability intervals behind the scores. As an output, the metrical analysis generates rating scales for the Harnser scoring system that can be used to replicate quantitative vulnerability values. The results contribute to making more risk-appropriate decisions. Finally, we critically evaluate the possibilities and limitations of metrical adaptability and summarize the results.
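
A toy sketch of the central idea of backing scores with probability intervals so that scoring results can be checked against a quantitative metric; the interval assignments and the serial combination rule are invented placeholders, not the Harnser or ICM definitions.

```python
# Hypothetical mapping from a 1-5 vulnerability score to probability intervals.
score_intervals = {1: (0.00, 0.05), 2: (0.05, 0.20), 3: (0.20, 0.45),
                   4: (0.45, 0.75), 5: (0.75, 1.00)}

def combine(scores):
    """Interval product for serially acting barriers: both bounds multiply."""
    lo, hi = 1.0, 1.0
    for s in scores:
        a, b = score_intervals[s]
        lo, hi = lo * a, hi * b
    return lo, hi

lo, hi = combine([4, 3, 5])   # e.g., perimeter, building shell, intervention
print(f"replicated vulnerability interval: [{lo:.3f}, {hi:.3f}]")
```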

References:

(Braband 2008) Braband, J. "Beschränktes Risiko." QZ. Qualität und Zuverlässigkeit 53.2 (2008): 28-33.

(Krisper 2021) Krisper, M. "Problems with Risk Matrices Using Ordinal Scales." arXiv preprint arXiv:2103.05440 (2021).

15:45
Value personas based quantitative decision support: An approach to multi-facetted decision problems
PRESENTER: Ingo Schönwandt

ABSTRACT. Resilience management and the planning of critical infrastructure are subject to contested problem framings: parties to a decision may disagree on the nature of the problem and the possible solutions, so that finding clear-cut strategies is difficult. Common policy analysis methods, such as cost-benefit analysis, are designed to aid decision making but are limited in the face of multiple contested problem framings. Recent advances propose the use of worldviews to imitate these framings in decision support. This work further refines that approach by replacing the proposed worldviews with a range of 49 unique value personas, representing an extended and broader spectrum of problem framings. Leveraging the human-nature coupled lake model as a use case to simulate a decision problem, robust decision making (RDM) is employed to evaluate the decision problems resulting from the different framings of the introduced value personas. Compared to using worldviews, the results show that applying societal values to construct value personas benefits decision support with the ability to evaluate even marginal changes in individual values rather than differing abstract worldviews, and enables a more finely tunable analysis.

16:00
Propagation process of information physical attack in oil and gas intelligent pipeline system based on SEIRS
PRESENTER: Yuhuan Li

ABSTRACT. The normal operation of an oil and gas intelligent pipeline system depends on the coordination of sensor and control components. Incorrect sensor data and logic errors in control components will lead to incorrect responses of the system. However, the impact of an attack on the system varies depending on the importance of a component, and an attacker usually does not attack only once. In order to study the propagation mechanism of information-physical attacks in the oil and gas intelligent pipeline system and improve the security of the system, an SEIRS model of the components of the system is constructed using the propagation dynamics method, and system security is analysed from two aspects: attack detection and attack defense. Firstly, the SEIRS model is constructed for components with attack detection and attack defense capabilities, to describe the propagation process of an attack through the system. Secondly, a numerical simulation of the model is realized by setting the time period and initial parameters of the attack in Matlab. Finally, system security under different parameters is studied.
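
A minimal SEIRS sketch of the compartmental dynamics described above, here integrated in Python rather than Matlab; the transition rates and initial component fractions are invented for illustration.

```python
from scipy.integrate import solve_ivp

# SEIRS compartments for attack propagation among system components:
# S(usceptible) -> E(xposed) -> I(nfected) -> R(ecovered) -> S again.
beta, sigma, gamma, omega = 0.6, 0.3, 0.2, 0.05
# beta: attack transmission rate, sigma: latent-to-active rate,
# gamma: detection/repair rate, omega: loss-of-defense rate (R -> S)

def seirs(t, y):
    s, e, i, r = y
    return [-beta * s * i + omega * r,   # dS/dt
            beta * s * i - sigma * e,    # dE/dt
            sigma * e - gamma * i,       # dI/dt
            gamma * i - omega * r]       # dR/dt

sol = solve_ivp(seirs, (0.0, 100.0), [0.97, 0.02, 0.01, 0.0], dense_output=True)
s, e, i, r = sol.sol(100.0)
print(f"t=100: S={s:.3f}, E={e:.3f}, I={i:.3f}, R={r:.3f}")
```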

14:30-16:15 Session 5E: S.01: Climate Change and Extreme Weather Events Impacts on Critical Infrastructures Risk and Resilience I

The aim of this Special Session is to provide an opportunity for researchers to share and exchange their knowledge and experience in fields relevant to risk and resilience assessment of critical infrastructures while accounting for climate change. Related topics are listed as, but not limited to: 1) Natural hazards modelling as stressors for critical infrastructures; 2) Spatial/temporal modelling and simulation of extreme climate events; 3) Climate change and its impact on critical infrastructure network resilience; 4) Climate adaptation; 5) Natural hazards risk and susceptibility maps; 6) Extreme spatial hazards and risk of disruptions of multiple infrastructure systems; 7) System-of-systems approach to risk and resilience assessment of interdependent critical infrastructure networks; 8) Cascading failures.

Location: Room 100/4013
14:30
Implications of Climate Change in Life Cycle Cost Analysis of Railway Infrastructure
PRESENTER: A.H.S Garmabaki

ABSTRACT. Extreme weather conditions resulting from climate change, including high or low temperatures, snow and ice, flooding, storms, sea level rise, and low visibility, can damage railway infrastructure. These incidents severely affect the reliability of the railway infrastructure and the acceptable service level. Due to the inherent complexity of the railway system, quantifying the impacts of climate change on railway infrastructure and the associated expenses has been challenging. To address these challenges, railway infrastructure managers must adopt a climate-resilient approach that considers all cost components related to the life cycle of railway assets. This approach involves implementing climate adaptation measures to reduce the life cycle cost (LCC) of railway infrastructure while maintaining the reliability and safety of the network. It is therefore critical for infrastructure managers to predict how maintenance costs will be affected by climate change under different RCP scenarios. The proposed model integrates operation and maintenance costs with reliability and availability parameters such as mean time to failure (MTTF) and mean time to repair (MTTR). The proportional hazard model (PHM) is used to reflect the dynamic effect of climate change by capturing the trend variation in MTTF and MTTR. A use case from a railway in northern Sweden is studied and analysed to validate the process, based on data collected over a 20-year period. As a main result, this study reveals that climate change may significantly influence the LCC of switches and crossings (S&C) and can help managers predict the required budget.

14:45
Climate change and its weather hazard on the reliability of railway infrastructure
PRESENTER: Ahmad Kasraei

ABSTRACT. Due to the accumulated greenhouse gas (GHG) effect, climate change will affect infrastructure networks regardless of the climate mitigation strategy chosen. Our current investigation reveals an apparent increasing trend in the number of climate-related failures in Swedish railway infrastructure from 2010 to 2020. Switches and crossings (S&C) are a critical part of the railway infrastructure network and play a key role in adjusting railway network capacity and dependability performance. Due to its structure, S&C can be affected more strongly by extreme climate impacts, e.g., abnormal temperatures, ice and snow, and flooding. Clearly, the reliability and hazard function of infrastructure will be affected by age and environmental conditions. It is therefore essential to analyse the effect of different climate change features, i.e., explanatory variables called "covariates", on the reliability of S&Cs. The proportional hazard model (PHM) is a practical approach to assess and prioritize the impact of various environmental covariates on S&C reliability. This paper aims to integrate climate change data with infrastructure asset health by utilizing the proportional hazard methodology to assess the effect of different covariates on the reliability function. The proposed methodology has been verified on a number of S&Cs located in northern Sweden. As a main result, this study reveals that the operational environment covariates significantly influence the reliability of S&Cs and profoundly affect the availability and capacity of railway tracks. The study indicates the need for effective climate adaptation options to reduce climate change impacts and risks in order to achieve resilient and climate-neutral railway infrastructure assets.
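
Both this abstract and the preceding one rely on the proportional hazard model, in which covariates scale a baseline hazard multiplicatively. The sketch below illustrates the idea with an assumed Weibull baseline and invented covariate coefficients; none of the numbers come from the papers.

```python
import numpy as np

shape, eta = 1.8, 10.0            # assumed Weibull baseline (shape, scale in years)
coeffs = np.array([0.4, 0.7])     # assumed effects of [freeze-thaw index, snow days]

def reliability(t, z):
    """R(t | z) under a proportional hazard model with Weibull baseline:
    the cumulative hazard is H(t | z) = (t / eta)**shape * exp(coeffs . z)."""
    return np.exp(-((t / eta) ** shape) * np.exp(coeffs @ z))

for name, z in (("mild", np.array([0.0, 0.0])), ("harsh", np.array([1.0, 1.0]))):
    print(f"{name:5s} climate: R(5 years) = {reliability(5.0, z):.3f}")
```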

15:00
Vulnerability Scenario Characterization in an Industrial Context using a Natech Indicator and a Territorial Multi-risk Approach.

ABSTRACT. A growing number of natural hazard events triggering technological accidents (Natech) have been reported from all around the world. However, the multi-hazard and multi-stakeholder governance of Natech risk is challenging; it requires a comprehensive territorial approach to elucidate possible simultaneous scenarios and to address the protection of industrial installations and their possible safety-relevant interactions with neighbouring critical infrastructures, the environment, and communities. Consequently, the goal was to establish a protocol for characterizing the vulnerability arising from the mutual interdependencies between the industrial context and the surrounding multi-risk context in which the industry is located. A previously validated Natech indicator was implemented as an early warning system, while a previously validated multi-risk tool was used for the territorial vulnerability characterization in case of an alert. Spatial analyses using a Geographical Information System (GIS) were developed from multiple indicators nested in a systemic vulnerability index, represented on a homogeneous grid. Risk scenarios were generated for the industrial context of interest, highlighting the vulnerability to disruptions from natural hazards and pressures. The results showed that industrial infrastructures may represent a double threat to the territory: one regarding their technical characteristics and hazardousness, and the other when their technological items collide with natural hazards and territorial stressors and provoke cascading events. In addition, the results increase the awareness of industrial operators and planners regarding a set of vulnerabilities that are only sometimes analysed holistically. Consequently, this approach may contribute to enhancing preparedness, risk governance, and risk reduction for both industries and territories. Further research is required to implement this approach in different industrial contexts, addressing the time course of natural disruptions within a framework to increase resilience.

15:15
Vegetation: a risk influencing factor for Natech scenarios

ABSTRACT. Ensuring the availability of critical lifelines, such as power grids, is of utmost importance for industrial risk management, as the efficient operation of vital process safety barriers (e.g., pumps and temperature control systems) may rely on off-site power to prevent or mitigate the release of hazardous materials. Moreover, power accessibility is an enabling factor for the effective deployment of first responders in case of an emergency. These issues gain specific importance in view of technological accidents caused by natural hazards that involve the release of hazardous chemical substances, known as Natech accidents. Natech accidents, such as the fires and explosions at the Arkema plant (Texas, US) induced by Hurricane Harvey (2017), emphasise the need for robust and reliable power grids in the pursuit of minimising the consequences of extreme hydrometeorological phenomena, which are increasingly frequent due to climate change. In this context, we bring attention to an underlying risk factor that can potentially jeopardise the reliability of power grids during Natech events: vegetation.

Studies have shown that vegetation is among the major hazards power grids are exposed to. There are two main ways vegetation can impact power grids: either an entire tree (or a broken tree branch) falls directly on the power line and severs it, or plants growing underneath or alongside the line come into contact with it, short-circuiting the network. Even if the consequences of these interactions are usually limited to a few power outages for downstream users, their severity may escalate under emergency conditions in Natech accident scenarios. Apart from that, damaged power lines and short circuits due to vegetation may trigger wildfires, putting nearby lives, property, and other critical infrastructure at risk, thus further exacerbating an emergency by overwhelming the response mechanisms with multiple fronts.

In this study, we argue that inadequate vegetation management along power grids can be a Natech risk influencing factor. We then examine the interaction between vegetation and power grids considering the case of Norway, where vegetation was the primary cause of power outages in 2018. We propose a preliminary risk-based decision support framework for grid operators aimed at enhancing decision-making for vegetation management along power lines. By delineating potentially vulnerable areas and performing a high-level risk assessment for vegetation-related hazards on a regular basis, grid operators could plan and carry out necessary vegetation management tasks (e.g., clear-cutting) well before the risk reaches a critical threshold. Moreover, this risk-based tool would allow operators to identify potential 'hotspots' prone to vegetation-related outages and to consider the subsequent consequences in conjoint disaster scenarios. Apart from gaining precious time for restoring power grid operation during emergencies by identifying bottlenecks, anticipating developments, and quickly pinpointing vulnerable locations, the findings can serve as valuable input to Natech risk management overall.

15:30
Topography-based Fuzzy Assessment of Burning Area in Wildfire Spread Simulation
PRESENTER: Laurence Boudet

ABSTRACT. Wildfires have always been a natural component regulating ecosystems. However, they are becoming more destructive and less predictable, especially due to human activities and climate change, which interact with fire dynamics, e.g. through the vegetation distribution. To help firefighters, some software packages gather all the relevant data in Geographical Information Systems.

They are useful for prevention, but they lack information for crisis management. Wildfire propagation simulation is thus a key asset to help first responders prioritize tasks during crisis management or prevention.

Two main families of methods can be found for fire propagation prediction: one relying on physics-based modelling of the chemical and physical processes of fire, and the other, more recent, using Artificial Intelligence. Even if these methods differ in approach, they nevertheless agree on the difficulty of the problem due to strong uncertainty [1], and on the importance of the slope and of the wind velocity and direction.

In this paper, we use fuzzy logic to cope with this uncertainty and a knowledge-based approach to predict the spread of wildfires on a given terrain. Our approach consists of an iterative process that computes the three possible states of land subject to a fire, namely unburnt, burnt, and fire areas, as 2D fuzzy sets. Their membership values represent the possible spread of a wildfire considering its natural evolution and allow accounting for the uncertainty of the propagation. To propagate the fire, we carefully selected from the literature [2], [3] the parameters for modelling the individual effects of wind and topography, and we compute their combined effect at each iteration based on previous work [4]. This gives an approximation of the fire propagation that can be used worldwide, even when historical data are not available. Without loss of generality, we illustrate this assessment with examples from an area of interest in Southern France.
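
A minimal grid sketch of the iterative fuzzy update described above: the fire membership of a cell is the fuzzy union (max) over neighbours of their membership damped by a directional spread factor standing in for the combined wind and slope effect; the factors below are invented placeholders, not the calibrated parameters of [2]-[4].

```python
import numpy as np

n = 15
fire = np.zeros((n, n))
fire[7, 7] = 1.0                         # ignition point, membership 1

slope_gain = 0.9                         # stand-in for the topography effect
wind = {(0, 1): 1.0, (0, -1): 0.6,       # spreading east is favoured by the wind
        (1, 0): 0.8, (-1, 0): 0.8}

for _ in range(6):                       # six propagation iterations
    new = fire.copy()
    for (di, dj), w in wind.items():
        shifted = np.roll(fire, (di, dj), axis=(0, 1)) * w * slope_gain
        new = np.maximum(new, shifted)   # fuzzy union (max) of candidate spreads
    fire = new

print(np.round(fire[7, 4:11], 2))        # membership profile along the ignition row
```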

REFERENCES

[1] M. P. Thompson and D. E. Calkin, "Uncertainty and risk in wildland fire management: A review," Journal of Environmental Management, vol. 92, pp. 1895–1909, 2011.
[2] R. S. McAlpine, "The acceleration of point source fire to equilibrium spread," MSc thesis, 140 p., 1988.
[3] J. Adou, Y. Billaud, D. Brou, J. Clerc, J. Consalvi, A. Fuentes, A. Kaiss, F. Nmira, B. Porterie, L. Zekri, and N. Zekri, "Simulating wildfire patterns using a small-world network model," Ecological Modelling, vol. 221, pp. 1463–1471, 2010.
[4] D. X. Viegas, "Slope and wind effects on fire propagation," International Journal of Wildland Fire, vol. 13, pp. 143–156, 2004.

15:45
Uncertainty Management in Atmospheric Icing Management for Power Lines
PRESENTER: Abbas Barabadi

ABSTRACT. Climate change will create many new challenges for infrastructure management, which needs to deal with different sources of uncertainty. Most decisions regarding the operation, maintenance and safety analysis of infrastructures are based on historical data such as failure rates, repair rates, etc. Climate change can affect the quality of historical data significantly. Some researchers even point out that, due to the associated uncertainties, historical data can no longer be used in some cases. In practice, historical data still has great value but needs to be analyzed considering all associated uncertainties. In broad categories, these uncertainties can be grouped as epistemic, ontological, aleatoric, and stochastic uncertainties. Icing hazards can affect infrastructure dramatically, especially in cold regions. Ice disaster management (IDM) has been developed to provide a systematic approach to managing ice disasters in the Arctic. It includes preparedness, response, recovery, learning, risk assessment, and prevention. Using a systematic literature review, this paper analyzes the relevant research on the different parts of the disaster management (DM) cycle regarding ice disasters and investigates gaps in uncertainty analysis in each of the different steps of IDM. Finally, the paper provides a guideline to handle different uncertainties in practice.

14:30-16:15 Session 5F: Risk Assessment II
Location: Room 100/5017
14:30
Optimized inspection plans for subsea equipment

ABSTRACT. Safety is crucial for many industries where a failure may impact the environment and human lives, such as oil and gas production. Risk-based inspection has been used for years in this industry, aiming to identify risk levels in operations and to design inspection and maintenance programs to evaluate the most critical failure modes of the equipment. These programs are usually designed considering two main and clearly conflicting objectives: maintaining operations at acceptable risk levels while keeping the costs associated with inspections manageable. Therefore, studies have handled this issue as a multi-objective problem and used heuristics (e.g. genetic algorithms and particle swarm optimization) to optimize equipment inspection programs with regard to risk and cost. However, in real-world problems, material and human resource restrictions prove to be quite crucial when creating inspection plans for a set of equipment. The present paper presents the early stages of a methodology to optimize inspection plans considering the availability of resources over time; the methodology is thus suitable for managing the inspection of oil and gas systems of multiple wells. The proposed approach was developed observing the appropriateness of inspection techniques for this industry and their frequency of use to achieve tolerable risk levels while optimizing resources and reducing costs. A comprehensive review of the literature supported and guided the experiments carried out to evaluate a real case of subsea equipment, where the risk index is estimated iteratively over time.
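
As an illustration of the scalarized risk-cost trade-off under a resource constraint described above, the toy sketch below uses a plain random search as a stand-in for the heuristics the abstract cites (genetic algorithms, particle swarm optimization). All task data, the budget and the weights are invented for the example.

```python
# Toy stand-in for the GA/PSO heuristics: random search over inspection plans
# with a scalarized risk/cost objective and a crew-hours resource constraint.
import random

rng = random.Random(0)
risk_red   = [0.30, 0.22, 0.15, 0.40, 0.10]   # assumed risk reduction per task
cost       = [4.0, 2.5, 1.5, 6.0, 1.0]        # assumed task costs
crew_hours = [8, 5, 3, 12, 2]                 # assumed resource usage per task
BUDGET_HOURS = 18                             # assumed available crew hours

def score(plan, w_risk=1.0, w_cost=0.3):
    if sum(h for h, s in zip(crew_hours, plan) if s) > BUDGET_HOURS:
        return float("-inf")                  # infeasible: exceeds resources
    r = sum(v for v, s in zip(risk_red, plan) if s)
    c = sum(v for v, s in zip(cost, plan) if s)
    return w_risk * r - w_cost * c            # scalarized risk/cost trade-off

best_plan, best_score = None, float("-inf")
for _ in range(2000):
    plan = [rng.random() < 0.5 for _ in range(len(cost))]
    s = score(plan)
    if s > best_score:
        best_plan, best_score = plan, s

print("selected tasks:", [i for i, s in enumerate(best_plan) if s], round(best_score, 3))
```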

14:45
Risk identification and Bowtie analysis for risk management of subsea pipelines
PRESENTER: Marcelo Alencar

ABSTRACT. Subsea pipelines are critical transport modes subject to failures that can impact the environment, people, and organizations. Some of these failures can produce critical consequences and must be analyzed. In this sense, efficient risk management is relevant to prevent subsea pipeline failure accidents. The risk management process is a crucial issue, aiding the decision-making process in industrial systems and transport modes with the application of tools and methodologies to support relevant activities. One of these methodologies is the Bowtie. In this paper, a risk identification based on Bowtie analysis is structured, exploring a subsea pipeline system's main causes and consequences. This helps to structure the problem and, consequently, to monitor the effectiveness of preventive and mitigating barriers, allowing risks to be better understood and managed over time by recording causes, consequences, and preventive and reactive controls for better monitoring.

15:00
Hybrid and Information Warfare: Challenging Topics for Risk Communication

ABSTRACT. This study seeks to enhance current understandings of the risk communication challenges associated with the information-related means employed in cyber space to exert inappropriate influence, which frequently focus on paralyzing people and undermining stability. These means can be part of a hybrid warfare strategy located in the gray zone between the poles of peace and war, which indicates the long-term and subtle character of such a strategy. Maneuvers to conceal invasive interventions that mask activities, confuse responses, and disguise actual intentions constitute particular challenges. This paper presents the results of a literature review that applied a snowball sampling approach and concentrated on uncovering the threat landscape and recognition of the gray zone. The results highlight the emergence of information and cyber conflicts, including state operations and discreditation, along with specific techniques, such as trolling and bots, in the Swedish context and beyond. The findings illustrate some challenges for risk communication in information and cyber warfare and their implications for research and practice.

15:15
Dynamic and Classic PSA Model Comparison for a Plant Internal Flooding Scenario

ABSTRACT. Not only do classical and dynamic PSA methodologies have different capabilities, but the classical PSA methodologies also differ among themselves. However, only a few comparisons between different methodologies have been published. Hence, GRS is comparing two classical PSA codes, RiskSpectrum® and SAPHIRE, and the Crew Module for human interactions within the dynamic PSA approach MCDET (Monte Carlo Dynamic Event Tree) of GRS. These methodologies are used for analysing an assumed plant internal flooding scenario resulting from a leakage in the fire water supply within the reactor building annulus. After the initial event of the leakage, a human procedure is needed to interrupt the outflow of the fire water supply. This procedure also comprises various manual actions outside buildings, which may be hampered by severe weather conditions. Such conditions were therefore additionally included in the scenario. They are assumed to be below the initiating event level but have not been specified in more detail. If the water supply cannot be interrupted, the reactor building annulus may be flooded, resulting in a manual reactor scram. The scenario is based on an existing event tree in a RiskSpectrum® PSA model. It was also modelled by the Crew Module, splitting the human procedure up into different subsequent steps, each with various options for the next steps. Furthermore, the event tree was automatically transferred from RiskSpectrum® to SAPHIRE applying the GRS tool pyRiskRobot. In this context, pyRiskRobot was enhanced to make the transfer feasible, taking several differences between the classical PSA codes into account. The comparison of the classical and dynamic PSA codes shows the following results. First, analyses with the transferred event trees in SAPHIRE lead to nearly identical results as in RiskSpectrum®. This indicates that the PSA is not significantly affected by which classical PSA code is used. However, the codes have different capabilities. For example, RiskSpectrum® allows the use of exchange events to switch between different basic events representing the same aspect, e.g., a human action that fails more likely under severe weather conditions. Such a switch has to be modelled by fault trees in SAPHIRE. Also, the two classical PSA codes include different types of probability distributions. Second, the Crew Module allows for a more detailed modelling of the scenario, particularly with respect to complex time-dependent aspects. For this reason, additional ways to detect and diagnose the leakage were included, and reduced walking speeds of humans outside buildings during severe weather conditions were considered in the model. Although the activity has not been completed yet, some preliminary conclusions have been drawn. The GRS tool pyRiskRobot is meanwhile capable of transferring entire PSA plant models, facilitating a comparison of results by different codes, e.g., if a PSA reviewer wants to use another code. Moreover, a dynamic PSA approach applying the GRS Crew Module can lead to more detailed results in comparison to a classical PSA approach.

15:30
Logics of justification within communication of risks

ABSTRACT. On September 26, 2019, a large-scale fire affected the Lubrizol industrial site in Rouen, France. The management of the consequences of the accident gave rise to the implementation of an original body within the variety of existing mechanisms of citizen participation on industrial risks: the “Transparency and Dialogue Committee” (hereafter TDC). This committee brought together various actors concerned by the consequences of the fire: residents, elected officials, industrialists, environmental associations, representatives of the agricultural world, trade unions, economic actors, state and health services... Its intended purpose was to monitor over time all the issues related to the consequences of the accident and to share all the information available. Following an analysis of the expression of the public concerns (Foussard et al., 2022), our attention is now focused on this ad hoc instance. Ten sessions of the TDC were held between October 11, 2019, and December 10, 2021. The content of these minutes is considered against the background of the economies of worth (Boltanski & Thévenot, 2006), which provide a theoretical framework for the dynamics linked to the presence of multiple rationalities. An economy of worth stipulates what things count, how, and in what ordered hierarchy, thereby offering a coherent space within which people interact. Instead of seeking the determinants of the behavior of individuals, the objective of the study is rather to see how they construct and use argumentative resources in situations where they are led to justify their claims. The TDC, as a specific structure for citizen participation on industrial risks, provides materials to analyze the potential foundations of agreement and the necessity to justify through this “test of justification”. In that kind of test, the actors refer to specific principles (i.e. a form of common good, a general value based on a conception of what is right in the circumstances). This paper presents how the TDC can be analyzed through the different "logics of justification" used, to what extent these logics respond to public concerns, and which "common superior principles" would be valuable candidates to resolve conflicts and help reach an agreement between the stakeholders.

Boltanski, L., & Thévenot, L. (2006). On justification: Economies of worth (Vol. 27). Princeton University Press.

Foussard, C., Van Wassenhove, W., & Denis-Rémis, C. (2022). Taking public concerns into account as a risk management criterion. A case study. In 32nd European Safety and Reliability Conference (ESREL 2022).

15:45
An uncertain risk concept? Perceptions of uncertainty among risk professionals in Norwegian petroleum companies

ABSTRACT. Background: In the petroleum industry, risk has traditionally been described as a combination of the probability of an event and the consequences of its occurrence. In 2015, the Petroleum Safety Authority of Norway changed the risk definition underlying its regulation, describing risk as “the consequences of the activities, with associated uncertainties”. With functional regulations, the internal control systems of the companies are a fundamental element of safety. This means that how individuals understand risk will influence risk mitigation. The study aims to understand how uncertainty in risk is perceived and described by risk professionals working in the Norwegian petroleum industry. Method: Semi-structured interviews with 12 risk professionals in Norwegian oil and gas companies, analyzed with thematic analysis. Results: Descriptions of uncertainty in risk ranged from more traditional risk definitions to uncertainty as the fundamental aspect of risk. Little practical impact of the changed risk definition was described beyond greater awareness and legitimacy to communicate uncertainty. Conclusion: There was no unified perception of how to view and describe uncertainty in risk among risk professionals. The present study indicates that, more often than not, uncertainty was perceived as an important and fundamental aspect of risk. Greater legitimacy to communicate uncertainties to decision makers may be a practical impact of the definition change.

16:00
Understanding risk-taking propensity. An investigation examining differences between members of the general population and pilots

ABSTRACT. Background: Aviation is considered a high-reliability industry, with safety being one of its highest priorities. How pilots make decisions, and their propensity for risk, which can affect the safety of a flight, is of importance to airlines and the aviation industry more broadly. Research has identified differences in personality traits between pilots and members of the general population (Chappelle et al., 2010). Whether these differences extend to risk propensity remains unknown and is hence the primary focus of the present research. The current research is also interested in the extent to which existing personality scales examining risk propensity predict pilots’ risky flight behaviour.

Aim: The primary objective of the research is to investigate if pilots differ in terms of risk propensity from members of the general population. The secondary objective is to assess the validity of known risk propensity scales in predicting risky flight behaviour with pilots, using a low-level risky flight scenario in a flight simulator.

Method: Two groups were recruited: 100 general population participants and 17 pilots. Both groups completed personality and risk scales such as the Big 5 and the Sensation Seeking Scale (SSS), with additional aviation-specific scales for the pilots, including Hunter’s Risk Perception Scales 1 and 2. Risky flight behaviour was measured through a low-level flight task. The minimum altitude descended to featured as the dependent variable (visual flight rules state that the minimum altitude permitted over non-populated areas is 500 ft). Pilots who flew higher were deemed to be more risk averse than pilots who flew lower.

Results: The comparison between members of the general population and pilots indicated the general population had a lower propensity than the pilots to engage in risky behaviour, as identified through the Thrill and Adventure Seeking factor on the SSS. The results relating to the predictive validity of the risk scales and flight performance revealed that the Evaluation of Visual Analogue Risk (EVAR) scale total score and the Openness factor on the Big 5 scale account for 64% of the variance in pilots’ risky behaviour, meaning pilots who were more risk prone on the scales spent more time at a higher altitude compared to pilots who measured low on these two factors. These results suggest pilots who were riskier on the psychometric scales were more risk averse in their flight behaviour than pilots who were risk averse on the scales.

Significance: The results of the study join a small body of research which has found, in high risk-taking sports and safety-critical professions, that the propensity to engage in risk is inversely related to risky behaviour. The results also indicate there could be different types of risk takers, such as calculative and impulsive. As a practical application for the aviation industry, the findings have important implications for the training and recruitment of pilots throughout aviation.

References

Chappelle, W. L., Novy, M. P. L., Sowin, C. T. W., & Thompson, W. T. (2010). NEO PI-R Normative Personality Data that Distinguish U.S. Air Force Female Pilots. Military Psychology, 22(2), 158-175. DOI: 10.1080/08995600903417308

14:30-16:15 Session 5G: Maritime and Offshore Technology II
14:30
Method for FTA (Fault Tree Analysis) combined with FMECA (Failure Mode, Effects and Criticality Analysis) on vessels focused on improving Reliability in Pneumatic Equipment Maintenance – A Case Study

ABSTRACT. Effective maintenance of marine equipment is crucial to increasing ship operability rates. Therefore, it is essential to invest in maintenance to ensure service quality. Developing efficient maintenance that manages the survival of the equipment and minimizes its failures can maximize the quality of services and meet companies' expectations. Using FTA and FMECA can help assess a system's reliability and identify failures and risks in the project from a perspective focused on reliability and criticality. This study proposes a method combining FTA (Fault Tree Analysis) with FMECA (Failure Mode, Effects and Criticality Analysis) on vessels, focused on improving reliability in pneumatic equipment maintenance. Through a case study, it demonstrates that the combination of FTA and FMECA can help identify, analyze, evaluate, and treat risks. It identifies and classifies the critical pneumatic equipment of the engine, presenting the failure modes in terms of safety, environment, production losses, and maintenance costs. The case study was conducted on a vessel with a team of experienced stakeholders working together and sharing experiences, elaborating first the FTA and then the FMECA. As a result, risks and risk response actions were identified. One of the risk response actions was the reduction of condensate in the compressed air networks resulting from installing an air dryer system, generating savings in engine parts and increasing the vessel's operability. The study shows that using the air dryer helps maintain pneumatic equipment and remove condensate. It was observed that some vessels do not use this system, demonstrating the need for improvements in maintenance and design. The contribution is relevant for maritime support companies in general and for academia. In addition to ensuring the continuous improvement of the maintenance program, its results can serve as a basis for future studies. The study can also impact vessel maintenance processes and help in understanding performance and safety during operation. Although carried out on a specific vessel, it can be generalized to other vessels and fields of work whose safety is affected by similar risks resulting in waste, rework, and unnecessary energy consumption. The study may change the practice and thinking of professionals who deal with FMECA on ships.

14:45
O&M of Crew Transfer Vessels against Floating Wind Turbines – modelling the water run-up effect during personnel transfer in waves with short periods

ABSTRACT. Relevance and novelty of the proposed work

For floating wind farms in areas like the Mediterranean Sea, where waves with short periods occur, it is important to make sure that the Crew Transfer Vessel main deck does not get flooded.

Method description

Water elevation at boarding point:
• Calculate the water elevation downstream of the floating wind turbine boarding point, accounting for the wave masking by its floater.

Vessel heave at boarding point:
• Calculate the Crew Transfer Vessel heave at the floating wind turbine boarding point.

Relative range between ship heave and wave elevation at berthing point:
• Calculate the wave heights and periods for which that relative range gets lower than zero, which is when the vessel deck becomes flooded by the waves.

Main results and findings

The benchmark is another reference which calculated the flooding risk while berthing the vessel against a fixed wind turbine monopile; it compares satisfactorily with the present calculation. The present method also shows potential for development toward a more precise calculation: a berthing calculation based on a friction coefficient that is not only kinetic, but either kinetic or static, depending on whether or not the vessel grips onto the wind turbine boat landing.
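
A minimal numeric sketch of the flooding criterion described in the method: the deck is flagged as flooded when the relative motion between the masked wave elevation and the vessel heave exceeds the freeboard, i.e., when the margin becomes negative. The freeboard, masking factor and heave response model below are illustrative assumptions, not the paper's hydrodynamic calculation.

```python
# Sketch of the deck-flooding criterion: margin = freeboard - relative motion.
# Freeboard, masking factor and the heave response model are assumptions.
import numpy as np

FREEBOARD = 0.5     # assumed CTV deck freeboard at the boarding point [m]
T0 = 4.0            # assumed heave natural period [s]

def heave_rao(T):
    # Toy response: the vessel follows long waves, not short ones.
    return 1.0 / np.sqrt(1.0 + (T0 / T) ** 4)

def margin(H, T, masking=0.7):
    a = 0.5 * H * masking              # masked wave amplitude at boarding point
    rel = a * (1.0 - heave_rao(T))     # relative motion, heave assumed in phase
    return FREEBOARD - rel             # < 0 means the deck gets flooded

for T in (2.5, 3.0, 4.0, 6.0, 8.0):
    for H in (1.0, 2.0, 3.0):
        m = margin(H, T)
        print(f"T={T:3.1f} s  H={H:3.1f} m  margin={m:+.2f} m"
              + ("  FLOODED" if m < 0 else ""))
```

With these toy values, only the short-period, high-wave combinations flood the deck, which reflects the short-period concern the abstract highlights.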

15:00
Challenges and Opportunities of Implementing the Operators' Perspective: Experiences from Automated Drilling Projects

ABSTRACT. In this paper we highlight experiences from two cases of drilling automation, emphasizing the importance of understanding work and human factors in the design and implementation of automated systems. Automation is advancing as a means to increase efficiency, quality and safety in various industries, including petroleum. However, there have been few sociotechnical studies of automation in the petroleum industry. Experiences from other domains indicate that gradual automation in collaboration with users has improved efficiency, safety and user satisfaction. Using thematic analysis of interviews with technology providers, consultants, drilling operators, and project leaders, we found that, from the outset of the projects, a balance between technology optimism and an understanding of human limitations and experiences was critical. Furthermore, we identified several challenges and potential remedies in areas such as user involvement, system integration and alarm handling, use of appropriate methods and standards, sensemaking of automated systems, and competence and training of operators. The case studies illustrate a need for improved management of human factors in the development and implementation of automated technology in the petroleum industry. The concepts of work as imagined (WAI) and work as done (WAD), and the potential gap between them, are useful to highlight the importance of applying the appropriate human factors expertise and methods in such development.

15:15
A New Model for Fuel Transfer Leak Frequencies
PRESENTER: John Spouge

ABSTRACT. The safe transfer of liquids and liquefied gases between transport units and fixed installations is a vital part of the fuel supply infrastructure. Leaks during transfer endanger people working at transfer facilities or living nearby. Large or frequent leaks could undermine the public acceptability of the fuel supply, which is particularly important for new low-carbon fuels such as ammonia and hydrogen.

Risk assessments of fuel supply operations need to estimate the likelihood of such leaks. The difficulty and uncertainty of such estimates have been well known since the first risk assessments of LNG nearly 50 years ago. These uncertainties are still critical in current fuel transfer risk assessments. In the Netherlands, the standard transfer leak frequencies in the Reference Manual Bevi come from judgements or unknown data sources from the 1960s, which have been copied from study to study ever since. A better model is highly desirable.

This paper presents a new model of the frequencies of leaks during fuel transfer. The model covers loading and unloading of bulk liquids and liquefied gases on marine, road and rail tankers through flexible hoses and articulated loading arms. It is based on a review of 35 sources containing original data on transfer leaks and associated activity. After evaluating the quality of each source, the model was based on a detailed analysis of 6 sources that had high quality ranking. These were used to develop a leak frequency model that takes account of site-specific operational characteristics and safety measures. The model estimates the frequency of transfer leaks of different sizes, their frequency-quantity distributions and causal breakdowns.
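
The general structure of such a leak frequency model can be sketched as a base frequency per transfer scaled by site-specific modification factors and split over leak size categories, as below. All numbers are invented placeholders; the paper's calibrated values come from its analysis of the six high-quality data sources.

```python
# Illustrative structure of a transfer leak frequency model; all numbers are
# invented placeholders, not the paper's calibrated values.
BASE_RATE = 5e-5                       # assumed leaks per transfer, generic setup
SIZE_SPLIT = {"small": 0.80, "medium": 0.15, "large/full-bore": 0.05}

def transfer_leak_frequency(transfers_per_year, modifiers):
    """Base frequency scaled by site-specific modification factors,
    then split over leak size categories."""
    f = BASE_RATE * transfers_per_year
    for factor in modifiers.values():
        f *= factor
    return {size: f * share for size, share in SIZE_SPLIT.items()}

site = {
    "articulated loading arm instead of hose": 0.5,   # assumed factors
    "dry-break coupling": 0.7,
    "continuous operator supervision": 0.8,
}
for size, freq in transfer_leak_frequency(400, site).items():
    print(f"{size:>16s}: {freq:.2e} per year")
```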

The paper explains the model’s methodology and presents preliminary results for standard transfer scenarios. Being traceable to documented analyses of recent leak experience in actual fuel transfer operations, these results are much higher in quality than any previous estimates of transfer leak frequencies. Once validated by industry, they will greatly improve the validity of quantitative risk assessments of fuel transfer operations.

15:30
Including the human factor and environmental conditions in the reliability estimation of an LNG bunkering operation supply
PRESENTER: Antonio Miranda

ABSTRACT. The roadmap towards decarbonization necessarily includes the maritime sector. The current regulatory framework sets the pathway to be followed by ship companies. In 2020, a resolution from the International Maritime Organization (IMO) entered into force, requiring shipping companies to reduce SO2 emissions below 0.5% from the previously allowed 3.5%. The IMO Strategy will be revised in 2023, possibly strengthening its emission-reduction ambitions. This regulatory framework points to a diverse future energy mix of carbon-neutral and fossil fuels, with the latter gradually phased out by 2050. Among the alternative fuels, LNG has been taking the lead over other fuels such as ammonia, methanol, hydrogen and biofuels. The early adoption of LNG as an alternative fuel for shipping at ports has allowed relevant data to be gathered and, with these, the first analyses to be made. The behaviour of cryogenic equipment in bunkering operations remains only partially understood. As a result, new specific failure data obtained from the field is received as a gold mine for subsequent analysis. There are, however, some generic reliability data for some ground components already used with LNG in industrial configurations, such as valves or hoses. Recent research conducted on actual LNG bunkering operations has analysed the reliability level of an LNG bunkering operation for different types of ships. The reliability model for the TTS configuration is analysed using the RBD (Reliability Block Diagram) technique. The reliability model can alternatively be represented by the Fault Tree Analysis (FTA) technique (Rausand & Hoyland, 2003). Building on these previous findings, the present research incorporates the effect of the human factor and the environmental conditions to complete a full reliability study of LNG bunkering operations. We aim to answer the question: what are the chances that one specific ground configuration, performed by qualified professionals under certain surrounding environmental conditions, ends successfully with the LNG delivered on time? The results are expected to help in understanding current configurations at ports for LNG bunkering, and these findings will also help in configuring more reliable solutions for other alternative fuels.
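
A minimal sketch of how the human factor and environmental conditions can enter a series RBD of a transfer configuration: hardware reliabilities multiply, and a base human error probability is scaled by a weather factor. Component values, the base HEP and the weather factors are assumptions for illustration only, not the paper's field data.

```python
# Sketch: series RBD of a bunkering configuration with a human reliability
# term scaled by environmental conditions. All values assumed.
components = {                      # assumed per-operation hardware reliabilities
    "cryogenic hose": 0.995,
    "ESD valve": 0.999,
    "transfer pump": 0.990,
    "couplings": 0.998,
}

def operation_reliability(hep_base, weather_factor):
    r_hw = 1.0
    for r in components.values():   # series structure: all items must work
        r_hw *= r
    hep = min(1.0, hep_base * weather_factor)   # human error prob., scaled
    return r_hw * (1.0 - hep)

for label, w in (("calm daylight", 1.0), ("night", 2.0), ("high wind + night", 4.0)):
    print(f"{label:>18s}: R = {operation_reliability(0.01, w):.4f}")
```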

14:30-16:15 Session 5H: Civil Engineering
14:30
Stochastic Analysis of RC Rectangular Shear Wall Considering Material Properties Uncertainty
PRESENTER: Ali Ibrahim

ABSTRACT. It is widely acknowledged that concrete’s mechanical properties exhibit some degree of uncertainty due to many factors (e.g., the mix proportion, placing, curing, etc.). Therefore, it is crucial to consider such uncertainty when assessing the performance of real-world engineering structures. The main objective of this study is thus to perform a series of probabilistic analyses on one of the critical structural components widely utilized in high-rise buildings: the rectangular shear wall. Material property uncertainty is represented by a random field generator based on covariance matrix decomposition, combined with a generalized F-discrepancy-based point selection strategy to efficiently generate the samples. The multiaxial concrete damage plasticity model combined with multilayered shell elements is adopted in the simulation of the shear wall. The stochastic analysis results revealed that the material spatial variability causes a random distribution of initial damage and a subsequent stochastic nonlinear evolution. Moreover, the probability density evolution method (PDEM) is employed to perform the reliability assessment for the considered shear wall. The proposed framework captures the stochastic inelastic behavior of the considered shear wall well and can further be implemented to capture the stochastic response and safety assessment of high-rise buildings.
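
A minimal sketch of random field generation by covariance matrix decomposition, the first ingredient named above: a Cholesky factor of an exponential covariance kernel turns independent standard normals into spatially correlated concrete strength samples. The 1D grid, mean, standard deviation and correlation length are assumed values, and the F-discrepancy-based point selection is not reproduced here.

```python
# Sketch: spatially correlated concrete strength via covariance matrix
# decomposition (Cholesky). Grid, mean, sigma and correlation length assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = np.linspace(0.0, 3.0, n)                 # material points along the wall [m]
mean_fc, sigma, corr_len = 30.0, 3.0, 0.8    # MPa, MPa, m (assumed)

d = np.abs(x[:, None] - x[None, :])          # pairwise distances
C = sigma**2 * np.exp(-d / corr_len)         # exponential covariance kernel
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # decomposition of the covariance

samples = mean_fc + rng.standard_normal((5, n)) @ L.T   # 5 correlated realizations
print(samples.round(1))
```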

14:45
Probabilistic Analysis of RC Doubly Curved Shells under strong ground motions
PRESENTER: Doaa Makhloof

ABSTRACT. Recently there has been a growing trend of using RC thin shell structures such as doubly curved shells (DCS) to cover large column-free spaces. These shell roof structures are not only essential to cover large-span areas with high efficiency but are also converted into shelters to protect people who lost their homes during and after earthquakes. However, according to investigations of the effects of recent earthquakes, considerable damage was observed in these structures, which negatively affected their functionality, so that they could not serve as shelters. There is also an apparent lack of a general understanding of the structural response of these shells, as previous studies mainly focused on the linear dynamic response. However, linear dynamic analysis is insufficient to provide a complete understanding of the structural response of these structures, and general conclusions cannot be drawn. An intensive investigation of the shells’ dynamic response needs to consider a number of significant factors, such as reinforcement and material nonlinearity. Additionally, several uncertainty sources significantly influence the response of these structures; deterministic analysis neglects these uncertainties and therefore cannot quantify the reliability and failure probability of these structures. Thus, in this study, deterministic and probabilistic analyses have been performed to investigate the structural response and the reliability of these structures under strong earthquakes. Because the design of these structures is not covered by the design codes, an innovative automatic finite element-based design algorithm is formulated on an equilibrium basis and implemented in a Python script linked with ABAQUS to automatically obtain the steel reinforcement in preparation for the deterministic and stochastic analyses. The multi-axial plasticity damage constitutive model is adopted through the VUMAT subroutine to reproduce the nonlinear behavior of concrete. Then, a complete framework is developed to investigate the stochastic response and quantify the reliability of these structures based on the developed design procedure, the concrete plasticity damage model, and the probability density evolution method, by which both the stochastic response and the instantaneous probability density function (PDF) can be attained; a reliability of 70.78% is obtained for the DCS under the Northridge event.

15:00
Reliability analysis of the ancient Nezahualcoyotl’s dike: Investigating failure due to overflow using an improved hydrological model

ABSTRACT. Investigating the reliability of ancient hydraulic structures constructed without modern probabilistic criteria allows an understanding of why and how the structure fails. In this paper, we present an extended method, first introduced by Torres-Alves & Morales-Napoles (2020), to perform the reliability analysis of the Nezahualcoyotl dike, which was (most likely) designed without probabilistic criteria. The dike was built around 1450 by the Aztec empire, dividing Lake Texcoco from north to south (present-day Mexico City). We estimate the probability of failure due to overflow by using a discrete time-state Markov chain and bivariate copulas to generate large synthetic records of the environmental variables precipitation and evaporation. In addition to the previous methodology, two sources of uncertainty were taken into account: (i) the characterization of the environmental conditions during the dry season to estimate initial water levels in the lake and (ii) the influence of surface runoff and subsurface seepage losses on the water levels. The extended method allows for a better characterization of the lacustrine system. Therefore, an improved representation of the hydrology of the system and a more reliable estimation of the probability of failure of the Nezahualcoyotl dike are presented.
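
A minimal sketch of the generation step described above, assuming a two-state (wet/dry) Markov chain and a bivariate Gaussian copula with illustrative marginals; the paper's fitted transition probabilities, copula family and marginals are not reproduced here.

```python
# Sketch: synthetic monthly weather from a two-state (wet/dry) Markov chain
# and a bivariate Gaussian copula; transition matrix and marginals assumed.
import math
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.7, 0.3],        # wet -> wet/dry (assumed)
              [0.4, 0.6]])       # dry -> wet/dry
rho = -0.5                       # assumed dependence: wetter months evaporate less
Lc = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

def sample_month(state):
    z = Lc @ rng.standard_normal(2)
    u1, u2 = Phi(z[0]), Phi(z[1])               # Gaussian copula sample in [0,1]^2
    scale_p = 120.0 if state == 0 else 30.0     # mm/month, wet vs dry (assumed)
    precip = -scale_p * math.log(1.0 - u1)      # exponential marginal (inverse CDF)
    evap = 40.0 + 60.0 * u2                     # uniform marginal on [40, 100] mm
    return precip, evap

state, series = 0, []
for _ in range(12 * 100):                       # 100 synthetic years
    series.append(sample_month(state))
    state = 0 if rng.random() < P[state, 0] else 1

series = np.array(series)
print(f"mean precip {series[:, 0].mean():.1f} mm, mean evap {series[:, 1].mean():.1f} mm")
```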

15:15
Main Factors that Compromise Resilience in Concrete Structures
PRESENTER: Robson Gaiofatto

ABSTRACT. Although concrete structures are traditionally regarded as having great durability, it has become increasingly common to observe a higher vulnerability of such structures through new pathologies that compromise their durability and elevate maintenance expenses; a condition that can, not unusually, be seen in structures at a very young age. Because of that, the present article discusses the main causes that compromise the resilience of such structures when facing the aggressions to which they are submitted, starting with project-related problems, going through the issue of execution quality, and finally discussing the environmental attacks generated either by more aggressive environments or by the lack of protection of these structures, which compromises their capacity to resist the small aggressions considered normal in the environment to which they are exposed. After the due discussions, this paper outlines some of the paths that should be followed by engineers, designers, builders, and other professionals responsible for the maintenance of these structures, so that the structures effectively present a capacity for resilience that is more compatible with their functional importance and the financial cost spent on their effectiveness.

15:30
Risk Assessment in the manufacture and transportation of concrete products and precast products using PFMEA

ABSTRACT. Statistics show the existence of 6.4 million companies in Brazil; 99% are micro and small enterprises, and they employ 52% of the private workforce. In these companies, one typically observes the absence of a fault detection process; a lack of assignment of employees responsible for each process; and higher costs, production errors, and a high rate of defective parts. Such companies are less profitable and competitive, and few can satisfy customers. This problem was observed in a small company where 4% of products are non-conforming, there is no failure detection process, and there is a high rate of losses and rework. This study aims to discover risks in the process and implement responses to overcome material preparation, manufacturing, and transportation difficulties. The objective is to identify the risk factors, and responses to the risks identified, that can impact the rejection of materials and the company's image. As a methodological approach, qualitative data was obtained from stakeholders such as engineers, supervisors, and production operators from the company's production area. The authors used tools such as the Process Map and PFMEA. They consulted internal documentation such as warranty records, non-conformity records, functional test records, projects, procedures, instructions, drawings, specifications, and schematics. Reviewing scientific papers, records, process maps, and surveys allowed the authors to identify risks and responses. As a result, the most critical steps of the design and manufacturing process's macro flowchart were mapped out, and the failure modes for the steps of the micro flowchart were identified. Finally, actions for the risks with a high NPR (risk priority number) were defined and implemented. The contribution is significant since the actions taken for improvement led to engagement; process and quality improvement; customer satisfaction; and reduction of failures, rework, quality costs, and customer dissatisfaction. It can impact the business's success and help understand performance and quality. Although conducted in a specific company in Brazil, the study can be generalized to other companies involved with the production of concrete and precast products, whose profitability is affected by risk issues resulting in waste, rework, and unnecessary energy consumption. The study can change the practice and thinking of professionals dealing with risk assessment in the production of concrete and precast products.

15:45
Management of health and safety across supply chain in construction industry
PRESENTER: Karin Reinhold

ABSTRACT. Construction is known as a high-risk industry due to temporary workplaces at construction sites and a variety of hazards and risks (such as working on unstable surfaces, which may cause slips, trips and falls, working at heights, exposure to noise, vibration, dust and hazardous substances, harsh climate conditions, working in awkward positions, etc.). The aim of the study is to identify supply-chain instruments that can be used in a contractor-subcontractor relationship to improve OSH on construction sites. Supply chains in the construction sector are rather atypical. They are characterized by companies at different levels of the supply chain doing simultaneous work in the same geographical location. Large companies usually act as main contractors, managing long chains of subcontractors, where most of the work ends up being done by smaller companies and independent workers. Such subcontracting practices can lead to problems with OSH management; in particular, the strong competition for building contracts can lead to reduced attention to safety and health. According to Choe et al. (2020), in OSH management the distribution of roles can be unclear, and subcontractors miss information about essential OSH policies and procedures. In addition, there is a higher risk associated with subcontractors due to longer working hours, economic stress, work intensity, and their concentration in high-risk work in the supply chain (James et al., 2015). In our case study, three types of evidence were collected. First, we analysed documents such as companies’ webpages, codes of conduct, annual reports, etc. Second, we organized on-site visits to construction sites in order to get an overview of the application of OSH practices through the supply chain. And third, we conducted semi-structured interviews with key informants of the main contractor as well as the subcontractors. We made sure that each company was represented by at least one employer representative and one workers’ representative. Additionally, we interviewed stakeholders (the Labour Inspectorate, external auditors, etc.). The fieldwork was done during summer and autumn 2022. According to the results of the case study, there are two forms of practices employed by the main contractor (contractual governance and relational governance) to influence OSH on the construction site. The main contractual governance practices identified in the study were sustainable tendering and purchasing practices, reserved funds for specific work-environment-related delays or problems, clear agreements, specific practices, processes or materials required or prohibited in the contract, regular monitoring, auditing, certification schemes, etc. The main relational governance practices identified in the study were previous experiences and positive long-term cooperation, effective communication and open dialogue, an organisational culture called ‘a people-centric culture’ focusing on OSH, people, job organisation and design, cooperation, learning (including OSH professionals) and safety knowledge sharing, and leading/managing by good example. This study gives new knowledge about the possibilities to improve OSH on construction sites.

16:00
New guidelines for the quality control of risk analyses of critical hydraulic structures
PRESENTER: Alexander Bakker

ABSTRACT. The efficacy of risk models and risk analyses critically hinges on sufficient model evaluation. Nevertheless, usefulness for the intended purpose is rarely systematically assessed. Poor or even absent model evaluation of the applied risk models and analyses also troubles the asset management of storm surge barriers in the Netherlands. In practice, obvious flaws, missing failure modes and use that deviates from the original purpose regularly lead to unpleasant surprises, unnecessary costs and avoidable risks. Here, we introduce new guidelines for quality control during the development, testing, maintenance and usage of risk analyses of critical hydraulic structures. First responses among stakeholders are rather positive, since the guidelines help developers to better understand criticism and independent reviewers to structure their comments. However, the efficacy of the guidelines themselves also needs rigorous evaluation in the coming years. This may prove challenging, as the application of the guidelines may also reveal that the operating organization is currently not well equipped for rigorous quality control of risk models and risk analyses.

14:30-16:15 Session 5I: Mathematical Methods in Reliability and Safety II
14:30
A new efficient method to find a suboptimal allocation of components in a series-parallel structure

ABSTRACT. A method that optimizes the allocation of two-state independent components in a series-parallel structure is presented. Such a structure is composed of parallel substructures arranged in series and is operable if at least one component in each substructure is operable. The components are assumed to have different failure probabilities, so the system reliability depends on how they are allocated to places in the structure. The optimal allocation minimizes the system failure probability or, alternatively, maximizes its reliability. Interestingly enough, while the optimal components allocation problem for a parallel-series structure (i.e. series substructures arranged in parallel) has a well-known simple solution, the same does not hold for a series-parallel one. The considered problem has been investigated by several researchers who proposed quite elaborate solutions. This paper presents a recently developed, simple, and efficient procedure for finding a nearly optimal allocation, and sometimes the optimal one. The presented approach is based on a theorem specifying a threshold value that cannot be exceeded by a series-parallel system’s reliability. Starting from some random allocation and using pairwise exchanges of components between different parallel substructures, the algorithm finds successive allocations that yield system reliabilities oscillating towards the value specified by the above theorem. In this way, a suboptimal (or, many a time, optimal) allocation is obtained. An important feature is that the method’s accuracy is expressed by the easy-to-compute upper bound of the difference between the optimal reliability and the obtained suboptimal value. The performed tests show that the method allows finding a (sub)optimal solution in a relatively small number of steps. An illustrative example is given demonstrating the method’s modus operandi.
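
The sketch below implements a generic pairwise-exchange local search of the kind the abstract describes: starting from a random allocation, components are swapped between parallel substructures whenever the swap increases system reliability, and the search stops when no swap improves. The paper's theoretical upper bound and accuracy measure are not reproduced here; this is only an illustrative toy.

```python
# Generic pairwise-exchange local search for allocating components (failure
# probabilities q) to the parallel substructures of a series-parallel system.
import itertools
import random

def reliability(groups):
    r = 1.0
    for g in groups:                 # substructures in series
        prod_q = 1.0
        for q in g:                  # components in parallel
            prod_q *= q
        r *= 1.0 - prod_q
    return r

def pairwise_exchange(q, sizes, seed=0):
    rng = random.Random(seed)
    q = list(q)
    rng.shuffle(q)                   # random starting allocation
    groups, i = [], 0
    for s in sizes:
        groups.append(q[i:i + s]); i += s
    best, improved = reliability(groups), True
    while improved:
        improved = False
        for a, b in itertools.combinations(range(len(groups)), 2):
            for i in range(len(groups[a])):
                for j in range(len(groups[b])):
                    groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
                    cand = reliability(groups)
                    if cand > best + 1e-15:
                        best, improved = cand, True
                    else:            # undo the swap
                        groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
    return groups, best

groups, best = pairwise_exchange([0.05, 0.10, 0.15, 0.20, 0.25, 0.30], sizes=(2, 2, 2))
print(groups, round(best, 6))
```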

14:45
Development of a Bio-Mathematical Crew Fatigue Model for Business Aviation Operators
PRESENTER: Leonardo Baldo

ABSTRACT. Against the backdrop of the current aviation accident trend, which identifies human factors as the main cause, pilot fatigue is a pivotal issue that may jeopardize mission safety. In a broader perspective, worker fatigue can be understood as a psychological state of reduced mental and physical performance which could lead to a loss of situational awareness (SA). In the aviation industry, fatigue is even more critical since it may affect maintenance crews while performing maintenance checks, pilots in critical flight phases, and the crew during safety checks. At present, the development of a Fatigue Risk Management System (FRMS) for Business Aviation Operators (BAOs) is not required by EASA, albeit already mandatory according to UK and FAA regulations (Line Operation Safety Assessment (LOSA)). The aim of this paper is to present a bio-mathematical crew fatigue model which can assist BAOs in the creation of a proper FRMS, based on objective, science-based biological models. The already challenging conditions linked to the evaluation of fatigue (alteration of circadian rhythms, sleep deprivation, substantial numbers of consecutive work hours) are compounded by the typical conditions that affect BAOs: 24-hour-a-day activities, night flights, irregular and unpredictable flight schedules (often placed in the Window of Circadian Low (WOCL)), extended wakefulness and changes of time zone. Factors such as these challenge human physiology and can have adverse effects in the form of performance-impairing fatigue and increased risks to safety. The lack of an instrumental examination that provides an objective value of fatigue status makes it difficult to get a complete picture of the crew's mental and physical state in preparation for the upcoming mission. This paper describes the main underlying elements employed in the model creation and how the model has been adapted to BAOs' specific requirements. The model is backed up by simple yet effective algebraic relationships which take into consideration several influences, leveraging pre-existing scales: prediction of alertness, cumulative fatigue, duty time and sub-standard sleep quality.
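
The paper's model itself is not reproduced here; as an illustration of the kind of algebraic bio-mathematical structure such models typically build on, the sketch below combines a homeostatic sleep-pressure process with a circadian oscillation (the classic two-process form). All time constants, amplitudes and the duty schedule are assumptions.

```python
# Classic two-process sketch (homeostatic pressure S + circadian rhythm C);
# NOT the paper's model. All constants and the duty schedule are assumed.
import math

def circadian(clock_h, acrophase=18.0, amp=1.0):
    return amp * math.cos(2 * math.pi * (clock_h - acrophase) / 24.0)

def simulate(schedule, dt=0.5):
    """schedule: list of (duration_h, awake?) blocks, starting at 07:00."""
    S, t, trace = 0.3, 7.0, []
    for dur, awake in schedule:
        for _ in range(int(dur / dt)):
            if awake:
                S += dt * (1.0 - S) / 18.0   # pressure builds while awake
            else:
                S += dt * (0.0 - S) / 4.2    # and recovers during sleep
            t += dt
            trace.append((t, circadian(t % 24.0) - 1.5 * S))
    return trace

duty = [(16, True), (8, False), (20, True)]  # a long duty day after a normal one
t_low, alert_low = min(simulate(duty), key=lambda p: p[1])
print(f"lowest predicted alertness {alert_low:.2f} at elapsed hour {t_low - 7.0:.1f}")
```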

Caldwell, J. A. (2005). Fatigue in aviation. Travel Medicine and Infectious Disease, 3(2), 85-96.

Federal Aviation Administration; 14 CFR Parts 117, 119, and 121; Docket No.: FAA-2009-1093; Amdt. Nos. 117-1, 119-16, 121-357; RIN 2120–AJ58; Flightcrew Member Duty and Rest Requirements. https://www.faa.gov/regulations_policies/rulemaking/recently_published/media/2120-AJ58-FinalRule.pdf, accessed January 2016.

Belding, H. S., & Hatch, T. F. (1955). Index for evaluating heat stress in terms of resulting physiological strains. Heating, Piping and Air Conditioning, 27(8), 129-36.

Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology (Vol. 52, pp. 139-183). North-Holland.

Gander, P., Hartley, L., Powell, D., Cabon, P., Hitchcock, E., Mills, A., & Popkin, S. (2011). Fatigue risk management: Organizational factors at the regulatory and industry/company level. Accident Analysis & Prevention, 43(2), 573-590.

Morris, M. B., Howland, J. P., Amaddio, K. M., & Gunzelmann, G. (2020). Aircrew fatigue perceptions, fatigue mitigation strategies, and circadian typology. Aerospace Medicine and Human Performance, 91(4), 363-368.

15:00
Exact and asymptotic results for connected (r,2)-out-of-(m,n):F systems
PRESENTER: Christian Tanguy

ABSTRACT. The interest in (r,s)-out-of-(m,n):F systems has never dwindled since their introduction by Salvia and Lasher (1990), because of the ever increasing number of real-life applications: reliability of electronic devices, X-ray and disease diagnostic, security of communications and property, pattern search systems, etc.

Computing the exact availability of such systems, addressed by numerous authors, has been, in the general case, deemed a numerically complex task by Nashwan (2018) and Zhao et al. (2011). Only a few configurations have allowed simple solutions.

The special case of (2,2)-out-of-(m,n):F systems has been studied in detail by Malinowski and Tanguy (2022), in which exact solutions were provided for 2 <= m <= 10 and arbitrary n through recurrence relations, the order of which increases drastically with m. Based on these results, an analytical, asymptotic expansion was given for large m and n, which was shown to be in excellent agreement for m as low as 4.

In this paper, we generalize our previous work to (r,2)-out-of-(m,n):F systems. We have obtained exact expressions of the availability for 3 <= r <= 8 and several values of m, while n remains arbitrary. An analytical, asymptotic expansion has again been inferred for arbitrary (and large) m and n, which allows quick numerical evaluations. We have also calculated the Mean Time To Failure of such systems, assuming that all elements are identical and obey an exponential lifetime distribution. This approach and these results could appeal to reliability practitioners in various fields, as well as theorists.
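
For readers who want a quick numerical cross-check of such exact or asymptotic formulas, a brute-force Monte Carlo estimator is sketched below, interpreting failure as an all-failed block of r consecutive rows and 2 consecutive columns (the usual rectangular-grid reading of a connected (r,2)-out-of-(m,n):F system). The component availability p and dimensions are arbitrary example values.

```python
# Brute-force Monte Carlo cross-check for a connected (r,2)-out-of-(m,n):F
# system: the system fails iff some block of r consecutive rows and 2
# consecutive columns is entirely failed. Example values only.
import numpy as np

rng = np.random.default_rng(42)

def system_up(failed, r):
    m, n = failed.shape
    for i in range(m - r + 1):
        for j in range(n - 1):
            if failed[i:i + r, j:j + 2].all():
                return False
    return True

def availability_mc(m, n, r, p, trials=20_000):
    ups = 0
    for _ in range(trials):
        failed = rng.random((m, n)) > p      # each component fails w.p. 1 - p
        ups += system_up(failed, r)
    return ups / trials

print(f"A(6,8; r=3, p=0.9) ~ {availability_mc(6, 8, 3, 0.9):.4f}")
```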

15:15
On the Use of Control Theory to Enhance Systems Towards Resilience
PRESENTER: Tobias Demmer

ABSTRACT. This contribution explores the potential of control theory for improving system resilience. It is essential that critical systems be able to withstand adversarial attacks and other forms of disruption. We discuss how this can be achieved through the use of control theory to allocate resources. In this work, control theory – as an established mathematical framework – is used to analyse the behaviour of a generic system in order to ensure resilience. Finally, this contribution provides an example of a resilient system design that uses control theory; we discuss the advantages and disadvantages of the approach and how it may be implemented to achieve optimal system resilience.
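
As a minimal illustration of using feedback control to restore a disturbed system, the sketch below applies an assumed stabilizing state-feedback gain to a toy discrete-time plant, injects a disruption, and reports how quickly the controlled quantity re-enters a tolerance band. The plant, gain and thresholds are all invented for the example and are unrelated to the paper's design.

```python
# Toy state-feedback example: a disturbance hits the controlled quantity and
# the feedback law drives it back into a tolerance band. All values assumed.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # simple discrete-time plant
B = np.array([[0.0],
              [0.1]])
K = np.array([[8.0, 5.0]])          # assumed stabilizing feedback gain

x = np.zeros((2, 1))
recovered_at = None
for k in range(200):
    if k == 20:
        x[0, 0] += 1.0              # sudden disruption
    u = -K @ x                      # feedback reallocates control effort
    x = A @ x + B @ u
    if k > 20 and recovered_at is None and abs(x[0, 0]) < 0.05:
        recovered_at = k            # first re-entry into the tolerance band

steps = recovered_at - 20 if recovered_at is not None else None
print("recovered after", steps, "steps")
```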

15:30
Inferring Piping and Instrumentation Diagrams from Fault Tree Models
PRESENTER: Matthias Volk

ABSTRACT. Piping and Instrumentation Diagrams (P&IDs) are a graphical representation of the design of industrial plants [3]. They describe, among other things, the mechanical components, the process control instrumentation and the process piping. The use of P&IDs increases consistency between systems and allows early detection of design errors.

When performing a probabilistic safety assessment, P&IDs are often transformed into fault trees (FTs) - a graphical model that provides a comprehensive understanding of risks and mitigation strategies in the modelled system [2]. Typically, reliability experts build and update FTs manually. This process combines the information from system engineers, including system P&IDs, with failure logic, reliability data and safety goals. An alternative to this manual process is an automatic generation of FTs by tools like KB3 or RiskSpectrum ModelBuilder. Failure logic, reliability data and safety goals are encapsulated in a Knowledge Base that offers users a graphical language corresponding to P&IDs. Engineers draw safety-relevant parts of P&IDs in the tool. The reliability information in the knowledge base makes it then possible to generate FTs automatically. This approach maintains the connection between the system description and the safety analysis.

This work investigates the possibilities and limitations of inferring the safety-relevant part of P&IDs from manually built FTs, given definitions of P&ID components with their reliability information in a Knowledge Base in RiskSpectrum ModelBuilder. The goal is to minimise the expert intervention needed to re-establish the connection between the safety analysis based on FTs and the system design. The resulting P&IDs can then lead to improved consistency between system models, identification of design flaws and overall more efficient and safer operation of industrial systems such as nuclear power plants.

To achieve this, we use and adapt a formalisation of P&IDs proposed by Bayer and Sinha [1], who formalise P&IDs as graphs in which the vertices represent P&ID components and the edges represent pipes. Based on this formalisation, we develop a model-to-model transformation from FTs to P&IDs. This transformation starts by creating the P&ID components (including label and type) from the labels of the basic events in the FTs. We also exploit systematic naming schemata used in large studies, especially nuclear power plant PSAs.

In the second step, the topology of the P&ID – including the pipe connections – is inferred from the structure of the FTs and their minimal cut sets. We implement this complete approach in a prototype Python tool.
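
A toy sketch of the two steps just described, using a hypothetical naming schema 'SYSTEM-TYPE-ID-FAILUREMODE' for basic events; the edge-inference rule here (components sharing a minimal cut set are placed on the same line) is a deliberately crude stand-in for the paper's topology inference.

```python
# Toy version of the two steps: (1) typed components from basic-event labels,
# (2) naive pipe inference from minimal cut sets. The naming schema
# 'SYSTEM-TYPE-ID-FAILUREMODE' and the edge rule are illustrative only.
basic_events = ["CCW-PUMP-01-FTR", "CCW-VLV-02-FTC",
                "CCW-HX-01-PLUG", "CCW-VLV-03-FTC"]
min_cut_sets = [{"CCW-PUMP-01-FTR", "CCW-VLV-02-FTC"},
                {"CCW-HX-01-PLUG", "CCW-VLV-03-FTC"}]

def component(be):
    sys, ctype, cid, _mode = be.split("-")
    return f"{sys}-{ctype}-{cid}", ctype

# Step 1: vertices of the P&ID graph, with component type.
nodes = dict(component(be) for be in basic_events)

# Step 2: edges (pipes) -- crude rule: components sharing a minimal cut set
# are assumed to sit on the same process line.
edges = set()
for cs in min_cut_sets:
    comps = sorted({component(be)[0] for be in cs})
    edges.update(zip(comps, comps[1:]))

print("components:", nodes)
print("pipes:", sorted(edges))
```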

References:
[1] J. Bayer and A. Sinha: “Graph-Based Manipulation Rules for Piping and Instrumentation Diagrams”. OSF Preprints. (2020) https://doi.org/10.31219/osf.io/dynqj
[2] E. Ruijters and M. Stoelinga: “Fault tree analysis: A survey of the state-of-the-art in modeling, analysis and tools”. Computer Science Review, 15, pp. 29-62 (2015) https://doi.org/10.1016/j.cosrev.2015.03.001
[3] M. Toghraei: “Piping and Instrumentation Diagram Development”. John Wiley & Sons. (2019) https://doi.org/10.1002/9781119329503

15:45
A Formal Verification Framework for Model Checking Safety Requirements of a Simulink Landing Gear Case Study
PRESENTER: Hannes Stützer

ABSTRACT. The demand for computer-aided system verification approaches increases with rising system complexity. Models become too complex to be verified easily by human system and safety engineers. Integrating formal verification approaches, e.g., model checking, into the typical engineering workflow could therefore help keep up with rising system complexity. For this integration to succeed, however, the analysis of a modeled system must cover widely applied design specification languages, e.g., Matlab Simulink or Modelica.

Unfortunately, there exist several challenges to integrating formal, automated verification techniques into the typical system engineering workflow. One is the scalability of the underlying tools, i.e., what model size a formal verification tool can handle. Another is the applicability of the verification tools for non-experts. These tools often provide a scientific interface requiring a formal input language and several verification parameters. In general, the entry hurdle of providing the system model in a particular formal language and finding the optimal verification setup is too high. From an engineer's perspective, being able to use a typical modeling language, e.g., Matlab Simulink, Modelica, etc., would be more beneficial.

This paper provides an approach to tackling the challenge of applicability. To this end, we modeled a widely applied case study of an aircraft landing gear system [1] in Matlab Simulink and provide a complete and automatic translation into the System Analysis and Modeling Language (SAML) [2]. SAML is a formal intermediate language that can target several qualitative and quantitative model checking tools. The set of Matlab Simulink elements we provide a translation for is derived from the model itself. To prove that the translation preserves the model's semantics, we also define a formal representation of the modeled Simulink elements. The translation enables us to apply several model checking tools for formal verification. This is a particular advantage since it goes beyond the verification capabilities of Matlab Simulink's model checker, the Simulink Design Verifier (SLDV), which imposes several restrictions on the system model (e.g., the integration of real numbers).

In addition to the transformation, we verify the given safety specifications by model checking the translated Simulink system model. Thereby, we show that a realistically sized system model is verifiable with available model checking tools.
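
For readers unfamiliar with what the model checkers do downstream of such a translation, the sketch below checks a safety property ("a fault state is never reached") on a tiny hand-written transition system by explicit-state reachability. The valve-controller model is hypothetical and unrelated to the landing gear case study or to SAML.

```python
# Explicit-state reachability check of a safety property ("never reach a
# fault state") on a hypothetical valve-controller transition system.
states = {
    "closed":  ["opening"],
    "opening": ["open", "fault"],    # opening may fail
    "open":    ["closed"],
    "fault":   ["fault"],            # absorbing fault state
}
bad = {"fault"}

def check_safety(init, trans, bad):
    """The property holds iff no bad state is reachable from init."""
    seen, stack = {init}, [init]
    while stack:
        s = stack.pop()
        if s in bad:
            return False, s
        for t in trans[s]:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return True, None

ok, witness = check_safety("closed", states, bad)
print("safe" if ok else f"unsafe: state '{witness}' is reachable")
```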

[1] https://doi.org/10.1007/978-3-319-07512-9_1 [2] https://doi.org/10.1109/HASE.2010.24

16:00
Failure domain analysis using Sliced-Normal distributions

ABSTRACT. Sliced-normal (SN) distributions enable characterization of parameters exhibiting complex dependencies with minimal modeling effort. We leverage the semialgebraic nature of SN distributions to identify the most likely points of failure (MLPs) corresponding to a given failure domain. When this domain is semialgebraic, Sum of Squares (SOS) optimization is used to guarantee that no MLPs are missed within a region of interest. The MLPs not only enable the identification of all the critical points of failure, but also the efficient estimation of failure probabilities using Importance Sampling (IS). The IS density is constructed as a Gaussian Mixture (GM) model with means at the MLPs and covariances equal to the weighted empirical covariance of sample sets drawn in the vicinity of the MLPs.
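
A minimal sketch of the importance sampling step, with a plain bivariate Gaussian standing in for the Sliced-Normal nominal density and a single-component mixture centred at the (here, analytically known) most likely point of failure of a linear limit state. The limit state, covariances and sample size are illustrative assumptions.

```python
# Importance sampling sketch: plain bivariate Gaussian standing in for the
# Sliced-Normal nominal density; single-component proposal at the MLP of a
# linear limit state g(x) = 5 - x1 - x2. Illustrative values throughout.
import math
import numpy as np

rng = np.random.default_rng(7)

def gauss_pdf(x, mean, cov):
    d = x - mean
    inv, det = np.linalg.inv(cov), np.linalg.det(cov)
    expo = -0.5 * np.einsum("ni,ij,nj->n", d, inv, d)
    return np.exp(expo) / (2.0 * math.pi * math.sqrt(det))

g = lambda x: 5.0 - x[:, 0] - x[:, 1]           # failure domain: g(x) <= 0
mean0, cov0 = np.zeros(2), np.eye(2)            # nominal density (stand-in)
mlp, cov_is = np.array([2.5, 2.5]), np.eye(2)   # MLP of this g under the nominal

n = 20_000
xs = rng.multivariate_normal(mlp, cov_is, size=n)
w = gauss_pdf(xs, mean0, cov0) / gauss_pdf(xs, mlp, cov_is)   # IS weights
pf = float(np.mean(w * (g(xs) <= 0.0)))

exact = 0.5 * math.erfc(2.5)                    # since x1 + x2 ~ N(0, 2)
print(f"IS estimate {pf:.3e} vs exact {exact:.3e}")
```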

16:15
Safety of Complex Technical System Impacted by Its Operation Process

ABSTRACT. An innovative approach to the safety analysis of multistate ageing systems impacted by their operation processes is proposed. A safety function and other safety indicators are defined for a complex multistate ageing system that changes its safety structure and its components' safety parameters during operation; they are determined under the assumption that the components have piecewise exponential safety functions. The results are applied to examine the safety of port and maritime transportation systems.

16:15-16:45Coffee Break
16:45-18:00 Session 6A: Maintenance Modelling and applications III
Location: Room 100/3023
16:45
A Study on the Establishment of a Virtual System for Efficient Operation of an OWC Wave Power Generator
PRESENTER: Jung Hee Lee

ABSTRACT. This study was conducted to establish a system for the efficient operation of OWC-type wave power generators in actual operation by applying digital twin technology. The OWC wave power generator is a system that generates power by rotating a turbine with the pressure difference between the water column air pressure and the external air pressure. The pressure difference is generated as the air in the water column chamber is repeatedly compressed and expanded by changes in the height of the oscillating water column driven by ocean waves. In addition, the generator controls the rotational speed of the turbine using a power converter to maximize efficiency. Ocean waves are characterized by high variability. If the wave height suddenly increases significantly, excessive pressure in the water column may cause fatal damage to the turbine. When pressure above the critical point acts on the water column chamber, the flow path is changed by the bypass valve and air is discharged to the outside to reduce the water column pressure. If the pressure is slightly below the critical point, the flow rate introduced into the turbine is reduced by the flow control valve, in order to limit damage to the turbine and, at the same time, increase its efficiency. To adjust the water column pressure by applying digital twin technology, it is necessary to develop a system that can control the flow control valve in real time before excessive pressure is applied. In this study, a multidisciplinary virtual integrated flow-analysis physical model, which can control the real model in real time, was established. The real-time integrated flow-analysis physical model was constructed through reasonable assumptions from the energy conservation equation, setting the control volume to include the oscillating water column chamber, turbine, and generator. The turbine flow rate and efficiency required for the physical-model analysis were obtained as functions of the flow control valve angle, fitted to flow analysis results. To verify the accuracy of the analysis results using actual operating data, the two signals measured in the time domain were compared using a correlation function. The pressure difference between the inside and outside of the turbine was found to be 98% similar, and the rotational speed of the turbine was confirmed to be 93.5% similar. In addition, based on 30 minutes of motion data from actual sensors, the height of the water column induced by waves was calculated through deep learning; so far, we have been able to secure about 93% accuracy. The electrical torque as a function of the rotational speed of the turbine was constructed using a Kriging metamodel. The rotational speed of the turbine was obtained using the mechanical and electrical torques, and the water column pressure and turbine rotational speed were obtained by iterating the integrated physical model until convergence.
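
The abstract does not specify which correlation function was used for the time-domain comparison; assuming a plain Pearson coefficient, the similarity figures quoted above could be computed along the following lines (signal names are hypothetical).

    import numpy as np

    def similarity_percent(measured, simulated):
        """Pearson correlation of two time-domain signals, as a percentage."""
        r = np.corrcoef(np.asarray(measured, float),
                        np.asarray(simulated, float))[0, 1]
        return 100.0 * r

    # e.g. similarity_percent(dp_measured, dp_model) -> ~98 for the turbine
    # pressure difference, ~93.5 for the rotational speed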

17:00
Optimisation of maintenance by PDMPs under conditions of population heterogeneity.

ABSTRACT. The relevance of a maintenance decision depends on the ability of the model to estimate the current health of a system and to predict its evolution from the available information. We conduct this work in that context, where, in addition, access to degradation data remains very limited. Physics-based approaches offer an alternative to the scarcity of data. However, the domains of validity of these models generally remain confined to very specific regions, which can be far from field reality [1]. Moreover, degradation phenomena can present highly heterogeneous behaviors, leading to inadequate maintenance decisions when these are optimized on population-average degradation performance. We propose here to study the potential of PDMPs [2] applied to a condition-based maintenance policy when the available sample presents some disparities in behavior. Our approach will be applied in the context of fatigue cracking, on data from the literature [3]. The first phase of our demonstration consists in modeling by PDMP-type approaches, based on identifying the limits of validity of the physical models directly from the data [3], while highlighting the heterogeneous crack-evolution behaviors. Through the PDMP, the dynamics of fatigue crack growth is well captured, and the transition between the two propagation regimes (stable and unstable) is well identified. Moreover, a robust criterion based on the mean crack growth rates in the first propagation regime has been proposed to distinguish between the two identified populations. After having applied a first classification algorithm, we will define and evaluate a condition-based maintenance strategy tailored to each population. Numerical analyses will be conducted to justify the potential of our approach compared to more classical approaches.

[1] Pugno, Nicola, et al. "A generalized Paris’ law for fatigue crack growth." Journal of the Mechanics and Physics of Solids 54.7 (2006): 1333-1349. [2] Davis, M. H. A. Markov Models & Optimization. Vol. 49. CRC Press, 1993. [3] Virkler, Dennis Andrew, B. M. Hillberry, and Prem K. Goel. "The statistical nature of fatigue crack propagation." (1979): 148-153.
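
For readers unfamiliar with the physical model behind references [1] and [3], a minimal sketch of heterogeneous Paris-law crack growth follows; all parameters are hypothetical, and the classification step simply thresholds the mean growth rate, echoing the criterion described in the abstract.

    import numpy as np

    def grow_crack(a0, C, m, d_sigma=60.0, da=1e-4, a_crit=0.05):
        """Integrate the Paris law da/dN = C*(dK)^m, dK = d_sigma*sqrt(pi*a),
        until a_crit is reached; returns (cycles, crack sizes)."""
        a, N, cycles, sizes = a0, 0.0, [0.0], [a0]
        while a < a_crit:
            dK = d_sigma * np.sqrt(np.pi * a)
            N += da / (C * dK**m)          # cycles spent on increment da
            a += da
            cycles.append(N); sizes.append(a)
        return np.array(cycles), np.array(sizes)

    rng = np.random.default_rng(0)
    # two hypothetical sub-populations differing in the Paris coefficient C
    Cs = np.concatenate([rng.normal(2e-11, 2e-12, 10),
                         rng.normal(4e-11, 2e-12, 10)])
    rates = []
    for C in Cs:
        N, a = grow_crack(9e-3, C, 2.9)
        rates.append(np.mean(np.diff(a) / np.diff(N)))  # crude first-regime rate
    # a simple threshold on `rates` then separates the two populations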

17:15
Condition-based maintenance optimization of a large-scale system with a POMDP formulation: evaluation of a heuristic policy

ABSTRACT. This study investigates the evaluation of a heuristic condition-based maintenance policy applied to a distributed multi-unit system. In particular, the system is composed of many units which function and degrade independently. We call it distributed since the system's total output is the sum of the individual outputs of the units, and the failure of one unit has no impact on the functioning state of the others (i.e., no series/parallel structure). Taking into account the large-scale nature of the problem, which involves a large number of units, is crucial because a good maintenance policy should coordinate its decisions at the scale of the system. Said differently, maintenance decisions cannot be taken independently unit by unit, for the following reasons. First, the maintenance resource is limited and should be wisely allocated across the system. Second, as deploying a maintenance crew on site is expensive, a good maintenance policy should also try to limit the number of deployments and group maintenance interventions. A CBM policy relies on condition monitoring information, which we assume to be imperfect. We model this imperfection by assuming that remote sensors inaccurately estimate the true degradation state of the units. The decision-maker should then choose, at each time step, whether a maintenance operation or an inspection must be performed, based on the information collected. We formulate the problem as a partially observable Markov decision process (POMDP). However, due to the curse of dimensionality, it cannot be solved via well-known approximate solution techniques (like SARSOP or PERSEUS). We therefore propose a heuristic algorithm based on a decomposition of the problem. The contribution of this work is a framework to evaluate and validate this algorithm. We first validate the approach on a realistic-sized instance and show that the obtained policy has the expected properties (in terms of structure and value of information for different qualities of condition monitoring). Second, we validate the design of our procedure by showing that, in a variety of scenarios, our heuristic performs better than its simpler (and more naive) alternatives.
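
The key ingredient of any POMDP-based CBM policy is the belief update under imperfect monitoring; a minimal discrete Bayes filter for a single unit is sketched below, with hypothetical transition and observation matrices.

    import numpy as np

    def belief_update(b, P, O, obs):
        """One POMDP belief update: predict with the transition matrix P,
        then correct with the observation likelihood column O[:, obs]."""
        b_new = (b @ P) * O[:, obs]
        return b_new / b_new.sum()

    # hypothetical 3-state unit: good / degraded / failed
    P = np.array([[0.90, 0.09, 0.01],
                  [0.00, 0.85, 0.15],
                  [0.00, 0.00, 1.00]])
    O = np.array([[0.8, 0.2, 0.0],    # rows: true state, cols: sensor reading
                  [0.3, 0.6, 0.1],
                  [0.0, 0.2, 0.8]])
    b = np.array([1.0, 0.0, 0.0])
    b = belief_update(b, P, O, obs=1)  # sensor reports "degraded"

In the heuristic described above, such per-unit beliefs would then be coordinated across the system; the decomposition itself is not reproduced here.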

17:30
Data-Driven Condition-Based Maintenance Optimization: Application of Bootstrap in Conservative Maintenance Decisions with Limited Data
PRESENTER: Yue Cai

ABSTRACT. Unexpected failures of operating systems can result in severe consequences and huge economic losses. To prevent such failures, preventive maintenance based on continuously collected condition data can be performed, and conservative preventive maintenance is typically considered to further decrease the probability that unexpected failures occur. We study a single-unit system with an unknown non-decreasing deterioration process, for which a conservative preventive maintenance threshold is determined through condition-data analysis. A bootstrapping method is applied to create additional condition data and enlarge the decision space for the preventive maintenance threshold; this is designed to counteract the overly optimistic maintenance policies that result from a purely data-driven approach in small-dataset settings. We also introduce a sigmoid function to recognize whether a failure occurs when creating a run-to-failure trajectory of the system. Numerical results for a gamma deterioration process show that the maintenance threshold resulting from the data-driven approach converges to the optimal threshold faster when using augmented data combining original and additional runs-to-failure. If only a few runs-to-failure are available, it is always beneficial to generate synthetic runs-to-failure, whereas if enough runs-to-failure data are already available, there is no need to generate synthetic data.
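
A minimal sketch of the data-augmentation idea under a gamma-process assumption follows; the threshold optimization and the sigmoid failure-recognition step are omitted, and all parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    FAIL_LEVEL = 10.0   # hypothetical failure threshold

    def run_to_failure(shape=0.5, scale=0.4):
        """One gamma-process degradation path sampled until failure."""
        x, path = 0.0, [0.0]
        while x < FAIL_LEVEL:
            x += rng.gamma(shape, scale)   # per-period increment
            path.append(x)
        return np.array(path)

    observed = [run_to_failure() for _ in range(5)]   # the few real runs

    def bootstrap_run(runs):
        """One synthetic run-to-failure from resampled observed increments."""
        incs = np.concatenate([np.diff(r) for r in runs])
        x, path = 0.0, [0.0]
        while x < FAIL_LEVEL:
            x += rng.choice(incs)
            path.append(x)
        return np.array(path)

    synthetic = [bootstrap_run(observed) for _ in range(50)]  # augmented data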

17:45
Preventive Risk-based Maintenance Scheduling using Discrete-Time Markov Chain Models
PRESENTER: Joachim Grimstad

ABSTRACT. The seemingly exponential increase in technological advances and increased globalization force companies to optimize their maintenance and production activities to remain competitive. This paper proposes a novel risk-based maintenance (RBM) and production decision-making support methodology for manufacturing assets, emphasizing just-in-time manufacturing. The proposed methodology utilizes historical machine log data to construct a discrete-time Markov chain (DTMC) model. The model is then used to evaluate production risk and to consider preventive maintenance during production setup. Probabilistic model checking is applied for the DTMC evaluation. The applicability of the developed method is demonstrated in a real-life case study, in which production logs from a semi-automated cutting and crimping machine are evaluated.
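
Estimating the DTMC from machine logs is the natural first step of such a methodology; a minimal maximum-likelihood sketch with a hypothetical log follows (the paper itself evaluates the DTMC with probabilistic model checking rather than by direct matrix manipulation).

    import numpy as np

    def estimate_dtmc(states, n_states):
        """Maximum-likelihood transition matrix from a logged state sequence."""
        counts = np.zeros((n_states, n_states))
        for s, t in zip(states[:-1], states[1:]):
            counts[s, t] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    # hypothetical log: 0 = producing, 1 = minor stoppage, 2 = failure
    log = [0, 0, 0, 1, 0, 0, 2, 0, 0, 1, 1, 0, 0, 0, 2, 0]
    P = estimate_dtmc(log, 3)
    # probability of being in the failure state after 8 steps of production
    p_fail_8 = np.linalg.matrix_power(P, 8)[0, 2]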

18:00
Joint Optimization of Reallocation and Maintenance for 1-out-of-2 Pairs: G Balanced System
PRESENTER: Xiaofei Chai

ABSTRACT. In a system consisting of multiple functionally exchangeable components, differences in component degradation levels can significantly affect the system's performance. Dynamic reallocation of these components can improve performance and prolong the lifetime of the system. In this study, we quantify the benefit of incorporating condition-based reallocation into a condition-based maintenance framework for a 1-out-of-2 pairs: G balanced system, in which the system functions if at least one pair has two working components. Component degradation is modeled as a gamma process and is inspected periodically. A Markov decision process model is developed to determine the optimal joint reallocation and replacement strategy that minimizes the long-run total expected cost. A numerical study is conducted to illustrate the cost savings and unit-utilization effectiveness of the proposed joint policy. Key insights into the structure of the optimal policy are obtained, including the fact that reallocating two units, particularly when the degradation level of the forced-down units is relatively low, is often more cost-efficient than replacing failed units. Policy comparisons show that the proposed joint reallocation and replacement policy can significantly outperform other policies, including reallocation-only and replacement-only policies.
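
A generic value-iteration routine of the kind such an MDP would be solved with is sketched below; the discounted formulation, the two-state example and its costs are hypothetical stand-ins for the paper's long-run average-cost model with reallocation actions.

    import numpy as np

    def value_iteration(P, cost, gamma=0.95, tol=1e-8):
        """P[a] is the transition matrix of action a, cost[a, s] its immediate
        cost; returns the optimal value function and policy."""
        V = np.zeros(P.shape[1])
        while True:
            Q = cost + gamma * P @ V            # Q[a, s]
            V_new, policy = Q.min(axis=0), Q.argmin(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, policy
            V = V_new

    # hypothetical 2-state unit (ok / failed), actions: keep or replace
    P = np.array([[[0.9, 0.1], [0.0, 1.0]],    # keep
                  [[1.0, 0.0], [1.0, 0.0]]])   # replace -> back to "ok"
    cost = np.array([[0.0, 10.0],              # downtime cost when failed
                     [5.0, 5.0]])              # replacement cost
    V, policy = value_iteration(P, cost)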

16:45-18:00 Session 6B: Human Factors and Human Reliability III

Human Factors and Human Reliability III

16:45
Human Performance Improvement Tools and Situation Awareness in Nuclear Power Plant Outage Work

ABSTRACT. Carrying out nuclear power plant (NPP) maintenance work is associated with several risks that must be perceived, understood, and attended to by maintenance staff for safety to be upheld. A number of human performance tools (HPTs) are available to support the safe execution of NPP maintenance work [1]. Several can be conceived as “situation awareness” tools, as they are intended to help maintenance workers perceive and form an accurate understanding of their task and surroundings and the associated risks [2]. Yet, despite the extensive use of HPTs in NPP maintenance work, their benefits for improving situation awareness and, in turn, the safe execution of work remain unclear [1].

Particularly problematic is the fact that “procedural use and adherence” is regarded as a primary HPT in NPP maintenance work [3], yet the complexity and dynamic nature of this work mean that procedures are unlikely to ever be fully complete and error-free [4]. Indeed, research finds that the use of and adherence to incomplete or erroneous work procedures contributes to a large proportion of the human errors made during maintenance periods [5]. The authors of [5] posit this is because deficient procedures lead to errors in understanding what task and situational information is important to attend to and why.

In this paper, we draw on Endsley’s taxonomy of situation awareness errors [6] to outline a conceptual model of how the HPTs used in NPP maintenance work should influence situation awareness by reducing the likelihood of making different types of situation awareness errors at different stages of work execution. In developing our conceptual model, we consider how certain HPTs could help to detect errors in work procedures, and thus indicate when using the primary HPT of procedural use and adherence should be re-evaluated. However, we also question how less-than-adequate situational awareness could negatively influence the use of HPTs to form an accurate understanding of one’s task and surroundings or identify risks. Other factors that could influence the extent to which different HPTs are applied in NPP maintenance work are also identified and discussed.

References 1. Oedewald, P., et al., Human performance tools in nuclear power plant maintenance activities: Final report of HUMAX project. 2015. 2. U.S. Department of Energy, Human performance improvement handbook: Human performance tools for individuals, work teams and management. 2009, U.S. DOE: Washington, DC. 3. Gotcheva, N., et al., Final report of MoReMO 2011-2012. Modelling resilience for maintenance and outage. 2013, Nordisk Kernesikkerhedsforskning. 4. Bourrier, M., Organizing maintenance work at two American nuclear power plants. Journal of contingencies and crisis management, 1996. 4(2): p. 104-112. 5. Solberg, E., E. Nystad, and R. McDonald, Situation awareness in outage work–A study of events occurring in US nuclear power plants between 2016 and 2020. Safety Science, 2023. 158: p. 105965. 6. Endsley, M.R., A taxonomy of situation awareness errors, in Human factors in aviation operations, R. Fuller, N. Johnston, and N. McDonald, Editors. 1995, Ashgate: Aldershot, England. p. 287-292.

17:00
Which human reliability analysis methods are most used in industrial practice? A preliminary systematic review
PRESENTER: Caroline Morais

ABSTRACT. Human reliability analysis (HRA) is the most acknowledged methodology for assessing the probability of human error as a function of the task and its contextual factors. Many methods are available, but exploratory research shows that only a few of them are frequently cited in research papers or accepted by safety regulators. This paper uses a systematic approach to understand which HRA methods are the most cited by country and by industry sector. The research methodology considers not only research papers but also regulations and consultancy companies' portfolios. The aim is to understand to what extent industrial practice follows the pattern observed in academia. The results discuss whether the level of HRA practice per country and per industrial sector is influenced by regulations.

17:15
Evaluation of human performance in the operation of a UAV in a joint operation scenario with troops on the ground

ABSTRACT. Unmanned Aerial Vehicles (UAVs) began to be applied more actively in the defense environment after the attack on the twin towers in 2001, notably in the fight against terrorism (SINGH, 2014). This environment was conducive to developing the Unmanned Aircraft System (UAS), laying the basis for the concept of so-called mosaic warfare (HAYSTEAD, 2020). This concept is the main guide for the design and future use of UAS, determining the design of user interfaces in Ground Control Stations (GCS).

Due to the change in the piloting paradigm caused by the increase in operational capacity and crew safety, questions arise related to the suitability of the Human-Machine Interface (HMI) for increasing situational awareness. The objective of this research is to investigate questions related to the pilot's workload and the possibility of complete operation of a UAS by a single individual. To answer these questions, an HMI prototype was built that emulates the operation of a UAS, in which scenarios and tasks are defined (HOBBS, 2016).

The physical simulation environment built has manual operating interfaces such as touch monitors, a HOTAS piloting system, and voice command for UAV navigation (CONTRERAS, 2020). A simulated scenario of operation in a combat environment is defined for the investigation, with two different variations and three modes of execution: 1) complete system with one operator using the manual interfaces; 2) complete system with one operator using voice command; 3) complete system with piloting assistance from another operator for the navigation part. Starting from these definitions, each operator executes the two variations during the experiment in two different execution modes.

To measure the pilot's performance during the execution of the mission, it is fundamental to evaluate the effects of information overload through the pilot's workload. For this, it is necessary to analyze physiological responses, making it possible to monitor changes in these parameters and then identify different behavioral responses (ALAIMO, 2018). Based on this process, it is expected that human performance in critical operating conditions can be evaluated, generating better HMI solutions to reduce workload and increase situational awareness, aiming at UAV operation by a single pilot.

References

ALAIMO, Andrea; ESPOSITO, Antonio; ORLANDO, Calogero. Cockpit pilot warning system: a preliminary study. In: 2018 IEEE 4th International Forum on Research and Technology for Society and Industry (RTSI). IEEE, 2018. p. 1-4.

CONTRERAS, R.; Ayala, A.; Cruz, F. Unmanned Aerial Vehicle Control through Domain-Based Automatic Speech Recognition. Computers 2020, 9, 75.

SINGH, R. Defensive Liberal wars: the global war on terror and the return of illiberalism in American foreign policy. Revista de sociologia e politica, 2014, V.23, n 53, p99-120. DOI 10.1590/1678-987315235306

HAYSTEAD, J.: DARPA's Mosaic: Moving to address the ever-more-rapidly-paced. Journal of electronic defense, Feb 2020, p. 21-25.

HOBBS A. and LYALL B. Human factors guidelines for remotely piloted aircraft system remote pilot stations. NASA’s Unmanned Aircraft Systems, 07 2016.

17:30
Combining control room operator task load analysis and subjective workload assessment

ABSTRACT. Extended possibilities for remote control provide opportunities to accelerate the centralisation of control room tasks. In recent years, several industrial actors have expressed their interest in utilising the opportunities provided by remote control to extend existing control rooms and operation centres to take on a wider portfolio of both well-known and new control tasks. Concurrently, the transition from fossil to green energy production entails increasing complexity in the energy system, with associated requirements for process and system understanding by the operators controlling production.

This study presents and compares results from task load and subjective workload assessment in an offshore petroleum control room and a hydro and wind power electricity production control room, combining methods for subjective and analytical estimation of task load. The petroleum control room had to decide whether the staffing was sufficient to keep control of topside and subsea production units. For the electricity production control room, the staffing consideration was similar, but in addition they evaluated an extension of their responsibility for controlling additional wind farms and production of hydrogen from electricity.

Using the IO MTO mapping technique [1], all work tasks performed during a "normal" working day were described for both the petroleum and the electricity production control rooms, encompassing all operator positions and shifts. In the period August to November 2022, a total of approximately 500 work tasks were mapped and described by 25 different parameters, with time estimates and task type as the most central parameters. For subjective workload assessment, the dimensions of the Subjective Workload Assessment Technique (SWAT) [2] were translated into Norwegian and used together with scenario descriptions. Further, the time-load dimension of SWAT was used to establish a time-load baseline throughout the day.

Results from the task load estimates for both control rooms are coherent and indicate that the amount of planned work should occupy no more than 75% of the time available on a shift, which is in line with observations from previous studies and the literature. The SWAT measures added insight regarding what types of new tasks the control rooms could undertake, depending on the control room staff's composition of competence and experience. The results of the studies were used as a judgment criterion to decide whether it was necessary to increase the control room staffing.

Findings from the study are compared to recommendations from the literature, and the validity of the specific method used is discussed. Further, the application of the method is discussed against the use of more renowned methods for workload measurement.

References [1] Drøivoldsmo, A., Nystad, E. and Lunde-Hanssen, L. 2022. “Analytical estimation of operator workload in control rooms: How much time should be available for surveillance and control?”. Paper presented at ESREL 2022.

[2] Reid, Gary B., Clark A. Shingledecker, and F. Thomas Eggemeier. 1981. Application of conjoint measurement to workload scale development. In Proceedings of the Human Factors Society Annual Meeting, vol. 25, no. 1, pp. 522-526. Sage CA: Los Angeles, CA: Sage Publications

17:45
Natural Language Processing Tool for Identifying Influencing Factors in Human Reliability Analysis and Summarizing Accident Reports
PRESENTER: Karl Johnson

ABSTRACT. The development of a tool based on Natural Language Processing (NLP) models is presented. The tool is an improvement on the original virtual human-factors classifier developed to assist experts in extracting the organizational, technological, and individual factors that may trigger human errors. To identify the performance shaping factors, the proposed approach classifies text according to accident reports previously labelled by human experts, making use of BERT (Bidirectional Encoder Representations from Transformers), a popular transformer-based machine learning model for NLP. In addition, a method to provide a summarization of each accident report is presented. This provides further detailed context alongside the identified performance shaping factors, without the need to read the entire report, which is generally a significant task. The tool performs abstractive summarization, as it aims to understand the entire report and generate paraphrased text summarizing the main points. In this work, BART (Bidirectional and Auto-Regressive Transformers), a denoising autoencoder for pre-training sequence-to-sequence models, has been used as the basis for the text summarization model.
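
The two building blocks can be illustrated with off-the-shelf Hugging Face pipelines; this is only a sketch of the idea, not the authors' fine-tuned models, and the classifier checkpoint name below is a placeholder for a model fine-tuned on the labelled accident reports.

    from transformers import pipeline

    # abstractive summarization with a pre-trained BART checkpoint
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    report = "..."  # full accident report text
    summary = summarizer(report, max_length=120, min_length=40)[0]["summary_text"]

    # PSF identification as text classification with a fine-tuned BERT;
    # "my-org/bert-psf-classifier" is a hypothetical checkpoint name
    classifier = pipeline("text-classification",
                          model="my-org/bert-psf-classifier", top_k=None)
    factors = classifier(summary)   # scores per performance shaping factor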

16:45-18:00 Session 6C: Energy Transition to Net-Zero Workshop on Reliability, Risk and Resilience - Part III
16:45
MBSA model to evaluate and analyze the production availability of an offshore wind farm

ABSTRACT. Preparing for the energy transition is one of the major concerns of the French government, which enacted the Energy Transition Law for Green Growth [1] (LTECV) on 17 August 2015 to limit global climate change. To meet this climate challenge, the French energy supermajor Total became TotalEnergies in 2021 to make its company a major player in the energy transition and to achieve, jointly with society, carbon neutrality by 2050.

To achieve this, the company has set itself the following objectives:
1. Reduce greenhouse gas emissions as much as possible, primarily at the sites in Europe and elsewhere in the world for which it is directly responsible.
2. Offset all remaining emissions, for example through CO2 capture projects.
3. At the same time, propose an energy mix that is less and less carbon intensive with the development of renewable energies (solar, hydrogen, onshore and offshore wind, etc.).

To enhance the energy mix, new offshore wind farm projects will emerge from 2022 onwards close to consumption grids, raising new questions that the offshore wind industry is preparing for in order to meet the objectives of the LTECV law. To this end, RAM (Reliability, Availability, Maintainability) studies can help in the decision-making process for the best Operation and Maintenance (O&M) strategy to apply, to obtain an optimum between OPerating EXpenditure (OPEX) and production availability [3] for a given farm.

The fidelity of the O&M model of an offshore wind farm relies on the accuracy of the simulation model, especially on the way the weather impact is considered. Indeed, all effects induced on intervention-vessel mobilization times, turbine power, and the maintenance strategy must be addressed properly, as they can have a significant impact on the final results.

In this context, TotalEnergies has taken part in an offshore wind farm project in America North Sea and wants to estimate the production availability of this asset and identify its main contributors, in order to evaluate the O&M strategy and assess the weather impact on its efficiency. To meet this objective, a Model-Based Safety Analysis (MBSA) approach, based on the Petri net [5] modeling language combined with Monte Carlo simulation using the Flex module of the GRIF software suite [6], a technology of TotalEnergies, is under construction. The purpose of this new tool is to consider in a single integrated model (non-exhaustive list):
•Curative maintenance interventions following random failures,
•Planned events (inspection, preventive maintenance, testing...),
•Weather impact (wind and swell) and all associated effects on the system,
•System architecture and design capacities,
•Logistics (maintenance resources, intervention vessels, mobilization times, spare part strategy, procurement times…).

The objective of this paper is to describe the methodology put in place through a business application case.

References: [1]: https://www.ecologie.gouv.fr/loi-transition-energetique-croissance-verte [2]: DOI:10.3390/jmse10071000 - "Availability Analysis of an Offshore Wind Turbine Subjected to Age-Based Preventive Maintenance by Petri Nets" [3]: ISO 20815:2018 [4]: A Better World of Energy | SSE Renewables [5]: IEC 62551:2012 [6]: GRIF is registered trademark owned by the TotalEnergies Company and used under license, grif.totalenergies.com
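
The actual model is built with Petri nets in GRIF; purely to convey the Monte Carlo estimator behind a production-availability figure, a toy single-turbine simulation with a weather-window delay is sketched below (all rates hypothetical).

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_availability(horizon=8760.0, mttf=2000.0, mttr=50.0):
        """One yearly history: exponential failures; each repair waits for a
        weather window before the crew can intervene."""
        t, up_time = 0.0, 0.0
        while t < horizon:
            ttf = rng.exponential(mttf)            # time to next failure
            up_time += min(ttf, horizon - t)
            t += ttf
            if t >= horizon:
                break
            t += rng.exponential(24.0)             # wait for a weather window
            t += rng.exponential(mttr)             # mobilization and repair
        return up_time / horizon

    availability = np.mean([simulate_availability() for _ in range(2000)])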

17:00
Wind Turbine Installation Vessel Mission Reliability Modelling Using Petri Nets
PRESENTER: Rundong Yan

ABSTRACT. Global offshore wind power installation is growing rapidly every year, and the total offshore capacity reached 57 GW in 2021 [1]. The wind industry is increasingly focusing on offshore wind due to stronger and more stable wind speeds, less noise and visual pollution, fewer turbine size restrictions and greater freedom of site location than onshore wind. This has further accelerated competition for larger turbines to maximise power generation and reduce the levelized cost, with many projects now set to install 12MW+ offshore wind turbines (OWTs) in the 2020s, such as Dogger Bank Wind Farm in the UK and Vineyard Wind 1 in the US [1]. However, the significant expansion of the offshore wind industry and the rapid increase in size and weight of turbine components will certainly amplify the risks during the installation and transport of large OWTs. In addition, there are currently fewer than 20 vessels globally that can support the installation of such turbines. These vessels are multi-functional, comprising highly integrated, specifically designed systems and components. The failure of any of these could cause catastrophic damage to the vessel and the turbine, and could even be life-threatening. Furthermore, in offshore wind farm (OWF) projects, if such a vessel needs to be repaired due to a failure, it is almost impossible to find another vessel on the market as a replacement. This will result in significant project delays or even an indefinite halt of the project. In this context, this paper aims to develop a mathematical model using Petri nets (PNs) to assess the risk and reliability of the mission of a wind turbine installation vessel (WTIV) designed for installing 12MW+ OWTs. The mission of the WTIV is divided into consecutive phases, each of which accomplishes a specified task. The impact of marine weather and sea conditions on the turbine installation is considered in the model. In addition, the impact of adopting the different installation methods/strategies described in [2] on mission reliability is studied using the model developed. Critical phases and components can be identified, and their failure probability can be obtained. It is deemed that the PN model developed in this paper can be used effectively to assist installation decision-making for future OWF projects.

References

[1] Global wind energy council. GWEC Global Wind Report 2022. Available online: https://gwec.net/global-wind-report-2022/. [Accessed 10 December 2022]. [2] Guo Y, Wang H, Lian J. Review of integrated installation technologies for offshore wind turbines: Current progress and future development trends. Energy Convers Manag 2022;255:115319. https://doi.org/10.1016/j.enconman.2022.115319.

17:15
Bearing Health and Safety Analysis to improve the reliability and efficiency of Horizontal Axis Wind Turbine (HAWT)

ABSTRACT. Renewable energy has grown over the recent decade, particularly solar and wind energy, both of which are abundant in South and North Asia. Despite significant advances in recent years in the health monitoring of complex machines, the bearing failure rate in wind turbines remains very high, up to 76% according to the National Renewable Energy Laboratory (NREL). Due to bearing, gear, and other failures, HAWT reliability is currently insufficient for the harsher offshore environment. In the event of a severe bearing failure or other wind turbine component replacements, the reduced accessibility will significantly diminish the energy harvest, and the costs of specialized maintenance are likely to increase as well, e.g., for installation equipment, rigging plans, and trained workmanship. Bearing faults cannot be compensated for using analytical techniques like reconfigurable control; instead, early detection is crucial to repair or replace the bearing before failure. Surveys of current condition monitoring (CM) systems of WTs indicate that vibration monitoring of wind turbines is clearly on the rise. Further, artificial intelligence, and machine learning in particular, has made machines more complex than older iterations, and many deep learning methodologies are gaining popularity in the age of automation and complex machines. This paper investigates bearing health state assessment using deep learning. The bearing is one of the most crucial parts of a wind turbine (WT), and its condition has a direct impact on how safely power production is carried out. The estimation of the Remaining Useful Life (RUL) and of the time to start the prediction point are two specific components of the health state assessment. In the case of bearing failures, maintenance is a difficult and time-consuming process. Most power plants use a Computerized Maintenance Management System (CMMS), which schedules maintenance tasks, creates work orders, and tracks their maintenance status. To evaluate bearing performance, the prognosis model of Reliability, Health and Safety Analysis (RHSA) is applied to the dataset via artificial neural networks. The reliability model can assess the useful life and avoid downtime through early detection.

17:30
Modeling the Effect of Environmental and Operating Conditions on Power Converter Reliability in Wind Turbines with Time-Dependent Covariates

ABSTRACT. A considerable portion of the cost of wind energy is related to operation and maintenance (O&M) of wind turbines. Herein, a major part accounts for repairs and replacements. Enhancing reliability is therefore key to achieve further cost reductions. Understanding the failure behavior and relevant factors driving failure is essential to develop effective countermeasures and establish cost-effective O&M processes.

As different reliability surveys have shown, e.g., Lin et al. (2016), power converters are among the most frequently failing subsystems of wind turbines. Despite considerable progress during the past years, the knowledge about the mechanisms and causes can still be improved. This publication aims at extracting further insights from field-reliability data to support root-cause analysis.

Wind turbines in the field are exposed to different environmental and operating conditions varying with regions and seasons. Hence, reliability models should include environmental and operating conditions to account for different climatic conditions and particularly for seasonality. Previous field-data-based reliability studies have investigated the impact of site conditions on the failure behavior of wind turbine components by means of regression models, e.g., including site-specific conditions as constant covariates (Slimacek and Lindqvist 2016; Pelka and Fischer 2022) or as monthly-averaged time series (Reder and Melero 2018). The present work extends Pelka and Fischer (2022) by incorporating time-dependent covariates.

In this study, failure data of >6,000 wind turbines of different manufacturers operating at onshore and offshore sites on five continents is analyzed. The failure dataset covers in total >15,000 years of operation during 2006-2020. The environmental data is obtained from publicly available ERA5 reanalysis data. To characterize the load regime, power time series are approximated based on wind-speed time series and turbine-specific power curves.

A nonhomogeneous Poisson process (NHPP) regression model is utilized to quantify the effect of environment- and operation-related covariates on the failure behavior of power converters. To characterize observed heterogeneity between the converter systems, both constant and time-dependent covariates are included. In addition, random effects accounting for unobserved heterogeneity are considered in the model.

The model with different combinations of time-dependent covariates is numerically fitted by means of the maximum likelihood method. The results show that incorporating site-specific absolute humidity and estimated active power significantly improves the model, indicating that these covariates have a significant effect on the failure behavior of power converters in wind turbines. Both higher humidity and higher active power negatively affect converter reliability.
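
For a power-law baseline and piecewise-constant covariates, the NHPP log-likelihood reduces to a short expression; the sketch below bins failures to covariate intervals, omits the random effects, and uses a hypothetical parameter layout.

    import numpy as np

    def nhpp_loglik(params, t_edges, z, n_fail):
        """Log-likelihood of an NHPP with intensity
        lambda(t) = a*b*t**(b-1) * exp(z(t) @ beta), covariates z constant on
        [t_edges[i], t_edges[i+1]) with n_fail[i] failures observed there."""
        a, b = np.exp(params[0]), np.exp(params[1])   # enforce positivity
        beta = params[2:]
        eff = z @ beta                                # covariate effect
        Lam = a * (t_edges[1:]**b - t_edges[:-1]**b) * np.exp(eff)
        mid = 0.5 * (t_edges[1:] + t_edges[:-1])      # midpoint approximation
        log_lam = np.log(a * b) + (b - 1.0) * np.log(mid) + eff
        return float(np.sum(n_fail * log_lam - Lam))

    # maximum likelihood: minimize the negative with scipy.optimize.minimize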

Selected references:

Lin, Y., Tu, L., Liu, H. and Li, W. (2016). Fault analysis of wind turbines in China. Renewable and Sustainable Energy Reviews 55, doi: 10.1016/j.rser.2015.10.149.

Pelka, K. and Fischer, K. (2022). Field-data-based reliability analysis of power converters in wind turbines: Assessing the effect of explanatory variables. Wind Energy 26(3), doi: 10.1002/we.2800.

Reder, M. and Melero, J.J. (2018). Modelling the effects of environmental conditions on wind turbine failures. Wind Energy 21(10).

Slimacek, V. and Lindqvist, B.H. (2016). Reliability of wind turbines modeled by a Poisson process with covariates, unobserved heterogeneity and seasonality: rate of occurrence of failures of wind turbines. Wind Energy 19(11).

17:45
Knowledge and Data Fusion-driven for Offshore Wind Turbine Gearbox Fault Diagnosis
PRESENTER: Hao Liu

ABSTRACT. At present, the cumulative installed capacity of offshore wind power in China has reached 27.26 million kilowatts, which promotes the clean and low-carbon transformation of energy and helps China achieve the goal of “carbon peak and carbon neutrality”. However, the gearbox of an offshore wind turbine is affected by its structure, working conditions and environment, which leads to high gearbox failure rates. Traditional fault diagnosis methods struggle to meet the diagnostic requirements of offshore wind turbine equipment with variable working conditions and multiple fault types. This paper proposes a knowledge and data fusion-driven fault diagnosis method for offshore wind turbine gearboxes. It not only classifies the operating data of the gearbox through convolutional neural networks (CNN), but also uses a knowledge graph to display detailed fault information and perform intelligent question answering. The innovation of this method is that fault diagnosis based on data monitoring and fault type reasoning using the knowledge graph are combined to form a comprehensive fault diagnosis model for offshore wind turbine gearboxes. The method is applied to the fault diagnosis of gearboxes in Jiangsu offshore wind farms. For crack, wear, and missing-tooth fault types, the accuracy of fault diagnosis based on the convolutional neural network model reaches 95%. At the same time, the visual display and intelligent question answering of gearbox faults are realized using the constructed knowledge graph. The results show that the knowledge and data fusion-driven fault diagnosis method has a good application effect on the intelligent operation and maintenance of offshore wind turbines.

18:00
Wind turbine bearing diagnostics using deep learning approach
PRESENTER: Eduardo Menezes

ABSTRACT. Prognostics and health management (PHM) is closely related to improved maintenance times and reduced O&M costs. This is particularly important for complex and high-cost machines such as wind turbines (WTs), which are exposed to highly variable wind loads and continuous operation. The bearings of WTs are frequently pointed to as one of the main sources of WT failures and are responsible for large financial losses. In this context, the use of deep learning algorithms to deal with vibration data can lead to effective diagnostics and prognostics of the WT bearing's remaining useful life. This work analyzes three state-of-the-art deep learning algorithms on real WT bearings, in order to compare their performance and establish the conditions for reducing O&M costs in the wind industry through predictive maintenance.

16:45-18:00 Session 6D: Accident and Incident Modelling II
Location: Room 2A/2065
16:45
The joint application of Functional Resonance Analysis Method (FRAM) with the Perceptual Cycle Model (PCM) supporting the safety analysis of an accident.

ABSTRACT. Many safety advancements in aviation have been achieved through improvements in aircraft systems technology and automation. In contrast, these same factors have been pointed to as contributing factors in some aviation accidents. The increasing automation and complexity of aircraft systems may present considerable challenges to flight crews all over the world, and concerns related to the human-machine interface arise. In an unexpected or non-normal event, mainly in high-workload circumstances, it is not uncommon for problems to emerge in decision-making related to interaction with complex aircraft systems, such as loss of situational awareness, over-reliance, lack of vigilance, and misprogramming, which may be aggravated by human performance variability in a stressful situation and potentially lead to an accident. This paper proposes using the Functional Resonance Analysis Method (FRAM) as a valuable tool for supporting safety analysis in sociotechnical systems, to reconstruct an event scenario, bring clear comprehension of the everyday performance adjustments, and characterize the main points where the variability of human performance, in a given specific situation, could lead to an observed negative result. In this way, the main points could be monitored and damped, aiming to anticipate and prevent future occurrences. In addition, this paper also proposes the application of the Perceptual Cycle Model (PCM) in the analysis of naturalistic decision-making coupled to the FRAM model. A case study is presented applying FRAM together with the PCM to model aspects of an accident and better understand why the factors contributing to the accident, as identified by the investigation authority, could manifest themselves as they did. In July 2007, the Airbus A320 of TAM Airlines flight JJ3054, destined for São Paulo, lost control on the ground and overran the runway, colliding with a building and a fuel service station. All 187 people onboard perished, along with 12 fatalities on the ground. This crash became the deadliest aviation accident in Brazilian history at that time. The probable cause was associated with one of the thrust levers not being moved back to the idle position, leading to the ground spoilers not being deployed and the auto-brake not being activated during landing. Several aspects related to this tragic event, such as the procedure for operating the thrust levers with one thrust reverser inoperative, predecessor occurrences, runway pavement conditions, and flight crew indications and warnings, are discussed in this study. An important FRAM function modeled in this accident was the thrust lever operation for landing, which is highly affected by human performance. The PCM usage highlighted the flight crew's behavior and course of actions in operating the thrust levers, given the information received and the pilots' schema at that moment. With the understanding obtained from the PCM application, it is possible to feed the FRAM model back, improving the comprehension of the accident scenario and of the key points affecting human performance variability, and enabling its management. The joint application of FRAM and PCM proved to be a great contribution to accident prevention and to the engineering of more resilient complex dynamic sociotechnical systems.

17:00
Bibliometric analysis applied to the analysis and investigation of accidents in high risk industries
PRESENTER: Francisco Silva

ABSTRACT. The application of accident analysis and investigation techniques as a tool to prevent unwanted events has been of great importance in several organizations, mainly in high-risk industries such as mining, nuclear, maritime, oil and gas, and aviation. This article presents bibliometric research that aims to identify the main accident investigation methodologies that have been used in high-risk industries. The article also proposes to identify how many usual methods of accident analysis and investigation exist and, consequently, who the main authors are. Finally, the article investigates the differences and similarities between the accident investigation methodologies used in industries in general and compares them with those of the aviation industry.


17:15
A concept of information-based strategy for accident prevention
PRESENTER: Tiantian Zhu

ABSTRACT. This paper presents a concept of an information-based strategy for accident prevention. An information-based strategy puts more weight on hazard detection and monitoring and on making use of data obtained during operation. With the wide application of sensors for monitoring processes, critical facilities, hazards, and the environment, using the collected data to reduce uncertainty and ensure safety is gradually becoming common practice. Many critical facilities, including ships and offshore installations, are becoming remotely operated, which means that safety management is done remotely. The information-based strategy can be considered a new barrier for safety management. The function of this barrier is neither to reduce the probability of the undesired event nor to reduce its consequence directly, but to create a state of knowing for decision-making. The reliability of this barrier's implementation during operation should be considered in risk analysis. The proposed concept can provide theoretical support for the remote safety management of offshore installations. In addition, it can promote investigation of the safety information environment in the organization and of information behavior in resolving risk-related problems, such as how decision-makers seek and use relevant information, whether their basic information needs for solving risk-related problems are satisfied, and whether safety alarms are properly handled. The paper starts with a discussion of why such a concept is needed and what its implications are. It then discusses and elaborates some existing accident causation theories to provide the rationale for the proposed concept. Finally, the paper presents the identified key challenges and the relevant problems that need to be solved to promote the application of the proposed concept.

17:30
A risk model for recreational craft accidents
PRESENTER: Christoph Thieme

ABSTRACT. The Recreational Craft Platform (RCP, Norwegian: Fritidsbåtplattformen) is being developed to collect and merge available data on recreational craft accidents and thereby enable stakeholders to actively take measures to achieve the vision. The Norwegian Maritime Authority (NMA) will be the owner of the platform, using it to analyse the causes and risks associated with recreational craft, and to identify and evaluate risk-reducing measures to reduce the number of accidents significantly. This paper presents the risk model for recreational craft accidents, developed together with the NMA, which will support the NMA's and its partners' work to achieve these goals. The risk model was developed following the NMA's existing risk modelling approach, with focus on the accident frequency of motorboats, sailing boats and personal watercraft. The model will be employed to assess the effectiveness of measures and to visualize the contributing factors within the RCP.

16:45-18:00 Session 6E: S.01: Climate Change and Extreme Weather Events Impacts on Critical Infrastructures Risk and Resilience II
Location: Room 100/4013
16:45
Evaluating resilience in a holistic and quantitative manner in an evolving power supply system
PRESENTER: Kris Schroven

ABSTRACT. Current developments and challenges in the power supply infrastructure in Europe – especially the transition to renewable energies, advancing digitization and the effects of climate change – demand a comprehensive resilience assessment for the whole system. While new vulnerabilities arise due to system evolution, which need to be detected, observed and appropriately treated, the transition also unlocks new methods and abilities to increase the resilience of the power supply system. A resilience assessment can be done by measuring and combining the most relevant system parameters, i.e., performance indicators, to form a resilience metric, where deviations from the optimum directly account for an increased vulnerability (or even damages) of the system under study, i.e. a loss of resilience.

Resilience metrics for power grids are extensively discussed in the literature. Most of them either focus on a general, qualitative discussion or concentrate on a single system aspect. The holistic resilience metric for the power supply system developed here attempts to cover all resilience dimensions on every scale in a quantitative, encompassing way. The metric can account for levels reaching from a local to a supra-regional scale. It is tailored to our needs for monitoring an increasingly digitized system and considers aspects arising from the rise of renewable energy. Key Performance Indicators (KPIs) are identified which distinguish different facets of power supply resilience. Inspecting the KPI evolution in real time allows an evaluation of the system's performance before, during and after a crisis.
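
One simple way to turn such KPIs into a single quantitative score is a weighted aggregation of normalized indicators; the sketch below is only illustrative, with hypothetical KPIs and weights, not the metric developed in the paper.

    import numpy as np

    def resilience_score(kpis, targets, weights):
        """Weighted aggregation of normalized KPIs (1 = on target, 0 = worst);
        any deviation from the target directly lowers the score."""
        norm = np.clip(np.asarray(kpis, float) / np.asarray(targets, float), 0, 1)
        w = np.asarray(weights, float) / np.sum(weights)
        return float(norm @ w)

    # hypothetical snapshot: frequency stability, reserve margin, supply ratio
    score = resilience_score(kpis=[0.97, 0.60, 0.85],
                             targets=[1.0, 1.0, 1.0],
                             weights=[0.5, 0.3, 0.2])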

17:00
Operational Response to extreme weather events
PRESENTER: Ricky Campion

ABSTRACT. During times of extreme weather, a whole-system approach to risk needs to be taken to ensure that any operational controls (speed restrictions or service suspension) limit the overall risk and not just the immediate risk posed by extreme weather. A whole-system approach also considers the hazards that operational controls may introduce (crowding, SPADs and fatigue) and the associated risks that accompany these hazards. With this balance of risk, a more informed decision can be made as to what the operational response should be during an extreme weather event.

A statistical model has been developed based on analysis into the frequency of extreme convective rainfall events and their impact on soil cuttings on the GB mainline railway. The model takes the output of this analysis to consider the probability of failure of any soil cutting given the characteristics of the cutting in a range of extreme convective rainfall events. The train service over the section of line is fed into the model and this is used to determine the probability and consequences of a train striking an obstruction caused by a cutting failure.

Statistical analysis is then used to derive the risk from any operational controls that are imposed. The two risk values are compared in the model to determine the impact on the overall risk by the operational controls in that given scenario. The output of the model can be used to assist in any decision making for imposing operational controls on sections of the GB mainline railway. It can suggest the optimal operational response for a section of the network given the amount of convective rainfall that has been experienced over that section. This can range from no reduction in speed to a full suspension of services. The outputs of the model show the tradeoff between the reduction of the immediate derailment risk and the possible increase of other risks that may be introduced by the given operational response.
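
The comparison the model performs can be reduced to a simple expected-risk calculation; the sketch below uses entirely hypothetical numbers purely to show the trade-off between derailment risk and the risk introduced by a control.

    def expected_risk(p_cutting_failure, trains, p_strike, consequence, induced):
        """Expected harm on a section: derailment risk from striking a failed
        cutting, plus the risk introduced by the operational control itself."""
        return p_cutting_failure * trains * p_strike * consequence + induced

    # hypothetical rainfall scenario: compare three operational responses
    normal  = expected_risk(2e-4, 10, 0.5, 100.0, induced=0.0)
    slowed  = expected_risk(2e-4, 10, 0.2,  30.0, induced=0.02)  # speed limit
    suspend = expected_risk(2e-4,  0, 0.0,   0.0, induced=0.15)  # crowding etc.
    best = min((normal, "no restriction"), (slowed, "restricted"),
               (suspend, "suspend"))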

The model currently focusses on convective rainfall and its impact on soil cuttings but the whole system approach could be applied to a far wider range of possible events that the railway faces. Future development of this model could consider frontal rainfall and how the operational response may need to be different in this scenario. Failures from embankments could also be incorporated into the model which would give a more holistic view of the risk posed by all earthworks on the railway. Development of a user interface for the model with fully automated inputs could enable the model to be used more widely and have a greater impact on the way operational controls are imposed across the whole GB mainline railway.

17:15
Reducing coastal flood risk by improving closure reliability of storm surge barriers
PRESENTER: Leslie Mooyaart

ABSTRACT. Coastal floods can be catastrophic, as the recent flood near Fort Myers caused by Hurricane Ian demonstrated (>100 deaths, >100 billion dollars of damage). Luckily, coastal flood protection works have prevented many of these events worldwide. However, due to climate change and growing populations in coastal zones, both the number of coastal floods and their damage are expected to rise. Consequently, coastal flood protection works need to be adapted.

Storm surge barriers form an important class of coastal flood protection at estuaries. Storm surge barriers are large movable hydraulic structures which only close during a storm surge. As they are normally open, it is possible that the barrier does not close when required. At two of the six Dutch storm surge barriers, the probability of non-closure is considered the most likely cause for a flood behind the barrier. Therefore, lowering the probability of non-closure can be an effective measure to compensate for climate change and growing populations in coastal zones.

At Dutch storm surge barriers, the probability of non-closure is assessed with FMEAs and fault trees. FMEAs are used to identify failure modes at a component level, while fault tree analysis quantifies the probability of non-closure. These analyses result in about a thousand to ten thousand different failure modes, often referred to as minimal cut-sets. Due to the large number of known failure modes, however, it is difficult to find improvements which significantly lower the probability of non-closure.

This study tests and discusses six methods to find effective improvements:

1) Ranking failure modes based on their contribution to the probability of non-closure
2) Using importance measures to detect basic events which highly influence the probability of non-closure (methods 1 and 2 are illustrated in the sketch after this list)
3) Grouping of failure modes
4) Using a standardized longlist of possible improvements
5) Using safety principles such as adding diversity at system and subsystem levels
6) Consulting experts to propose improvements.
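
For methods 1 and 2, a minimal cut-set calculation suffices; the rare-event approximation and finite-difference Birnbaum importance below are a generic sketch with hypothetical cut-sets, not the barriers' actual fault trees.

    import numpy as np

    def top_probability(cutsets, p):
        """Rare-event approximation of the non-closure probability from
        minimal cut-sets; p maps basic event -> failure probability."""
        return sum(np.prod([p[e] for e in cs]) for cs in cutsets)

    def birnbaum(cutsets, p, event, eps=1e-6):
        """Birnbaum importance: sensitivity of the top event to a basic event."""
        hi, lo = dict(p), dict(p)
        hi[event] += eps; lo[event] -= eps
        return (top_probability(cutsets, hi)
                - top_probability(cutsets, lo)) / (2 * eps)

    # hypothetical cut-sets of a closure system
    cutsets = [("sensor", "operator"), ("plc",), ("motor_a", "motor_b")]
    p = {"sensor": 1e-2, "operator": 5e-2, "plc": 1e-4,
         "motor_a": 1e-3, "motor_b": 1e-3}
    ranking = sorted(p, key=lambda e: birnbaum(cutsets, p, e), reverse=True)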

17:30
Estimating Tropical Cyclone induced Power Outages in Future Climate Scenarios

ABSTRACT. The energy system is one of the most critical services. Natural disasters such as tropical cyclones, floods, and tsunamis can disrupt the system and cause power outages. Simultaneously, climate change can cause more frequent and higher-intensity hazards, which can result in longer-lasting outages. As the duration of an outage increases, it causes economic and societal losses and disrupts dependent critical infrastructure such as water and communications systems. Although many hazards can cause significant damage to energy systems, tropical cyclones are particularly dangerous. This paper aims to estimate power outages caused by tropical cyclones under future climate scenarios, up to six days before landfall at six-hour intervals. The model builds upon the static reduced network model of McRoberts et al. (2018). This two-step outage prediction model first includes a binary classification that indicates whether an outage will occur at the census-tract level. In the second step, given an outage, it predicts the number of outages at each location. To facilitate further use of the model, the proposed model reduces the original set of predictor variables and uses only publicly available and accessible information, including hurricane-related characteristics and socio-demographic and environmental variables. This work estimates the fraction of customers without power in each census tract for each tropical cyclone, considering changes in intensity and frequency caused by future climate change scenarios.

Reference McRoberts, D. B., Quiring, S. M., & Guikema, S. D. (2018). Improving Hurricane Power Outage Prediction Models Through the Inclusion of Local Environmental Factors. Risk Analysis, 38(12), 2722–2737. https://doi.org/10.1111/risa.12728
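
A two-step model of the kind described (classification of outage occurrence, then regression of the outage fraction) can be sketched with scikit-learn; the estimators and features below are hypothetical stand-ins for the paper's model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    # X: per-census-tract predictors (hurricane, socio-demographic, environment);
    # y: observed fraction of customers without power (0 where no outage)
    def fit_two_step(X, y):
        occurred = y > 0
        clf = RandomForestClassifier(n_estimators=200).fit(X, occurred)
        reg = RandomForestRegressor(n_estimators=200).fit(X[occurred], y[occurred])
        return clf, reg

    def predict_two_step(clf, reg, X):
        """Expected outage fraction = P(outage) * E[fraction | outage]."""
        return clf.predict_proba(X)[:, 1] * reg.predict(X)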

17:45
Return Periods of Extreme Events in the Changing Climate: LEYP Model
PRESENTER: Mahesh Pandey

ABSTRACT. A rapid pace of climate change is now becoming evident in a marked increase in the frequency and intensity of weather extremes, and this trend is expected to continue as global warming increases in the coming decades. The paper presents the linear extension of the Yule process (LEYP) as a general stochastic model of environmental hazards induced by non-stationary climate conditions. The LEYP is a more versatile model than the Poisson process, as it can incorporate dependence among events occurring over time. In the paper, explicit expressions are derived for the return period, a traditional measure of reliability that is commonly used in the design of infrastructure systems. Unlike under stationary climate conditions, the return period between extreme events will continue to decrease as climate change effects become more pronounced. Examples presented in the paper demonstrate that a modest degree of statistical dependence among events leads to a significant reduction in the return period, i.e., a remarkable increase in the frequency of extreme events. Therefore, existing design codes would need to be revised to accommodate such non-stationary changes to ensure a high level of safety of infrastructure systems in the changing climate.
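
The qualitative effect claimed above, that dependence among events shortens the return period, can be checked numerically. The sketch below uses a simple linear-contagion counting process whose intensity grows with the number of past events; this assumed form is only in the spirit of the LEYP, not the paper's exact formulation, and all parameter values are invented.

```python
# Monte Carlo check: positive event dependence shrinks the mean inter-event
# time (return period) relative to a Poisson process. Intensity after k
# events is assumed to be alpha * (1 + beta * k), so inter-arrival times
# between events are exponential with that rate.

import numpy as np

rng = np.random.default_rng(1)

def mean_interarrival(alpha, beta, horizon=100.0, n_rep=5000):
    gaps = []
    for _ in range(n_rep):
        t, k = 0.0, 0
        while True:
            rate = alpha * (1.0 + beta * k)
            gap = rng.exponential(1.0 / rate)
            if t + gap > horizon:
                break
            gaps.append(gap)
            t += gap
            k += 1
    return np.mean(gaps)

print("beta = 0.0 :", mean_interarrival(0.02, 0.0))   # Poisson baseline
print("beta = 0.2 :", mean_interarrival(0.02, 0.2))   # modest dependence
```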

16:45-18:00 Session 6F: Risk Assessment III
Location: Room 100/5017
16:45
Extreme External Event Probabilistic Safety Assessment Framework
PRESENTER: Hyungjun Kim

ABSTRACT. Considering climate change, a new analysis is needed for external events related to natural disasters at nuclear power plants. In the case of extreme or multiple natural disasters such as tsunamis and earthquakes, the consequences of accidents can be very serious. After screening and analyzing the natural disasters affecting the nuclear power plant, the plant structures and components affected by the selected natural disasters are identified through a plant walkdown. After a failure mode and effects analysis of the identified components is performed, the initiating event analysis is carried out with reference to the internal-events PSA (Probabilistic Safety Assessment). The accident scenario must then be analyzed for each initiating event. In addition, hazard and fragility analyses are performed on the screened components.

17:00
Forecasting risks using the competence of experts

ABSTRACT. It is often impossible to build risk forecasting models, since in some subject areas it is difficult to obtain and analyze large volumes of data, so no statistics exist. Therefore, experts with high competence are often involved in calculating risks. Recalculating their competence after each examination is also important, as this can lead to changes in the composition of the expert group. Expertise is based on the use of human experience and is carried out with the involvement of experts. It is of great importance both in predicting natural and technogenic disasters and in avoiding these disasters or reducing their risks as much as possible, which is expressed, for example, in the calculation of the seismic resistance of buildings. We have developed a system for selecting experts and, subsequently, an algorithm for calculating the competence of experts based on their expertise.

17:15
Process hazard analysis: Proposing a structured procedure based on Multilevel Flow Modelling
PRESENTER: Ruixue Li

ABSTRACT. Process hazard analysis is significant in improving process safety in complex systems. The hazard and operability study (HAZOP) is one of the event-based methods for exposing hazards in the process industry and can identify a wide range of hazards throughout the process life cycle. However, HAZOP tends to repeat work on the same failure, lacks a global view, and its results are not highly readable or reusable. Therefore, a hazard analysis procedure based on Multilevel Flow Modeling (MFM) is proposed. The procedure divides the system into sub-objectives and flow-based structures, subsequently analyzing and modeling hazard knowledge in terms of the objects and agents that realize each function. By comparison with a HAZOP report on the Minox process in a water injection system, it is demonstrated that MFM-based hazard analysis shows the potential for a more systematic and comprehensive representation of process hazards, thereby improving process safety management.

17:30
Feasibility Study on Integration of Operator Modelling in DICE
PRESENTER: Dohun Kwon

ABSTRACT. PSA (Probabilistic Safety Assessment) using ETs (Event Trees) and FTs (Fault Trees) has contributed to enhancing the safety of nuclear facilities. Although PSA has long been used to evaluate the safety of nuclear facilities, the need for methods that complement the existing approaches is increasing due to the complexity of risk assessment tasks such as wide-range analysis, long-term evaluation, and mobile devices. To address these limitations, Integrated Deterministic-Probabilistic Safety Assessment (IDPSA), which enables contextual evaluation with sensitivity and uncertainty analysis, has been proposed. DICE (Dynamic Integrated Consequence Evaluation), a dynamic reliability analysis tool based on discrete dynamic event trees (DDET), was developed as a supporting tool for conventional PSA. DICE consists of a physics module, a diagnostic module, a reliability module, and a scheduler that controls the three modules. DICE has two simulation modes, single-branch and multi-branch, depending on the purpose of the analysis. The diagnostic module reflects the operator's tasks in DICE. This module, which includes the operator model, plays an important role in the temporal dependence of accident scenarios, and various accident propagations can be observed. Even when the same accident scenario occurs, it may evolve into a different scenario depending on the results of the operator model. In this way, unknown scenarios can be explored, complementing conventional PSA. The operator model determines the success or failure of the operator task and, upon success, also determines the operator action time. The model maintains consistency with the HRA (Human Reliability Analysis) applied in existing PSA, providing the same result, namely the human error probability, while also calculating the operator action time. For each simulation, the operator model randomly samples success or failure and the operator action time according to a specified distribution. Accident scenario variability is thus captured by providing time variability in the operator action time. The operator model can also be developed with other HRA methods and is currently based on SPAR-H. This paper introduces the coupling of DICE with the operator model using SPAR-H.
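
The sampling step described above, where each simulated task yields a success/failure outcome and, on success, an action time, can be sketched as follows. This is an assumed illustration, not the DICE implementation: the human error probability (HEP) would come from a SPAR-H assessment, and the lognormal action-time distribution and its parameters are hypothetical placeholders.

```python
# Illustrative operator-model sampling: a Bernoulli trial for task failure
# (with probability HEP) and a lognormal draw for the action time on success.

import numpy as np

rng = np.random.default_rng(42)

def sample_operator_action(hep, median_time_s, sigma):
    """Return (success, action_time_s) for one simulated operator task."""
    if rng.random() < hep:
        return False, None                 # task fails: no action time
    time = rng.lognormal(mean=np.log(median_time_s), sigma=sigma)
    return True, time

# Example call: HEP = 1e-2, median action time 300 s, log-std 0.5 (all assumed)
for _ in range(3):
    print(sample_operator_action(1e-2, 300.0, 0.5))
```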

18:00
Uncertainty quantification in portfolio optimization
PRESENTER: Xiang Zhao

ABSTRACT. In this paper, we introduce a novel method for uncertainty quantification in portfolio optimization. We follow a matrix shrinkage approach controlled by a vector of multipliers to construct the uncertainty set, instead of the more common bootstrap method. Our method results in portfolios with smaller short sales and higher diversification ratios compared to classical methods in numerical tests.
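
The paper's multiplier-controlled shrinkage is not reproduced here, but the general idea of shrinking the covariance matrix before optimizing a portfolio can be sketched with the standard Ledoit-Wolf estimator as a stand-in, together with the diversification ratio used as a comparison metric. All data below are synthetic.

```python
# Sketch: covariance shrinkage (Ledoit-Wolf stand-in) followed by a global
# minimum-variance portfolio, reporting short positions and the
# diversification ratio mentioned in the abstract.

import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 10)) * 0.01     # synthetic daily returns

sigma = LedoitWolf().fit(returns).covariance_
ones = np.ones(sigma.shape[0])

# Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
w = np.linalg.solve(sigma, ones)
w /= w.sum()

# Diversification ratio: weighted average volatility / portfolio volatility
vol = np.sqrt(np.diag(sigma))
dr = (w @ vol) / np.sqrt(w @ sigma @ w)
print("short positions:", (w < 0).sum(), "diversification ratio:", round(dr, 2))
```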

18:15
Surrogate modelling of risk measures for use in probabilistic safety analysis applications

ABSTRACT. Probabilistic Safety Analysis (PSA) is an efficient tool for assessing, maintaining and improving Nuclear Power Plant (NPP) safety. In the literature, different PSA applications have been identified, for example: PSA to support NPP testing and maintenance planning and optimization, PSA as a tool to monitor the level of safety, or PSA as a predictive evaluation of risk. In general, these applications require analyzing aging trends and updating reliability and maintenance parameters related to safety equipment (IAEA, 2001). PSA models are typically large, developed using event tree representations coupled with fault tree models. Indispensable tools in PSA are software packages such as RiskSpectrum and CAFTA, which are broadly used at NPPs. The main problem with commercial software such as these is its lack of modelling flexibility: it does not consider failure rate models explicitly depending on aging and on the effectiveness of maintenance and asset management policies, nor the effect of surveillance effectiveness on the availability of safety equipment. In this context, surrogate modelling, or metamodeling, emerges as a tool that can improve realism in probabilistic safety analysis. In this approach, the PSA code is substituted by a surrogate model in order to obtain the risk measures of interest (e.g., the Core Damage Frequency (CDF)). Different surrogate models have been proposed (James et al., 2017). In this paper, four models are considered: Generalized Additive Models for Location, Scale and Shape, K-Nearest Neighbors, Support Vector Regression, and Extreme Gradient Boosting. The models have been trained to predict the CDF using 10,000 simulations obtained from a PSA code with 595 input variables, corresponding to basic events, each modelled according to a probability distribution. To evaluate the performance of the different models, two quality metrics have been used, root mean square error (RMSE) and mean absolute error (MAE), evaluated using the k-fold cross-validation technique. The results obtained demonstrate the capacity of the surrogate models to provide accurate and computationally efficient estimates of the CDF. Therefore, surrogate models could be used in advanced applications of PSA such as living PSA or Ageing PSA (APSA), which must integrate the combined effects of component ageing, maintenance management and technical specification requirements at NPPs. In addition, the use of the original PSA model can involve a high computational cost compared to metamodels, whose computational cost is very low.

References: IAEA-TECDOC-1200 (2001). Applications of probabilistic safety assessment (PSA) for nuclear power plants. James, G., Witten, D., Hastie, T., & Tibshirani, R. (2017). An introduction to statistical learning with applications in R. Springer, New York.
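
The surrogate workflow described above, fitting several regressors to (basic-event inputs, CDF) pairs and comparing them with k-fold cross-validation, is sketched below. Synthetic data stand in for the 10,000 PSA runs with 595 inputs; GAMLSS is omitted since it has no direct scikit-learn equivalent, and GradientBoostingRegressor is used as an XGBoost-like stand-in.

```python
# Surrogate-model comparison sketch: fit regressors to simulated
# (inputs -> CDF) data and score them by cross-validated RMSE.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((2000, 20))            # stand-in for 595 basic-event inputs
y = X @ rng.random(20) * 1e-5         # stand-in for simulated CDF values

models = {"KNN": KNeighborsRegressor(),
          "SVR": SVR(),
          "GBoost": GradientBoostingRegressor()}

for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: k-fold RMSE = {rmse:.2e}")
```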

16:45-18:00 Session 6G: Maritime and Offshore Technology III
16:45
Real-time risk monitoring of ship pilotage operations: Automating BN risk model development
PRESENTER: Sunil Basnet

ABSTRACT. The maritime industry is evolving further, with new technologies and services, such as autonomous ships and remote pilotage operations, under development. While these services may bring new opportunities and benefits, the ability to manage risks becomes increasingly vital. Furthermore, these new technologies and services require risk models of a dynamic nature, which can incorporate ongoing systemic changes and provide risk estimation in real time. Hence, this paper presents a novel approach, which extracts incident data and automatically establishes a real-time Bayesian risk model. The model consists of a clear hierarchy denoting a chain of risk events, i.e., root causes, hazards, accidents, and losses. The resulting risk model provides an estimate of the posterior probability of occurrence of all variables in the Bayesian network. These results are then plotted and monitored with a graphical user interface application to track the critical factors leading to losses and thus requiring risk control in real time. The effectiveness of the method is then demonstrated using a case study of ship pilotage operations. The resulting model shows the probability of occurrence of risk events such as accidents, losses, hazardous scenarios and causal factors.
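
A toy numeric sketch of the forward propagation through the risk-event chain named above (root cause, hazard, accident, loss) is shown below. A real Bayesian network learned from incident data is far richer; all probabilities here are invented solely to show the posterior-probability calculation.

```python
# Forward propagation through a simple chain of risk events.
# P(hazard) and P(accident) are obtained by marginalizing over the parent.

p_root = 0.05                                  # P(root cause present)
p_hazard_given = {True: 0.40, False: 0.02}     # P(hazard | root cause)
p_accident_given = {True: 0.10, False: 0.001}  # P(accident | hazard)

p_hazard = sum(p_hazard_given[rc] * (p_root if rc else 1 - p_root)
               for rc in (True, False))
p_accident = sum(p_accident_given[h] * (p_hazard if h else 1 - p_hazard)
                 for h in (True, False))
print(f"P(hazard) = {p_hazard:.4f}, P(accident) = {p_accident:.5f}")
```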

17:00
A Bayesian inference and metaheuristics model for estimating the frequency of maritime accidents: the case of Fernando de Noronha

ABSTRACT. Toxic spills that arise from maritime accidents can lead to catastrophic environmental damage to animals. The numerous oil tankers that travel the planet raise the risk of potential oil spills that can affect sensitive ecosystems such as oceanic islands. To evaluate those risks, frequency estimation is an essential step. However, dealing with events with low frequency and high consequence poses a challenge, since classical statistical approaches strongly rely on data, which are scarce in this case. To overcome this shortcoming, a Bayesian population variability-based method is proposed to assess the accident rates considering accident data from various databases combined with the expertise of professionals such as academics, captains, pilots, and chief officers. As a real case application, we used this framework to estimate the frequency of accidents near Fernando de Noronha Archipelago. The results can support decision-making regarding measures to prevent accidents or reduce risks.
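
A much-simplified sketch of the Bayesian rate update underlying such an analysis is given below: a conjugate Poisson-Gamma model combining a prior (e.g., elicited from mariners) with sparse accident counts. This is a stand-in for illustration only, not the paper's full population-variability model, and all numbers are assumed.

```python
# Poisson-Gamma conjugate update for an accident rate (events per ship-year).

from scipy import stats

# Gamma prior on the rate, e.g. elicited from experts (assumed values)
alpha0, beta0 = 2.0, 400.0          # prior mean = alpha0 / beta0 = 5e-3

# Observed data: k accidents over T ship-years near the archipelago (assumed)
k, T = 3, 900.0

alpha_post, beta_post = alpha0 + k, beta0 + T
posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean rate:", posterior.mean())
print("90% credible interval:", posterior.interval(0.90))
```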

17:15
Availability study of an installation dedicated to CO2 capture, optimization of the design and of the injection strategy via Petri nets

ABSTRACT. In recent years, increasing environmental constraints have prompted all sectors of industry to reduce their greenhouse gas emissions and limit the impact of their activities on the environment. TotalEnergies is one of the companies most committed to reducing emissions, primarily through innovation in the field of CO2 capture and injection. The underlying idea is quite simple: converting disused gas production reservoirs into CO2 storage. Several projects in the North Sea have recently been launched. The main principle of the process is to transport liquid or gaseous CO2 to facilities located close to the sea, offload it, (partially) regasify and compress the gas, and then inject it under pressure into a storage reservoir via disused production wells.

The research presented in this paper is unique in many respects, both because it is carried out jointly with other industrial partners and because the chemical composition of the CO2 is very different from that of raw gas, so many complex constraints must be considered to preserve the reservoir. In the project under study, the injected CO2 comes from two different sources: in gaseous form (from compression) or in liquid form (transported by ship). Mixing these two types of CO2 results in a new composition that will be injected into disused production wells. However, since the nature of the reservoir does not allow continuous injection, injectivity envelopes linked to CO2 composition, temperature and pressure in the wells must be taken into account.

Because this is a new type of project, the main issues are profitability and, from a technical point of view, the injection capacity of the wells. In order to address these two essential points, a production availability study was carried out based on a Petri net model. To achieve this, it was first necessary to define the complex constraints applicable to injection in terms of reservoir pressure, molecular composition of the gas, and temperature. Once this step was completed, we were able to start building the model using the GRIF software suite, a technology of TotalEnergies. The GRIF Petri module, which combines stochastic Petri nets with assertions and predicates and uses Monte Carlo simulation, was used to model the behaviour of the installation. It was also used, via algorithms, to define the optimisation strategy for filling the injection wells according to their available envelopes. This modelling technique has been applied for more than 30 years within TotalEnergies, but never before in a CO2 injection context. Petri nets can handle complex constraints, provide a wide range of results on the system under study, and offer decision support.

This study validated the feasibility of such a project through a production availability model. It also helped the teams compare different design options for several parts of the installation. For instance, it enabled them to optimise the sizing of the storage tanks and confirmed that it would still be possible to inject CO2 into the wells after their 15-year operating period.
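
The simulation principle behind such production availability models can be illustrated on a deliberately tiny example: a single repairable unit alternating between exponential up-times and down-times, estimated by Monte Carlo. The actual study uses stochastic Petri nets with predicates and assertions in GRIF, which is far richer; the figures below are hypothetical.

```python
# Toy Monte Carlo availability estimate for one repairable unit.

import numpy as np

rng = np.random.default_rng(7)

def simulated_availability(mttf, mttr, horizon, n_rep=2000):
    up_total = 0.0
    for _ in range(n_rep):
        t, up = 0.0, True
        while t < horizon:
            dur = rng.exponential(mttf if up else mttr)
            dur = min(dur, horizon - t)      # truncate at the horizon
            if up:
                up_total += dur
            t += dur
            up = not up
    return up_total / (n_rep * horizon)

# Hypothetical figures: MTTF 2000 h, MTTR 48 h, one year of operation
print(simulated_availability(2000.0, 48.0, 8760.0))  # ~ 2000/2048 = 0.977
```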

17:30
A copula-based Bayesian Network to model wave climate multivariate uncertainty in the Alboran sea

ABSTRACT. An accurate estimation of wind and wave variables is key for coastal and offshore applications. Recently, copulas have gained popularity for modelling the multivariate dependence of wind and waves, since accounting for the hydrodynamic relationships between them is needed to ensure reliable estimates of the required design values. In this study, copula-based Bayesian networks (BNs) are explored as a tool to model extreme values of significant wave height (Hs), wave period, wave direction, wind speed and wind direction. The model is applied to a case study located in the Alboran Sea, close to the Spanish coast, using the ERA5 database. Extreme values of Hs are sampled using yearly maxima, together with the concomitant values of the remaining variables. The K-means clustering algorithm is applied to separate the different wave components, and a BN is built for each of them. The assumption of Gaussian copulas for the dependence between the variables and the structure of the BNs are supported by the d-calibration score. Fitted marginal distributions are introduced in the nodes of the BNs, and their performance is assessed using in-sample data and the coefficient of determination. The proposed BN models achieve high performance at a low computational cost, proving to be powerful tools for modelling the variables under investigation. Future research will include different locations and databases.
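
The Gaussian-copula building block used above can be sketched in a few lines: transform each variable to standard-normal scores via its empirical ranks, estimate the correlation, then sample new dependent pairs. The synthetic Hs/period data below are purely illustrative stand-ins for the ERA5-based extremes, and the paper's marginals and BN structure are of course richer.

```python
# Gaussian copula sketch: rank-based normal scores, correlation estimation,
# and back-transformation of new samples through empirical quantiles.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500
hs = stats.weibull_min.rvs(2.0, scale=3.0, size=n, random_state=rng)
tp = 4.0 + 1.5 * np.sqrt(hs) + 0.3 * rng.standard_normal(n)  # dependent period

def normal_scores(x):
    ranks = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(ranks)

z = np.column_stack([normal_scores(hs), normal_scores(tp)])
rho = np.corrcoef(z.T)[0, 1]                 # Gaussian-copula correlation
print("estimated copula correlation:", round(rho, 2))

# Sample from the fitted copula and map back through empirical quantiles
z_new = stats.multivariate_normal([0, 0], [[1, rho], [rho, 1]]).rvs(5)
u_new = stats.norm.cdf(z_new)
print(np.quantile(hs, u_new[:, 0]), np.quantile(tp, u_new[:, 1]))
```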

16:45-18:00 Session 6H: Mechanical and Structural Reliability
16:45
Environmental contours and time dependence

ABSTRACT. Environmental contours are widely used as a basis for, e.g., ship design. Such contours are typically used in early design, when the strength and failure properties of the object under consideration are not yet known. An environmental contour describes the tail properties of some relevant environmental variables and is used as input to the design process. A methodology for constructing environmental contours based on the Rosenblatt transformation was introduced by Winterstein et al. (1993) and Haver and Winterstein (2009). Huseby et al. (2013) presented an alternative approach where environmental contours are constructed using Monte Carlo simulation. Improved methods are found in Huseby et al. (2015a), Huseby et al. (2015b) and Huseby et al. (2021). Typically, the strength of a structural design is chosen so that the expected return period of a failure event exceeds the desired lifetime of the structure. If time dependence in the environmental variables is neglected, the expected return period is simply the inverse of the failure probability. In a more realistic model, however, such dependence should be included. In this paper we describe a method for constructing an environmental contour where time dependence is taken into account. The method is illustrated with a few numerical examples showing the effect of various levels of dependence.
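
The Monte Carlo contour idea (in the spirit of Huseby et al. 2013, without the time dependence this paper adds) can be sketched as follows: project simulated samples onto many directions, take a high quantile of each projection, and intersect the supporting half-planes. The bivariate Gaussian sample and the exceedance level below are purely illustrative assumptions.

```python
# Monte Carlo environmental contour sketch via projections and
# intersections of neighbouring supporting lines.

import numpy as np

rng = np.random.default_rng(0)
x = rng.multivariate_normal([10.0, 3.0], [[4.0, 1.5], [1.5, 1.0]],
                            size=100_000)

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
u = np.column_stack([np.cos(angles), np.sin(angles)])   # unit directions
c = np.quantile(x @ u.T, 1 - 1e-3, axis=0)              # support function

# Contour vertex for each angle: intersection of neighbouring tangent lines
pts = []
for i in range(len(angles)):
    j = (i + 1) % len(angles)
    A = np.array([u[i], u[j]])
    b = np.array([c[i], c[j]])
    pts.append(np.linalg.solve(A, b))
pts = np.array(pts)
print(pts[:3])   # e.g., (Hs, Tp)-like pairs at the 1e-3 exceedance level
```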

17:00
Reliability analysis of flexural subassemblies under ultra-high and low cycle fatigue based on Monte-Carlo method
PRESENTER: Cheng Xie

ABSTRACT. Flexural subassemblies are widely used in building structures (beams and columns) and bridges. Structures in earthquake-prone areas are not only subjected to ultra-low-cycle (<100 cycles) fatigue (ULCF) loads caused by earthquake action, but also experience ultra-high-cycle (>10 million cycles) fatigue (UHCF) loads during their service life, such as vehicle and wind loads. This paper first introduces a method, the ultra-high-cycle fatigue pre-damage and ultra-low-cycle fatigue damage (UHCF_PreD-ULCF_D) criterion based on lumped damage mechanics, to model the damage state of flexural subassemblies under combined fatigue loading. Fatigue loading test data from three flexural subassemblies are used to verify the accuracy of the model. Then, a reliability analysis and computing process for flexural subassemblies in a fatigue damage state is established based on the Monte Carlo method. Finally, the reliability of flexural subassemblies is analyzed under uncertainty in the material constants, loading cycles and lateral force. The results show that the UHCF_PreD-ULCF_D models of the flexural subassemblies not only achieve high accuracy, but also greatly improve computing speed and convergence compared with the traditional FE method when calculating the UHCF and ULCF loading responses. This efficient method of calculating the structural damage state is well suited to Monte Carlo reliability analysis. The proposed reliability analysis program for flexural subassemblies can account for the uncertainty of external loads and material defects, compute the failure probability of the structures efficiently, and calibrate the critical value of the damage variable of the theoretical model corresponding to structural failure.

17:15
Reliability analysis for overturning and sliding of lacustrine dikes: The Nezahualcoyotl dike case

ABSTRACT. Before the year 1519, the Valley of Mexico was a closed basin, and at the bottom of the valley an extensive system of shallow lakes had formed. Within this lacustrine system, the capital of the Aztec empire, Tenochtitlan, was built. The Aztecs were known for their impressive constructions and complex hydraulic structures, the most impressive of which was the Nezahualcoyotl dike. This structure was constructed across Lake Texcoco. Its principal function was to protect the city of Tenochtitlan from high water levels in the lake. However, little information is available about the reliability of this dike, mainly for two reasons: no remains of the dike are left today, and most of the lacustrine system has been drained. In this paper, we present a method to study the reliability of the Nezahualcoyotl dike under two failure modes, overturning and sliding. This is done by following up on the work presented by Torres-Alves & Morales-Nápoles (2020), where they developed a hydrological characterization of the lacustrine system and studied the dike under one failure mode, overflow. The proposed analysis aims to provide a more realistic assessment of the reliability of the dike as a flood defense mechanism.

17:30
Improving the diagnostic reliability of AE-based failure mode detection and distinction in CFRP

ABSTRACT. The use of carbon fiber reinforced plastics (CFRP) in safety-critical systems requires the application of Structural Health Monitoring (SHM). A well-known non-destructive testing (NDT) method is Acoustic Emission (AE). AE-based methods enable continuous, in-situ monitoring of CFRP structures. While the classification of damage using AE signal features is thoroughly studied, investigating the correlation between the reliability of classification results and the applied loading patterns provides a new metric to ensure the success of SHM methods. In this contribution, the probability of detection (POD) is used to evaluate the reliability of classification results. The four damage modes (debonding, delamination, matrix crack, and fiber breakage) are classified by a support vector machine (SVM). To distinguish the damage modes, time-frequency-domain features of the corresponding AE signals are calculated and classified to evaluate the dependency between the applied loading patterns and the classification quality. A concept for an online control loop is proposed that uses the reliability of classification results as a control variable for improved testing strategies, ultimately ensuring safe usage of CFRP structures.
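
The classification step named above, an SVM over time-frequency features of AE signals, can be sketched as follows. The synthetic feature vectors and their separability are invented for illustration; real AE features (e.g., peak frequency, spectral centroid, signal energy) and class overlap would come from measurements.

```python
# SVM sketch: classify four damage modes from (synthetic) AE signal features.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
modes = ["debonding", "delamination", "matrix_crack", "fiber_breakage"]

# Synthetic 3-dimensional feature vectors, one cluster per damage mode
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(100, 3))
               for i in range(len(modes))])
y = np.repeat(modes, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```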

17:45
Probabilistic Analysis of Interaction of Atmospheric Icing with Wind on Structures

ABSTRACT. At present, the 2nd generation of Eurocodes for structural design is nearly complete; national climatic maps are expected to be updated and a number of climatic parameters assessed nationally. Partial and combination factors for load effects should be calibrated at the national level. Several inconsistencies can be found in the modelling of combinations of climatic actions on structures. Some actions, such as atmospheric icing or temperature, are often ignored, or their effects are overestimated when oversimplified models are applied. An example of climate interaction is the combined effect of atmospheric icing and wind on structures, where the reduction factor k is recommended to decrease wind pressure due to the small probability that, e.g., a 50-year maximum of the wind action will occur simultaneously with heavy icing; strong wind gusts may prevent significant icing from forming. The reduction factor is provided for several icing classes in the new Eurocode EN 1991-1-9 for atmospheric icing on structures. The influence of temperature also needs to be considered. Focusing on the mild climate of the Czech Republic, analysis of available datasets of the Czech Hydrometeorological Institute reveals that the simultaneous effect of icing and wind indicates the need for national calibration, as the Eurocode model gives overly conservative estimates. The submitted study focuses on the analysis of the interaction of icing and wind on structures. The interdependency of icing, wind and temperature is assessed. The reduction factor k is analysed and calibrated on the basis of measurements spanning over 40 years. It appears that for Czech national conditions, a lower value of the reduction factor k could be recommended in the National Annex to EN 1991-1-9. The reliability levels achieved for steel members designed for a combination of icing and wind are compared with the target reliability levels recommended in the Eurocodes.

18:00
An Assessment of the Trade-off between Energetic Efficiency and Structural Integrity in Wave Energy Converters
PRESENTER: María L. Jalon

ABSTRACT. In recent decades, wave energy has been investigated as an alternative to fossil fuels. Nevertheless, wave energy conversion technology is not yet competitive. In this context, a number of authors have focused on finding optimal designs from an energy point of view; however, very little research has explored the relationship between the optimal design of Wave Energy Converters (WECs) and their long-term reliability. In this paper, a rigorous simulation methodology is proposed to investigate the trade-off between the energetic response and the structural longevity of WEC systems in irregular waves. This methodology is embedded within a parameterized computational model coupling analytical and finite element models to simulate both the energetic efficiency of the system and the accumulated fatigue damage. The model takes into account the real environmental conditions, the fluid-structure loading, the material properties, and the energy efficiency of the system. The proposed methodology is exemplified for a particular bottom-fixed Oscillating Water Column (OWC) system; however, the methodology is generic and can be applied to any offshore wind or marine renewable energy device. From an engineering point of view, this methodology aims to support decision-making in the predesign stage to provide better solutions for marine renewable energy applications.

16:45-18:00 Session 6I: Mathematical Methods in Reliability and Safety III
16:45
Failure On Demand Analysis in the Case of Score Based Binary Classifiers: Method and Application

ABSTRACT. Safety assessment and verification have become more complex in recent years. In particular, the incorporation of machine learning components, with their black-box nature, poses new difficulties. Therefore, new techniques are needed to judge the safety of machine learning components and to integrate them into existing safety analysis methods. In this contribution we provide a new method for the safety analysis of a score-based binary classifier. The presented technique outputs a single reliable value for the failure on demand, which can then be used in a system safety analysis, as done for physical engineering systems. In particular, we briefly present a general approach for score-based binary classifiers, as already applied to general systems. Furthermore, we contribute a more refined method for the case of a normally distributed score. The main idea is to incorporate confidence bounds on the parameters to obtain a function that serves as an upper bound for the failure on demand. Further analysis of this function then provides a mathematically grounded single value for the reliability. At the end of this work we demonstrate the technique on the example of breast cancer detection and evaluate its performance in this scenario.
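
One plausible reading of the normal-score idea can be sketched numerically: with positive-class scores s ~ N(mu, sigma) and decision threshold tau, the failure-on-demand is Phi((tau - mu)/sigma), and replacing (mu, sigma) by conservative confidence-bound values yields an upper bound. The interval formulas below are standard normal-theory bounds and may differ from the paper's construction; the data and threshold are synthetic.

```python
# Failure-on-demand point estimate and a conservative confidence-bound
# version for a score-based binary classifier with normal scores.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(3.0, 1.0, size=200)   # positive-class scores (synthetic)
tau = 0.0                                 # decision threshold (assumed)
n = len(scores)
mu_hat, s_hat = scores.mean(), scores.std(ddof=1)

conf = 0.95
# Lower confidence bound on mu (t-interval), upper bound on sigma (chi-square)
mu_lo = mu_hat - stats.t.ppf(conf, n - 1) * s_hat / np.sqrt(n)
sigma_hi = s_hat * np.sqrt((n - 1) / stats.chi2.ppf(1 - conf, n - 1))

pfd_point = stats.norm.cdf((tau - mu_hat) / s_hat)
pfd_bound = stats.norm.cdf((tau - mu_lo) / sigma_hi)
print(f"point estimate: {pfd_point:.2e}, conservative bound: {pfd_bound:.2e}")
```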

17:00
A New Method for Reliability Evaluation of Two-Terminal Multistate Networks in Terms of d-MCs
PRESENTER: Shuai Zhang

ABSTRACT. Many real-world complex systems can be modelled by multistate networks. Theoretically, the evaluation of multistate network reliability is an NP-hard problem. It is therefore essential to develop more efficient methods to analyse the reliability of practical multistate networks. There are mainly direct and indirect methods for evaluating the reliability of multistate networks. In this paper, we focus on the third stage of the indirect method, which is calculating the probability of the union of the events given all d-MCs. Based on the reliability evaluation method proposed by Provan and Ball (1984) for binary networks, this study develops an extended version for the multistate network scenario. The correctness and effectiveness of the proposed method are verified with an illustrative example and several benchmark networks.
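
As a baseline for the "third stage" named above, the union probability over d-MC events can be computed by inclusion-exclusion; this is the naive approach the paper improves on, not its proposed method. Here each d-MC c defines the event {X <= c componentwise} (one common convention for d-minimal cuts), component states are assumed independent, and all vectors and distributions are made-up examples.

```python
# Inclusion-exclusion over d-MC events for a tiny 3-component,
# 3-state-per-component example.

from itertools import combinations
import numpy as np

# Assumed state distributions: state_probs[j][s] = P(component j in state s)
state_probs = [[0.1, 0.3, 0.6], [0.2, 0.5, 0.3], [0.1, 0.4, 0.5]]

def p_at_most(j, s):
    return sum(state_probs[j][:s + 1])

d_mcs = [(1, 0, 2), (0, 2, 1)]        # hypothetical d-MC vectors

def p_event(vectors):
    # Intersection of events {X <= c}: X <= componentwise minimum
    m = np.min(np.array(vectors), axis=0)
    p = 1.0
    for j, s in enumerate(m):
        p *= p_at_most(j, s)
    return p

prob = 0.0
for r in range(1, len(d_mcs) + 1):
    for subset in combinations(d_mcs, r):
        prob += (-1) ** (r + 1) * p_event(subset)
print("P(system level < d):", prob)    # unreliability at demand level d
```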

17:15
Safety Performance Analysis of High Pressure Wellhead Based on Thermodynamic Coupling Model
PRESENTER: Bin Li

ABSTRACT. Due to harsh environmental factors and large alternating moment loads, subsea wellhead systems are prone to fatigue damage. The high-pressure wellhead, as an important pressure-bearing component of the subsea wellhead, faces higher requirements on its safety performance in a complex temperature-pressure coupling environment. A comprehensive assessment of the safety performance of key pressure-bearing components in subsea wellheads is crucial, but thermodynamic coupling factors are missing from the performance analysis of traditional methods. This paper proposes a safety performance analysis method for the high-pressure wellhead that combines wellbore temperature distribution and location. A coupled finite element model is established for the thermodynamic analysis of the high-pressure wellhead, considering the effects of sensitive factors such as bending moment loads and temperature on safety performance. The approach is tested through application to a case study of a subsea wellhead system in the South China Sea. The results show that the bending moment causes greater equivalent stress on the pressure-bearing components, and its impact on the locking ring is greater than on the high-pressure wellhead. Temperature has a particularly significant impact on the casing and locking ring. Compared to traditional subsea wellhead safety performance evaluation methods, the proposed technique can be used for a broader and more comprehensive evaluation of subsea wellhead safety performance and is better suited to guiding practical engineering applications.

17:30
Importance analysis in the evaluation of input attributes of classifiers
PRESENTER: Elena Zaitseva

ABSTRACT. Most often, machine learning techniques are used to solve problems in reliability analysis. In this study, we propose the reverse: applying a reliability-analysis-based method to a problem in machine learning, in particular, the analysis of the influence of input attributes on the classification result. Some attributes are more important for the classification because they influence the classification result more significantly than others. A new method for determining the most important attributes is proposed. This method is developed based on the approach of importance analysis, which is widely used in reliability analysis. Each attribute's importance is evaluated by its structural importance.
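
Structural importance, the measure named above, admits a compact illustration: for a Boolean structure function phi, the structural importance of variable i is the fraction of states of the other variables in which flipping i flips the output. The 2-out-of-3 structure function below is a made-up example, not the paper's classifier model.

```python
# Structural importance by exhaustive enumeration of a Boolean
# structure function over three binary "attributes".

from itertools import product

def phi(x):                 # example: a 2-out-of-3 structure function
    return (x[0] & x[1]) | (x[0] & x[2]) | (x[1] & x[2])

def structural_importance(phi, n, i):
    critical = 0
    for state in product([0, 1], repeat=n - 1):
        x1 = list(state[:i]) + [1] + list(state[i:])
        x0 = list(state[:i]) + [0] + list(state[i:])
        critical += phi(x1) != phi(x0)     # i is critical in this state
    return critical / 2 ** (n - 1)

for i in range(3):
    print(f"attribute {i}: I_s = {structural_importance(phi, 3, i)}")
```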

17:45
Schematizing rainfall events with multivariate depth-duration dependence
PRESENTER: Guus Rongen

ABSTRACT. Accurately modelling rainfall events is crucial for flood risk assessment and stormwater infrastructure design. However, transforming statistical characteristics of events into relevant rainfall patterns is challenging due to the natural variability of rainfall. Two commonly used methods to schematize rainfall events have limitations: the nested storm profile overestimates the resulting flow by assuming complete dependence between different durations, while determining the critical event duration by simulating each duration separately assumes independence and underestimates the flow. To overcome these limitations, this study presents a method that models the dependence between different rainfall durations using a Gaussian copula and combines this with marginal rain statistics to create a probabilistic model for the rain event. The SCS Curve Number approach is used to model the resulting flow, and a first-order reliability method (FORM) is applied to determine the critical combination of durations within an event. The findings of this study show that the rainfall events generated using the proposed method result in comparable flows to those produced by conventional design events. While this may not make the model a preferred choice for standard applications, it can still be valuable for flood risk assessments as it provides a probabilistic model that better captures critical rainfall patterns.
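
The core ingredient of the event model described above, a Gaussian copula linking rainfall depths at several durations combined with marginal distributions, can be sketched as follows. The durations, correlation matrix, and Gumbel marginals are invented for the example; the study fits these to data and then applies FORM (not shown here) to find the critical duration combination.

```python
# Gaussian-copula rainfall-event sampler: correlated normals -> uniforms ->
# marginal depth quantiles at each duration.

import numpy as np
from scipy import stats

durations_h = [1, 6, 24]                       # event durations (assumed)
# Assumed Gaussian-copula correlation between depths at those durations
R = np.array([[1.0, 0.7, 0.5],
              [0.7, 1.0, 0.8],
              [0.5, 0.8, 1.0]])
# Assumed Gumbel marginals for depth (mm) at each duration
marginals = [stats.gumbel_r(loc=15, scale=5),
             stats.gumbel_r(loc=30, scale=9),
             stats.gumbel_r(loc=45, scale=14)]

rng = np.random.default_rng(0)
z = rng.multivariate_normal(np.zeros(3), R, size=10_000)
u = stats.norm.cdf(z)                          # copula samples in [0, 1]^3
depths = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])
print("simulated event (mm at 1 h / 6 h / 24 h):", depths[0].round(1))
```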