ESREL 2022: 32ND EUROPEAN SAFETY AND RELIABILITY CONFERENCE (ESREL) - DUBLIN 2022
PROGRAM FOR MONDAY, AUGUST 29TH

09:30-10:50 Session 2: Plenary Session: Hydrogen and the Future of Energy Systems, shared by Aedhon McAleer (ESB, Ireland), John Finnegan (Principal Officer, Department of Environment, Climate and Communications, Ireland) and Ellen Diskin (ESB Networks, Ireland)

"Delivering Net-Zero: The role of Hydrogen and Energy Storage" Aedhon McAleer ESB Generation and Trading Ireland

&

"New Infrastructure for a decarbonised energy sector: Electricity Storage and Green Hydrogen" John Finnegan Principal OfficerElectricity Networks and Systems, Department of Environment, Climate and Communications

&

"National Network, Local Connections : a smarter network for Net Zero"  Ellen Diskin | National Network, Local Connections Programme Manager | ESB Networks 

 

Chairs:
Maria Chiara Leva (Technological University Dublin, Ireland)
Edoardo Patelli (University of Strathclyde, UK)
Location: CQ-006
11:10-12:50 Session 3A: Crisis management
Chair:
Tina Comes (TU Delft, Netherlands)
Location: LG-22
11:10
Constance Péronneau (IMT Mines Alès, France)
Noémie Fréalle (IMT Mines Alès, France)
Florian Tena-Chollet (IMT Mines Alès, France)
Sophie Sauvagnargues (IMT Mines Alès, France)
Bertrand Marion (Grenoble Alpes Metropole, France)
Vincent Boudières (Grenoble Alpes Metropole, France)
Methodology to support municipalities in the crisis management of levee failures
PRESENTER: Noémie Fréalle

ABSTRACT. Adapted to a local scale, French crisis management plans need to include a multi-hazard dimension in order to enable mayors to deal with all the risks identified in their area. Although the municipalities of Grenoble-Alpes Métropole have already set up their municipal protection plans (PCS), the risk associated with the failure of the levees on the Drac (a tributary of the Isère river) is not yet sufficiently considered. If such a risk is not taken into account in the local crisis management strategy, it can quickly overwhelm the capabilities of the managers, who would have to face and adapt to the emergency. Without anticipation, the evacuation of numerous citizens in an emergency situation is therefore difficult to envisage. To overcome this point, a methodology was built and is presented to support municipalities in the crisis management of levee failures. This research project, financed by Grenoble-Alpes Métropole, provided an initial response to the municipalities concerned. Based first on an innovative approach to the evaluation of local emergency plans and of the capability of the municipalities to cope with risks, a new doctrine was built. Dealing with the anticipation of levee failures, it led to the presentation of Graded Anticipation Plans (GAP) for two pilot municipalities: Grenoble and Seyssinet-Pariset. Relying on hydrological levels established by Meteo France and the levee managers, this doctrine focuses on the anticipated local response, graded according to the situation expected by the French Flood Forecasting Services and implemented before the first disorders in the levees can be observed. This research project is based on an iterative, reflexive methodology: regular working sessions with both municipalities led to the elaboration of Graded Anticipation Plans adapted to the territories. Including "least regrets" decisions (e.g. home working, preventive closure of establishments open to the public), the GAP enables elected officials to anticipate the information of sensitive people and their preventive evacuation through a "reverse proximity" mindset. Thus, the present doctrine treats crisis management as a whole and allows the use of a shared language at the scale of a metropolis. Finally, the GAP ensures coordination between affected municipalities upstream and downstream of the Drac river. To ensure that the stakeholders take ownership of the GAP philosophy, a framework exercise was submitted to the two municipalities and Grenoble-Alpes Métropole. This exercise was designed with numerous educational purposes and three distinct phases: two in real time and one phase of reflection on the next anticipation decisions. Not only were the GAP improved, but the players also tested the communication between all crisis management entities. This point was crucial as it allowed the participants to clarify the continuity of each other's competences, especially with regard to French regulation and the transfer of the GEMAPI competence. By considering all the entities that may be involved (the Prefecture, levee managers on both sides of the Drac, the public water company, etc.), the exercise was a way to share the GAP strategy and philosophy. It was also an opportunity to underline coordination and communication during a major weather event.
This simulation highlighted the relevance of the anticipative approach and, beyond the interest for the learners, whether elected officials or municipal agents, the exercise showed the value for the municipalities affected by the failure of adopting the anticipative strategy. Finally, the main prospect of this research project is to extend the submitted tool to other municipalities affected by the phenomenon and to improve the GAP through further simulations.

11:30
Gibeom Kim (Kyung Hee University, South Korea)
Gyunyoung Heo (Kyung Hee University, South Korea)
A study on finding an optimal response strategy considering infrastructures in an agent-based radiological emergency model using a deep Q-network
PRESENTER: Gibeom Kim

ABSTRACT. A radiological emergency involves a variety of elements such as evacuees, hazardous materials, and response facilities and resources, and the interactions of those elements make it difficult to predict consequences or response effects. Agent-based modeling (ABM) can be a useful method for integrated modeling of a radiological emergency. In this paper, a simple case study that uses ABM to simulate an emergency evacuation considering the shelters' relief supplies is presented. The evacuation completion time may vary depending on how the limited shelter resources are distributed. To obtain an optimal strategy for the resource distribution, reinforcement learning is applied. A deep Q-network (DQN) was chosen to cope with the extensive state space arising from the complexity of the radiological emergency. By applying the DQN, a shelter resource distribution scenario that shortens the evacuation completion time was obtained. Through this study, the suitability of DQN as a way to find the optimal response strategy was assessed.
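As a purely illustrative, hedged sketch of the kind of Q-learning update such an agent-based model might rely on (the state encoding, network size, reward and shelter count below are hypothetical, not those of the paper), in PyTorch:

```python
import random
import torch
import torch.nn as nn

# Hypothetical setup: the state encodes shelter stock levels and evacuee counts,
# and each action routes the next batch of relief supplies to one of N_SHELTERS.
STATE_DIM, N_SHELTERS = 8, 4

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_SHELTERS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_SHELTERS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy choice of which shelter to resupply next."""
    if random.random() < epsilon:
        return random.randrange(N_SHELTERS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, done):
    """One temporal-difference step on a single transition (replay buffer omitted)."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        bootstrap = 0.0 if done else gamma * float(target_net(next_state).max())
        target = torch.tensor(reward + bootstrap)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# One invented transition from the evacuation environment.
s, s_next = torch.rand(STATE_DIM), torch.rand(STATE_DIM)
td_update(s, select_action(s), reward=-1.0, next_state=s_next, done=False)
```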

11:50
Henrik Hassel (Lund University, Sweden)
Alexander Cedergren (Lund University, Sweden)
Enabling and impeding factors for organizational adaptive capacity – a review of the literature
PRESENTER: Henrik Hassel

ABSTRACT. Organizations responsible for maintaining critical societal functions must be able to handle both events that can be foreseen, for which concrete planning and preparations can take place, and events that come more or less as surprises, requiring the organization to have capacities to adapt to new circumstances (Linkov et al., 2014). Traditional risk assessments mainly build on an anticipatory perspective, with little emphasis on analysing or developing adaptive capacities. However, researchers increasingly stress that relying too much on anticipation is insufficient or even counter-productive in the pursuit of organizational resilience (Sikula et al., 2015). If people and organizations are not able to adapt to constantly changing, complex and perplexing situations, disastrous outcomes may occur (Woods, 2018; Woods & Hollnagel, 2006). Expanding traditional anticipatory risk assessments to also address adaptive capacities requires knowledge of the factors and conditions that enable or impede organizational adaptive capacity. Extensive theoretical and empirical work has been conducted on the topic across a range of disciplines and sub-disciplines, including the literature on resilience engineering (Woods, 2018), High Reliability Organizations (Weick & Sutcliffe, 2001) and crisis management (Webb & Chevreau, 2006; Boin & McConnell, 2007). However, there is no general consensus about what these factors and conditions are, and many approaches that conceptualize adaptive capacity are too vague and under-specified to be useful in approaches that aim to analyse adaptive capacity proactively (Anderson et al., 2020). The aim of this paper is therefore to review the state-of-the-art literature, focusing on mapping which conditions and factors are seen as enabling and/or impeding adaptive capacity. Our aim is to map both factors and conditions that are broadly recognized in the literature as important, and conditions and factors where views diverge and where additional empirical research would be warranted. We hope that mapping and categorising enabling and impeding factors and conditions can provide a basis for how traditional anticipatory risk assessment approaches can be complemented to also address adaptive capacities, which would put organizations in a better position to handle both foreseen and unforeseen events.

12:10
Moritz Schneider (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center (DLR), Germany)
Oscar Hernán Ramírez-Agudelo (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center (DLR), Germany)
Lukas Halekotte (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center (DLR), Germany)
Daniel Lichte (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center (DLR), Germany)
A Probabilistic Approach to Risk Scenario Identification
PRESENTER: Moritz Schneider

ABSTRACT. Situation awareness is crucial for decision makers during an emergency. Efficient knowledge management can enhance situation awareness by providing information about the most relevant factors of the situation. Scenario analysis based on morphological analysis is a structured method that can support the identification of such factors. Various studies based on this method have already been presented in the literature, e.g. as a method to strategically enhance disaster preparedness. In this paper, we introduce an approach that allows us to analyze current information in order to dynamically identify an emerging or developing risk scenario for emergency management. First, morphological analysis is applied to construct a scenario space. Second, in order to quantify the dependencies between scenario factors, a Bayesian network model is implemented. For identification of the scenario, current information about the scenario factors is needed. Information can be gathered from different sources, e.g. sensors or observations by emergency personnel, and processed in the Bayesian network model to calculate the posterior probabilities of the parameters in the model. We illustrate the approach by applying it to an example in the context of emergency management. To conclude, we discuss the benefits and limitations of this approach as a knowledge management tool for enhancing situation awareness.
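A hedged sketch of the Bayesian-network step described above, using the pgmpy library; the scenario factors, states and probabilities are invented for illustration and are not those of the paper:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical two-factor scenario space: weather severity drives flood extent.
model = BayesianNetwork([("Weather", "Flood")])
model.add_cpds(
    TabularCPD("Weather", 2, [[0.8], [0.2]],
               state_names={"Weather": ["calm", "storm"]}),
    TabularCPD("Flood", 2,
               [[0.95, 0.4],   # P(Flood=none  | Weather)
                [0.05, 0.6]],  # P(Flood=major | Weather)
               evidence=["Weather"], evidence_card=[2],
               state_names={"Flood": ["none", "major"],
                            "Weather": ["calm", "storm"]}),
)

# A sensor report of stormy weather updates the posterior of the flood factor.
posterior = VariableElimination(model).query(["Flood"], evidence={"Weather": "storm"})
print(posterior)
```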

11:10-12:50 Session 3B: S.03: Transdisciplinary Infrastructure Asset Management for Sustainable and Resilient Infrastructure I
Chair:
Omar Kammouh (Delft University of Technology, Netherlands)
Location: CQ-006
11:10
Bryan Tyrone Adey (ETHZ, Switzerland)
Claudio Martani (Purdue, United States)
Steven Eberle (Rothpletz, Lienhard + Cie AG, Switzerland)
Using real options to evaluate highway designs for resilience and sustainability with uncertain future mobility patterns

ABSTRACT. With increasing populations, increasing urbanization, and no indication that the world is reducing its use of cars and trucks, the modification of existing highways and the construction of new ones will be required for the foreseeable future. At the same time, transport planners are abundantly aware that future infrastructure should be both resilient, i.e. able to quickly continue to provide service in the case of unexpected events, and sustainable, i.e. enabling transport with the least negative impact on the environment. In order to ensure that highways are modified and constructed to meet the needs of society in the face of the current vast uncertainty in future mobility patterns, it would be useful for transport planners to have a systematic methodology to evaluate competing designs.

This paper presents an exploratory example of how real options can be used to evaluate highway designs, considering uncertainties in future mobility patterns. This includes the explicit modelling of the uncertainty of mobility patterns and the simulation of how future scenarios affect the provided service with respect to unexpected events, harmful emissions and intervention costs. The use of real options is explored by applying it to evaluate competing designs for a fictitious but realistic case study based on the completion of the A15 highway in the canton of Zürich, Switzerland. The benefits and limitations of using real options to improve the modification of transport infrastructure are discussed, as well as future research directions.
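For illustration only, a hedged sketch of the real-options idea: comparing a fixed design against a flexible one whose expansion option is exercised only under high demand. All demand distributions and cost figures are hypothetical and unrelated to the A15 case study:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                            # Monte Carlo samples of a demand index
demand = rng.lognormal(mean=0.0, sigma=0.4, size=N)    # hypothetical future mobility demand

COST_WIDE_NOW = 120.0        # build the wide highway immediately
COST_NARROW = 80.0           # build the narrow highway now ...
COST_EXPAND_LATER = 55.0     # ... keeping the option to widen later if needed
CONGESTION_PENALTY = 90.0    # service loss if the narrow road meets high demand
THRESHOLD = 1.2              # demand level above which widening pays off

# Expected life-cycle cost of each strategy over the sampled demand futures.
cost_wide = np.full(N, COST_WIDE_NOW)
cost_flexible = COST_NARROW + np.where(demand > THRESHOLD, COST_EXPAND_LATER, 0.0)
cost_rigid_narrow = COST_NARROW + np.where(demand > THRESHOLD, CONGESTION_PENALTY, 0.0)

print(f"wide now:          {cost_wide.mean():.1f}")
print(f"narrow, no option: {cost_rigid_narrow.mean():.1f}")
print(f"narrow + option:   {cost_flexible.mean():.1f}")
print(f"value of the expansion option: {cost_rigid_narrow.mean() - cost_flexible.mean():.1f}")
```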

11:30
Amirreza Kandiri (School of civil engineering, University college Dublin, Ireland)
Rui Teixeira (School of civil engineering, University college Dublin, Ireland)
Maria Nogal (Faculty of Civil Engineering & Geosciences, Delft University of Technology, Netherlands)
Deciding how to decide under uncertainty: A methodology map to address decision-making under uncertainty
PRESENTER: Amirreza Kandiri

ABSTRACT. With the present need for an adequate transition towards a sustainable and resilient future, decision-makers face the complex question of identifying the most suitable approach to inform their decision-making process. Decision-making under uncertainty is a challenging area for which a number of different approaches have been developed in recent years, motivated by increasing awareness of the uncertain future of climate change. In the present study, uncertainty is divided into two categories: (i) probabilistic uncertainty, in which the probability of each future scenario is assumed, and (ii) deep uncertainty, in which the probability of each future scenario is uncertain or unknown. This paper presents a methodology map built on a thorough comparison of approaches that can be used to address decision-making based on the level of uncertainty considered. The methodology map is provided to help researchers and practitioners choose the most convenient approach for their specific context. In the first step, an overview of different approaches for decision-making under uncertainty is provided. Then, the approaches are compared to each other, and their requirements, limits, pros, cons, and the different circumstances under which each approach is more appropriate are discussed. Seven different approaches are studied, namely Cost-Benefit Analysis (CBA), Multi-Criteria Decision Making (MCDM), Probabilistic Decision Tree (PDT), Robust Decision Making (RDM), Dynamic Adaptation Policy Pathway (DAPP), No Regret, and Heuristic. CBA, MCDM, and PDT can be used under probabilistic uncertainty; RDM and DAPP can be applied to problems with deep uncertainty; and No Regret and Heuristic are applicable under both types of uncertainty. The comparison is conducted using a case study that addresses issues of applicability. The case study is a simple water distribution system that provides water for a small town. This system needs to be adapted to increased rainfall caused by climate change. Different adaptation measures are assumed for that purpose, and given the uncertainty of future rainfall volume, a decision-making approach that accounts for such uncertainty needs to be applied. To choose the most suitable adaptation strategy amongst different alternatives, there is a need to understand which decision-making approach under uncertainty better suits each specific scenario and application. Understanding the implicit assumptions behind each approach and how they affect the output is also relevant when making an informed decision. Results show that different considerations should rule this choice. It should depend on the problem at hand (e.g. type of uncertainty, number of alternatives) and enclose a rationale that addresses its limitations (e.g. available time and funding). For instance, CBA can only consider the alternatives' monetary input parameters (costs and benefits); parameters that cannot be monetized are therefore omitted. This property results in sorting all of the alternatives in a single ranking, so the most suitable alternative can be chosen without any further consideration. In contrast to CBA, MCDM can consider non-monetary as well as monetary parameters. This approach can rank the alternatives against each decision criterion. It requires an importance score to be given to each criterion (e.g. via a pair-wise comparison matrix), but it allows a better-informed decision.
PDTs can assign probabilities to each outcome, determine the worst and best alternative for each future scenario, and are easy for non-experts to understand. However, this approach is unstable: a small change in the input data can change the structure of the tree and its outcomes dramatically. The sensitivity of the outcomes to changes in the input data is a very relevant aspect to consider when searching for robust approaches in the face of uncertainty. RDM is one of the approaches that can be applied under deep uncertainty and can provide a single alternative as the most suitable one. Therefore, if a single most suitable alternative without any further consideration is needed, RDM can be used. DAPP is flexible through time: the outcome of this approach is not a static optimal plan but a dynamic adaptation over time, allowing practitioners to change their plan if the future changes. Using the No Regret approach can benefit the system even if climate change does not materialize; in other words, it identifies adaptive measures that benefit the system regardless of the extent of climate change, although the identified outcome may not be the most suitable adaptive measure. Finally, although Heuristic-based approaches are simpler and faster to apply because they do not require the decision-maker to identify or evaluate the potential performance of numerous alternatives, they do not consider all the alternatives; the outcome is therefore not necessarily the most suitable one. The proposed methodology map will help researchers and practitioners select the most convenient approach to inform decision-making based on the available knowledge about the future, with awareness of the implications of the selected approach for the final output. The optimal design and/or management of the built environment can then be realized, underpinned by a holistic perspective of the capabilities and limitations of the existing methods and the trade-off between the method assumptions and the trust in the obtained solution.
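As a hedged illustration of one of the compared approaches, a minimal weighted-sum MCDM ranking; the alternatives, criteria, scores and weights are invented and not taken from the paper's water distribution case study:

```python
import numpy as np

# Hypothetical MCDM example: three adaptation measures for the water system,
# scored on cost (lower is better), reliability and environmental impact.
alternatives = ["raise levee", "extra reservoir", "demand management"]
scores = np.array([
    [0.7, 0.8, 0.4],   # raise levee:       [cost, reliability, environment]
    [0.3, 0.9, 0.5],   # extra reservoir
    [0.9, 0.6, 0.9],   # demand management
])
weights = np.array([0.5, 0.3, 0.2])      # importance of each criterion (sums to 1)
benefit = np.array([False, True, True])  # cost is a "lower is better" criterion

# Convert the cost criterion into a benefit-type score, then rank by weighted sum.
normalized = np.where(benefit, scores, 1.0 - scores)
ranking = normalized @ weights
for name, value in sorted(zip(alternatives, ranking), key=lambda x: -x[1]):
    print(f"{name}: {value:.2f}")
```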

11:50
Ariane Iradukunda (Belgian Defence, Belgium)
Pieter-Jan Zwaan (Belgian Defence, Belgium)
Nicolas Boutet (Belgian Defence, Belgium)
Asset management from cradle to grave in a single software: the use of ILIAS by Belgian Defense

ABSTRACT. Total asset management is one of the roots of a secure and reliable use of material resources in an enterprise. Its implementation is, however, often only partial. If a commercial software package is able to manage the maintenance activities, you probably need another one to manage the stock of spare parts, the budget planning, the purchases, the safety and environmental limitations or the qualification of the personnel. This leads to a complex constellation of software packages, each responsible for its specific part of asset management. Some aspects of asset management are then covered by multiple packages, which leads to conflicts, while others are simply not covered. The situation becomes even more complex when third-party systems are used that come with their own integrated management tools, which is the case for weapon systems used by armed forces, such as aircraft, combat vehicles or ships. For more than 20 years now, Belgian Defence (BEDEF) has been using the same single software package (ILIAS) for every aspect of asset management. From budget management to the disposal of the material, the management of the stock, the fleet and the maintenance is executed in a single system for aircraft, ships and vehicles, but also tools, clothes, medicines and even buildings. This total asset management is the centre of the life-cycle material management policy of BEDEF and makes it possible to ensure the desired level of security and reliability of the material resources. In this article, we provide an overview of the span of the ILIAS software and explain the advantages of using a single system.

12:10
Beatrice Cassottana (Singapore-ETH Centre, Singapore)
Srijith Balakrishnan (Singapore-ETH Centre, Singapore)
Nazli Yonca Aydin (Delft Univ. of Technology, Netherlands)
Giovanni Sansavini (ETH Zurich, Switzerland)
Quantifying the Relationship Between Resilience and Sustainability: An Application to a Water Distribution System and its Interdependent Systems

ABSTRACT. To fight climate change and its implications, governments have launched a series of appeals to increase resilience and sustainability. Although previous literature has shown that a positive correlation between resilience and sustainability may exist for certain system designs and recovery strategies, their costs outweigh the benefits under current market conditions. In this study, we aim at shedding light on the factors that drive up the costs related to infrastructure disruption, in order to understand under which conditions investing in resilience and sustainability results in economic savings. First, we develop a methodological framework for the simulation of a water network and its interdependent infrastructure systems under various disruption and recovery scenarios, and define three metrics to be used as proxies for resilience and sustainability. We then translate these metrics into monetary terms in order to identify the most important variables that contribute to the increase in costs due to infrastructure disruption, and conduct a sensitivity analysis to determine under which conditions a given strategy becomes economically profitable. We showcase the methodology with reference to the water distribution system and the interdependent power and transport systems of the hypothetical city of Micropolis. Under the given assumptions, the results show that the water price and the frequency of disruptions are the most important variables driving up the costs associated with infrastructure disruption. Therefore, various scenarios are provided to guide system owners and managers in the investment decision-making process.
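A hedged, purely illustrative sketch of the monetization step described above; the scenario frequencies, unserved demand, water price and investment figures are invented placeholders, not the paper's Micropolis results:

```python
# Hypothetical monetization of unserved water demand across disruption scenarios.
scenarios = [
    # (annual frequency of the scenario, unserved demand in m^3 per event)
    (0.5, 2_000),     # frequent, small disruption
    (0.1, 25_000),    # rare, large disruption
]
WATER_PRICE = 1.8                 # monetary units per m^3 (assumed)
RESILIENCE_INVESTMENT = 40_000    # annualized cost of the mitigation strategy (assumed)
REDUCTION = 0.6                   # assumed fraction of unserved demand avoided by the strategy

expected_loss = sum(freq * unserved * WATER_PRICE for freq, unserved in scenarios)
expected_savings = REDUCTION * expected_loss
print(f"expected annual disruption loss: {expected_loss:.0f}")
print(f"strategy is profitable: {expected_savings > RESILIENCE_INVESTMENT}")
```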

12:30
Hugo Rosero-Velásquez (TU München, Germany)
Juan Camilo Gómez-Zapata (Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Germany)
Daniel Straub (TU München, Germany)
Comparative assessment of models of cascading failures in power networks under seismic hazard

ABSTRACT. Risk analysis of power networks under natural hazards requires a model of the power flow following initial failures in the network caused by the hazard. The model should include cascading failures through the network, for which different models have been proposed in the literature. Past studies have compared widely used models for assessing the performance of power networks, such as topological, betweenness-based and power flow models, and found correlations among the model outcomes. However, they did not compare them for systems subjected to natural hazards, where other factors (e.g., seismic intensity and the resulting ground motions) also affect the system performance. Ultimately, the choice of the appropriate model depends on the purpose of the analysis, the type of power network (e.g., transmission vs. distribution), the amount of available information, and the computing resources. In this contribution, we investigate the effect of the cascading failure model on a seismic risk evaluation. To this end, we perform numerical investigations on the power network in the central coastal area of the Valparaíso Region, Chile. Specifically, we compute and compare loss-exceedance functions for two models, Origin-destination betweenness centrality (ODBCM) and DC linear power flow (DCLPFM), for different representative seismic scenarios. We also compare the models with and without considering the uncertainty in the ground motion field. On this basis, we formulate recommendations for the use of these models in different decision contexts.
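As a hedged illustration of the betweenness-based ingredient of such models, a small networkx sketch on a toy grid; the network and the restriction to generator-to-load pairs are invented, and the seismic fragility and power-flow parts of the paper are omitted:

```python
import networkx as nx

# Hypothetical small transmission grid: nodes are substations, edges are lines.
G = nx.Graph()
G.add_edges_from([
    ("gen1", "busA"), ("gen2", "busB"), ("busA", "busB"),
    ("busA", "loadC"), ("busB", "loadD"), ("loadC", "loadD"),
])

# Origin-destination style betweenness: how often a node lies on shortest
# generator-to-load paths. High-betweenness nodes are candidate vulnerabilities
# if earthquake-induced failures remove them from the network.
centrality = nx.betweenness_centrality_subset(
    G, sources=["gen1", "gen2"], targets=["loadC", "loadD"]
)
for node, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{node}: {score:.2f}")
```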

11:10-12:50 Session 3C: S.05: Exploring new trends in Machine Learning approaches I
Chairs:
Enrique Lopez Droguett (UCLA, United States)
Michael Beer (Leibniz University Hannover, Germany)
Location: CQ-007
11:10
Joaquín Figueroa Barraza (University of São Paulo, Brazil)
Enrique Lopez Droguett (Department of Civil and Environmental Engineering, and Garrick Institute for the Risk Sciences, UCLA, United States)
Marcelo Ramos Martins (University of São Paulo, Brazil)
Interpretable Prognostics and Health Management Through Counterfactual Generation

ABSTRACT. Interpretability is a key aspect of deep learning-based prognostics and health management. Nowadays, neural networks are able to achieve outstanding results in recognizing failure patterns within data. However, neural networks work as black boxes, as it is almost impossible to track the input value transformations that lead to the output value. This lack of interpretability hinders the widespread adoption of such models in industry, as companies are not willing to trust algorithms whose inner prediction dynamics are not understood or interpretable. In this sense, techniques that enable the interpretability of neural networks while keeping their high predictive performance are desirable for exploiting these models' capabilities in industry. In this paper, we develop a multi-task neural network that is trained to simultaneously diagnose a system's health state and create a counterfactual value for the opposite health state. A counterfactual is a minimally altered input value that generates a variation in the output's class. Counterfactuals are useful for characterizing mappings from the input values to each of the health states. They can also indicate the cause of a failure such that preventive actions can be planned and implemented. This framework is tested in a case study that uses real-world data from an Amine Treatment Plant and compared with a post-hoc framework for counterfactual generation typically used in the literature.
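A hedged sketch of a generic gradient-based post-hoc counterfactual search of the kind the paper compares against; this is not the authors' multi-task network, and the distance weight and optimizer settings are arbitrary:

```python
import torch
import torch.nn.functional as F

def counterfactual(model, x, target_class, steps=200, lr=0.05, dist_weight=0.5):
    """Search for a minimally perturbed input that the classifier assigns to
    `target_class` (e.g. the opposite health state). `model` is assumed to map
    a feature vector of sensor readings to class logits; this is a generic
    post-hoc sketch, not the paper's architecture."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        logits = model(x_cf.unsqueeze(0))
        # Push the prediction towards the target class while staying close to x.
        loss = F.cross_entropy(logits, target) + dist_weight * torch.norm(x_cf - x, p=1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return x_cf.detach()
```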

11:30
Gabriel San Martín (Department of Civil and Environmental Engineering, University of California, Los Angeles, United States)
Enrique Lopez Droguett (Department of Civil and Environmental Engineering, and Garrick Institute for the Risk Sciences, UCLA, United States)
Exploring Kernel Based Quantum Machine Learning for Prognosis and Health Management

ABSTRACT. As engineering systems become more complex and interconnected, the research community and practitioners continuously face the challenge of developing techniques capable of efficiently assessing their performance, reliability and resilience considering both internal and external factors. During the last ten years, a significant part of Reliability and Prognostics and Health Management (PHM) research and applications has focused on leveraging the developments of Machine Learning, and specifically those of Deep Learning, to tackle increasingly difficult and large problems related to complex engineering systems, such as predicting remaining useful life or inferring health states from multi-sensor monitoring data. Nevertheless, over the last two years the pace has stabilized and, as those techniques are slowly incorporated into industry practice, the research community is starting to shift its focus again in search of other areas that could allow a new jump in reliability modelling and predictive capacity. In that regard, Quantum Machine Learning presents itself as a novel alternative to be explored, given the recent advances in quantum hardware that have allowed researchers to begin developing and testing algorithms in areas such as machine learning, optimization, and simulation. In this paper, we present a structured exploration of Quantum Machine Learning applied to Prognostics and Health Management, focusing on Quantum Kernels and their application to fault diagnostics and clustering tasks. We briefly introduce the main aspects of quantum computing, followed by an exposition of the theory behind Quantum Kernels, and then present results using real-world data from the Case Western Reserve University ball bearing dataset benchmark. The main objective of the paper is to assess the current state of Quantum Machine Learning for reliability applications, to identify potential advantages and challenges, and to propose possible new paths for future research and development.
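As a hedged sketch of the kernel-method workflow the paper builds on: once a (quantum) kernel Gram matrix is available it can be plugged into a standard kernel classifier. Here a classical RBF kernel stands in for the quantum kernel, and the data are synthetic rather than the Case Western bearing signals:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
# Synthetic stand-in for bearing features: two fault classes in a 4-D feature space.
X_train = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(2, 1, (40, 4))])
y_train = np.array([0] * 40 + [1] * 40)
X_test = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(2, 1, (10, 4))])

def kernel_matrix(A, B):
    # Placeholder for a quantum kernel: in the quantum case each entry would be
    # the fidelity between the feature-map states of the two samples.
    return rbf_kernel(A, B, gamma=0.5)

clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(X_train, X_train), y_train)
pred = clf.predict(kernel_matrix(X_test, X_train))
print(pred)
```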

11:50
Marcin Hinz (University of Wuppertal, Germany)
Jannis Pietruschka (University of Wuppertal, Germany)
Stefan Bracke (University of Wuppertal, Germany)
A comprehensive parameter study regarding recurrent neural networks based monitoring of grinded surfaces
PRESENTER: Marcin Hinz

ABSTRACT. The optical perception of high-precision, fine grinded surfaces is an important quality feature for these products. The manufacturing process is rather complex and depends on a variety of process parameters (e.g. feed rate, cutting speed) which have a direct impact on the surface topography. Therefore, the durable quality of a product can be improved by an optimized configuration of the process parameters. By varying some process parameters of the high-precision fine grinding process, a variety of cutlery samples with different surface topographies are manufactured. Surface topographies and colorings of grinded surfaces are measured by the use of classical methods (roughness measuring device, gloss measuring device, spectrophotometer). To improve the conventional methods of condition monitoring, a new image-processing analysis approach is needed to obtain a faster and more cost-effective analysis of produced surfaces. For this reason, different optical techniques based on image analysis have been developed over the past years. Fine grinded surface images have been generated under constant boundary conditions in a test rig built up in a lab. The gathered image material, in combination with the classically measured surface topography values, is used as the training data for machine learning analyses. Within this study, the image of each grinded surface is analyzed with regard to its measured arithmetic average roughness value (Ra) by the use of Recurrent Neural Networks (in this case LSTMs). LSTMs are a type of machine learning algorithm that is particularly suited to any kind of analysis based on time series. For the determination of an appropriate model, a comprehensive parameter study is performed. The approach of optimizing the algorithm results and identifying a reliable and reproducible LSTM model which operates well independently of the choice of the randomly sampled training data is presented in this study.
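A hedged, minimal sketch of an LSTM regressor for Ra prediction; treating the image rows as time steps, the layer sizes and the dummy data are assumptions made for illustration and may differ from the study's setup:

```python
import torch
import torch.nn as nn

class RaRegressor(nn.Module):
    """Minimal LSTM regressor: treats each image row as one time step and
    predicts a single roughness value Ra. Layer sizes are illustrative only."""
    def __init__(self, row_width=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=row_width, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, n_rows, row_width)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)

model = RaRegressor()
images = torch.rand(8, 96, 128)      # 8 grayscale surface images, 96x128 (dummy data)
ra_true = torch.rand(8)              # measured Ra values (dummy data)
loss = nn.functional.mse_loss(model(images), ra_true)
loss.backward()                       # one illustrative training step (optimizer omitted)
```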

12:10
Christian Agrell (DNV, Norway)
Erik Stensrud (DNV, Norway)
Safety of Autonomous Ships - Uncertainty Quantification of Deep Neural Networks for computer vision
PRESENTER: Christian Agrell

ABSTRACT. Deep Neural Networks (DNNs) are planned for use in autonomous ships, replacing the human lookout on the bridge. Lookout is a safety-critical function. Car accidents have exposed DNNs' lack of robustness to irregular events such as unusual image objects and scenes. A misclassification with a high score, which we term a high-confidence mistake, is of particular concern for autonomous ships, where we foresee a remote, land-based human operator in the loop who can intervene if warned. A high-confidence mistake will not generate a warning to the human operator. Ideally, if the classification were instead supplemented with a proper uncertainty estimate, the risks related to these mistakes could be controlled.

Recent work on Bayesian DNNs has shown that it is possible to obtain good approximations of model uncertainty. However, for larger models, the computational cost is a major limitation. In this study, we consider a version of Monte Carlo dropout which can be applied to existing (trained) models without much modification, while also being computationally inexpensive with respect to the full Bayesian alternative. We demonstrate the kind of uncertainty quantification that can be obtained with this approach, but more importantly, we discuss how to judge whether a probabilistic classifier is fit for purpose. Our model is trained using images of marine vessels of four different vessel categories. To evaluate the model, we test against images of varying quality and out-of-distribution examples. Our results suggest that valuable uncertainty quantification may be provided at a reasonably low computational cost. However, further analysis of how probabilistic predictions affect actions and consequences in the real world is essential to assess the safety level of a DNN in this function.
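As a hedged illustration of the Monte Carlo dropout recipe described above (the classifier architecture, image size and number of samples are placeholders, not the DNV model):

```python
import torch
import torch.nn as nn

def enable_dropout(model):
    """Keep dropout layers stochastic at inference time (Monte Carlo dropout)."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_predict(model, image, n_samples=30):
    """Run repeated stochastic forward passes and return the mean class
    probabilities and their spread as an uncertainty indicator."""
    model.eval()
    enable_dropout(model)
    probs = torch.stack([
        torch.softmax(model(image.unsqueeze(0)), dim=-1).squeeze(0)
        for _ in range(n_samples)
    ])
    return probs.mean(dim=0), probs.std(dim=0)

# Placeholder 4-class vessel classifier with dropout (architecture is illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(256, 4))
mean_p, std_p = mc_dropout_predict(model, torch.rand(3, 64, 64))
print(mean_p, std_p)
```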

12:30
Patrick Jonk (Royal Netherlands Aerospace Centre, Netherlands)
Vincent de Vries (Royal Netherlands Aerospace Centre, Netherlands)
Rombout Wever (Royal Netherlands Aerospace Centre, Netherlands)
Georgios Sidiropoulos (University of Amsterdam, Netherlands)
Evangelos Kanoulas (University of Amsterdam, Netherlands)
Natural Language Processing of Aviation Occurrence Reports for Safety Management
PRESENTER: Patrick Jonk

ABSTRACT. Occurrence reporting is a commonly used method in safety management systems to obtain insight into the prevalence of hazards and accident scenarios. In support of safety data analysis, reports are often categorized according to a taxonomy. However, the processing of the reports can require significant effort from safety analysts, and a common problem is inter-rater variability in labeling processes. Also, in some cases, reports are not processed according to a taxonomy, or the taxonomy does not fully cover the contents of the documents. This paper explores various Natural Language Processing (NLP) methods to support the analysis of aviation safety occurrence reports. In particular, the problems studied are the automatic labeling of reports using a classification model, extracting the latent topics in a collection of texts using a topic model, and the automatic generation of probable cause texts. Experimental results showed that (i) under the right conditions the labeling of occurrence reports can be effectively automated with a transformer-based classifier, (ii) topic modeling can be useful for finding the topics present in a collection of reports, and (iii) using a summarization model is a promising direction for generating probable cause texts.
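A hedged sketch of automatic labeling with an off-the-shelf transformer; this uses a generic zero-shot classifier and invented taxonomy labels, whereas the paper fine-tunes its own classifier on labelled occurrence reports:

```python
from transformers import pipeline

# Generic zero-shot labelling of an occurrence report against a small set of
# taxonomy categories. The model and the categories below are illustrative.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

report = ("During final approach the crew received a TCAS traffic advisory "
          "and performed a go-around; separation was restored.")
labels = ["loss of separation", "runway incursion", "bird strike", "technical failure"]

result = classifier(report, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```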

11:10-12:50 Session 3D: S.20: Reliability and Resilience of Interdependent Cyber-Physical Systems
Chair:
Francesco Di Maio (Politecnico di Milano, Italy)
Location: CQ-106
11:10
Francesco Di Maio (Politecnico di Milano, Italy)
Alessandro Stincardini (Politecnico di Milano, Italy)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
IDENTIFICATION OF VULNERABILITIES IN INTEGRATED POWER-TELECOMMUNICATION INFRASTRUCTURES: A SIMULATION-BASED APPROACH

ABSTRACT. In the last decade, power grids have evolved into Integrated Power-Telecommunication (IP&TLC) infrastructures to enhance supply efficiency and response-to-demand speed. However, IP&TLC infrastructures were not originally designed as such, and vulnerabilities may arise from the strong interdependencies between the power grid and the TLC network. In this work, we propose a novel simulation-based approach for the identification of vulnerabilities of an IP&TLC infrastructure, and for the evaluation of the related potential impacts on customers of different economic sectors relying on the service from such an infrastructure. The approach is exemplified on a typical power distribution grid (i.e., the IEEE14 bus test grid), integrated with a TLC network that comprises Phasor Measurement Units (PMUs) for collecting data from the power grid, Phasor Data Concentrators (PDCs) for locally gathering the data, and a Control Centre (CC) supervising the power grid dispatch control. We show that the simulation-based approach can identify the vulnerabilities of the IP&TLC infrastructure with reference to traditional metrics used for the performance assessment of power grids (such as Energy Not Supplied (ENS), Cumulative Power Mismatch (CPM), and Dispatch-Demand Ratio (DDR)) and TLC networks (such as Cumulative Transmission Delay (CTD), Cumulative Packet Loss (CPL), and System Average Interruption Frequency Index (SAIFI)). Finally, we summarize the insights provided by these metrics by calculating the performance of the IP&TLC infrastructure in terms of the impact that IP&TLC infrastructure unavailability has on customer service availability, using the Sector Loss (SL) metric, which considers a selection of Customer Damage Functions (CDFs).
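As a hedged illustration of one of the power-grid metrics named above, a minimal Energy Not Supplied (ENS) computation on invented time-series data:

```python
# Hypothetical time series from one simulated contingency on the test grid:
# demanded vs. actually supplied power (MW) at hourly steps.
demand   = [12.0, 12.5, 13.0, 13.0, 12.0]
supplied = [12.0, 10.0,  9.5, 11.0, 12.0]
dt_hours = 1.0

# Energy Not Supplied: unserved power integrated over the outage, in MWh.
ens = sum(max(d - s, 0.0) * dt_hours for d, s in zip(demand, supplied))
print(f"ENS = {ens:.1f} MWh")
```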

11:30
Juan Pablo Futalef (Politecnico di Milano, Italy)
Francesco Di Maio (Politecnico di Milano, Italy)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Grey-Box Models for Cyber-Physical Systems Reliability, Safety and Resilience Assessment

ABSTRACT. Cyber-Physical Systems (CPSs) integrate physical components with cybernetic elements. Modeling the complex interactions that arise among these is necessary to realistically represent the physical processes, the interconnections among physical components and cybernetic elements, and the data transmission within the cyber network. The computational effort needed for the solution of such a model challenges CPS reliability, safety and resilience assessment, which requires simulating a large number of scenarios. Grey-Box Models (GBMs), which combine physics-based and data-driven models, offer a way to tackle the problem while keeping model accuracy and preserving interpretability. In this work, we elaborate on a hierarchy-based architecture from the literature to develop a systematic methodology in support of the development of GBMs for CPSs. The methodology is exemplified by developing the GBM of an Integrated Power-Telecommunication (IP&TLC) CPS infrastructure.
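A hedged toy sketch of the grey-box idea, combining an assumed physics-based model with a data-driven residual; the process, data and model choices are invented and unrelated to the IP&TLC case:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy grey-box idea: a simple physics-based model captures the bulk behaviour,
# and a data-driven model learns the residual it cannot explain. The "system"
# below is a made-up first-order process.
def physics_model(u):
    return 0.8 * u                      # assumed known first-order gain

rng = np.random.default_rng(0)
u = rng.uniform(0, 10, size=(200, 1))   # inputs (e.g. load setpoints)
y = 0.8 * u[:, 0] + 0.05 * u[:, 0] ** 2 + rng.normal(0, 0.1, 200)  # true response

residual_model = Ridge(alpha=1.0).fit(u, y - physics_model(u[:, 0]))

def grey_box_predict(u_new):
    u_new = np.atleast_2d(u_new).reshape(-1, 1)
    return physics_model(u_new[:, 0]) + residual_model.predict(u_new)

print(grey_box_predict([2.0, 5.0, 9.0]))
```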

11:50
Sandra König (Austrian Institute of Technology, Austria)
Lorcan Connolly (Research Driven Solutions Ltd., Ireland)
Stefan Schauer (Austrian Institute of Technology, Austria)
Alan O'Connor (Research Driven Solutions Ltd., Ireland)
Paraic Carroll (University College Dublin, Ireland)
Daniel McCrum (University College Dublin, Ireland)
Combining Cascading Effects Simulation and Resilience Management for Protecting CIs from Cyber-Physical Threats
PRESENTER: Sandra König

ABSTRACT. Critical Infrastructures (CIs), located at the heart of modern society, require special protection, as the services they provide are essential to the functioning of our daily life. Major efforts have been made over the last decade to improve the protection of CIs against physical and/or cyber threats; however, they are still very vulnerable due to increasing digitalization and the resulting interdependencies among them. Thus, recent approaches to CI protection focus on making the CIs more resilient instead of only trying to reduce their risks. While there are several valuable approaches covering CI resilience management, most of them still treat CIs as isolated objects and do not take into account their interrelations with other CIs. In this paper, we discuss an approach that combines a CI interdependency model and cascading effects simulation with a structured resilience management framework (RMF). The dependency model describes how the functionality of a component depends on that of related components and simulates the changes in the functionality level after an incident. The changes are described through a probabilistic Mealy automaton model. The RMF relies on the definition of resilience indicators to capture the decrease in a CI's service level after an incident takes place. The two frameworks take a similar approach towards understanding and protecting one or more CIs and can therefore benefit from one another. In this paper, we demonstrate how both frameworks are combined and illustrate it through an example. This example is presented in the context of a Serious Game approach to CI protection within a living lab demonstrator in the H2020-funded PRECINCT project (www.precinct.info). The RMF defines the CI as a system of various types of infrastructure (transport, power, telecoms, etc.). Resilience indicators quantify the variation in service due to cyber-physical hazards of known intensity. It is essential that the interdependencies between the infrastructure types are considered when examining service breakdown. The graph-based interdependency model of the CI system allows the user to quickly examine the impact of changes to the resilience indicators in terms of service breakdown following a hazard, considering these interdependencies. Additionally, the interdependency model of a CI may capture information on the resilience, such as the resilience indicators. The simulation model then mimics the effects on dependent CIs, including how the resilience evolves. The combined model allows us to capture the functionality loss of an entire CI network after an incident has taken place, as well as its recovery, such that the resilience of the entire CI network can be assessed (instead of just a single CI), which later provides insights for identifying holistic countermeasures for improving the resilience of all CIs. On the other hand, the simulation model can be refined based on available knowledge of the resilience. In particular, the reaction of a component to an incident (i.e., the probabilities of the Mealy automaton) may depend on the resilience level.
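As a hedged illustration of the probabilistic Mealy automaton idea, a toy transition table for a single CI component; the states, inputs, probabilities and service levels are invented for illustration:

```python
import random

# Toy probabilistic Mealy automaton for one CI component: given the current
# functionality state and an input (the state of a provider it depends on),
# it moves to a next state with some probability and emits its service level.
TRANSITIONS = {
    # (state, input): [(probability, next_state, output_service_level), ...]
    ("ok", "provider_down"):       [(0.7, "degraded", 0.5), (0.3, "ok", 1.0)],
    ("ok", "provider_ok"):         [(1.0, "ok", 1.0)],
    ("degraded", "provider_down"): [(0.6, "failed", 0.0), (0.4, "degraded", 0.5)],
    ("degraded", "provider_ok"):   [(0.8, "ok", 1.0), (0.2, "degraded", 0.5)],
}

def step(state, event):
    """Sample one transition of the automaton and return (next_state, output)."""
    r, cumulative = random.random(), 0.0
    for prob, nxt, output in TRANSITIONS[(state, event)]:
        cumulative += prob
        if r <= cumulative:
            return nxt, output
    return state, 1.0  # unreachable if the probabilities sum to 1

state = "ok"
for event in ["provider_down", "provider_down", "provider_ok"]:
    state, service = step(state, event)
    print(event, "->", state, f"service={service}")
```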

12:10
Clara Maathuis (Open University, Netherlands)
Sabarathinam Chockalingam (Institute for Energy Technology, Norway)
Victim versus Offender: Behaviour Modelling during Covid-19 Pandemic Cyber Attacks
PRESENTER: Clara Maathuis

ABSTRACT. The ongoing Covid-19 pandemic transformed the way we, as a society, function at both the global and individual level. This not only had, but continues to have, a direct impact on important aspects of life such as education, healthcare, and work (Vargo et al. 2021). This was possible because, in a limited amount of time, a direct projection and/or switch to digital technologies was made both for systems and processes that already relied on digital technologies and for those that did not. In other words, a mindset change and a forced digitalization took place, directly opening a door for taking advantage of existing risks and for exploiting both known and unknown vulnerabilities in organizations' and users' systems across different domains and industries. Consequently, this makes us realize that we are much more vulnerable and exposed to a more diverse palette of risks than we thought. This is directly evident from a significant increase in the number and type of cyber-attacks carried out all over the world (Lallie et al. 2021), aiming at, for instance, sending phishing e-mails to employees working from home, gathering relevant production-system information for further exploitation, manipulating critical infrastructure functionalities, stealing cryptocurrencies, influencing public opinion in relation to the origin and impact of the Coronavirus and vaccination campaigns, and political polarization concerning diverse conflicts and elections. Accordingly, both the security research and practitioner communities have started to identify attacks that are prevalent in this Covid-19 pandemic situation and have proposed relevant recommendations to both cyber-security and non-cyber-security personnel for effectively dealing with such attacks (Pritom et al. 2020). However, a structured and formalized approach based on lessons learned from the main cyber-attacks conducted since the beginning of the pandemic is lacking. The aim of this research is therefore to build and propose a novel attack-defence tree to tackle this gap, based on an extensive literature review and cyber-security incident analysis following a Design Science Research methodological approach. The proposed model captures the relation between the attack event, exploited vulnerabilities, impact assessment, and specific applicable countermeasures, and aims to further support security and policy decision makers when analysing incidents, proposing dedicated countermeasures, and gathering lessons learned that can further support their overall decision-making processes and activities.

References: Lallie, H. S., Shepherd, L. A., Nurse, J. R., Erola, A., Epiphaniou, G., Maple, C., & Bellekens, X. (2021). Cyber security in the age of covid-19: A timeline and analysis of cyber-crime and cyber-attacks during the pandemic. Computers & Security, 105, 102248.

Pritom, M. M. A., Schweitzer, K. M., Bateman, R. M., Xu, M., & Xu, S. (2020, November). Characterizing the Landscape of COVID-19 Themed Cyberattacks and Defenses. In 2020 IEEE International Conference on Intelligence and Security Informatics (ISI) (pp. 1-6). IEEE.

Vargo, D., Zhu, L., Benwell, B., & Yan, Z. (2021). Digital technology use during COVID‐19 pandemic: A rapid review. Human Behavior and Emerging Technologies, 3(1), 13-24.
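As a hedged illustration of how an attack-defence tree can be evaluated quantitatively (the tree structure, probabilities and defence effects below are invented and are not the model proposed in the paper):

```python
# Toy attack-defence tree evaluation: each leaf is an attack step with an assumed
# success probability; a defence reduces that probability; AND/OR nodes combine
# children assuming independence.
def evaluate(node):
    if "prob" in node:                        # leaf attack step
        return node["prob"] * (1.0 - node.get("defence_effect", 0.0))
    child_ps = [evaluate(c) for c in node["children"]]
    if node["gate"] == "OR":                  # attacker needs any child to succeed
        p_fail = 1.0
        for p in child_ps:
            p_fail *= (1.0 - p)
        return 1.0 - p_fail
    if node["gate"] == "AND":                 # attacker needs all children
        p = 1.0
        for c in child_ps:
            p *= c
        return p
    raise ValueError(node["gate"])

phishing_campaign = {
    "gate": "AND",
    "children": [
        {"prob": 0.6},                                               # employee opens mail
        {"prob": 0.5, "defence_effect": 0.7},                        # credential theft vs. MFA
        {"gate": "OR", "children": [{"prob": 0.3}, {"prob": 0.2}]},  # lateral movement
    ],
}
print(f"overall attack success probability: {evaluate(phishing_campaign):.2f}")
```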

12:30
Rialda Spahic (Norwegian University of Science and Technology, Norway)
Vidar Hepsø (Norwegian University of Science and Technology, Norway)
Mary Ann Lundteigen (Norwegian University of Science and Technology, Norway)
Using Risk Analysis for Anomaly Detection for Enhanced Reliability of Unmanned Autonomous Systems
PRESENTER: Rialda Spahic

ABSTRACT. Technological breakthroughs in autonomous systems are reshaping how the offshore industry manages risk. Unmanned Autonomous Systems (UAS) are intended to improve safety by removing the need for operators and vessels at remote and possibly dangerous locations, while residing at the seabed, monitoring and inspecting the assets and the environment [1]. The UAS are expected to have a permanent role in risk reduction and protective measures. These systems, such as underwater intervention drones, collect data through sensors and video analysis of graphics to identify early warning signs of a failing asset or a potentially hazardous event and to alarm the operators onshore. Identification of unwanted events relies on anomaly detection methods responsible for finding anomalous occurrences that do not conform to the trend of the vast collections of data [2]. Anomaly detection can detect degradations or damages of barriers; for example, the technical integrity of a pipeline is a critical barrier to prevent gas releases to the sea. Fortunately, the probability of a hazardous event with a high consequence is low. However, this results in a significant data imbalance, with little to no evidence of hazardous events in the training datasets for anomaly detection. Consequently, the anomaly detection method's ability to recognize early warnings of an undesired event is reduced, potentially signaling the operators with false alarms. Moreover, the anomalies can end up being sacrificed for efficiency and ignored as tolerable collateral damage [3]. Unfortunately, unlabeled data is prevalent in research and industry, making the training of anomaly detection methods significantly more challenging. Unlabeled data requires the use of unsupervised methods that are often complex and black-boxed, making them difficult to assess, comprehend, and incorporate with risk analysis. Previous research [4-8] has demonstrated that depending on unsupervised methods for prediction outcomes during operations might be misleading, as the data is frequently imbalanced or biased. Numerous ways have been used to improve the data balance, including defining decision boundaries and extrapolating data using simulations [4][5]. While these approaches are promising, establishing decision boundaries is subjective and involves extensive parameter tuning [5]. Similarly, extrapolation via simulation of hazardous events is resource-intensive and frequently does not accurately reflect the physical world [6][7][8]. More recently, we have seen an increasing need for multidisciplinary methods to address the rapidly growing challenges of autonomy-enabling technology. In comparative research, risk analysis has been utilized to harvest anomaly detection results and to use detected anomalous observations as input data for hazard identification and subsequent risk analysis processes. This study introduces a novel approach to anomaly detection by examining risk analysis as a tool for providing a semi-supervised approach to anomaly detection, thereby lowering the probability of false alarms or missed signals caused by data imbalance. Anomaly detection and risk analysis have numerous common features, apart from recognizing high-impact, low-probability events. While risk analysis identifies hazards and hazardous events, anomaly detection is concerned with curating data and identifying anomalies. Barriers play a critical role in risk analysis as they serve as safeguards, encircling and containing the hazards [9].
Similarly, in anomaly detection, decision boundaries define regions of space that encompass the distinctions between normal and anomalous occurrences in data [4]. The three main categories of anomalies, contextual, collective, and point anomalies, rely on a sequence of events, the frequency of their occurrence, and likelihood, sharing the risk definition where Risk = Likelihood * Frequency [10]. For autonomous alarm management, the combination of anomaly detection and risk analysis is interesting. Alarms are used to warn operators of a malfunctioning piece of equipment, a process deviation, or an unanticipated state requiring operator involvement [11]. By integrating risk analysis and anomaly detection, we can address trustworthiness, transparency, the absence of context in complex algorithms, and biased, unreliable data, conforming to the essential EU guidelines for trustworthy intelligent systems [12].

References: [1] L. Erhan, M. Ndubuaku, M. Di Mauro, W. Song, M. Chen, G. Fortino, O. Bagdasar, and A. Liotta. Smart anomaly detection in sensor systems: A multi-perspective review. Inf. Fusion, 67(September 2020):64–79, 2021. [2] Michael A. Hayes and Miriam A.M. Capretz. Contextual anomaly detection in big sensor data. In Proc. - 2014 IEEE Int. Congr. Big Data, BigData Congr. 2014, pages 64–71. Institute of Electrical and Electronics Engineers Inc., sep 2014. [3] Karima Makhlouf, Sami Zhioua, and Catuscia Palamidessi. On the applicability of ML fairness notions. arXiv, pages 1–32, 2020. [4] Peng Li, Oliver Niggemann, and Barbara Hammer. On the identification of decision boundaries for anomaly detection in CPPS. In Proc. IEEE Int. Conf. Ind. Technol., volume 2019-Feb, pages 1311–1316. Institute of Electrical and Electronics Engineers Inc., Feb 2019. [5] Dong Hoon Shin, Roy C. Park, and Kyungyong Chung. Decision Boundary-Based Anomaly Detection Model Using Improved AnoGAN from ECG Data. In IEEE Access, volume 8, pages 108664–108674. Institute of Electrical and Electronics Engineers Inc., 2020. [6] Simen Eldevik and Frank Børre Pedersen. AI + safety - DNV. Technical report, 2018. [7] Tianci Zhang, Jinglong Chen, Fudong Li, Kaiyu Zhang, Haixin Lv, Shuilong He, and Enyong Xu. Intelligent fault diagnosis of machines with small & imbalanced data: A state-of-the-art review and possible extensions. ISA Trans., 119:152–171, jan 2022. [8] Rui Zhu, Yiwen Guo, and Jing Hao Xue. Adjusting the imbalance ratio by the dimensionality of imbalanced data. Pattern Recognit. Lett., 133:217–223, may 2020. [9] William G. Johnson. MORT safety assurance systems. M. Dekker, New York, 1980. [10] E. Zio, “The future of risk assessment,” Reliab. Eng. Syst. Saf., vol. 177, no. March, pp. 176–190, 2018. [11] Eric William Scharpf, Harold W. Thomas, and R. Stauffer. Practical SIL Target Selection: Risk Analysis Per the IEC 61511 Safety Lifecycle. exida.com LLC, Sellersville, Pennsylvania, 2 edition, 2012. [12] EU guidelines on ethics in artificial intelligence: Context and implementation | Think Tank | European Parliament. (n.d.). Retrieved February 14, 2022, from https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2019)640163
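As a hedged illustration of combining an unsupervised anomaly detector with a simple risk-based weighting in the spirit of the abstract above (the data, the detector choice and the criticality weight are invented placeholders):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic subsea sensor features (e.g. vibration, pressure); a few injected outliers.
normal = rng.normal(0.0, 1.0, size=(500, 3))
outliers = rng.normal(5.0, 1.0, size=(5, 3))
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
anomaly_score = -detector.score_samples(X)   # higher = more anomalous

# Illustrative risk weighting: an anomaly observed on a barrier-critical sensor
# group is prioritized higher than one on a benign channel.
barrier_criticality = 0.9                    # assumed weight for this sensor group
risk_rank = anomaly_score * barrier_criticality
print("top-3 most suspicious observations:", np.argsort(risk_rank)[-3:])
```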

11:10-12:50 Session 3E: S.15: Reliability, Durability, Sustainability of Consumer Electronic Devices
Chair:
Maxim Nikiforov (Amazon Lab126, United States)
Location: LG-20
11:10
Philippe de Cuetos (AFNUM, France)
Feedback from the “French experiment” on the repairability index

ABSTRACT. France was the first country in Europe to implement a nationwide repairability index (or repair score). In line with general environmental objectives to extend the lifespan of manufactured products, the index is designed to nudge users into buying new products that can be repaired easily. It also intends to motivate producers to put more such products on the market. The repairability index was introduced on the French market on January 1, 2021. It was originally applied to five categories of electronic devices (smartphones, laptops, TV monitors, front-loading washing machines and electric lawnmowers) and is now progressively being extended to other product categories. Although the composition of the index has already been subject to various criticisms, it acts as a basis for the definition of the upcoming European repairability index. It will also be used as part of a new “French experiment” with a broader scope: the durability index, which France intends to introduce on the market in 2024.

11:30
Kok Yiang (Amazon, United States)
Ryan Bradley (Amazon, United States)
Lin Shi (Amazon, United States)
A Methodology for Correlating Annualized Replacement Rate (ARR) Reduction to Sustainability Benefits
PRESENTER: Ryan Bradley

ABSTRACT. Reducing the Annualized Replacement Rate (ARR) of a product brings a two-fold benefit to its sustainability impact. Firstly, it reduces the warranty stockpile and therefore lowers the carbon footprint required to fulfill warranty replacements. Secondly, it extends the lifetime of the product, which reduces the overall carbon footprint in every year of use. This paper discusses a methodology to quantify the amount of carbon emission (kgCO2e) that is mitigated when the ARR of a consumer product is reduced as a result of durability improvements. This methodology is generalizable and can be applied to all consumer products in any product category.
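A hedged numerical illustration of the kind of calculation such a methodology involves; every figure below is a made-up placeholder, not data from the paper:

```python
# Illustrative arithmetic only; all figures are hypothetical placeholders.
FLEET_SIZE = 1_000_000        # devices in the field
ARR_BEFORE = 0.030            # 3.0 % annualized replacement rate
ARR_AFTER = 0.024             # 2.4 % after the durability improvement
FOOTPRINT_PER_UNIT = 45.0     # kgCO2e to manufacture and ship one replacement unit

avoided_replacements = FLEET_SIZE * (ARR_BEFORE - ARR_AFTER)
avoided_emissions = avoided_replacements * FOOTPRINT_PER_UNIT
print(f"avoided replacements per year: {avoided_replacements:,.0f}")
print(f"avoided emissions per year:    {avoided_emissions:,.0f} kgCO2e")
```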

11:50
Maxim P. Nikiforov (Amazon Lab126, United States)
Geoffrey Chu (Amazon Lab126, China)
Bill Liu (Amazon Lab126, United States)
Paresh Mukhedkar (Amazon Lab126, United States)
Kok Yiang (Amazon Lab126, United States)
Neda Shafiei (University of Maryland, College Park, MD, U.S., United States)
Mohammad Modarres (University of Maryland, College Park, MD, U.S., United States)
Guneet Sethi (Amazon Lab126, United States)
Aaron Krive (Amazon Lab126, United States)
Understanding the Reliability of Portable Consumer Electronics through Customer Surveys (user stress, device strength, and field failure rate).

ABSTRACT. Millions of portable consumer electronics, such as tablets, e-readers, laptops, and smartphones, are shipped every year. Different device models have different field reliability. In this paper, we present a methodology for estimating the field failure rate using customer survey data and validate it on Amazon tablet and e-reader product lines for failures related to mechanical impact. The paper provides an overview of the entire process: how to use a customer survey to understand user stress (e.g. drop frequency, drop height, drop surface); project the multi-dimensional user stress onto a 1-D frequency axis using a drop impact model internally developed with Fire HD10 tablets; use the customer survey to understand failure modes and customer actions after failure occurrences; utilize stress-strength analysis to estimate the field failure rate; and correlate survey-based failure rates with internal data on field failure rates to validate the developed stress-strength model.
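As a hedged sketch of the stress-strength step, a Monte Carlo estimate of the per-drop failure probability and the resulting annual rate; the distributions, parameters and drop frequency are invented, not the survey or lab data used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Hypothetical distributions: "stress" is the per-drop severity a device sees
# (from survey-derived drop heights and surfaces), "strength" is the severity
# the design can withstand (from lab drop testing). Parameters are made up.
stress = rng.lognormal(mean=0.0, sigma=0.5, size=N)
strength = rng.lognormal(mean=0.8, sigma=0.3, size=N)

p_fail_per_drop = np.mean(stress > strength)         # stress-strength interference
drops_per_year = 4.0                                  # survey-based drop frequency (assumed)
annual_failure_rate = 1.0 - (1.0 - p_fail_per_drop) ** drops_per_year
print(f"P(failure per drop) = {p_fail_per_drop:.3%}")
print(f"annual failure rate = {annual_failure_rate:.2%}")
```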

12:10
Subhankar Dutta (National Institute Of Technology, Rourkela, India)
Suchandan Kayal (National Institute Of Technology, Rourkela, India)
Reliability analysis of K-out-of-N system for Weibull components based on generalized progressive hybrid censored data
PRESENTER: Subhankar Dutta

ABSTRACT. In this paper, we investigate the reliability of a K-out-of-N system with components following the Weibull distribution, based on generalized progressive hybrid censored data. We obtain the maximum likelihood estimates (MLEs) of the unknown parameters and of the system reliability function. Using the asymptotic normality of the MLEs, the corresponding asymptotic confidence intervals are constructed. Furthermore, Bayes estimates are derived under the squared error loss function with an informative prior using the Markov Chain Monte Carlo (MCMC) technique, and highest posterior density (HPD) credible intervals are obtained. A Monte Carlo simulation study is carried out to compare the performance of the established estimates. Finally, a real data set is considered for illustrative purposes.
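
For readers unfamiliar with the system model, the snippet below evaluates the point reliability of a K-out-of-N:G system with independent, identical Weibull components; the parameter values are arbitrary and the paper's estimation machinery (MLE, MCMC, HPD intervals) is not reproduced.

```python
# Sketch of the point reliability of a K-out-of-N:G system with i.i.d. Weibull
# components; parameter values are illustrative, not estimates from the paper.
from math import comb, exp

def weibull_reliability(t: float, beta: float, eta: float) -> float:
    """Component survival probability R(t) = exp(-(t/eta)**beta)."""
    return exp(-((t / eta) ** beta))

def k_out_of_n_reliability(k: int, n: int, r: float) -> float:
    """System works if at least k of n identical components survive."""
    return sum(comb(n, i) * r**i * (1.0 - r)**(n - i) for i in range(k, n + 1))

r = weibull_reliability(t=500.0, beta=1.8, eta=1200.0)   # assumed parameters
print(k_out_of_n_reliability(k=2, n=4, r=r))             # e.g., a 2-out-of-4 system
```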

12:30
Boris Kaganovich (VK, Russia)
Alexander Malyshev (VK, Russia)
Ivan Vozvakhov (VK, Russia)
Methods of statistical analysis as a tool for VK Capsula speakers sustainable manufacturing process
PRESENTER: Boris Kaganovich

ABSTRACT. The manufacturing process may be influenced by a variety of factors. The VK team uses statistical analysis methods to account for the reciprocal interactions of these factors. This allows us to react to deviations in the manufacturing process and make the necessary changes before the first defective products appear. In this article, we describe an approach that uses ATE data to build a sustainable manufacturing process. This approach enables us to switch from 100% inspection to AQL sampling while maintaining the FFR and the overall high quality of the final product. During the global semiconductor shortage, we again became convinced of the approach's efficiency when changing the entire hardware platform on the fly. We believe that our experience may be helpful in establishing a universal QA standard for consumer electronics.
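
The abstract does not name the specific statistical tools used; as one hedged illustration of reacting to process drift before defects appear, the sketch below applies a plain Shewhart X-bar control chart to assumed ATE measurements.

```python
# Hedged illustration only: one common way to flag manufacturing-process drift
# from automated test equipment (ATE) data is a Shewhart control chart.
# The abstract does not specify the authors' exact methods.
import numpy as np

def control_limits(baseline_samples: np.ndarray, sigma_mult: float = 3.0):
    """Centre line and control limits from in-control baseline measurements."""
    centre = baseline_samples.mean()
    sigma = baseline_samples.std(ddof=1)
    return centre - sigma_mult * sigma, centre, centre + sigma_mult * sigma

rng = np.random.default_rng(1)
baseline = rng.normal(loc=86.0, scale=0.5, size=500)   # assumed ATE metric
lcl, cl, ucl = control_limits(baseline)

new_batch = rng.normal(loc=86.4, scale=0.5, size=50)   # process drifting upwards
out_of_control = np.flatnonzero((new_batch < lcl) | (new_batch > ucl))
print(f"Limits: [{lcl:.2f}, {ucl:.2f}] -> {out_of_control.size} points out of control")
```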

11:10-12:50 Session 3F: S31: Advancing Human Factors Integration in Aviation and Maritime Domains: the SAFEMODE Project
Chair:
Barry Kirwan (Eurocontrol, France)
Location: LG-21
11:10
Diana Paola Moreno Alarcon (ENAC, France)
Fanni Kling (HungaroControl, Hungary)
Luca Save (Deep Blue, Italy)
Barry Kirwan (EUROCONTROL, France)
How well do Human Factors tools really support the design process?
PRESENTER: Barry Kirwan

ABSTRACT. In 2013, as part of the European project OPTICS, seventy Human Factors professionals gathered in Brussels to consider the strategic needs of Human Factors in the aviation domain. One of the top three issues raised was the lack of application of Human Factors early enough in design processes. This was seen to lead to operational systems where mistakes could happen more easily, and be more difficult to correct and mitigate, than if Human Factors had been applied at an earlier design stage.

Six years later, the European SAFEMODE project was launched. One of its principal aims is to enhance the integration of Human Factors into the design of aviation and maritime systems, via a ‘designer-friendly’ Human Factors Risk-Informed Design process called HURID. Central to the HURID process is the development of a Human Factors Toolkit for designers, since not all design teams can count Human Factors professionals in their ranks.

The Human Factors Toolkit contains 25 methods, including 19 techniques, 3 models and 3 processes. The techniques are further divided into seven functions that can support the design process, from task analysis, prototyping and error identification, through to real-time simulation, Human Reliability Assessment and preparing for system deployment and operation via consideration of fatigue risk management and safety culture. Some of the Human Factors techniques can be applied very early in the design life cycle, others not, and it was an objective in SAFEMODE to see how easily designers could use them, and how useful they were found to be by designers in realistic design settings.

Four aviation design use cases, two of which were at a very early concept design stage, were studied. Each independent design team selected and applied techniques from the Toolkit they deemed appropriate and executable, given the resources available in their design team, over a two-year period. The results showed firstly that not all techniques were selected, suggesting that certain techniques were seen as superfluous to designers. Secondly, in one specific case related to Human Factors standards and guidance for designers, something was missing, which led to an action to develop new guidance on the use of such standards. Third, certain techniques worked well even at very early design stages, to the surprise of some of the design teams. Fourth, several teams brought in other techniques not originally in the Toolkit (e.g. from the Usability domain), which they found useful, and these are now being added to the Toolkit. Lastly, a common ‘workflow’ was seen in the application of the techniques, across all four design use cases, and this workflow matches the HURID process.

This study of Human Factors tools used by design teams in early design stage projects has shown that Human Factors can indeed enhance design early on, and that many of the techniques are relatively ‘accessible’ to designers, though some require support from Human Factors professionals. The SAFEMODE Human Factors Toolkit is now being refined, based on the results and insights from these use cases, to deliver better guidance and support to design teams. This will lead to enhanced Human Factors integration at the design stage, and ultimately to safer and more efficient aviation and maritime systems.

11:30
Matteo Cocchioni (Deep Blue, Italy)
Anna Giulia Vicario (Deep Blue, Italy)
De Wolff Louis (CalMac Ferries, UK)
Francesca Wade (CalMac Ferries, UK)
Rafet Kurt (University of Strathclyde, UK)
Beatriz Navas De Maya (University of Strathclyde, UK)
Osman Turan (University of Strathclyde, UK)
Andrea Lommi (CETENA, Italy)
Simone Pozzi (Deep Blue, Italy)
From the aviation domain to the maritime: an application of the SAFEMODE methodology
PRESENTER: Matteo Cocchioni

ABSTRACT. This paper describes the application of a new methodological approach developed within the SAFEMODE project, intended to improve the results of human factors analyses obtained with existing investigation approaches. This set of methodologies comprises, on the one hand, models and techniques already used in the aviation domain, a benchmark for safety practices, and on the other hand, new frameworks and taxonomies developed within the SAFEMODE project. The combined use of these methodologies has been applied to analyse accidents and near misses in a partner shipping company in the SAFEMODE consortium to demonstrate its usability in the maritime domain. The results confirmed the effectiveness of the methodologies in identifying a larger number of unsafe acts, contextual preconditions and latent organisational factors involved in different accidents, near misses and occurrences, compared to a more “traditional” approach. The use of these techniques has also been recognised as very promising in other domains such as railways and drone operations.

11:50
Scott MacKinnon (Chalmers University, Sweden)
Yasser B. A. Farag (University of Strathclyde, UK)
Panagiotis Sotiralis (National Technical University of Athens, Greece)
Rithvik Dandu Basappa (Chalmers University, Sweden)
Robert Thomson (Chalmers University, Sweden)
Barry Kirwan (EUROCONTROL, France)
Marta Llobet Lopez (EUROCONTROL, France)
Using occurrence data to map the elements of a risk model
PRESENTER: Scott MacKinnon

ABSTRACT. The EU-funded SAFEMODE project provided a platform for stakeholders from the maritime domain to draw knowledge and best practices from the aviation sector with respect to the identification and mitigation of accident risk. The aviation industry has matured a risk modelling approach that considers barriers (e.g., organization, supervision, preconditions and acts) that should reduce the cascade of events that may lead to incidents and accidents. In the SAFEMODE approach, the main structures of these risk models are named the backbone, the contributors, and the human influence layers (performance shaping factors).

The maritime industry has a less mature risk assessment culture than other safety-critical domains. This has created challenges for the industry, inter alia a lack of standardization of the processes for accident investigation and reporting and, more importantly, a scarcity of incident reports (i.e., near misses), which often provide much more insight into error aetiologies.

Risk modelling can be used in the design and control of operations. Digitalization and automation have become ubiquitous in navigation practices and have, paradoxically, created safety-related problems due to changes in how work is done. It would be advantageous for the industry to adopt a methodological framework that demonstrates its utility, usability, adaptability, and robustness, so as to create acceptance within the industry.

This work addresses the following questions: (1) Can a risk modelling framework applied in other safety-critical domains (such as aviation) be adapted to serve the needs of a maritime socio-technical system, and (2) would such risk models satisfy the needs and capacities of this system?

The paper describes the validation process for a risk model to assess collision risk in congested waterways. The validation activities were divided into two parts: (1) Mapping of incident data using the developed risk models. (2) Analysis by subject matter experts (SMEs) (outside of the project consortium) regarding the model barriers. A face validity approach was applied to study the problem. This method considers how suitable the content of the components of a model seems to be and is based on a subjective assessment by SMEs from critical stakeholder groups.

The risk model validation considered occurrence reports. The occurrences mapped to the risk models were all reported accidents; therefore, only the failure contribution is considered. As a first overview, at the backbone level, most of the elements were tested during this validation phase. Of the 38 elements that compose the backbone (23 precursors, 12 barriers and 3 circumstantial factors), 10 barrier failures were identified through the collision incidents. This observation represents 32% of the backbone elements.

The deviations identified in the reports for the Collision in Congested Waters model mostly concerned terminology changes required in the base events, so that these can be made generic and better suited to capturing the human-related contributing and influencing factors. The terminology changes were made in the backbone structure and barriers. The other deviations identified were inappropriate “AND” and “OR” gates used to complement the links to the contributors responsible for the failure or success of the barriers, and a requirement for additional base events. These findings point to the old question of the degree to which risk models (should) represent occurrence data. In that sense, the contrast between the maritime and aviation domains is interesting.

The main objective of the expert survey was to collect feedback from internal and external SMEs. This was undertaken to confirm internal validation results (proof of concept), as well as the practicality of the developed SAFEMODE risk models applications to industry stakeholders (proof of acceptance). The analysis of the influence layer was expanded to capture the most common shaping factors observed under the acts, organisation, supervision, and preconditions categories. The majority of the participants were in consensus that the risk models should support the development of new systems and equipment to improve user interactions and thus improve safety. However, modifications will be required to customize the models and extend their capabilities.

The participants provided positive feedback regarding the exploitation of the SAFEMODE risk models. The feedback included the following statements. The risk models can:

1. be translated into training activities to improve operations by identifying the different contributing factors that lead to the accident;
2. develop operational barriers and safeguards, which in turn will optimize standard operating procedures;
3. inform the risk assessment of new designs and the modification of existing ones;
4. help in accident investigations;
5. help identify key Human Factors that contribute to the barriers' failure and to the overall risk.

In conclusion, there are opportunities to develop risk models within such a framework to assess the DNA of the risk of incidents and accidents within the maritime domain. However, there will be a considerable learning curve before a general industry-wide acceptance will be obtained. Yet, the benefits of such a systematic, documented, and standardised approach should improve the overall safety at sea.

12:10
Frederic Rooseleer (EUROCONTROL, Belgium)
Attila Pasztor (HungaroControl, Hungary)
Fanni Kling (HungaroControl Hungarian Air Navigation Services Pte. Ltd. Co, Hungary)
Elizabeth Humm (Deep Blue, Italy)
Gianluca Borghini (Sapienza University of Rome, Italy)
Jonathan Pugh (De Montfort University, UK)
Barry Kirwan (EUROCONTROL, France)
Mikhail Goman (De Montfort University, UK)
Diana Paola Moreno Alarcon (ENAC, France)
A Tale of Two Simulations – the Challenges of Validating an Air-Ground Collaborative Safety Alert

ABSTRACT. All commercial jet engine aircraft generate vortices in their wake. Such vortices can persist for several minutes after the wake-generating aircraft has flown by, potentially causing a hazard to any following aircraft that may encounter them, resulting in an aircraft upset (induced roll, loss of height or rate of climb), potential loss of control in-flight (LOC-I) and cabin injuries. Despite maintaining the correct separation between commercial aircraft according to international rules, wake vortex encounters (WVE) are reported in the en-route / cruise phase of flight in some airspace, occasionally resulting in significant upsets (up to 60° bank) in particular for smaller aircraft types such as business jets. Experience has demonstrated that if the pilot reacts at the first roll motion – possibly influenced by the ‘startle effect’ due to the unexpected and sudden uncontrolled aircraft motion – the roll could be amplified by this initial piloting action, increasing the risk of loss of control. Such risks from wake vortex encounters in en-route airspace have remained relatively unaddressed except via separation procedures and flight crew standard operating procedures, in part because until now, en-route Air Traffic Control has had no specific means to detect wake encounter risk.

A new Wake Vortex Alert system is now at the exploratory design stage. The potential vortex encounter is predicted on the ground and flagged to the air traffic controller handling the affected traffic, who then orally passes a Caution message to the flight crew of the aircraft likely to encounter the turbulence, enabling them to anticipate the wake (reducing startle response and potential loss of control) and secure the cabin.

The operational concept for the alert is therefore collaborative in nature, in essence a two-step ‘tandem’ operation whose first step occurs on the ground, and whose second step occurs in the cockpit. In order to validate this operational concept, and show that it is viable and can be effective in mitigating wake vortex flight risk, a two-stage validation approach was utilized: a real-time, high fidelity simulation with controllers on the ground, and an equally high-fidelity simulation with airline pilots in a full-scope moving flight cockpit simulator. These two validations both took place within the European SAFEMODE project, whose aim is to increase the use of Human Factors in the design of aviation and maritime systems.

The simulation with professional controllers took place at the HungaroControl (HC) facilities in Budapest (Hungary), while the one with pilots recruited from TUI and Ryanair was performed at the Aircrew Training Solutions & Simulator Equipment (AMST) facilities in Ranshofen (Austria). The experimental protocols were designed with the aim to use environments as realistic as possible, though with certain constraints. In particular, in the ground experiments the controllers were talking to ‘pseudo’ pilots who were in an adjacent room with a script in front of them. Similarly, the pilots in the flight simulator were talking to a ‘pseudo’ controller. We collected controllers’ and pilots’ behavioral (eye-tracking), subjective (self-report), and neurophysiological (brain activity and skin conductance) data during the simulations and under the different experimental conditions (e.g., with vs. without the Wake Vortex Alert system). The different kinds of data will be combined to accurately assess the benefit and impact of the Wake Vortex Alert system with respect to the current scenarios. The experimental hypotheses for each simulation were to an extent conditionally dependent on the results of the other simulation, so that they were run within a month of each other.

Despite these challenges, preliminary results show that the operational Wake Vortex Alert concept has viability: the controllers can detect the alert and transmit the information to the flight crew in time, even when their workload is relatively high, and the pilots find the alert information valuable in avoiding the sudden surprise and even startle effects, and can control the aircraft better (e.g. when the autopilot disengages due to the excessive roll induced by the wake encounter). The simulations have also led to some unexpected and useful insights, e.g. relating to the situation when the aircraft generating the wake, and the aircraft in its path, are in two different air traffic sectors, and are therefore being controlled by two different controllers. Such insights will lead to refinement and improvement of the operational concept. The overall multimodal approach has also to an extent validated the Human Factors approach and toolkit developed within the SAFEMODE project, which was used in both simulations.

12:30
Sybert Stroeve (Royal Netherlands Aerospace Centre NLR, Netherlands)
Barry Kirwan (EUROCONTROL, France)
Beatriz Navas de Maya (University of Strathclyde, UK)
Bas Van Doorn (Royal Netherlands Aerospace Centre NLR, Netherlands)
Patrick Jonk (Royal Netherlands Aerospace Centre NLR, Netherlands)
SHIELD human factors taxonomy and database for systematic analysis of safety occurrences in the aviation and maritime domains
PRESENTER: Sybert Stroeve

ABSTRACT. There is a scarcity of suitable human factors data derived from investigation of safety occurrences in support of effective safety management and feedback to design. In the aviation and maritime domains, details about human contributors are not always systematically analysed and reported in a way that makes extraction of trends and comparisons possible. As a way forward, a taxonomy and data repository were designed for systematic collection and assessment of human factors in safety occurrences in aviation and maritime operations, called SHIELD (Safety Human Incident & Error Learning Database). Its human factors taxonomy uses four layers: the lowest layer addresses the sharp end where acts of human operators contribute to a safety occurrence; the next layer concerns preconditions that affect human performance; the third layer describes decisions or policies of operations leaders that affect the practices or conditions of operations; and the top layer concerns influences from decisions, policies or methods adopted at an organisational level. The taxonomy has been effectively used by maritime and aviation partners for the analysis of more than 400 incidents and accidents. The resulting human factors statistics and occurrence traceability provide feedback to designers and safety management for learning more systemic lessons on human contributions in safety occurrences. Furthermore, they highlight similarities and differences between the aviation and maritime industries.

11:10-12:50 Session 3G: Wind farms monitoring and maintenance
Chair:
Marko Cepin (University of Ljubljana, Slovenia)
Location: CQ-009
11:10
Yikai Ma (University of Warwick, UK, UK)
Wenjuan Zhang (University of Warwick, UK, UK)
Juergen Branke (University of Warwick, UK, UK)
Genetic Programming Hyper-heuristic for Evolving a Maintenance Policy for Wind Farms
PRESENTER: Yikai Ma

ABSTRACT. Reducing the cost of operating and maintaining wind farms is essential for the economic viability of this renewable energy source. This study applies hyper-heuristics to design a maintenance policy that prescribes the best maintenance action in every possible situation. Genetic programming is used to construct a priority function that determines what maintenance activities to conduct, and the sequence of maintenance activities if there are not enough resources to do all of them simultaneously. The priority function may take into account the health condition of the target turbine and its components, the characteristics of the corresponding maintenance work, the workload of the maintenance crew, the working condition of the whole wind farm, the current inventory level, and stochastic wind conditions. Empirical results using a simulation model of the wind farm demonstrate that the proposed model can construct maintenance dispatching rules that perform well in both training and test scenarios, which shows the practicability of the approach. The results also show the advantage of the proposed maintenance policy over several previous maintenance strategies. Furthermore, the importance of taking weather and inventory conditions into account is also demonstrated.
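
As a toy illustration of what an evolved dispatching rule can look like, the sketch below ranks maintenance tasks with a hand-written priority expression over plausible features; both the features and the expression are assumptions, not the rules evolved in the study.

```python
# Toy illustration of a dispatching rule of the kind genetic programming could
# evolve for wind-farm maintenance; the features and the expression itself are
# assumptions, not the rule reported in the paper.
from dataclasses import dataclass

@dataclass
class Task:
    turbine_health: float     # 0 (failed) .. 1 (healthy)
    repair_hours: float       # estimated duration of the maintenance action
    crew_load: float          # current crew utilisation, 0 .. 1
    spare_in_stock: bool      # is the required spare part available?
    wind_speed: float         # forecast wind speed at the turbine [m/s]

def priority(t: Task) -> float:
    """Higher value = dispatch earlier (an example GP-style expression tree)."""
    spare = 1.0 if t.spare_in_stock else 0.2
    return ((1.0 - t.turbine_health) * spare) / (
        t.repair_hours * (1.0 + t.crew_load) + t.wind_speed)

tasks = [Task(0.3, 8, 0.5, True, 6.0), Task(0.7, 4, 0.5, False, 12.0)]
print(sorted(tasks, key=priority, reverse=True)[0])   # task to schedule first
```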

11:30
Théodore Raymond (VALEMO, France)
Christophe Bérenguer (Gipsa-Lab, France)
Sylvie Charbonnier (Gipsa-Lab, France)
Alexis Lebranchu (VALEMO, France)
Data-driven Model Generation Process for Thermal Monitoring of Wind Farm Main Components through Residual Indicators Analysis

ABSTRACT. A wind turbine is a complex system which is exposed to many disturbances that can damage the machine components and generate faults. These faults, if left unattended, can lead to the failure of major components and result in high production losses. That is why the system operating data of a wind turbine needs to be transformed into health indicators in order to switch to a condition-based maintenance strategy, so as to optimize the intervention schedule and to reduce the associated costs. To address this concern, new maintenance strategies are being developed and implemented, based on condition monitoring. These strategies aim at monitoring the health status of components by analyzing synthetic condition indicators. Very often, these indicators are residuals calculated from a normal behavior model. These models are built for a specific component and turbine, and cannot be applied to other components. This makes them difficult to use in an industrial context. In this paper, a solution to automatically generate linear normal behavior models for any component and any turbine of a wind farm is proposed. The approach aims at building a model from normal behavior data by selecting iteratively, for a given variable to be estimated, the best variables to add to the model. The models obtained are simple and easy to interpret. They are able to efficiently predict the temperature of any component, for all the turbines of a wind farm. The set of models obtained makes it possible to build a network of health indicators, which can be used for fault isolation. The method is applied to monitor the thermal condition of a real French wind farm where a converter failure impacted all the turbines in the summer period of 2020. The indicators generated from the models are then evaluated on simple detection and estimation performance criteria over the fault period.
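
A minimal sketch of the underlying idea, assuming synthetic SCADA-like data: greedily select regressors for a linear temperature model trained on normal behaviour, then use the residual as a health indicator. This is an illustration only, not the authors' model-generation process.

```python
# Minimal sketch: iteratively add the variable that most improves a linear model
# of a component temperature, then use the residual (measured minus predicted)
# as a health indicator. Variable names and data are assumed for illustration.
import numpy as np

def forward_select(X: np.ndarray, y: np.ndarray, names: list, max_vars: int = 3):
    """Greedy forward selection of regressors by residual sum of squares."""
    chosen = []
    for _ in range(max_vars):
        best = None
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            coef, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
            score = rss[0] if rss.size else np.sum((y - A @ coef) ** 2)
            if best is None or score < best[0]:
                best = (score, j, coef, cols)
        chosen = best[3]
    return [names[j] for j in chosen], best[2], chosen

# Assumed normal-behaviour data: ambient temp, power, rotor speed, nacelle temp
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = 40 + 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)  # component temp

vars_, coef, cols = forward_select(X, y, ["ambient", "power", "rotor_speed", "nacelle"])
pred = np.column_stack([np.ones(len(y)), X[:, cols]]) @ coef
residual_indicator = y - pred   # near zero in normal behaviour, drifts under a fault
print(vars_, residual_indicator.std())
```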

11:50
Wen Wu (Institute for Aerospace Technology, Resilience Engineering Research Group, The University of Nottingham, NG7 2RD, UK)
Ali Saleh (Department of Structural Mechanics and Hydraulic Engineering, University of Granada, Spain)
Rasa Remenyte-Prescott (Resilience Engineering Research Group, Faculty of Engineering, University of Nottingham, UK)
Darren Prescott (Resilience Engineering Research Group, Faculty of Engineering, University of Nottingham, UK)
Manuel Chiachio Ruano (Department of Structural Mechanics and Hydraulic Engineering, University of Granada, Spain)
Dimitrios Chronopoulos (KU Leuven, Department of Mechanical Engineering, Mecha(tro)nic System Dynamics (LMSD), 9000, Belgium)
Asset management modelling approach integrating structural health monitoring data for wind turbine blades
PRESENTER: Wen Wu

ABSTRACT. Optimal asset management strategies for wind turbine blades help to reduce their operation and maintenance costs and to ensure their reliability and safety. Structural health monitoring (SHM) can determine the health state of wind turbine blades by implementing damage identification strategies. The blade's main load-bearing structure, the spar, lies inside the blade and is therefore difficult to inspect. Advanced SHM techniques, such as guided wave monitoring, can be used to monitor the development of cracks in real time and provide an early indication. This paper presents a risk-based maintenance model based on the state information provided by SHM. The model uses the Petri net method to describe the blade degradation process, the guided wave monitoring process, and inspection and maintenance work. A failure model of wind turbine blades is presented to provide input for the Petri nets, and the reliability of guided wave monitoring is also assessed. The proposed model is able to predict the condition state, the expected number of repairs and the asset management costs of wind turbine blades, which can help to make informed decisions for the operation of wind turbine blades.
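
As a much-simplified, state-based stand-in for the Petri net model (not the authors' implementation), the sketch below simulates crack initiation, guided-wave detection and repair with assumed rates and detection probability, and estimates the expected number of repairs and the in-service failure probability.

```python
# Simplified stand-in for the paper's Petri-net model: a blade moves
# intact -> cracked -> failed with exponential sojourn times; guided-wave
# inspections detect a crack with some probability and trigger a repair back to
# "intact". All rates and probabilities are assumed for illustration.
import random

def simulate_blade(horizon=25.0, lam_crack=1/8, lam_fail=1/3,
                   dt_inspect=0.5, p_detect=0.9, rng=None):
    """One Monte Carlo history of a single blade over `horizon` years."""
    rng = rng or random.Random()
    repairs, state = 0, "intact"
    t_degrade = rng.expovariate(lam_crack)   # time of next degradation event
    t_inspect = dt_inspect                   # time of next guided-wave inspection
    while True:
        if min(t_degrade, t_inspect) >= horizon:
            return repairs, False            # survived the planning horizon
        if t_degrade <= t_inspect:           # degradation event occurs first
            if state == "intact":
                state = "cracked"
                t_degrade += rng.expovariate(lam_fail)
            else:
                return repairs, True         # cracked blade failed in service
        else:                                # inspection occurs first
            t = t_inspect
            t_inspect += dt_inspect
            if state == "cracked" and rng.random() < p_detect:
                state, repairs = "intact", repairs + 1
                t_degrade = t + rng.expovariate(lam_crack)

rng = random.Random(7)
histories = [simulate_blade(rng=rng) for _ in range(20_000)]
print("mean repairs per blade:", sum(r for r, _ in histories) / len(histories))
print("P(in-service failure):", sum(f for _, f in histories) / len(histories))
```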

12:10
Laurent Barthelemy (ENSM, France)
Optimizing berthing of Crew Transfer Vessels against Floating Wind Turbines – a Comparative Study of Various Floater Geometries

ABSTRACT. Securing the return on investment of commercial floating wind farms through a proper estimate of operation and maintenance downtime is a key issue for triggering final investment decisions. That is why crew transfer vessel weather stand-by issues should be assessed together with new floating wind floater concepts, in an attempt to boost their cost attractiveness. However, issues such as the numerical investigation of the landing manœuvre of a service ship against a floating wind turbine prove complex to calculate. Based on similarities with seakeeping, the proposed paper investigates various ship hull and floater geometries, in an attempt to estimate the weather limitations associated with each configuration. Recent work finds that the calculation agrees within 5% with a model-scale test-tank experiment. Therefore, after 3 years of work, we are now in a position to propose weather access criteria guidelines for various cases and to compare them with other publications.

METHOD DESCRIPTION

Vessel seakeeping:

• Assess vessel responses (amplitude and phase angles).
• Compare them with vessel responses from available publications, as a benchmark.

Vessel berthing

• Model the friction between the vessel fender and the floater boat landing analytically, then with a software tool, and compare the results.
• Model both the vessel and the floater with a software tool.
• Compare the resulting wave masking effects of existing floater shapes.

MAIN RESULTS AND FINDINGS

The wave masking effect calculation for a square floater has already been cross-checked against an existing demonstrator. For other floaters, the present paper proposes an estimate by calculation. The present calculation method also shows potential for further development for the following reasons:

• It is independent of any specific hull or floater shape, and may therefore be tailored to a given design.
• It may accommodate more realistic sea states if data are available: bidirectional waves, etc.

12:30
Panagiotis Psomas (University of the Aegean, Greece)
Ioannis Dagkinis (University of the Aegean, Greece)
Agapios Platis (University of the Aegean, Greece)
Vasilis Koutras (University of the Aegean, Greece)
MODELLING THE DEPENDABILITY OF AN OFFSHORE DESALINATION SYSTEM USING THE UNIVERSAL GENERATING FUNCTION TECHNIQUE

ABSTRACT. The objective of this paper is to assess the dependability of an offshore desalination system subject to wind turbine maintenance actions, and particularly the overall ability of the system to generate and supply fresh water, by means of the Universal Generating Function (UGF) technique. As time progresses, the offshore desalination system may either deteriorate gradually into lower performance states or fail suddenly. It is assumed that if the wind turbine degrades due to a sudden failure, appropriate repair actions are carried out to restore the turbine to an “as good as new” condition. Three major factors affect the general performance and thus the output of the offshore desalination system: the wind intensity, the wind turbine failures, and the RO (Reverse Osmosis) unit failures. For the wind intensity, a power curve is used to determine the monthly energy output of a given wind turbine at each location. However, the energy output also depends on the different degraded states of the wind turbine due to a variety of failures. Combining the wind intensity categories with the power output of the offshore desalination plant, consisting of the wind turbine and the RO unit, we develop a multi-state system (MSS) model characterizing all the different levels of fresh water output through a formal composition operator, to obtain the final system dependability measures, such as the availability, the expected output performance and the expected energy not supplied. The main aim of this work is to evaluate the above dependability measures of the proposed model for the considered offshore desalination system.
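
For readers unfamiliar with the UGF technique, the snippet below shows the basic composition operation on two assumed multi-state elements (turbine and RO unit); the performance levels and probabilities are invented for illustration and do not come from the paper.

```python
# Minimal Universal Generating Function (UGF) sketch: each element is a
# dictionary {performance_level: probability}; the composition operator for a
# series structure takes the minimum of the element performances. Numbers are
# assumed for illustration, not taken from the paper.
from collections import defaultdict
from itertools import product

def compose(u1: dict, u2: dict, f=min) -> dict:
    """Combine two u-functions with structure function f (default: series/min)."""
    out = defaultdict(float)
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        out[f(g1, g2)] += p1 * p2
    return dict(out)

# Assumed multi-state elements (performance = fresh-water output, m3/h)
wind_turbine = {0: 0.05, 20: 0.25, 50: 0.70}   # derated states from wind/failures
ro_unit      = {0: 0.02, 40: 0.98}             # reverse-osmosis unit capacity

system = compose(wind_turbine, ro_unit)        # series composition via min
demand = 30
availability = sum(p for g, p in system.items() if g >= demand)
expected_output = sum(g * p for g, p in system.items())
print(system, availability, expected_output)
```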

11:10-12:50 Session 3H: S.01: Advances in Well Engineering Reliability and Risk Management: reliability of novel technologies
Chairs:
Feliciano Silva (Petrobras, Brazil)
Marcio Das Chagas Moura (Federal University of Pernambuco, Brazil)
Location: CQ-105
11:10
Steven Buchanan (Schlumberger, United States)
Ahed Qaddoura (Schlumberger, United States)
System and Growth Analysis to Ensure Product Reliability During Development
PRESENTER: Ahed Qaddoura

ABSTRACT. Well completion products are intended to last the life of a well without requiring intervention, which precludes traditional field testing to establish their field reliability. One option to predict reliability is to perform lifecycle tests on a representative sample of the products. For many products, this approach is not technically feasible or is cost prohibitive. Alternative methods are described that use reliability modeling and growth analysis during product development. Reliability modeling optimizes customer needs, eliminates unworkable concepts, and prevents unexpected system interaction failures. Reliability growth analysis forecasts the product reliability on the basis of the success rates of critical functions tested during development.
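
The abstract does not state which growth model is used; as one common illustration of reliability growth tracking, the sketch below fits a Crow-AMSAA (NHPP) model to made-up failure times and projects the instantaneous MTBF.

```python
# One common reliability-growth model (Crow-AMSAA / NHPP), shown only as an
# illustration of growth tracking -- the abstract does not state which model
# the authors use. The failure times below are made up.
import numpy as np

failure_times = np.array([40., 95., 180., 400., 700., 1200.])   # cumulative test hours
T = 1500.0                                                       # total test time (time-truncated)

n = len(failure_times)
beta = n / np.sum(np.log(T / failure_times))   # MLE shape (beta < 1 indicates growth)
lam = n / T**beta                              # MLE scale
mtbf_instantaneous = 1.0 / (lam * beta * T**(beta - 1.0))
print(f"beta = {beta:.2f}, projected instantaneous MTBF at {T:.0f} h: "
      f"{mtbf_instantaneous:.0f} h")
```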

11:30
Pankaj Shrivastava (Halliburton, United States)
Desiderio Rodrigues (Halliburton, Brazil)
Effective Application of Design for Reliability in New Product Development

ABSTRACT. Intelligent completion technology helps oil & gas operators to optimize life-of-well production and enhance reservoir management capabilities without costly intervention. Reliable and fit-for-purpose SmartWell® completion systems enable operators to collect, transmit and analyze downhole data, remotely control selected reservoir zones and maximize reservoir efficiency in production. These systems typically operate in severe downhole environments, are rarely retrieved to surface for analysis or maintenance, and are expected to achieve high operational reliability over the life of the well. To meet system reliability targets, a Design for Reliability (DfR) process was applied in the form of a systematic, streamlined, and concurrent engineering program. DfR is an integrated process rather than a single step in new product development. It covers the entire set of methods and tools that support the product design to ensure that the operator's expectations for reliability are met throughout the life of the system and that the overall life-cycle costs are low or economically viable. This paper focuses on effective application of the DfR process and highlights the successful application of key DfR tools and techniques in new product development. It examines the key steps required to make DfR a powerful and effective process that can help any organization design reliability into its products.

11:50
Rafael Azevedo (Federal University of Pernambuco, Brazil)
Isis Lins (Federal University of Pernambuco, Brazil)
Márcio Moura (Federal University of Pernambuco, Brazil)
Eduardo Menezes (Federal University of Pernambuco, Brazil)
July Macêdo (Federal University of Pernambuco, Brazil)
Caio Maior (Federal University of Pernambuco, Brazil)
João Santana (Federal University of Pernambuco, Brazil)
Manoel Da Silva (Petrobras, Brazil)
Marcos Nobrega (Petrobras, Brazil)
Methodology for assessing the reliability of equipment under development
PRESENTER: Márcio Moura

ABSTRACT. This work aims to develop a methodology for predicting and monitoring reliability metrics of new technologies along the development stages. The methodology is composed of three main modules: (i) formulation of a multilevel system reliability model (MSRM), (ii) Bayesian estimation of the MSRM parameters and (iii) definition of the test protocols. The MSRM represents the system reliability via Fault Tree Analysis (FTA) based on reliability models of each possible failure mechanism. The FTA diagram is constructed from FMEA (Failure Modes and Effects Analysis) and/or PoF (Physics of Failure) studies. A Bayesian framework aggregates the various pieces of evidence gathered in the different stages of the development timeline to estimate the MSRM parameters and predict the system reliability. Thus, an updated reliability metric is given as new information is obtained, for instance from reliability tests. In this way, managers can adopt both the updated reliability measures and their uncertainty levels to monitor whether these indicators have lived up to pre-defined expectations (targets) at each development stage. Finally, the test protocols module relies on FMEA and PoF studies and compares target and calculated reliability metrics to define the test plan over the development stages. The methodology also covers updating the modules for cases where design modifications are made or new failure mechanisms are identified during a specific development stage. We discuss an application of the methodology to completion equipment installed in a Brazilian oil field.
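
A minimal sketch of the Bayesian-updating idea, under assumed priors, test results and a toy two-event fault tree (this is not the paper's MSRM):

```python
# Hedged sketch: a Beta prior on each failure mechanism's probability of failure
# on demand is updated with stage-test evidence, and posterior samples are
# propagated through a toy fault tree (OR of two basic events). Priors, test
# results and the tree structure are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def posterior_samples(alpha0, beta0, failures, tests, n=100_000):
    """Beta(alpha0, beta0) prior + Binomial test evidence -> posterior samples."""
    return rng.beta(alpha0 + failures, beta0 + tests - failures, size=n)

# Two failure mechanisms of a completion component, tested at one development stage
p_seal   = posterior_samples(alpha0=1, beta0=50, failures=0, tests=20)
p_spring = posterior_samples(alpha0=1, beta0=100, failures=1, tests=30)

p_top = 1.0 - (1.0 - p_seal) * (1.0 - p_spring)   # OR gate (independent events)
print(f"Posterior mean top-event probability: {p_top.mean():.4f}")
print(f"95% credible interval: {np.percentile(p_top, [2.5, 97.5])}")
```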

12:10
Eduardo Menezes (Center for Risk Analysis, Reliability Engineering and Environmental Modeling - Federal University of Pernambuco, Brazil)
Rafael Azevedo (Center for Risk Analysis, Reliability Engineering and Environmental Modeling - Federal University of Pernambuco, Brazil)
Caio Souto Maior (Center for Risk Analysis, Reliability Engineering and Environmental Modeling - Federal University of Pernambuco, Brazil)
Márcio Moura (Center for Risk Analysis, Reliability Engineering and Environmental Modeling - Federal University of Pernambuco, Brazil)
Isis Lins (Center for Risk Analysis, Reliability Engineering and Environmental Modeling - Federal University of Pernambuco, Brazil)
Feliciano da Silva (CENPES - Research Center Leopoldo Américo Miguez de Mello, Petrobras, Brazil)
Marcos Vinicius Nóbrega (CENPES - Research Center Leopoldo Américo Miguez de Mello, Petrobras, Brazil)
Proposal of a test protocol for reliability evaluation of O&G equipment
PRESENTER: Eduardo Menezes

ABSTRACT. Reliability evaluation has increasingly been recognized as a key factor in the success of O&G operations. The use of reliability metrics allows for more precise control over equipment failures, maintenance and operational safety, taking the O&G supply chain to a new level of performance. This requires a rigorous reliability analysis program to assure the continuous operation of O&G equipment, which is often expensive, has a very long lifetime and is hard to maintain. Additionally, the long-term operational conditions impose an extra challenge on the reliability evaluation, since traditional lifetime tests are not feasible within an acceptable time window, and alternative approaches must be adopted. Therefore, with the aim of clarifying the pathway towards a reliability-based O&G operation, this paper discusses the development of reliability test protocols for the O&G industry, considering reliability analyses and accelerated lifetime tests. The discussion starts from the FMECA diagram, then identifies the main failure modes, sets up the fault trees, and finishes with the methodology for reliability estimation for the specific, most critical failure modes. In particular, a case study on fatigue life testing for reliability estimation is analyzed and presented as an example of application of the developed test protocol.
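
As an illustration of how accelerated testing can be related to field conditions (the paper's actual protocol is not reproduced here), the snippet below evaluates an inverse power law acceleration factor with assumed stress levels and exponent.

```python
# Illustration only: a common way to relate accelerated fatigue testing to field
# conditions is an inverse power law (Basquin-type) acceleration factor.
# The exponent and stress levels below are assumptions, not values from the paper.

def inverse_power_law_af(stress_test: float, stress_field: float, m: float) -> float:
    """Acceleration factor: life_field / life_test = (stress_test / stress_field)**m."""
    return (stress_test / stress_field) ** m

af = inverse_power_law_af(stress_test=250.0, stress_field=120.0, m=3.0)  # MPa, assumed
test_hours = 2_000.0
print(f"AF = {af:.1f}; {test_hours:.0f} test hours ~ "
      f"{af * test_hours:,.0f} equivalent field hours")
```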

12:30
Caio Souto Maior (Universidade Federal de Pernambuco, Brazil)
Eduardo Novaes (Universidade Federal de Pernambuco, Brazil)
Márcio Moura (Universidade Federal de Pernambuco, Brazil)
Isis Lins (Universidade Federal de Pernambuco, Brazil)
Manoel Feliciano da Silva (Petrobras, Brazil)
Marcus Magalhães (Petrobras, Brazil)
Guilherme Ribeiro (Welltec, Brazil)
Fatigue-life assessment under random loading conditions using a calibrated numerical model and Monte Carlo samplings
PRESENTER: Caio Souto Maior

ABSTRACT. As oil and gas (O&G) well operations usually involve extremely complex equipment, fatigue failure is a common and critical cause of oil production problems. Many safe-life plans have been created based on the S-N curve and the Palmgren-Miner rule, as well as on cycle-counting strategies (e.g., rainflow counting). However, the alternatives are rather limited for realistic, stochastic loadings, and the problem remains an open field of research. Usually, methods in the frequency domain are more prominent than conventional time-domain methods for evaluating the damage. Additionally, computational methods permit the analysis and design of large-scale engineering systems in which numerical procedures are used to quantify the uncertainty of predictions. In this paper, we consider a numerical experiment to estimate the fatigue life of an open-hole expansive packer with feedthrough lines. The random loading is obtained through a numerical finite element model (FEM) calibrated with experimental data, to which frequency-domain and stochastic damage evaluation methods are applied. Finally, Monte Carlo samplings are carried out to obtain lifetime probability distributions.
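
A minimal sketch of Monte Carlo fatigue-life estimation with an S-N curve and the Palmgren-Miner rule, with an assumed load spectrum and assumed S-N parameters; it stands in for, but is not, the calibrated FEM-based workflow described above.

```python
# Minimal sketch of Monte Carlo fatigue-life estimation with an S-N curve and
# the Palmgren-Miner rule; the load spectrum and S-N parameters are assumed and
# this is not the calibrated finite-element model used in the paper.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 20_000

# Assumed annual load spectrum: stress ranges [MPa] and cycle counts per year
stress_ranges = np.array([40.0, 80.0, 120.0, 160.0])
cycles_per_year = np.array([5e5, 1e5, 1e4, 1e3])

# Basquin S-N curve N = A * S**(-m), with lognormal uncertainty on A
m = 3.0
A = rng.lognormal(mean=np.log(2.0e12), sigma=0.3, size=n_samples)

# Miner damage per year for each Monte Carlo sample
N_allow = A[:, None] * stress_ranges[None, :] ** (-m)
damage_per_year = (cycles_per_year[None, :] / N_allow).sum(axis=1)
life_years = 1.0 / damage_per_year   # failure when cumulative damage reaches 1

print(f"Median life: {np.median(life_years):.1f} years")
print(f"P(life < 20 years): {np.mean(life_years < 20.0):.3f}")
```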

11:10-12:50 Session 3I: Autonomous Driving Safety
Chair:
Anne Barros (CentraleSupelec, France)
Location: CQ-107
11:10
Jan Petter Wigum (Nord University, Norway)
Gunhild B Sætren (Nord University, Norway)
Exploring how automated technology and advanced driver-assistance systems are taught in the Norwegian driver-training industry. A qualitative study.
PRESENTER: Jan Petter Wigum

ABSTRACT. Every 24 seconds, a person dies in a traffic accident somewhere in the world. In Norway, the number of accidents has been declining for many years, from the peak in 1970, with 560 killed even though there were far fewer cars, to an average of 110 killed over the past 5 years (SSB, 2021). Factors including improved driver education, better infrastructure, targeted control activities, and the fact that young people are somewhat different than before all come into play. In addition, technological developments that have made cars safer have led to fewer people dying in road traffic accidents (Sætren, Wigum, Robertsen, Bogfjellmo & Suzen, 2018). The technological changes are accelerating, and we are experiencing an increasing number of cars that, to a certain extent, make their own choices (Rao & Frtunikj, 2018). There are obvious benefits from this technological development: with the technology taking over tasks such as changing gears, keeping the speed stable, avoiding collisions with pre-crash systems, navigation, and so forth, the driver can pay attention to other aspects. However, a challenge highlighted by several research environments is the mix between vehicles with advanced driver assistance systems (ADAS) and those without (e.g. Sætren et al., 2018; Banks, Eriksson, O’Donoghue, & Stanton, 2018). A prerequisite for safe traffic flow will be good interaction. Further, the machine learning algorithms in cars are still assessed to be immature with regard to driving in real-life road traffic (Rao & Frtunikj, 2018), and the legal aspects of responsibility are not yet established (Sætren et al., 2018; Helde, 2021). We are in a transition process towards new and more complex technology, and this affects how driver training currently is and should be conducted (Sætren et al., 2018). However, there is very little research on how driver instructors implement new and different technological solutions in their driver training and how this affects the learning outcome. For this reason, our research question is: How are driving schools including new and more automated technology in their driver training? The study will examine current driver training, and interviews are planned with 10-15 driver instructors from a variety of driving schools in Norway. These will range from schools with few driver instructors and few school cars to larger driving schools that include advanced simulator training for their students. A comparative perspective is thus of interest regarding how technology is used and taught, and to establish which factors determine which technology is, and is not, taught to future drivers.

References

Banks, V., Eriksson, A., O’Donoghue, J. & Stanton, N. (2018). Is partially automated driving a bad idea? Observations from an on-road study. Applied Ergonomics, 68, 138-145.

Helde, R. (2019). Juss i veitrafikk og trafikkopplæring [Law in road traffic and driver training; our translation]. Bergen: Fagbokforlaget.

Rao, Q. & Frtunikj, J. (2018). Deep Learning for Self-Driving Cars: Chances and Challenges. Proceedings of the ACM/IEEE 1st International Workshop on Software Engineering for AI in Autonomous Systems.

Sætren, G. B., Wigum, J. P., Bogfjellmo, P. H. & Suzen, E. (2018). The future of driver training and driver instructor education in Norway with increasing technology in cars. In: Safety and Reliability – Safe Societies in a Changing World. Proceedings of ESREL 2018, June 17-21, 2018, Trondheim, Norway. CRC Press. ISBN 9781351174657, pp. 1433-1441. https://www.taylorfrancis.com/chapters/oa-edit/10.1201/9781351174664-181/future-driver-training-driver-instructor-education-norway-increasing-adas-technology-cars-s%C3%A6tren-wigum-robertsen-bogfjellmo-suzen

SSB (Statistics Norway) (2021). Road traffic accidents involving personal injury. https://www.ssb.no/statbank/table/09000/

11:30
Redge Melroy Castelino (DLR-Institute Systems Engineering for Future Mobility, Germany)
Christian Steger (DLR-Institute Systems Engineering for Future Mobility, Germany)
Arne Lamm (DLR-Institute Systems Engineering for Future Mobility, Germany)
Axel Hahn (DLR-Institute Systems Engineering for Future Mobility, Germany)
Shift from simulation to reality: Test carrier architecture for seamless embedding of highly automated driving functions

ABSTRACT. Autonomy in transport has the potential to transform mobility by reducing congestion, combating climate change and enhancing traffic safety. In order to realize this potential through the successful establishment of autonomous driving systems, a high degree of technical trustworthiness is necessary. Ensuring correct functionality during operation is essential for the deployment of highly automated and autonomous driving functions in all transportation sectors, as the decision-making process will be handled by the vehicle in the near future. This increases the complexity of the system, with consequences for the testing phase. Presently, the development cycle of autonomous systems involves extensive testing with simulations because of their advantages in time and economic viability. However, acceptance tests for autonomous systems are generally conducted in real-world environments, and it is necessary to increasingly carry out functional and assurance testing in a real demonstration environment. The need for scalable and sustainable testbeds that are not developed for one specific application and can be dynamically adapted to changing requirements is therefore growing. As a consequence of the increasing complexity, the development of the vehicle and of the driving functions increasingly takes place simultaneously. Furthermore, the software is also updated after launch, while the vehicle is in use. As a result, the test phase is not complete after the acceptance phase, and a solution for functional expansion and safe updating of the system must also be ensured. For this reason, this work proposes a test vehicle architecture that can be seamlessly integrated into real test environments and is capable of supporting testing at every stage of the development cycle. Based on a thorough literature study of the state of the art in the development of autonomous systems across different transportation sectors (e.g., maritime and automotive), requirements were derived in order to propose the key components of the architecture necessary to verify and validate the trustworthiness and correct functionality of autonomous systems. The testbed architecture is proposed in a domain-agnostic way, leaving room for the application of different data models or communication standards specific to a domain, thereby allowing the architecture to be adapted to any traffic domain and level of automation. Some key features derived from the literature study are:

1) A seamless integration platform via a modular System under Test (SuT) component, with provisions both on the test vehicle and in the infrastructure to enable testing and evaluation of distributed intelligence concepts such as connected vehicles.

2) Supported communication between modules operating on different data models or communication standards via a polymorphic interface module.

3) A seamless shift between simulation-based and real-world tests via a simulation adapter module. This module allows replacing sensor feeds with simulated sensor data, thus enabling mixed-reality testing (a minimal sketch of this idea is given after the reference list below). A feedback loop is proposed such that the simulation is updated with real-world information to allow synchronization of the simulation and real-world components of the test.

4) A ground truth sensor box, which is a combination of high-accuracy sensors that collect the ground truth information required for the evaluation of a system under test.
The ground truth sensor box is independent of the sensor network of the SuT; its ground truth information is available only at the monitoring station. Additionally, ground truth sensor boxes can also be located on other actors in the test environment to collect ground truth information.

5) A fault injection module that supports testing the fault tolerance of the system, for example in a denial-of-service situation.

6) A provisioning platform to support communication with, and evaluation of, backend traffic management systems, which also supports a data recording functionality.

7) A monitoring station that has access to the communication between all internal modules of the architecture as well as to the ground truth information. In combination with the data recording functionality, an additional feature would be the ability to collect real-world sensor data with ground truth information for reference.

To evaluate the architecture and the effectiveness of the key functionalities proposed by the authors, the approach will be implemented and demonstrated with a maritime use case. The authors will prove the concept by integrating the intelligent collision avoidance system MTCAS (Maritime Traffic Alert and Collision Avoidance System) into the developed test carrier and by conducting representative, qualitative studies (V&V) with the SuT and the testbed.

References:

Nidhi Kalra, Susan M. Paddock, Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?, Transportation Research Part A: Policy and Practice, Volume 94, 2016, Pages 182-193, ISSN 0965-8564, https://doi.org/10.1016/j.tra.2016.09.010.

S. Gopalswamy and S. Rathinam, “Infrastructure Enabled Autonomy: A Distributed Intelligence Architecture for Autonomous Vehicles,” in 2018 IEEE Intelligent Vehicles Symposium (IV), Jun. 2018, pp. 986–992. doi: 10.1109/IVS.2018.8500436.

M. Brinkmann, “Physikalische Testfeld-Architektur für die Unterstützung der Entwicklung von automatisierten Schiffsführungssystemen,” PhD thesis, Universität Oldenburg, 2018. Accessed: Nov. 29, 2021. [Online]. Available: http://oops.uni-oldenburg.de/3725/

Guidelines for Autonomous Shipping. 2019. Accessed: Nov. 09, 2021. [Online]. Available: https://erules.veristar.com/dy/data/bv/pdf/641-NI_2019-10.pdf

A. Marchetto, P. Pantazopoulos, A. Varádi, S. Capato, and A. Amditis, “CVS: Design, Implementation, Validation and Implications of a Real-world V2I Prototype Testbed,” in 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), May 2020, pp. 1–5. doi: 10.1109/VTC2020-Spring48590.2020.9129136.
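
A minimal sketch of the simulation-adapter idea referenced in feature 3 above, assuming hypothetical class and method names (they are not part of the described testbed):

```python
# Hedged sketch of the "simulation adapter" idea: a System under Test consumes a
# common sensor interface, behind which either a real sensor feed or a simulated
# one can be plugged in. All class and method names here are hypothetical.
from abc import ABC, abstractmethod

class PositionSource(ABC):
    """Common interface the System under Test (SuT) consumes."""
    @abstractmethod
    def read(self) -> tuple:
        """Return the current (latitude, longitude) of the own ship."""

class RealGnssSource(PositionSource):
    def __init__(self, receiver):
        self._receiver = receiver                   # e.g. a driver for the vessel's GNSS
    def read(self) -> tuple:
        return self._receiver.current_fix()         # hypothetical driver call

class SimulatedGnssSource(PositionSource):
    def __init__(self, simulator):
        self._simulator = simulator                 # co-simulation providing virtual traffic
    def read(self) -> tuple:
        return self._simulator.own_ship_position()  # hypothetical simulator call

def collision_avoidance_step(source: PositionSource) -> tuple:
    """The SuT reads a position without knowing whether it is real or simulated."""
    lat, lon = source.read()
    # ... collision-avoidance logic would act on (lat, lon) here ...
    return lat, lon

# Tiny stand-in simulator so the sketch runs end to end
class _FakeSimulator:
    def own_ship_position(self):
        return (53.14, 8.21)

print(collision_avoidance_step(SimulatedGnssSource(_FakeSimulator())))
```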

11:50
Tim M. Julitz (University of Wuppertal, Germany)
Antoine Tordeux (University of Wuppertal, Germany)
Manuel Löwer (University of Wuppertal, Germany)
Reliability of fault-tolerant system architectures for automated driving systems
PRESENTER: Tim M. Julitz

ABSTRACT. Automated driving functions at high levels of autonomy operate without driver supervision. The system itself must provide suitable responses in case of hardware element failures. This requires fault-tolerant approaches using domain ECUs and multicore processors operating in lockstep mode. The selection of a suitable architecture for fault-tolerant vehicle systems is currently challenging. Lockstep CPUs enable the implementation of majority redundancy or M-out-of-N (MooN) architectures. In addition to structural redundancy, diversity redundancy in the ECU architecture is also relevant to fault tolerance. Two fault-tolerant ECU architecture groups exist: architectures with one ECU (system on a chip) and architectures consisting of multiple communicating ECUs. The single-ECU systems achieve higher reliability, whereas the multi-ECU systems are more robust against dependent failures, such as common-cause or cascading failures, due to their increased potential for diversity redundancy. Yet, it remains not fully understood how different types of architectures influence the system reliability. The work aims to design architectures with respect to CPU and sensor number, MooN expression, and hardware element reliability. The results enable a direct comparison of different architecture types. We calculate their reliability and quantify the effort to achieve high safety requirements. Markov processes allow comparing sensor and CPU architectures by varying the number of components and failure rates. The objective is to evaluate systems' survival probability and fault tolerance and design suitable sensor-CPU architectures.
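
For illustration, the snippet below solves a minimal Markov model of a 2-out-of-3 lockstep architecture without repair, with an assumed channel failure rate, and cross-checks the result against the closed-form 2oo3 reliability; the architectures compared in the paper are richer than this.

```python
# Minimal Markov sketch for a 2-out-of-3 (2oo3) architecture without repair:
# states = number of failed channels (0, 1, >=2), system failed once two
# channels are lost. The failure rate is an assumed placeholder value.
import numpy as np

lam = 1e-5            # per-hour failure rate of one channel (assumed)
horizon = 10_000.0    # mission time [h]
dt = 1.0

# Generator matrix Q over states {0 failed, 1 failed, >=2 failed (absorbing)}
Q = np.array([[-3 * lam, 3 * lam, 0.0],
              [0.0,     -2 * lam, 2 * lam],
              [0.0,      0.0,     0.0]])

p = np.array([1.0, 0.0, 0.0])                 # start with all channels healthy
for _ in range(int(horizon / dt)):
    p = p + dt * (p @ Q)                      # explicit Euler step of dp/dt = pQ

print(f"P(system operational at {horizon:.0f} h) = {p[0] + p[1]:.6f}")
# Cross-check against the closed form R = 3r^2 - 2r^3 with r = exp(-lam*t)
r = np.exp(-lam * horizon)
print(f"closed form: {3 * r**2 - 2 * r**3:.6f}")
```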

12:10
Navreet Singh Thind (Universität Duisburg-Essen, Germany)
Daniel Adofo Ameyaw (Universität Duisburg-Essen, Germany)
Dirk Söffker (Universität Duisburg-Essen, Germany)
Adaptive situated and reliable prediction of object trajectories
PRESENTER: Dirk Söffker

ABSTRACT. The autonomy of autonomous vehicles such as inland vessels requires reliable information from their own systems as well as from moving objects, such as encountering vessels, in the environment. Besides the detection of objects, the prediction of an object's trajectory is one of the most crucial tasks in realizing the safe operation of autonomous systems such as inland vessels. In previous contributions, an extension of the Probability of Detection (POD) approach as well as a situated model approach for trajectory prediction were developed. Is the new prediction model reliable? In this contribution, the reliability of this newly developed prediction model relative to the prediction time is evaluated using the POD approach. The goal is to define the time interval beyond which the reliability falls below the 90/95 criterion (90% detection probability at a 95% confidence level). The POD approach used provides a new certification standard for prediction approaches and is therefore useful in safety-critical systems. The situated prediction algorithm allows predicting the trajectory of waterway objects for a safety-relevant period of time (minutes) using a simple parameter-based approach, in which some parameters are globally trained and a local parameter is adapted from past data using a sliding window approach. The past data capture all local environmental and hydrodynamical effects affecting the object's motion in the next minutes. The predictions are assumed to depend on the different types of geometry-dependent trajectories, such as straight, curved, and sharply curved paths. The approach only uses the position data of vessels (known from AIS or radar data). Experimental data from a German research inland vessel are used to validate the approach. Based on the results of the new POD-based approach, it can be shown that the local model-based predictions are reliable within defined time intervals. Using the reliability measure, the reliable prediction horizon can be defined depending on the location along the waterway.
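
As background on the 90/95 criterion mentioned above (and not a reproduction of the authors' POD analysis), the snippet below shows the binomial reasoning behind a 90% probability / 95% confidence demonstration.

```python
# Illustration of the 90/95 criterion: the smallest number of consecutive
# successful trials needed so that the lower 95% confidence bound on the
# detection/prediction probability is at least 90%. This shows the general
# binomial reasoning only, not the authors' POD-vs-prediction-time analysis.
from math import ceil, comb, log

def min_trials_zero_failures(pod: float = 0.90, confidence: float = 0.95) -> int:
    """Zero-failure demonstration: need pod**n <= 1 - confidence."""
    return ceil(log(1.0 - confidence) / log(pod))

print(min_trials_zero_failures())   # -> 29 (the classic '29 out of 29' rule)

def demonstration_ok(successes: int, trials: int, pod=0.90, confidence=0.95) -> bool:
    """Exact binomial check: P(X >= successes | p = pod) <= 1 - confidence."""
    tail = sum(comb(trials, k) * pod**k * (1 - pod)**(trials - k)
               for k in range(successes, trials + 1))
    return tail <= 1.0 - confidence

print(demonstration_ok(successes=29, trials=29))   # True
print(demonstration_ok(successes=45, trials=46))   # True: with one miss, 46 trials suffice
```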

12:30
Ivo Häring (Ivo Häring, Fraunhofer EMI, Germany, Germany)
Yupak Satsrisakul (Work done at Fraunhofer EMI, Germany)
Jörg Finger (Fraunhofer EMI, Germany)
Georg Vogelbacher (Fraunhofer EMI, Germany)
Corinna Köpke (Fraunhofer EMI, Germany)
Fabian Höflinger (Fraunhofer EMI, Germany)
Patrick Gelhausen (University of Hagen, Germany)
Advanced Markov Modeling and Simulation for Safety Analysis of Autonomous Driving Functions up to SAE 5 for Development, Approval and Main Inspection
PRESENTER: Ivo Häring

ABSTRACT. The development, approval and recurring testing of driving assistance, partial and conditional automation (SAE levels 1 to 3) and, above all, high and full automation (levels 4 to 5) require an ever-increasing effort. At the same time, there is a lack of recognized, sufficiently scalable, general quantitative approaches in this area, in particular simulation methods including models, software and hardware up to vehicle-in-the-loop simulations, e.g. in the context of the main inspection. In this context, the paper explores the potential of non-classical Markov modeling. First, it exemplifies how vehicles, drivers and other road users can be modeled for different driving scenarios using the Systems Modeling Language (SysML). Based on this model, an abstract Markov diagram is presented. The two Markov simulation methods used operate on a discrete finite state space and allow for time-dependent state transitions. Compared to the matrix solver-based simulation, the Monte Carlo based simulation method is in principle extensible to state history-dependent as well as rule-based state transitions. It also allows the inclusion of subsystem simulation models. The extended Markov model allows states to be evaluated with respect to functionality and safety, e.g. to determine whether fail-operational states are reached with sufficient likelihood. It can also be used to determine dominant critical transitions and insufficient system resolution. In addition, numerous standardized safety and reliability measures are accessible. It will be shown how such a reference model can be quantified using simple failure models, as well as how it could be fed with further transition and failure models at different levels of abstraction using simulations as well as software test data up to field test data. Finally, the paper hints at the potential of such extended Markov modeling, especially with respect to an understandable and efficient safety verification in different product life cycle phases, including after-sales.
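
A toy illustration of the Monte Carlo Markov simulation idea with a time-dependent transition rate; the states, rates and fallback probability are assumptions for illustration, not the paper's reference model.

```python
# Toy Monte Carlo Markov simulation: a small state machine
# (nominal -> degraded -> {fail_operational, unsafe}) with a time-dependent
# degradation rate and a rule-based branch on the fallback outcome.
import random

def degradation_rate(t_hours: float) -> float:
    """Assumed time-dependent rate (e.g., wear-out), per hour."""
    return 1e-4 * (1.0 + t_hours / 5_000.0)

def run_once(mission_h: float = 1_000.0, dt: float = 1.0,
             p_fail_operational: float = 0.95, rng: random.Random = None) -> str:
    rng = rng or random.Random()
    state, t = "nominal", 0.0
    while t < mission_h:
        if state == "nominal" and rng.random() < degradation_rate(t) * dt:
            # rule-based branching on degradation: does the fallback succeed?
            state = "fail_operational" if rng.random() < p_fail_operational else "unsafe"
        t += dt
    return state

rng = random.Random(5)
runs = [run_once(rng=rng) for _ in range(10_000)]
for s in ("nominal", "fail_operational", "unsafe"):
    print(s, runs.count(s) / len(runs))
```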

11:10-12:50 Session 3J: Organizational Factors and Safety Culture
Chair:
Marja Ylonen (University of Stavanger, Norway)
Location: CQ-008
11:10
Aud Wahl (NTNU, Norway)
Anniken Solem (Sintef, Norway)
Enabling safety training: facilitating learning in post-simulation debriefing of maritime officers
PRESENTER: Aud Wahl

ABSTRACT. Simulator-based training programs are common in the training of maritime officers. Much of this training explores social or interpersonal frames of action that influence team and individual performance during the simulator tasks. A central goal is to bridge professionals' experience between actual work and what is learned during simulator tasks. Post-simulation debriefing is regarded as a critical component of this training, as it helps trainees develop and integrate insights from tasks performed in the simulator into future safe work practice through peer discussions and feedback. Facilitation of the learning process is emphasized as an important competence for instructors conducting the debrief, but how to do this is treated as a black box in much of the existing literature.

The intention of this article is to combine what we know about post-simulation debriefing in the maritime domain with facilitation theory and to suggest instructor practices that support safety training of sharp-end personnel. The work is based on a case study of simulator courses for experienced deck officers who man shuttle tankers in the North Sea. These mariners undergo training to maintain a valid certificate as dynamic positioning operators (DPOs).

Dynamic positioning (DP) is a computerized system for automatic positioning and heading control of a vessel, controlled from the bridge. A shuttle tanker is a ship designed to offload oil from an offshore oil field. This is a risk-prone operation that requires a high degree of accuracy, and the DP system is used to keep the ship within specified position and heading limits. Major risks are oil spills, material damage and personnel injuries. Strict safety procedures and DPO expertise are essential to maintaining safety, and it is thus of high importance that operators receive effective training. A hallmark of high-quality training is that the trainees are assessed not only by the trainer but also receive feedback from other trainees with more or different experience than themselves.

The debriefing session itself provides learning by revisiting critical events in the simulator scenario and usually follows a conversational structure consisting of different phases. As such it can be understood as an interactive process, adjusted to the events in the simulator, which will unfold differently every single time because of the trainees' skills and decisions. In the training of mariners, the trainees' experience at sea, prior operational or technology-specific knowledge, experience with similar training settings, and ability or willingness to collaborate with others attending the training affect the learning outcome.

The simulator instructors' skills in guiding discussions and ensuring positive and constructive comments during training programs are paramount in this process. A key is the ability of the instructor to establish trust and build rapport with the trainees from day one, creating a “no-blame culture” by facilitating the various interactions and carrying out the briefing and debriefing in a positive learning environment. A facilitator's job is to make participation and interaction in a group easier; thus the instructor's objective in this context is to ensure a learning environment where honest reflection on the simulation experience may take place. This includes giving meaning to the trainees' contributions in the simulation and giving value to the trainees' experience of successes and failures during the simulation.

A post-simulation debriefing can be tense due to its focus on peer review and public self-evaluation, and creating a climate of psychological safety is an important part of the facilitator's job. The learning activities before, during, and after simulator sessions must therefore be regarded as part of the instructional achievement. This includes activities from the first meeting between instructor and trainees and throughout the entire training program.

To ensure that trainees willingly contribute actions and ideas and share concerns or knowledge with others participating in the program, it is therefore important to apply a holistic approach to the training and not only look at the debriefing sessions. Hence, this article discusses the facilitative aspects of the beginning, working, and closing phases of the training, as well as the continuous adjustments needed to ensure effective learning. Adding to the existing knowledge in the debriefing literature, the following elements are highlighted: the opening statement; doing a check-in activity; being transparent as an instructor; monitoring group dynamics; being flexible and able to improvise; and varying activities to create engagement. The work is summarized in a model that will aid the simulator instructor in facilitating debriefing sessions that help reach the goal of effective safety training.

11:30
Rachael Thompson Panik (Georgia Institute of Technology, United States)
Hamidreza Nazemi (Georgia Institute of Technology, United States)
Joseph Saleh (Georgia Institute of Technology, United States)
Brian Fitzpatrick (Georgia Institute of Technology, United States)
Patricia Mokhtarian (Georgia Institute of Technology, United States)
Probing Elements of Safety Culture Among Engineering Students: Factor Analysis and Preliminary Results

ABSTRACT. Safety culture is a key concept in the literature on accident analysis and prevention. Following its introduction after the Chernobyl disaster, safety culture has become essential in the intellectual toolkit of researchers and safety professionals. While no broadly accepted definition for it exists, it is generally agreed that safety culture includes a shared set of beliefs, attitudes, and competencies in relation to safety within an organization. More nuances are added by some authors to emphasize either its core or its manifestation, such as the shared values or patterns of behaviors within an organization affecting its risk exposure. Safety culture has been probed in a wide range of hazardous industries, for example in the nuclear industry, the oil and gas, the airline, and the healthcare industries. There are over 30,000 scholarly articles whose titles include "safety culture in", which illustrates the broad appeal of this concept.

Fewer articles, however, have explored how an individual entering the workforce acquires or partakes in a company’s safety culture over time, what facilitates or impedes this process, and how prior training or predispositions help shape this acquisition. In this work, we begin such exploration by focusing on engineering students, many of whom will become contributors to, managers, and leaders of technology-intensive or hazardous industries. As such, they are important stakeholders who will partake in and contribute to the safety culture of their future work environment.

We develop a survey instrument and investigate the students' attitudes and beliefs toward safety, their self-reported knowledge and efficacy, and their perception of safety in their field. We also probe aspects of the safety climate at school, as well as the students' exposure to safety issues in their curriculum and other related topics. This work is the first phase in a long-term research project, the objective of which is to investigate the development and acquisition of an organization's safety culture by individuals entering the workforce. In this first phase, we adopt an individual-centric – as opposed to the prevailing organization-centric – approach to the topic, and we investigate what can be referred to, for lack of a better term, as “pre-safety culture”: attitudes and beliefs that an individual brings to an organization and that interact with its existing safety culture to inform their acquisition of said culture.

We report in this work on preliminary results of this first phase. We surveyed students (n = 432) from several engineering disciplines at the Georgia Institute of Technology, including aerospace, mechanical, civil, and industrial and systems engineering, as well as computer science students. Preliminary results are thought-provoking, and we report on them here before we further increase our sample size for a more definitive analysis. We present some of the more salient themes here.

First, many students perceive that safety is important in their fields, but their self-reported knowledge of safety concepts and competence is lacking. This suggests a tension, or gap, between the students' awareness of the importance of safety and their insufficient preparation. Interestingly, we heard from industrial partners that they find new engineering hires ill-prepared in terms of safety awareness and preparation. The curriculum in some engineering schools may therefore be out of step with the expectations of students and future employers. Second, safety appears to have a reputation problem among engineering students: a large minority of students understand safety as just “checking the box” and view it as an impediment to productivity. In addition, they reported that someone who is safety-conscious is also risk-averse, an unfortunate connection that equates safety-mindedness with an inability or unwillingness to take risks. Third, most students reported having frequently experienced peer pressure to engage in unsafe activities. Fourth, students in one major – computer science – appear to be consistently uninterested in and dismissive of safety issues. While more detailed analysis is required to understand this result and confirm whether it is robust beyond our sample, this finding is disquieting, given that engineering systems are becoming increasingly software-intensive; the future professionals who will oversee their development may be uninterested and/or undereducated in safety. This is particularly important as hardware-based safety measures are increasingly being replaced with software-based solutions across many industries. Finally, we examine differences in responses (pre-safety culture) by engineering major and other covariates, including gender, year in college, and whether the students had an internship experience. We conduct an exploratory factor analysis and ordinal logistic regression to characterize the latent factors influencing these results and identify the strength of associations between responses and predictors.
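As a rough, hedged sketch of such an analysis pipeline (exploratory factor analysis followed by ordinal logistic regression), the snippet below uses synthetic Likert-scale data and hypothetical item and covariate names; it assumes scikit-learn and statsmodels (0.12 or later for OrderedModel), not the authors' actual instrument or tooling:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from statsmodels.miscmodels.ordinal_model import OrderedModel  # statsmodels >= 0.12

rng = np.random.default_rng(1)
n = 432  # matches the reported sample size; the data below are synthetic
items = [f"q{i}" for i in range(1, 11)]
df = pd.DataFrame(rng.integers(1, 6, size=(n, len(items))), columns=items)
df["internship"] = rng.integers(0, 2, n)          # hypothetical covariates
df["year"] = rng.integers(1, 5, n)
df["safety_importance"] = rng.integers(1, 6, n)   # hypothetical ordinal outcome

# Exploratory factor analysis on the Likert items (latent "pre-safety-culture" factors).
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(df[items])
for j in range(3):
    df[f"f{j + 1}"] = scores[:, j]

# Ordinal logistic regression of the outcome on factor scores and covariates.
y = df["safety_importance"].astype(pd.CategoricalDtype(ordered=True))
X = df[["f1", "f2", "f3", "internship", "year"]]
res = OrderedModel(y, X, distr="logit").fit(method="bfgs")
print(res.summary())
```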

Future research will expand on the work begun here, including additional samples from other engineering programs in the United States and beyond. Ultimately, the findings can help inform curriculum development and identify factors that can help students and emerging professionals acquire a positive safety culture.

11:50
Dag Atle Nesheim (SINTEF Ocean, Norway)
Marius Imset (University of South-Eastern Norway, Norway)
Kay Endre Fjørtoft (SINTEF Ocean, Norway)
Validation methodology for assessment of new e-navigation solutions
PRESENTER: Dag Atle Nesheim

ABSTRACT. The international maritime industry is riding a wave of digitalization, and there is a dire need for effective prediction of the performance of the different digital solutions, especially with respect to international implementation and the related regulatory requirements.

When assessing new e-navigation (e-nav) solutions, or indeed new technology in general, there is a challenge related to defining Key Performance Indicators (KPIs). The solutions or technology are not yet implemented, i.e. there is no possibility of validating technology strength on the basis of lagging indicators - indicators that express actual performance in terms of the quality of the service the solution targets. One must instead turn to leading indicators - indicators that, based on a hypothesis, express expected performance in terms of the quality of the service the solution targets. Vital to the use of leading indicators is the validity of the hypothesis on which they are based. The more valid the hypothesis, the more valid the leading indicators will be in expressing the new solution's effectiveness in solving or improving the task at hand.

Several guidelines, such as the International Association of Marine Aids to Navigation and Lighthouse Authorities' (IALA) guidelines for the development of testbeds and the International Maritime Organization's (IMO) guideline on software quality assurance and human-centred design for e-navigation, also rely on leading indicators. The same is true for vetting regimes such as the Ship Inspection Report programme (SIRE) from the Oil Companies International Marine Forum (OCIMF). These all have in common an underlying hypothesis about which attributes will produce a rise in quality once implemented. Finally, several international rules and regulations related to safe, secure and environmentally sustainable operations also rely on a notion of what will result in a positive effect.

The methodology described in this paper is created on the basis of the relationship between leading and lagging indicators and of how this relationship provides a continuous improvement loop between the two. A leading indicator predicts performance, while a lagging indicator expresses actual performance. To assess whether the leading indicators were able to make high-quality predictions, we use lagging indicators to express whether the predicted performance was met in real life. By creating a continuous loop of validating the leading indicators through lagging indicators, the hypothesis is that the quality (or validity) of the leading indicators, in terms of predicting performance related to e-navigation, will increase as we are able to compare the predictions with actual performance.

One obvious challenge related to the suggested methodology is the issue of timing. If lagging indicators of actual performance indicate that the leading indicators used to predict performance are not valid, decisions have already been made on invalid hypotheses. Nevertheless, the knowledge gained can be used to ensure that the same mistakes are not repeated. A proven invalid hypothesis may prove just as useful as a proven valid hypothesis, following the theory of Hegelian dialectic in which a thesis is challenged by an antithesis, resulting in a synthesis. This, however, requires that a similar set of solutions is eligible for assessment further down the road, as in our scope related to e-navigation and Maritime Services.

In this paper, we have validated KPIs from the Baltic and International Maritime Council's Shipping KPI Standard (BIMCO Shipping KPI) in light of their suitability for expressing actual performance, thereby validating the quality and suitability of a set of leading indicators from the port call scenario of the SESAME Solutions II project (Secure, Efficient and Safe maritime traffic Management in the Straits of Malacca and Singapore), supported by the Research Council of Norway.

The objective of the SESAME Solutions II project is to develop and validate a new method of ship traffic management, through digitization, that enables the bridge team and onshore authorities to: a) increase shared situational awareness through digitization of ship-to-shore communications, focused on local ship traffic, port approach planning, and challenging weather conditions; b) improve collaborative decision support through shared alerts and traffic predictions, focused on inappropriate ship behavior; and c) reduce the administrative burden of both bridge teams and shore-based operators through the digitization and automation of ship-to-shore reporting. The project shall accomplish these objectives by developing new functionality in both existing systems and prototype services, improving new communications strategies, and studying the effect of the new technology on operators. Development shall be on ship systems, shore systems, and communications equipment.

The SESAME Solutions II project seeks to be the first project to develop and demonstrate, on operational systems both onboard and ashore, a fully realized suite of e-navigation services. It aims to apply the Human-Centered Design (HCD) guidelines proposed in IMO MSC.1/Circ.1512, and as part of this, to quantify the effects of new systems and services on human operators and ship performance. These data will be part of the business case for end-users to buy into e-navigation solutions, especially ship owners and operators.

12:10
Kine Reegård (Institute for Energy Technology, Norway)
Espen Nystad (Institute for Energy Technology, Norway)
Robert McDonald (Institute for Energy Technology, Norway)
Do different ways of organizing outage work have implications for situation awareness? Empirical insights from a case study
PRESENTER: Kine Reegård

ABSTRACT. Planned outages of nuclear power plants can directly affect plant availability and give rise to problems with plant safety, and are therefore important for operational performance. Outages involve complex work that is distributed between different technical disciplines (and organizations) and performed under time pressure. Plants organize this work differently in terms of temporality and co-location of outage teams. However, there is currently limited empirical knowledge of how this can influence their performance. In this paper, we report on a comparative case study of two outage organizations, one having a permanent team and using an outage control centre (OCC), and the other having a temporary project team using staff from the departments, and discuss how their different attributes can influence the emergence of situation awareness (SA). We find that the OCC organization provides a better basis for SA by being more facilitative of interactions between teams and by supporting the flow of information so that relevant information is communicated to the right person at the right time.

12:30
Elizabeth Solberg (Institute for Energy Technology, Norway)
Rossella Bisio (Institute for Energy Technology, Norway)
Improving Learning by Adding the Perspective of Success to Event Investigations

ABSTRACT. It has been suggested that examining successful performance could enhance the learning gained from nuclear facility event investigations. Yet, little research has addressed how. We argue that examining successful performance displayed during the progression of an event, in addition to failures, would help nuclear facilities challenge taken-for-granted assumptions about successful performance and generate learnings that help to ensure safe and reliable performance in the future. We test our predictions with data collected from 29 event reports made in the Fuel Incident Notification and Analysis System between 2016 and 2020. Our findings reveal that successful performance is often displayed at multiple stages during the progression of an event (e.g., both prior to the initiating event and in relation to event detection, mitigation, and recovery), but its contribution to safety and reliability is rarely examined. However, when successful performance is considered, learning points and follow-up actions are generated that improve the resiliency of nuclear operations.

11:10-12:50 Session 3K: Nuclear Industry: Safety issues
Chair:
Edoardo Patelli (University of Strathclyde, UK)
Location: CQ-010
11:10
Mina Torabi (National Centre for Nuclear Research, Poland)
Karol Kowal (National Centre for Nuclear Research, Poland)
Failure modes analysis of the electrical power supply for the GEMINI+ High Temperature Gas-cooled Reactor
PRESENTER: Mina Torabi

ABSTRACT. The High Temperature Gas-cooled Reactor (HTGR) is a Generation IV nuclear technology that can potentially provide an outlet temperature of about 750°C to 950°C. High-efficiency gas turbines are planned to be connected to HTGR-based facilities for heat and power cogeneration, where the heat could be supplied to processes of the chemical industry. Commercial-scale operation of such cogeneration plants requires adequate safety and profitability, which are pivotal elements for the whole industry. One of the key aspects affecting the safety and efficiency of HTGR-based cogeneration plants is the inherent reliability and availability of the constituent installations, including nuclear and non-nuclear systems [1]. Consequently, system modifications towards higher reliability can significantly enhance both safety and profitability. This work focuses on the electrical power supply, which is of high importance from both the safety standpoint and the economic perspective, as its availability ensures the operation of the mitigation systems during emergency conditions and the continuity of production during normal reactor operation. Therefore, in the Probabilistic Risk Assessment (PRA) of a nuclear power plant, the reliability analysis of the electrical systems has become an indispensable factor. In particular, classical Fault Tree Analysis (FTA) can be considered a useful tool for the reliability study of electrical facilities [3]. Some attempts have also been made to develop more relevant methods; for instance, the classical fault tree method was extended to include time-related effects [4]. In this paper, an application of Failure Mode and Effect Analysis (FMEA) is described as a step preceding the FTA of the HTGR electrical systems. The aim of this work was to develop a systematic approach for the identification and selection of the relevant electrical failures causing reactor shutdown and, consequently, the interruption of cogeneration. However, the application of FMEA in this specific area poses numerous challenges. These include, for instance, the specification of the associated rating scales for the failure frequencies and severities as well as the definition of the interfaces between nuclear and non-nuclear systems. In the current research, the High Temperature Engineering Test Reactor (HTTR) and the GEMINI+ reactor were considered as two reference projects relying on the HTGR technology developed for the demonstration of high-temperature nuclear cogeneration. This work applies the FMEA method to the electrical system of GEMINI+ to determine, classify, and prioritize the set of electrical failure modes within the plant. The FMEA-based Gradual Screening method was used for the classification of the electrical failures influencing GEMINI+ operation, and the results were compared with the findings of the comprehensive FMEA study of the HTTR electrical system published elsewhere [2]. The outcomes of our work show that FMEA, as a risk assessment tool, can be applied and developed in the prospective design procedure of nuclear-chemical installations.
In light of the obtained FMEA results, more advanced reliability models, including Fault Trees and Reliability Block Diagrams, can be developed for the considered systems of GEMINI+ and HTTR, thereby enabling a direct comparison of future studies on the safety and profitability of HTGR-based cogeneration plants.
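The GEMINI+/HTTR rating scales themselves are not reproduced in the abstract; a minimal sketch of how an FMEA-based gradual screening step might be coded, with hypothetical 1-5 frequency/severity scales, illustrative thresholds and made-up failure modes, could look like this:

```python
from dataclasses import dataclass

# Hypothetical 1-5 rating scales; the paper's actual GEMINI+/HTTR scales are not shown here.
FREQ = {"very_rare": 1, "rare": 2, "occasional": 3, "frequent": 4, "very_frequent": 5}
SEV = {"negligible": 1, "minor": 2, "major": 3, "shutdown": 4, "safety_relevant": 5}

@dataclass
class FailureMode:
    component: str
    mode: str
    frequency: str
    severity: str

    @property
    def criticality(self) -> int:
        """Simple criticality index = frequency rating x severity rating."""
        return FREQ[self.frequency] * SEV[self.severity]

def gradual_screening(modes, thresholds=(16, 9)):
    """Sort failure modes into screening classes by criticality (thresholds are illustrative).

    Class 1 modes would be carried forward into the fault tree / reliability block diagram models.
    """
    classes = {1: [], 2: [], 3: []}
    for fm in modes:
        c = fm.criticality
        cls = 1 if c >= thresholds[0] else 2 if c >= thresholds[1] else 3
        classes[cls].append(fm)
    return classes

modes = [
    FailureMode("emergency diesel generator", "fails to start", "occasional", "safety_relevant"),
    FailureMode("station transformer", "loss of output", "rare", "shutdown"),
    FailureMode("instrumentation UPS", "battery degradation", "frequent", "minor"),
]
for cls, items in gradual_screening(modes).items():
    print(cls, [f"{m.component}: {m.mode}" for m in items])
```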

[1] Kowal K, Potempski S, Turski E, Stano PM. A general framework for integrated RAMI analysis of nuclear/non-nuclear facilities. In: Proceedings of the 29th European Safety and Reliability Conference (ESREL 2019). Hannover, Germany; 2019, p. 2274. http://dx.doi.org/10.3850/978-981-11-2724-3_0552-cd. [2] Kowal K, Torabi M. Failure mode and reliability study for electrical facility of the high temperature engineering test reactor. Reliability Engineering & System Safety, 210 (2021), p. 107529. [3] Volkanovski A, Čepin M, Mavko B. Application of the fault tree analysis for assessment of power system reliability. Reliability Engineering & System Safety, 94 (2009), pp. 1116–1127. [4] Borysiewicz M, Kaszko A, Kowal K, Potempski S. Time-dependent PSA model for emergency power system of nuclear power. Safety and Reliability of Complex Engineered Systems, Taylor & Francis, London (2015), pp. 1463–1468.

11:30
Karel Vidlak (PhD student FSv, Czechia)
Dana Prochazkova (Czech Technical University in Prague, Czechia)
RISK MANAGEMENT PLAN FOR STEAM GENERATOR MAINTENANCE OF A NUCLEAR POWER PLANT
PRESENTER: Dana Prochazkova

ABSTRACT. Maintenance of technical fittings is a set of activities which ensure that their working conditions are maintained or, in the event of a failure, quickly restored. In simple terms, the maintenance process can be divided into four areas: supervision; inspection; maintenance; and improvements. Maintenance of technical fittings ensures: extension and optimal use of the service life of the equipment; improvement of operational safety; increased readiness of the fittings to perform the required function; optimization of operating regulations; reduction in the number of breakdowns; and the ability to plan the cost of operating the equipment. The maintenance of each technical fitting depends both on the technical design and function of the fitting and on the conditions in which it operates. The subject of the article is to draw up a plan for risk management in the maintenance of the WWER 1000 type steam generator. The steam generator is a critical device in a nuclear power plant because its flawless operation ensures the safety of the entire plant. It physically separates the primary radioactive circuit from the secondary non-radioactive circuit. It cools the water of the primary circuit, which has a high temperature and high pressure (in the case under review, 320 °C and 16 MPa). It is a horizontal heat exchanger with a large heat transfer area, formed by a bundle of "U" pipes, which transfers the heat generated in the nuclear reactor to the feed water and steam of the secondary circuit. The temperature and pressure conditions in the steam generator are set so that intensive steam generation occurs on the surface of the pipes; this steam is then used to drive the turbo-generators. The critical components of the steam generator are: the hot primary water inlet equipment; the equipment for returning the cooled primary water to the primary circuit; the pressure vessel of the steam generator; the steam separators; the equipment for spraying the secondary cooling water onto the pipes carrying warm primary water; the pipes carrying primary water; and the equipment for the removal of saturated steam to the turbine. The nuclear power plant project itself contained the requirements for the maintenance of the steam generator, i.e. the schedule of activities and the methods of their implementation for individual components and their interconnection. On the basis of this document, maintenance plans were drawn up, which have been implemented since commissioning into permanent operation. However, during operation, local conditions, such as the aggressiveness of the cooling water (chloride levels) or changes in the surrounding environment, have had an impact, and it has been shown that some critical elements wear out faster than the project anticipated. Therefore, the entire process of the steam generator function was subjected to a risk analysis, and the critical points of the process that need to be monitored and ensured for timely maintenance were identified, i.e. a risk-based maintenance process was set up. For the maintenance of the steam generator to be safe and economical, the maintenance process needs to be based on risk management in favor of safety. For this reason, we identified critical points of the steam generator and phenomena (external, internal, human-factor and organizational errors) that could disrupt the steam generator function.
We created scenarios of their impacts on the steam generator function for normal, abnormal and critical conditions, especially at the critical points. Based on our knowledge and practical experience, we formulated specific countermeasures, which we discussed with experts to avoid possible conflict situations. The result is a risk management plan that is site-specific.

11:50
Coralie Esnoul (Institute for Energy Technology, Norway)
Yonas Zewdu Ayele (Institute for Energy Technology, Norway)
Rune Fredriksen (Institute for Energy Technology, Norway)
Challenges in evidence evaluation and decision making within DI&C systems safety demonstration
PRESENTER: Coralie Esnoul

ABSTRACT. The development of, e.g., nuclear plants and systems follows standards, regulations and guidance that allow the necessary degrees of freedom for the use of new technology and solutions. Similar degrees of freedom can be found in how a safety demonstration is built. An important factor impacting safety demonstrations is the variability of evidence. This is partially due to the nature of the evidence itself, as many different types of evidence can be provided to support the same claim. Evidence can also be organized and presented in different ways in the demonstration. On the other hand, the evidence is evaluated by an assessor as either sufficient or insufficient with regard to supporting the safety claim. This process introduces an element of subjectivity, as the assessor may have previous experience with different types of evidence, as well as preferences in the way argumentation is built. The focus of the assessor can vary depending on the complexity of the system and the evidence. Thus, a central question is how to improve current practices to reduce the uncertainty among stakeholders on how to both present and evaluate evidence, including the link between evidence and claim.

The constant evolution of and changes in technology and solutions create an opportunity to improve how safety demonstrations and argumentations are organized and performed. Although this does not mean that current safety demonstration practice is not good enough for assessing the safety of systems, it offers the opportunity to make the processes more efficient for all stakeholders. On this topic, the Halden Human-Technology-Organization (HTO) program, an international research collaboration project, supports its partners by performing research to improve the safety of nuclear power plants, including the processes of safety demonstration. This paper presents part of the results achieved in one project of the HTO program.

After a short introduction of the HTO program, this paper presents the scope and objectives of the project: to capture the state of practice within the nuclear industry on independently verifiable evidence and evidence combinations in assurance claims for critical DI&C systems. Through a literature review, the project tries to answer the following questions: i) what types of evidence are used in safety demonstration; ii) what criteria are used to evaluate whether combinations of evidence are sufficient in an assurance claim for critical DI&C systems; and iii) how the safety demonstration process can best be facilitated with regard to evidence. The second part of the paper presents the findings of the literature review, identifying current gaps and challenges in safety demonstration, e.g. possible improvements in the communication among the stakeholders of the safety demonstration. The literature review also initiates a list of standards and guidance documents that contain information on how to structure the safety demonstration, what quality of evidence is expected, and how to organize and communicate the study, establishing the current state of practice. This work is part of an ongoing project and will be further developed through 2022 and 2023.

12:10
Ana Sánchez (Universitat Politècnica de València. Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Spain)
José Felipe Villanueva (Universitat Politècnica de València. Departamento de Ingeniería Química y Nuclear, Spain)
Sebastián Martorell (Universitat Politècnica de València. Departamento de Ingeniería Química y Nuclear, Spain)
Carlos Sofia (Universitat Politècnica de València. Departamento de Ingeniería Química y Nuclear, Spain)
Isabel Martón (Universitat Politècnica de València. Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Spain)
Comparison between non-parametric and parametric tolerance intervals. Application to the uncertainty analysis of a LBLOCA

ABSTRACT. The International Atomic Energy Agency (IAEA) guidance on the use of deterministic safety analysis for the design and licensing of nuclear power plants (NPPs) addresses four options for Deterministic Safety Analysis (DSA) applications. Option 3 considers the use of best-estimate codes and data together with an evaluation of the uncertainties, the so-called Best Estimate Plus Uncertainty (BEPU) methodologies. The most popular statistical method used in BEPU is Wilks' method. Wilks' method is non-parametric, based on order statistics, and aims to estimate a certain coverage of the figure of merit (e.g. maximum peak clad temperature) with an appropriate confidence level, i.e. a tolerance interval. The use of Wilks' method with first-order statistics mostly leads to conservative results. An alternative that addresses this problem is the use of parametric methods, which assume the data come from a particular parametric model. However, a drawback of this alternative is that parametric tolerance intervals can be sensitive, in terms of coverage and confidence level, to model misspecification. In this context, the objective of this paper is to compare the results obtained with Wilks' method and with the parametric method. In the case of the parametric method, the selection of the most suitable model is carried out using a goodness-of-fit equivalence test; the tolerance interval is then estimated based on the selected distribution. The two approaches (non-parametric and parametric) are applied in the uncertainty analysis of a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a Pressurized Water Reactor using the thermal-hydraulic system code TRACE. The results are compared with respect to (a) the average coverage probability, (b) accuracy, and (c) bias.
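As a hedged illustration of the two ingredients compared in the paper, the snippet below computes the classical first-order Wilks sample size and a parametric (normal-assumption) one-sided upper tolerance limit; the peak clad temperature values are synthetic placeholders, not TRACE results:

```python
import numpy as np
from math import ceil, log
from scipy.stats import norm, nct

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Minimum number of runs so the sample maximum is a one-sided (coverage, confidence) bound."""
    return ceil(log(1.0 - confidence) / log(coverage))

def normal_upper_tolerance_limit(x, coverage=0.95, confidence=0.95):
    """Parametric one-sided upper tolerance limit assuming normally distributed data."""
    n = len(x)
    k = nct.ppf(confidence, df=n - 1, nc=norm.ppf(coverage) * np.sqrt(n)) / np.sqrt(n)
    return np.mean(x) + k * np.std(x, ddof=1)

print(wilks_sample_size(0.95, 0.95))  # 59 code runs for the classical 95/95 criterion

rng = np.random.default_rng(0)
pct = rng.normal(1050.0, 35.0, size=59)  # hypothetical PCT sample in deg C (illustrative only)
print(pct.max(), normal_upper_tolerance_limit(pct))
```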

12:30
Yonas Zewdu Ayele (Department of Risk, Safety and Security, Institute for Energy Technology, Norway)
Coralie Esnoul (Department of Risk, Safety and Security, Institute for Energy Technology, Norway)
Independently Verifiable Evidence for Safety Assurance – A Survey Analysis

ABSTRACT. Understanding independently verifiable safety evidence is an important prerequisite for demonstrating compliance with a digital instrumentation and control (DI&C) system safety assurance claim and for eliminating the need for “design diversity”. In addition, the goal of evidence-informed safety assurance is to bring a high standard of evidence into the safety-assurance process for DI&C while considering factors, such as contextual and experiential ones, that influence decisions on verifying evidence. However, collecting information about the science behind the evidence to determine its validity is not straightforward, and so a great deal of effort goes into ensuring the quality of independently verifiable evidence and into challenging the quality of evidence in arguments we disagree with. Without effective evidence identification and verification, for example, it is challenging for regulators to ask the right questions about the system safety assurance claim. Moreover, the quality of independently verifiable evidence arguably drives the effectiveness of the evidence-informed safety assurance claim and will ultimately affect the corrective actions the investigation team proposes. Further, improving the identification of independently verifiable evidence early on makes it easier to support the system safety claim.

The overarching objective of this paper is thus to perform a structured survey with experts working on DI&C safety assurance to assess the experts' interpretation and understanding of independently verifiable evidence and evidence combinations for different assurance claims. To fulfil the objectives, we developed a questionnaire and carried out a survey with selected subject matter experts. The majority of the questions were formulated as statements, and the experts were invited to provide their hands-on experience and perspective on the extent to which the statements correspond to reality in the country of their expertise. One sub-goal is also to highlight some of the disagreements between regulators and industry on what evidence is needed to assure safety. The study is designed primarily for safety assurance practitioners, in particular licensees and regulators, but anyone working in nuclear power plants, human reliability, and other related fields will find the information useful, including researchers, evaluators, technical assistance providers, and decision-makers. The initial findings indicate that: i) to address the miscommunication and/or disagreement between industry and regulator, there is a need to create a clear communication process; ii) when it comes to the independently verifiable evidence that would suffice to eliminate the need for “design diversity”, there is no single answer to the question, and this is a hard problem; and iii) to address the possibility of unsafe subsystem interactions, one would need to look at different sources of evidence. The overall results highlight that regulators and industry are looking for a more precise technical basis to evaluate an assurance case, in order to achieve consistency across both industry and regulators/reviewers and consequently to reduce the dependency on individual judgment.

14:00-15:20 Session 4A: Risk management and Covid-19 challenges
Chair:
Victor Hrymak (Technological University Dublin, Ireland)
Location: LG-22
14:00
Torgeir Kolstø Haavik (NTNU Social Research, Norway)
Stian Antonsen (NTNU Social Research, Norway)
Gudveig Gjøsund (NTNU Social Research, Norway)
Tone Merethe Aasen (Trondheim kommune, Norway)
Public administration, reliability and innovation – learnings from a municipal pandemic management case study
PRESENTER: Stian Antonsen

ABSTRACT. In the search for optimisation of public administration, organisational design, coordination principles and control parameters are central aspects. At the level of municipalities, such aspects have recently received attention in connection with management of the COVID-19 pandemic, both with respect to managing the external challenges of providing services to citizens and the internal challenges of maintaining continuity of the municipality's core tasks. Principles of similarity and responsibility have been actualised in the debates, revealing interesting incongruities between discourses of emergency preparedness within the fields of organisational safety and reliability and public administration. While in public administration mismatches between the structures of problems and of problem-solving networks have framed reliability as a wicked problem, drawing attention towards structural aspects, comparable discourses on organisational safety and reliability have engaged more with issues of organisational culture and their implications for organisational mode in peacetime and during crises. In the intersection between these fields of public administration and organisational reliability, we have studied the phenomenon of cross-sector coordination, finding links between problem descriptions (cf. wicked problems) echoing the public administration literature and coping strategies (cf. organisational culture) with resonance in the literature on organisational reliability. For around 18 months, we performed in-depth studies of the way a Norwegian municipality adapted to the leadership and continuity challenges following from the COVID-19 pandemic. Our data material consists of interviews, observation and participatory analysis of meetings with representatives of the municipality. We conducted 37 interviews with municipal leaders and representatives of the municipality, each interview lasting between 1 and 1.5 hours. We performed observations of meetings in two groups that played key roles in adapting to the pandemic situation: 22 meetings in the municipality's cross-sectoral collaboration group and 51 status meetings in the municipality's crisis management group. The combination of in situ observation of adaptation and decision-making in practice with more distanced reflection on practice through the interviews provides a unique basis for an in-depth case study with the potential for conceptual development (Antonsen & Haavik, 2021). In the paper, we provide empirical examples and analysis of the municipality's adaptation to the COVID-19 crisis and the challenges experienced in the process. From this we draw theoretical implications relevant for both the literature on emergency preparedness and broader organisational research. Interestingly, while drawing on insights from the field of organisational reliability to address public administration challenges, our search for what the observed municipal crisis management strategies are actually cases of leads us to the innovation literature and the discourses on structural holes and middle-ground competence. From here, it becomes clear that there is a significant learning potential in the successful aspects of municipal crisis management – not only for dealing with the next crisis, but also and particularly for public sector innovation to meet future governance challenges, where challenges and innovation at unprecedented scales (e.g. climate adaptation, smart cities) require innovative adaptation.
The solutions we have identified in our study relate less to structures and formalised aspects of organising than to ways of working and informal aspects of organising. Cross-fertilisation between research on safety and reliability and organisation studies is not a new phenomenon; a potent example is the organisational sensemaking research stemming from Weick. The present study is another such example, stretching the applicability of organisational safety research into the field of innovation as municipalities are challenged to deliver services that not only satisfy individual citizens but also meet future requirements for sustainability and societal resilience.

14:20
Stian Antonsen (NTNU Social Research, Norway)
Torgeir Kolstø Haavik (NTNU Social Research, Norway)
Gudveig Gjøsund (NTNU Social Research, Norway)
COVID-19 and uncertainty
PRESENTER: Stian Antonsen

ABSTRACT. The COVID-19 pandemic has been a situation with high stakes and with uncertainty about the hazard and the severity of potential consequences. The pandemic thus presents an opportunity for empirical studies within an uncertainty-based perspective on risk that are able to highlight the various faces of uncertainty involved in the ontology of the crisis, knowledge generation, prediction, as well as decision-making. The paper presents elaborated categories for understanding and dealing with different forms of uncertainty, with the aim of inviting discussion within risk and safety science on the role of empirical research within risk science.

14:40
Alexander Cedergren (Lund University, Sweden)
Henrik Hassel (Lund University, Sweden)
On the use and value of risk assessment for strengthening the response to the Covid-19 pandemic
PRESENTER: Henrik Hassel

ABSTRACT. Risk assessment constitutes a valuable tool for organisations responsible for critical societal functions in their effort to foresee and mitigate the likelihood and consequences of potentially harmful events. In a previous project, the researchers developed and implemented a method for risk assessment in close collaboration with the municipality of Malmö, Sweden. This work took place over a period of more than 3.5 years, conducted through an iterative process in which each step of the method was tested and evaluated in its organisational context. The process comprised more than ten iterations, ranging from small-scale development and testing with a reference group to full-scale implementation in the entire municipal organisation. The final method draws on principles from both Business Continuity Management (BCM) and Risk Management (RM). Shortly after the full-scale implementation of the method, the municipality was hit by the COVID-19 pandemic. This spurred the question of whether the municipality could see a value in the risk assessment process and make use of the risk assessment results in their response to the pandemic. To investigate this research question further, an interview study involving 15 respondents from Malmö municipality was conducted during the second half of 2021. The results show that, while the municipality had not yet analysed the specific threat of a pandemic, the risk assessment process provided some benefits for the response to it. In particular, the interviews revealed that the risk assessment process had resulted in a prioritisation of the municipal departments' activities, which proved useful when the pandemic caused a shortage of staff and the need to postpone or cancel certain activities. The interviews also showed that the risk assessment process had created a general preparedness and increased risk awareness in the municipal departments, although some respondents noted that a more thorough assessment of risks and vulnerabilities prior to the pandemic would have been valuable. Following the results from this study, the paper concludes with a discussion of how the risk assessment process can be complemented with additional activities to further strengthen the adaptive capacity in the face of future crises.

14:00-15:20 Session 4B: S.03: Transdisciplinary Infrastructure Asset Management for Sustainable and Resilient Infrastructure II
Chair:
Omar Kammouh (Delft University of Technology, Netherlands)
Location: CQ-008
14:00
Mohsen Songhori (Eindhoven University of Technology, Netherlands)
Claudia Fecarotti (Eindhoven University of Technology, Netherlands)
Geert-Jan van Houtum (Eindhoven University of Technology, Netherlands)
Simulation Supported Bayesian Network Approach for Performance Assessment of Complex Infrastructure Systems
PRESENTER: Mohsen Songhori

ABSTRACT. With increasing traffic demand, aging infrastructure, and higher user expectations, bridge network managers seek tools and approaches by which they can maintain the normal service performance of bridge networks. To address this managerial need and support their decision-making, we present a simulation-supported Bayesian Network modeling methodology that facilitates system-level performance evaluation of bridge-level repair and reinforcement decisions.

In this work, we conceptualize a bridge network as a transportation road network in which bridges (each located on a road) are the only components that deteriorate and can fail. Furthermore, we treat this network as a multi-state network system in which each bridge can be in one of multiple states defined according to its residual capacity (e.g., fully operational, under maintenance, failed). In addition to these characteristics, the presence of a large number of components (e.g., bridges, roads) as well as the interrelations among them are further dimensions of complexity in bridge networks. Acknowledging and addressing this complexity, our modeling approach combines system decomposition, simulation and Bayesian Network modelling. Moreover, the presented approach derives probabilistic information from simulation rather than relying on experts, and it can therefore be useful for bridge networks where access to the actual conditions of bridges and their monitoring is often difficult.

Generally, in planning and evaluating the performance of a bridge network with unreliable components, one could consider several aspects such as users' travel-time reliability, operations and maintenance costs, and network connectivity probability. In this paper, we consider both the infrastructure owner's cost and travel time as performance measures. Moreover, for modeling traffic dynamics, we use the Markov Chain Traffic Assignment method, which takes a Markovian statistical approach to model network traffic at the macro or aggregate level.

Our methodology initially elaborates three layers of system resolution by (i) decomposing a bridge network into road sections according to its topological structure, and (ii) identifying each road's maintenance- and service-related features. Then, we analyze each road's availability and costs via modeling and simulation. Next, using the results for each road's availability and costs, we simulate the whole bridge network and evaluate its performance measures. The last step of our approach is the analysis of users' travel time and the costs of the whole network by integrating the simulation results into a Bayesian Network model.
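A minimal sketch of the road-level simulation step, assuming hypothetical bridge state probabilities and owner costs (the paper's data and the subsequent Bayesian Network integration are not reproduced), might look like this; the estimated availability and cost could then parameterise the road node's conditional probability table at the next resolution layer:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bridge states, yearly state probabilities and owner costs (illustrative only).
STATES = ["operational", "under_maintenance", "failed"]
STATE_COST = np.array([0.0, 50_000.0, 400_000.0])
bridge_state_probs = {
    "bridge_A": [0.96, 0.03, 0.01],
    "bridge_B": [0.92, 0.06, 0.02],
}

def simulate_road(bridges, n_samples=50_000):
    """Monte Carlo estimate of a road section's availability and expected owner cost.

    The road is treated as available only if every bridge on it is operational.
    """
    states = {b: rng.choice(len(STATES), size=n_samples, p=p) for b, p in bridges.items()}
    available = np.all([s == 0 for s in states.values()], axis=0)
    cost = sum(STATE_COST[s] for s in states.values())
    return available.mean(), cost.mean()

print(simulate_road(bridge_state_probs))
```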

14:20
Neetesh Sharma (UIUC, United States)
Armin Tabandeh (UIUC, United States)
Paolo Gardoni (UIUC, United States)
Colleen Murphy (UIUC, United States)
Modeling and Evaluating the Impact of Natural Hazards on Communities and their Recovery
PRESENTER: Paolo Gardoni

ABSTRACT. Regional risk and resilience analysis requires comprehensive modeling of hazard impacts on physical systems (i.e., structures and infrastructure) and communities as well as their recoveries. A holistic approach to regional risk and resilience analysis should be interdisciplinary, integrating engineering models and social science approaches into a consistent formulation. Engineering tools model the impact of hazards on physical systems, social science approaches define the measures of societal impact, and interdisciplinary approaches translate the functionality loss of physical systems into relevant measures of societal impact. Such impacts are typically not limited to the immediate aftermath of a damaging event but can be long-term. Furthermore, aging and deterioration of physical systems, population growth, economic development in regions vulnerable to natural hazards, such as coastal regions, and climate change can exacerbate risks. Existing engineering tools and measures of societal impact have been developed in isolation, without capturing interactions between physical and socio-economic systems. This paper presents a holistic formulation for regional risk and resilience analysis that integrates state-of-the-art engineering models and social science approaches to comprehensively predict and evaluate the impact of hazards. The paper also incorporates sustainability and resilience as two essential elements in risk evaluation. Some of these concepts are explained through a realistic example.

14:40
Srijith Balakrishnan (Singapore-ETH Centre, Singapore)
Beatrice Cassottana (Singapore-ETH Centre, Singapore)
Arun Verma (National University of Singapore, Singapore)
A network clustering approach for dimensionality reduction in machine learning models for infrastructure resilience analysis

ABSTRACT. Recent studies increasingly adopt simulation-based machine learning (ML) models for the analysis and prediction of critical infrastructure system resilience. For realistic applications, ML models need to consider the component-level characteristics that influence the network response during emergencies. However, such an approach could result in a large number of features and cause ML models to suffer from the 'curse of dimensionality,' i.e., an enormous amount of training data is required to achieve a high prediction accuracy. In this study, we present a clustering-based method to minimize the issue of high-dimensionality in ML models developed for resilience analysis in large-scale interdependent infrastructure networks. The methodology is implemented in three steps: (a) simulation data generation, (b) dimensionality reduction via clustering, and (c) development of resilience prediction models. First, an integrated infrastructure simulation model is developed to simulate the network-wide consequences of various disruptive events. The relevant component-level features are extracted from the simulated data. Next, clustering algorithms are employed to group infrastructure components based on their topological and functional characteristics. Then, the cluster-level features are derived from the component-level features. Finally, ML algorithms are used to develop models that predict the network-wide impacts of disruptive events using the cluster-level features and the network recovery characteristics. The generalized applicability of the method is demonstrated on different interdependent water and transportation testbeds. The proposed method can be used to develop accurate prediction models for rapid decision-making in post-disaster recovery of infrastructure networks.
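A rough sketch of steps (b) and (c) of the described methodology, using synthetic component features, k-means clustering and a generic regressor in place of the authors' actual testbeds and models:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Hypothetical component-level features for an interdependent testbed, e.g. centrality,
# capacity, criticality and fragility per component (values are synthetic).
n_components, n_features = 500, 4
component_features = rng.random((n_components, n_features))

# Step (b): group components by topological/functional similarity.
k = 12
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(component_features)

def cluster_level_features(damage_state: np.ndarray) -> np.ndarray:
    """Collapse a component-level damage vector (0/1 per component) into k cluster features,
    here simply the fraction of damaged components per cluster."""
    return np.array([damage_state[clusters == c].mean() for c in range(k)])

# Step (c): train a resilience-metric predictor on simulated disruption scenarios (synthetic here).
n_scenarios = 2_000
damage = rng.integers(0, 2, size=(n_scenarios, n_components))
X = np.vstack([cluster_level_features(d) for d in damage])
y = 1.0 - X.mean(axis=1) + 0.05 * rng.standard_normal(n_scenarios)  # placeholder resilience index
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.score(X, y))
```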

15:00
Syed Taha (Moreld Apply AS, Norway)
Ove Njå (University of Stavanger, Norway)
Jawad Raza (Moreld Apply AS, Norway)
Assessing the Quality of Comparative Studies in the Asset Management and Safety Domain – Basics of Best Practices Conceptualization
PRESENTER: Syed Taha

ABSTRACT. "Best practice" is a relative term that can be claimed by individuals, organizations, or standardization bodies. The interpretation and understanding of the term are subjective and depend on those employing it. Our focus is on how best practices can be transferred between the oil and gas and road tunnel industries in an asset management context. In this paper, we outline the fundamentals of identifying best practices and propose an idea for the transfer process. To that end, we reviewed the literature to conceptualize best practices and discussed why best practices are needed, what challenges persist, the criteria for qualifying as a best practice, and the various approaches for their identification. We then analyzed best practice definitions from authors in the research and industry communities. It is observed that the term 'best practice' is often claimed without any qualification criteria or details. The term is also misused in industry, and organizations sometimes use it as a marketing strategy without justification. Best practices are sometimes confused and used interchangeably with standards, guidelines, common and recommended practices, etc., and sometimes even treated as synonymous with 'good' and 'excellent' practices. The analyzed best practice definitions are primarily simple in nature, with very few formally defined. It is also observed that best practices are mainly claimed from the industrial experiences of organizations and individuals, which need to be verified scientifically. With this conceptualization, we present an idea for a best practice transfer model using a systems-theoretic approach, which will be explored in future research.

14:00-15:20 Session 4C: S.05: Exploring new trends in Machine Learning approaches II
Chairs:
Enrique Lopez Droguett (UCLA, United States)
Stefan Bracke (University of Wuppertal, Germany)
Location: CQ-006
14:00
Cedric Seguin (Laboratoire Lab-STICC, Université Bretagne Sud, France)
Yohann Rioual (Laboratoire Lab-STICC, Université Bretagne Sud, France)
Jean-Philippe Diguet (CROSSING, CNRS, Australia)
Guy Gogniat (Laboratoire Lab-STICC, Université Bretagne Sud, France)
Data extraction and deep learning method for predictive maintenance in vessel’s engine room
PRESENTER: Cedric Seguin

ABSTRACT. Maintenance is an essential operation in the maritime domain, as in all industrial sectors, since it ensures systems' reliability and security. Furthermore, maintenance costs correspond to an important part of industrial budgets and exploitation costs and thus need to be addressed with strong attention. Today the logistics sector relies on intensive and efficient exploitation of transportation resources, which means that the availability of operational vessels is mandatory.

In this context, in the maritime field, systematic maintenance is always considered because parts storage areas are limited and failures can have critical consequences for the system and, moreover, for people. Predictive maintenance improves maintenance organization and maximizes time at sea while mitigating downtime at the shipyards and reducing maintenance costs. Predictive maintenance is therefore a necessity, as it corresponds to a major industrial challenge.

The work presented in this paper takes place in the Seanatic project (Sea Analytic Connected Boat), which aims to bring new opportunities in terms of energy efficiency, safety, and operating cost reduction in the maritime domain by applying predictive maintenance concepts using artificial intelligence tools. Although IoT is rapidly growing in the marine industry, the lack of historical data, as well as of faulty data, is a major concern for implementing predictive maintenance algorithms. However, solutions based on digital twins can be set up.

In this paper, we focus on predictive maintenance applied to the vessel’s engine room, introducing two contributions. First, we develop a method that uses a training simulator to generate synthetic data for evaluating predictive maintenance algorithms. Second, we apply a deep learning approach to estimate the remaining useful lifetime (RUL).

To build supervised datasets, we export generated data from the Engine Room Simulator by Kongsberg. The simulator is used to train future maintenance officers and allows us to shape nominal and non-nominal scenarios. Together with experts (senior officers and tutors), we identified relevant scenarios, for instance dirty filters or maintenance on turbochargers.

Since artificial intelligence algorithms require many heterogeneous datasets to develop their models, we adopted an approach based on segments of data, influenced by the simulator's faulty variables, to randomly create a large volume of varied datasets.

RUL prediction is difficult for a very complex system with multiple components, multiple states, and therefore an extremely large number of parameters. Thus, we propose a two-step approach. The first step consists of non-linear feature extraction using kernel principal component analysis (KPCA). The second applies the Weibull Time to Event Recurrent Neural Network (WTTE-RNN) algorithm to predict the parameters of a Weibull distribution corresponding to the survival function, which we use to determine when to schedule maintenance.
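To make the second step concrete, the sketch below shows the Weibull time-to-event loss that such a network is typically trained with, and how the resulting survival function can trigger a maintenance decision. The recurrent network itself is omitted, and all parameter values and the 90 % survival threshold are illustrative assumptions, not the Seanatic implementation.

```python
import numpy as np

def weibull_nll(t, failed, alpha, beta):
    """Negative log-likelihood of a Weibull(alpha, beta) time-to-event model.

    t      : time to event (or to censoring)
    failed : 1 if the failure was observed, 0 if the record is censored
    alpha  : scale parameter predicted by the network
    beta   : shape parameter predicted by the network
    """
    log_hazard = np.log(beta) - np.log(alpha) + (beta - 1.0) * (np.log(t) - np.log(alpha))
    cum_hazard = (t / alpha) ** beta
    # observed failures contribute log h(t); every record contributes -H(t)
    return -(failed * log_hazard - cum_hazard)

def survival(t, alpha, beta):
    """Weibull survival function S(t) = exp(-(t/alpha)^beta)."""
    return np.exp(-(t / alpha) ** beta)

# Illustrative decision rule: plan maintenance once predicted survival drops below 90 %.
alpha_hat, beta_hat = 120.0, 1.8           # hypothetical network outputs, in hours
print("training loss for one failed record:", weibull_nll(80.0, 1, alpha_hat, beta_hat))
horizon = np.arange(1, 500)
due = horizon[survival(horizon, alpha_hat, beta_hat) < 0.9][0]
print(f"schedule maintenance within the next {due} h")
```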

In conclusion, this paper presents work from the Seanatic project, which aims to provide a maintenance assistance tool to anticipate functional breakdowns and improve the logistics of the intervention phases, limiting technical stoppages to increase days at sea. Two contributions are presented:
- A methodology to collect data, in the absence of faulty data, using a digital twin solution
- A deep learning approach to predict the need for maintenance of the diesel generator

The obtained results demonstrate the benefit of such an approach for anticipating maintenance operations and optimizing equipment lifetime. A 44-metre boat is dedicated to the experiments and is currently being fitted with all the instrumentation. A future perspective is the integration of our approach on board this boat.

14:20
Simen Eldevik (DNV, Norway)
Carla Ferreira (DNV, Norway)
Christian Agrell (DNV, Norway)
Sindre Olsen Skrede (DNV, Norway)
Erling Katla (DNV, Norway)
Marie Lindmark Sandøy (Lundin Energy, Norway)
Per Jørgen Dahl Svendsen (Lundin Energy, Norway)
Safe reduction of conservatism by combining machine learning and physics-based models
PRESENTER: Carla Ferreira

ABSTRACT. Complex engineering systems are often designed using advanced physics-based models to understand critical limitations of the system prior to commencing the building, installation, or operation of the system. During this process, it is prudent to apply conservative assumptions to key aspects of the design. Some of these conservative assumptions are made to handle stochastic variability in the situations the system will experience during its lifetime. This is often denoted as aleatory uncertainty and represents uncertainties related to practically irreducible variability. A common example of such aleatory uncertainty may be the loads induced by waves, wind, and current on a structure. Assumptions related to the maximum load induced by weather must be sufficiently conservative to make sure that the system can withstand all relevant weather scenarios it may experience during its lifetime. However, some conservative assumptions are related to aspects that are unknown during the design phase, but where it is possible to gather evidence about their actual values as the system is operated and experience is gained. The conservatism in these assumptions is related to knowledge, often denoted epistemic uncertainty, and may be reduced as more evidence is gathered without compromising the acceptable risk level of the system. This paper describes a method to optimize critical operational decisions in high-risk systems through:

- Rigorous distinction between assumptions that include necessary conservatism and assumptions where conservatism can be reduced based on increased knowledge gained through operational experience
- Probabilistic assessment of model discrepancy between physics-based and data-driven models
- Bayesian iterative update of model assumptions

It showcases how to reduce the model discrepancy between physics-based models and experience-based data-driven models (machine learning) without compromising safety.

14:40
Brian Murray (SINTEF Ocean, Norway)
Ørnulf Jan Rødseth (SINTEF Ocean, Norway)
Lars Andreas Lien Wennersberg (SINTEF Ocean, Norway)
Håvard Nordahl (SINTEF Ocean, Norway)
Armin Pobitzer (SINTEF Ålesund, Norway)
Henrik Foss (Kongsberg Seatex, Norway)
Approvable Artificial Intelligence for Autonomous Ships: Challenges and Possible Solutions
PRESENTER: Brian Murray

ABSTRACT. Artificial Intelligence (AI) is being promoted as an important contributor to ensuring the safety of autonomous ships. However, utilizing AI technologies to enhance safety may be problematic. For instance, AI is only capable of performing well in situations that it has been trained on, or otherwise programmed to handle. Quantifying the true performance of such technologies is, therefore, difficult. This raises the question of whether these technologies can be applied to larger ships that need approval and safety certification. The issue gains further complexity when AI is introduced as an element in remote control centres. This paper presents an overview of the most relevant applications of AI for autonomous ships, as well as their limitations in the context of approval. It is found that approval processes may be eased by restricting the operational envelope of such systems, as well as leveraging recent developments in explainable and trustworthy AI. If leveraged properly, AI models can be rendered self-aware of their limitations, and applied only in low-risk situations, reducing the workload of human operators. In high-risk situations, e.g. high AI model uncertainty or complex navigational situations, a timely and effective handover to a human operator should take place. In this manner, AI-based systems need not be capable of handling all possible situations, but rather be capable of identifying their limitations and alerting human operators to situations that they are incapable of handling with an acceptable level of risk.

15:00
Fan Wu (DNV, China)
Qian Wei (DNV, China)
Yanwei Fu (Fudan University, China)
Adversarial active testing for risk-based AI assurance
PRESENTER: Fan Wu

ABSTRACT. The wide application of machine learning makes model testing more and more important, especially for safety-critical systems. Unfortunately, classical performance metrics on a fixed dataset fail to satisfy industry requirements. In particular, the challenges for testing come from data coverage, testing efficiency, unidentified risks, acceptance criteria and run-time metrics. In this paper, we propose an uncertainty estimation algorithm, dubbed the Adversarial Deviation Score (ADS), which is a margin-based method relying only on the model and the input. Built upon ADS, we further introduce an efficient testing framework for machine learning models called Adversarial Active Testing. Given a limited labeling budget, this framework can actively select the risky samples, which is crucial for risk-based assurance. We verify this framework on AI-based corrosion detection, a typical AI use case in industrial inspection. Experiments show that our method can effectively identify untrustworthy predictions, including out-of-distribution samples and adversarial examples. The uncertainty scores are consistent with the evaluations of domain experts. The ADS algorithm can also be easily applied to run-time monitoring or assurance.
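The abstract does not give the ADS formula, so the sketch below only illustrates the general idea of a margin-based score that depends on nothing but the model and the input: for a linear (logit) model the distance of a sample to the decision boundary has a closed form, and the lowest-scoring samples are the ones queued for labelling. For deep models the margin would instead be approximated with iterative adversarial perturbations; all names and the synthetic data are purely illustrative.

```python
import numpy as np

def margin_score(x, w, b):
    """Margin-style uncertainty score for a linear (logit) model w.x + b.

    The score is the smallest L2 perturbation that would flip the predicted
    class, i.e. the distance of x to the decision boundary.  Small scores
    flag risky samples worth sending to a human labeller."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

def active_test_selection(X, w, b, budget):
    """Select the `budget` samples with the smallest margin for labelling."""
    scores = np.array([margin_score(x, w, b) for x in X])
    return np.argsort(scores)[:budget]

# Illustrative use with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w, b = rng.normal(size=5), 0.1
picked = active_test_selection(X, w, b, budget=10)
print("indices queued for expert review:", picked)
```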

14:00-15:20 Session 4D: S.22: Reinforcement Learning for RAMS Applications
Chairs:
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Michele Compare (aramis, Italy)
Location: CQ-106
14:00
Zhaojun Hao (Energy Department, Politecnico di Milano, Italy)
Francesco Di Maio (Energy Department, Politecnico di Milano, Italy)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Optimal Prescriptive Maintenance of Nuclear Power Plants by Deep Reinforcement Learning
PRESENTER: Zhaojun Hao

ABSTRACT. The Operation & Maintenance (O&M) of complex energy systems, such as Nuclear Power Plants (NPPs), is driven by productivity and safety goals, but it is also challenged by the need for production flexibility to respond to uncertain demand in an economically sustainable manner. Most O&M strategies for NPPs do not directly address this flexibility requirement. In this paper, we develop a Deep Reinforcement Learning (DRL)-based prescriptive maintenance approach to search for the best O&M strategy, considering the actual system health conditions (e.g., the Remaining Useful Life (RUL)) and satisfying the need for flexible operation to accommodate load-following while keeping reliability and profitability high. The approach integrates Proximal Policy Optimization (PPO) and Imitation Learning (IL) to train the RL agent for prescriptive maintenance. The Advanced Lead-cooled Fast Reactor European Demonstrator (ALFRED) is considered to show the applicability of the proposed approach.
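As a rough illustration of the decision problem such an agent faces, the toy environment below exposes a health-dependent state (RUL), a load-following action and a reward trading production revenue against maintenance and failure costs. The PPO and imitation-learning training loops are not shown, and every number and action name is an assumption for illustration only, not the ALFRED model.

```python
import numpy as np

class MaintenanceEnv:
    """Toy O&M environment: at each step the agent keeps producing at full
    load, derates for load-following, or stops for maintenance.  The state is
    the component's remaining useful life (RUL); all numbers are illustrative."""

    ACTIONS = ("produce", "load_follow", "maintain")

    def __init__(self, rul0=100.0):
        self.rul = rul0

    def step(self, action, demand):
        wear = {"produce": 1.0, "load_follow": 0.6, "maintain": 0.0}[action]
        revenue = {"produce": 1.0, "load_follow": demand, "maintain": 0.0}[action]
        cost = 5.0 if action == "maintain" else 0.0
        self.rul -= wear + np.random.exponential(0.1)    # stochastic degradation
        if action == "maintain":
            self.rul = 100.0                              # restoration to as good as new
        failed = self.rul <= 0.0
        reward = revenue - cost - (50.0 if failed else 0.0)
        return np.array([self.rul, demand]), reward, failed

env = MaintenanceEnv()
state, reward, failed = env.step("produce", demand=0.7)
```

A trained policy would map the (RUL, demand) state to one of the three actions so as to maximise the discounted sum of such rewards.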

14:20
Yesmina Jaafra (IRT SystemX / Expleo Group, France)
Christophe Bohn (IRT SystemX, France)
Lucas Schott (IRT SystemX, France)
Faouzi Adjed (IRT SystemX, France)
Frédéric Pelliccia (IRT SystemX / Apsys, France)
Mehdi Rezzoug (IRT SystemX, France)
On Improving the Robustness of Reinforcement Learning Policies against Adversarial Attacks
PRESENTER: Yesmina Jaafra

ABSTRACT. With deep neural networks as universal function approximators, the reinforcement learning paradigm has been adopted in several commonplace services such as autonomous vehicles, aircraft and domestic assistance, which raises new safety requirements. Indeed, a deep reinforcement learning agent obtains its states through observations, which may contain natural accuracy errors or malicious adversarial noise. Since the observations may diverge from the true environment states, they can lead the agent into taking risky suboptimal decisions. This vulnerability is well known in the computer vision literature, where it has been emphasized via adversarial attacks. In terms of defense, various techniques have been proposed, including heuristic and certified methods, mainly to improve the robustness of deep neural network-based classifiers. It is therefore necessary to propose solutions adapted to this learning challenge faced by reinforcement learning agents. In this paper, we propose two defense mechanisms based on reward shaping and adversarial training as countermeasures against attacks on environment observations. The results reported from experiments conducted on autonomous vehicles controlled by reinforcement learning policies demonstrate that our approach successfully provides sufficient information to effectively learn the task in the context of highly perturbed environments. Furthermore, the defense mechanisms improve the robustness and generalization capacities of the learning models, decreasing risky decisions in the presence of adversarial attacks.

14:40
Charalampos Andriotis (TU Delft, Netherlands)
Ziead Metwally (TU Delft, Netherlands)
Optimizing resource allocation strategies for system-level inspection and maintenance planning

ABSTRACT. Determining optimal long-term Inspection and Maintenance (I&M) policies for structures and infrastructure is a challenging task due to the uncertainties associated with structural resistance versus demand degradation, the inherent noise in measurement outcomes, and the large number of components for which a multiplicity of condition states, data collection techniques and maintenance options apply at every decision step. As a result of the immense state and action spaces associated with multi-component engineering systems, an exact solution to the problem is often intractable, and the global optimization problem is reduced to a low-dimensional search of a few optimized parameters assumed to be able to sufficiently enclose the subspace of near-optimal solutions. Joint frameworks of Partially Observable Markov Decision Processes (POMDPs) and Deep Reinforcement Learning (DRL) have been shown to be able to lift this restrictive assumption, allowing for dynamic and guided search over the vast space of admissible policies, thus exploring solution regions of highly improved life-cycle costs. Building upon recent actor-critic DRL methods for decision-making in deteriorating engineering environments, this work presents a new hierarchical problem formulation and respective DRL architectures that optimize I&M resource allocation in multi-component networks. The introduced architectures are applied to assess the time-dependent performance and learn an optimized I&M policy of a bridge network subjected to time-dependent, corrosion-induced deterioration. The devised I&M plan for the considered network is shown to outperform conventional decision rules and well-established DRL architectures, furnishing several advantages in training and inference time compared to other approaches.

14:00-15:20 Session 4E: Structural Reliability I
Chair:
Jana Markova (Czech Technical University in Prague, Klokner Institute, Czechia)
Location: LG-20
14:00
Bassel Habeeb (University of Nantes, France)
Boulent Imam (Department of Civil and Environmental Engineering, University of Surrey., UK)
Emilio Bastidas-Arteaga (La Rochelle University, France)
Shock Degradation Modelling for Bridges subjected to River Scouring
PRESENTER: Boulent Imam

ABSTRACT. Climate change impacts infrastructure in several ways. In the particular case of bridges crossed by rivers, climate change has a direct impact on river discharge and scouring patterns; indeed, scouring is the main cause of failure for such bridges. The significance of the scouring phenomenon is related to its effect on the reliability of the bridge, as the structural capacity rapidly decreases due to scouring. In a sense, the structure may reach a state of unserviceability before its expected lifetime. Therefore, estimating the scouring risks is important in bridge reliability analysis. This problem is addressed in this paper by applying a mathematical scouring model to the Thames River flow dataset to estimate the depth of the local scour. Furthermore, an accumulated shock damage model based on the Lévy process estimates the time-dependent capacity of the bridge and its expected lifetime. The accumulated shock damage model captures well the independent, rapid decreases in structural capacity; in addition, the model provides the expected lifetime of the bridge with respect to the scouring effect.
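For readers unfamiliar with shock-based degradation, the sketch below simulates a compound-Poisson process (a simple Lévy-type model) in which scour events arrive randomly and each removes a random amount of capacity, and estimates the expected lifetime as the first passage below a capacity threshold. All rates, shock sizes and the threshold are invented for illustration and are not calibrated to the Thames data used in the paper.

```python
import numpy as np

def expected_lifetime(c0, years, shock_rate, shock_mean, threshold, n_paths=2000, seed=1):
    """Monte Carlo sketch of a compound-Poisson shock-degradation model.

    Scour events arrive as a Poisson process with `shock_rate` events per year;
    each event removes an exponentially distributed amount of capacity with
    mean `shock_mean`.  Returns the estimated expected first year in which the
    capacity drops below `threshold` (capped at the simulation horizon)."""
    rng = np.random.default_rng(seed)
    lifetimes = np.full(n_paths, years, dtype=float)
    for i in range(n_paths):
        capacity = c0
        for year in range(1, years + 1):
            n_shocks = rng.poisson(shock_rate)
            capacity -= rng.exponential(shock_mean, size=n_shocks).sum()
            if capacity < threshold:
                lifetimes[i] = year
                break
    return lifetimes.mean()

print(expected_lifetime(c0=1.0, years=100, shock_rate=0.3, shock_mean=0.05, threshold=0.6))
```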

14:20
Paulo Claude (Arcadis, France)
Frédéric Duprat (Université de Toulouse, INSA, LMDC, France)
Thomas de Larrard (Université de Toulouse, INSA, LMDC, France)
Probabilistic approach for concrete structures exposed to combined carbonation-chloride-induced corrosion
PRESENTER: Paulo Claude

ABSTRACT. Corrosion of the steel reinforcement in concrete structures is a major cause of their deterioration. In most cases, corrosion is induced by carbonation or chlorination, and only the supposedly prominent initiating phenomenon is considered. However, the combination of both can be more uncertain because in this case the chloride binding capacity as well as the pore microstructure of the cement paste are affected. The case of a carbonated concrete bridge subjected to deicing salts is dealt with in this study. A specific finite element model was developed with a view to estimating the time to reinforcement depassivation. Furthermore, the model also predicts the propagation of corrosion. Hence, the study aims at estimating the probability of the effective onset of corrosion, when the amount of corrosion products exceeds a threshold value. The time corresponding to this exceedance, with a predefined probability, can be a significant milestone in the maintenance policy. Concrete properties, the external environment (carbon dioxide pressure, chloride content, relative humidity) and the concrete cover depth are considered random variables within the study. In order to overcome the numerical burden of the FEM in the probabilistic computations, surrogate models based on polynomial chaos expansion have been employed. A Morris method was previously used to perform a sensitivity analysis on the parameters and select the most influential ones. Several locations were assumed for the structure, implying various environments and durations of the frost period.

14:40
Jaebeom Lee (KRISS (Korea Research Institute of Standards and Science), South Korea)
Chi-Ho Jeon (Chung-Ang University, South Korea)
Chang-Su Shim (Chung-Ang University, South Korea)
Young-Joo Lee (UNIST (Ulsan National Institute of Science and Technology), South Korea)
Probabilistic Estimation of Pit Corrosion in Prestressing Strands for Prestressed Concrete Bridges
PRESENTER: Jaebeom Lee

ABSTRACT. This study suggests a new method for probabilistically estimating pit corrosion amounts in prestressing strands for prestressed concrete bridges. The first part of the method is defining a probabilistic relationship between mechanical properties of prestressing strands and pit corrosion amounts based on Bayes’ rule. The second part is a Bayesian inference method using a Markov chain Monte Carlo method to infer a conditional probability distribution of corrosion amounts given a certain mechanical property. Consequently, probabilistic upper and lower bounds of corrosion amounts can be derived for a given specimen. In the presentation, two examples will be introduced: (1) probabilistic estimation of corrosion amounts in prestressing strands based on tensile test results, and (2) probabilistic estimation of corrosion amounts in embedded strands in prestressed concrete girders based on bending test results.
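As a minimal illustration of the inference step, the sketch below uses a Metropolis-Hastings sampler to obtain a posterior over the corrosion amount given one observed mechanical property. The prior, the linear strength-versus-corrosion likelihood and every numeric value are hypothetical stand-ins, not the relationship calibrated by the authors.

```python
import numpy as np

def posterior_corrosion(y_obs, n_samples=20000, seed=0):
    """Metropolis-Hastings sketch: infer the pit-corrosion amount c (in % of
    cross-section loss) from one observed tensile strength y_obs (MPa).

    Assumed, purely illustrative model:
      prior:       c ~ Uniform(0, 30)
      likelihood:  y | c ~ Normal(1860 - 25 * c, 40)
    """
    rng = np.random.default_rng(seed)

    def log_post(c):
        if not 0.0 <= c <= 30.0:
            return -np.inf
        return -0.5 * ((y_obs - (1860.0 - 25.0 * c)) / 40.0) ** 2

    samples, c = [], 10.0
    for _ in range(n_samples):
        prop = c + rng.normal(scale=1.0)                 # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(c):
            c = prop
        samples.append(c)
    return np.array(samples[n_samples // 2:])            # drop burn-in

post = posterior_corrosion(y_obs=1700.0)
print("95 % bounds on corrosion amount:", np.percentile(post, [2.5, 97.5]))
```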

15:00
Franck Antelme Kouassi (Institut de Mathématiques de Toulouse, France)
Jean-Yves Dauxois (Institut de Mathématiques de Toulouse; UMR5219, Université de Toulouse, France)
Frederic Duprat (Laboratoire de Matériaux et Durabilité des Constructions, Université de Toulouse, INSA-UPS, France)
De Larrard Thomas (Laboratoire de Matériaux et Durabilité des Constructions, Université de Toulouse, INSA-UPS, France)
Fabrice Deby (Laboratoire de Matériaux et Durabilité des Constructions, Université de Toulouse, INSA-UPS, France)
A Proportional Hazard Model for the time-to-carbonation of reinforced concretes

ABSTRACT. A study of the time to carbonation has been performed by analysing a real carbonation dataset, and a Weibull proportional hazard model that includes influencing factors has been proposed. The model is used for the reliability assessment of reinforced concrete structures under climate change scenarios.
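For context, a Weibull proportional hazard model with covariate vector x typically takes the standard form below (shape β, scale η, regression coefficients γ); the authors' exact parameterisation and choice of influencing factors may differ.

```latex
h(t \mid \mathbf{x}) \;=\; \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}
  \exp\!\left(\boldsymbol{\gamma}^{\top}\mathbf{x}\right),
\qquad
S(t \mid \mathbf{x}) \;=\; \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}
  \exp\!\left(\boldsymbol{\gamma}^{\top}\mathbf{x}\right)\right].
```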

14:00-15:20 Session 4F: Human Factors and Human Reliability: HF in novel automation contexts
Chair:
Luca Podofillini (Paul Scherrer Institute, Switzerland)
Location: LG-21
14:00
Lars Hurlen (IFE, Norway)
Maren H. Rø Eitrheim (IFE, Norway)
Vidar Hepsø (Equinor, Norway)
Grete Rindahl (IFE, Norway)
Concepts For Operating Multiple Petroleum Facilities From a Single Control Centre

ABSTRACT. Equinor has established its first land-based surveillance and control of offshore operations. Future business cases for remote operations will depend on the number of facilities that can be controlled from one location with proper production efficiency and a high level of health, safety and environmental performance. The MultiKon research project by Equinor and IFE investigated key opportunities, challenges and promising solutions for remote operation of multiple facilities from one location. This paper summarizes the results and discusses concepts that could be relevant for similar projects in the petroleum industry or even in other industries performing multi-unit operation.

A design-oriented feasibility study approach was chosen to explore how multi-facility control could be realized with the greatest potential for success. It is common to differentiate between operational, technical, and financial feasibility studies – this study focused on operational issues. Key activities included design workshops with end-users, interviews with technical experts in Equinor and regular meetings with stakeholders. The end users were experienced control room personnel. Stakeholders were Equinor team leaders, managers, and domain experts. A review of experiences from other industries identified potential human performance challenges for remote operation of multiple facilities and possible means to resolve these. Promising concepts were explored and discussed in design workshops by use of mock-up environments with real facility examples. The participants were assigned operator roles in use cases covering normal operation, disturbances, and maintenance campaigns. The research project targeted concepts that involve highly integrated ways of working, as this is expected to increase flexibility, capacity and scalability. Since there are many ways of realizing this in practice, one of the key conceptual dilemmas that were explored in these workshops was the degree of technical and operational integration in a multi-facility control centre, e.g., whether multiple facilities should be controlled from shared or separate hardware and software systems, by dedicated operators or teams that flexibly allocate their resources across facilities.

A concept proposal was evaluated by end users at IFE lab facilities, utilizing a full-scale, semi-interactive mock-up of the control environment and playing out scenarios covering a range of expected and off-normal control situations. Key elements in this proposal were a compact control team that allocates tasks and responsibilities flexibly; modular and flexible workstation setups that include facility-specific overview displays; and on-demand operator resources and facilities for handling unexpected, temporary workload peaks. The proposed concept was found to be ambitious but feasible. Interface consistency, cross-facility navigation and alarm-reducing measures were recommended, as well as means for strengthening diagnosis and problem-solving when field operators are not available.

14:20
Mohammad Bakhshandeh (University of Stavanger, Norway)
Jayantha P Liyanage (University of Stavanger, Norway)
Advanced situation awareness, Human Vigilance, and Sensitivity in complex and dynamic Industrial systems: Perspectives towards enhancing Systems resilience under digitalization contexts

ABSTRACT. Emerging industrial systems and solutions are such that the inherent complexity of the system, coupled with highly dynamic conditions, demands that operators perceive and judge abnormal situations, predict multiple scenarios based on unwanted deviations, and take proactive measures well ahead of time. The issue of how to enhance human situation awareness (SA) in such modern, highly complex, interconnected, and dynamic systems is raising concerns among system developers, asset operators, and authorities, especially when automation and digitalization show tendencies to keep the human "out of the loop" fully or partially. Based on recent industrial incidents and observations, we argue that the contemporary understanding of SA should be developed to an advanced level, the so-called Advanced Situation Awareness (Ad-SA), to ensure systems resilience early by mitigating the potential for unwanted events and losses. With respect to advancing digital solutions and applications, this paper scrutinizes human vigilance and human sensitivity as critical integral issues for such Ad-SA. Selected industrial cases are reviewed to support the arguments and to shed light on the pragmatism of such new thinking.

14:40
Philippe Richard (IRT Railenium, France)
Christopher Paglia (IRT Railenium, France)
Abderraouf Boussif (IRT Railenium, France)
Quentin Gadmer (IRT Railenium, France)
Human operator reliability as a support for the safety assurance of autonomous railway systems: a look at Organizational and Human Factors
PRESENTER: Philippe Richard

ABSTRACT. Within the framework of the Autonomous Train - Passenger Service project (Train Autonome – Service voyageurs – TA-SV), we are studying the reliability of the train driver to support safety experts in their safety demonstration of autonomous railway systems. This paper aims at presenting our progress and the methodology resulting from our work.

Current systems have an overall level of reliability that combines both the reliability of the technical system and that of the operators interacting with that system. In the railway domain, the evolution/modification of the system through the integration of new technologies is constrained by the obligation to assure the non-regression of the global safety level of the railway system. In the context of the TA-SV project, the design and development of the autonomous train require demonstrating that its deployment in the conventional railway system will not degrade the current safety level. Generally, such a safety demonstration is performed with respect to the GAME principle (GAME stands for Globalement Au Moins Équivalent – Globally At Least Equivalent).

In this paper, and in order to define the reliability level of the train driver, two approaches are considered: (i) defining the level of reliability of the human operator when dealing with an unwanted event, and (ii) defining the level of reliability for an event without distinguishing the reliability of the various agents involved, whether human or technical (global reliability level of the system). To do this, we have developed a four-step methodology presenting several methods and allowing the level of reliability to be defined according to the precision of the result sought, the situation studied and the available input data. The considered steps are:

• Step 0: Understanding the event. This step aims to develop the evaluator's knowledge. It is optional if the expert(s) using the methodology have a good knowledge of the situation/event to be studied. It is worth noticing that this knowledge is required to conduct the following steps.
• Step 1: Generate input data. The integrity of the data is a prerequisite for the achievement of the process. Several methods in this step can be considered depending on the event/situation studied. Each method presented has advantages and disadvantages that will be presented and allow, a priori, to cover all the situations/events that will be studied. Different methods may be required or used for the study of the same event/situation.
• Step 2: Determine the level of human reliability. It consists in using the data generated previously to define the reliability of the human operator in a given situation, i.e., the probability of human operator error in front of the studied situation/event.
• Step 3: Prepare the transposition to the technical system. It consists in transposing the data generated in step 2 into a quantitative measure allowing the safety experts to make a comparison for the same event and context when it will be covered by the future autonomous system.

In this paper, we will present an illustrative application on a signal passed at danger. This illustration is based on data gathered from the research literature and safety reports. This fictitious application aims at presenting the methodology and the way the methods used for each step are chosen for the studied event. It also aims to illustrate the importance of the quality and clarity of the input data in order to achieve a relevant result. A concrete application is planned but, by the time the paper is submitted, it will probably still be in progress. Elements of it may be incorporated into the paper. Nevertheless, this illustrative application allows us to open the discussion on the following elements:

• What should be the level of granularity of the study? The authors discuss the relevance of the data that can be used. In the context of the TA-SV project, they wonder about the lines to be used for the study, the activities to be considered (freight, intercity, TER, etc.), the type of signal, and so on.
• Which approach is the most helpful for the safety assurance process of the autonomous system? Should we evaluate the whole event's reliability to compare it with the future situation, or should we instead study the driver's reliability to design the future system while the general organisation of the system can be reviewed? These discussions will be based on a reflection in connection with the GAME principle and the Common Safety Methods (CSM).
• At which level of the safety demonstration should this method be used? As a safety target allocation or as support to an explicit risk demonstration?

15:00
Magnhild Kaarstad (Institute for Energy Technology, Norway)
Robert McDonald (Institute for Energy Technology, Norway)
Micro-reactors: Challenges and Opportunities as Perceived by Nuclear Operating Personnel

ABSTRACT. This paper investigates how challenges, opportunities, risk, and trust in micro-reactors are perceived by current nuclear operating personnel. Micro-reactors are small portable reactors with simple designs, a high degree of autonomy, and inherent safety features. Interest in these very small reactors is driven by several factors, including the need to generate power in remote locations and in locations recovering from natural disasters. Micro-reactors are in principle self-regulated, but with the possibility of intervention from a remote location if necessary. In the nuclear industry, there are ongoing initiatives examining the feasibility of developing and implementing micro-reactors. However, there are several issues that need to be investigated before this new technology is implemented. Researchers have pointed out that a main challenge for the successful integration of advanced autonomous systems is the degree to which user trust matches the capabilities of the system (Balfe, Sharples and Wilson, 2018; Muir, 1994) and the degree to which users have a realistic risk perception of the system (e.g., Beer, Fisk, Rogers, 2014). Too much trust and too low a perception of risk may lead to a reduced likelihood of detecting and diagnosing errors in the system, while too little trust and too high levels of risk perception may result in disuse of the automation, which again may lead to degraded performance (Muir, 1994; Lee & See, 2004). For the interaction between people and automation to be reliable, it is important that people develop a calibrated level of trust and risk perception. In this study, we ask: i) Which challenges and opportunities do current nuclear operating personnel perceive related to micro-reactors? and ii) What does it take for nuclear operating personnel to trust micro-reactors?

These research questions were addressed in a small-scale study through a questionnaire developed in a research activity performed within the Halden Human Technology Organisation (HTO) project, supported through an international research agreement. The questionnaire contained both structured and open questions focusing on the concept of micro-reactors and was distributed digitally to current nuclear operating personnel. Sixteen nuclear operators responded to the questionnaire.

Several opportunities of micro-reactors were mentioned by the operators: the flexibility of micro-reactors was highly appreciated in that it makes it possible to provide rural areas with electricity. Furthermore, micro-reactors were seen as a key to future sustainable energy combined with renewable energy on a smart grid. The challenges mentioned concerned mainly public trust, vulnerability to cyber-attacks and the risk of losing connectivity if controlled remotely. Regarding what it would take for the operators to trust the concept of micro-reactors, a proven design, research that answers safety issues, as well as knowledge of their construction, operation and safe shutdown were most often mentioned. In addition to the open-ended questions on trust, operator trust was also assessed through a scale (Skjerve et al., 2001; Strand, 2001; Skjerve et al., 2005). The operators rated their overall trust in colleagues, conventional NPPs and micro-reactors at an equally high level. It should be noted that the operators' rating of trust in micro-reactors is not based on actual experience of this concept, and their trust in micro-reactors is therefore not directly comparable to trust in current nuclear reactors and trust in colleagues.

This paper will present these empirical findings in more detail, and discuss the findings based on relevant theories and models. Findings from this study will provide knowledge to guide the planning of the operation and supervision of micro-reactors and point to areas within this topic where further research is needed.

References:

Balfe, N., Sharples, S., & Wilson, J. R. (2018). Understanding is key: An analysis of factors pertaining to trust in a real-world automation system. Human Factors, 19. Advance online publication. https://doi.org/10.1177/0018720818761256
Beer, J., Fisk, A. D., & Rogers, W. A. (2014). Toward a framework for levels of robot autonomy in human-robot interaction. Journal of Human-Robot Interaction, 3(2), 74.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.
Muir, B. M. (1994). Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 37, 1905-1922.
Skjerve, A. B., Strand, S., Skraaning, G. Jr., & Nihlwing, C. (2005). The extended teamwork 2004/2005 exploratory study. Preliminary results. OECD Halden Reactor Project (HWR-812).
Skjerve, A. B., Andresen, G., Saarni, R., & Skraaning, G. (2001). The influence of automation malfunctions on operator performance. Study plan for the human-centered automation 2000 experiment. OECD Halden Reactor Project (HWR-659).
Strand, S. (2001). Trust and Automation: The Influence of Automation Malfunctions and System Feedback on Operator Trust. OECD Halden Reactor Project (HWR-643).

14:00-15:20 Session 4G: Prognostics and System Health Management I: industrial applications
Chair:
Olga Fink (ETH Zurich, Switzerland)
Location: CQ-009
14:00
Zdenek Vintr (University of Defence, Czechia)
Anh Dung Hoang (University of Defence, Viet Nam)
Methodology for Accelerated Tests of Electronic Elements Based on Multifactor Stress
PRESENTER: Zdenek Vintr

ABSTRACT. Modern combat vehicles are independent combat complexes, versatile in performing a variety of combat tasks. Therefore, these vehicles are increasingly integrated with modern, complex electrical and electronic systems to ensure their ability to fight independently as well as to cooperate with others in battle formations. Vehicle survivability depends heavily on the reliability of each system in the vehicle, so their quality and life requirements must be continuously improved. One of the technical solutions to this problem is the use of suitable, reliable, and long-life electronic components for these systems. This also requires electronic elements to be tested for reliability and lifetime before being used in these vehicles. However, testing reliable and long-life elements is becoming complicated and expensive due to the long duration of the tests. Accelerated testing is therefore an effective solution, widely used in the reliability testing of electronic components in both civilian and military vehicles. The paper presents a methodology for accelerated tests of electronic elements in combat vehicles, in which increased stresses are applied in combination. An accelerated test based on multifactor stress on a specified electronic element is also introduced as an application of the methodology. The experimental results show the effect of the stress combination in comparison with a single stress in an accelerated test.

14:20
Dominik Brüggemann (University of Wuppertal, Germany)
Christoph Rosebrock (University of Wuppertal, Germany)
Stefan Bracke (University of Wuppertal, Germany)
Further analysis of a circular metal sawing process

ABSTRACT. The most appropriate separation technique for the processing of solid metal parts with large dimensions is sawing. The cutting tools used in this machining process are exposed to very high mechanical and thermal loads, yet the highest precision, product quality and process stability must be guaranteed. With regard to process optimisation, the prediction of tool failure and the estimation of the remaining useful life are important goals. In order to be able to describe and predict this tool degradation, a degradation model is to be developed in the long term. In this work, the method for detecting the separation points was revised. For this purpose, the cause of the previously unnoticed systematic offset is explained and a method is presented that eliminates it. In addition, a method is developed that makes it possible to adjust the key figure, which appears to be closely related to the wear of the tool, in the case of abruptly changing feed rates on the basis of a mathematical model. In this way, the continuous degradation of the workpiece can be followed despite strongly fluctuating process parameters. Based on this adjusted key figure, a first approach for the prognosis of the condition of the circular saw blade is presented. In addition, some problems and possibilities related to the application context are described and explained.

14:40
Geert-Jan van Houtum (Eindhoven University of Technology, Netherlands)
Ipek Dursun (Eindhoven University of Technology, Netherlands)
Alp Akcay (Eindhoven University of Technology, Netherlands)
How good must failure predictions be to reduce maintenance costs drastically?

ABSTRACT. The ideal situation for maintenance of technical systems is that all failures are predicted, that no false predictions are generated, and that the predictions are made sufficiently far in advance. In that case, for all upcoming failures, a spare part can be sent to the system from a central location and the failing component can be replaced preventively before the failure would occur. There would be no unplanned downtime and no expensive local spare parts stocks are needed. The Internet of Things and Artificial Intelligence (AI) will bring us closer to that ideal situation. But how close do we need to be to that ideal situation in order to have really low maintenance costs?

We will investigate a setting with multiple technical systems that are supported by a local stockpoint with spare parts. We consider a single critical component in each system and assume a specific degradation process, namely the delay time model. We assume that a signal is generated when a defect occurs, and we then have the delay time to execute a preventive replacement before the failure occurs. We formulate a general model with the fraction of false positives (false signals), the fraction of false negatives (not all defects are observed), and the delay time as input parameters. We derive an optimal policy for maintenance actions and spare parts inventory control. Next, we compare the optimal costs to the optimal costs in the ideal situation with zero false positives, zero false negatives, and a delay time equal to the replenishment lead time of the stockpoint. In the worst case, we have no predictions at all, and from there we can examine how the optimal costs decrease towards the minimal optimal costs of the ideal case. We will see that the opposite of the 80-20 rule applies: with predictions that are at 80% of perfect predictions, we obtain only 20% of the cost reduction. This has serious implications for black-box prediction methods, for which it is generally hard to get close to perfect predictions.
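To show how such a comparison is set up (not the authors' model), the sketch below books expected yearly costs under a delay-time view as a function of detection probability and false-alarm rate and compares them with the no-prediction and perfect-prediction extremes. Being a deliberately linear toy with no spare-parts inventory or lead-time effects, it will not reproduce the paper's reverse 80-20 finding; all cost figures are invented.

```python
def expected_cost(detect_prob, false_alarm_rate,
                  defect_rate=0.2, c_prev=1.0, c_corr=10.0, c_false=0.5):
    """Back-of-the-envelope delay-time cost per system per year.

    A defect occurs with rate `defect_rate`; if its signal is caught
    (probability `detect_prob`) the replacement is preventive, otherwise it
    ends as a corrective replacement.  False alarms add inspection cost."""
    preventive = defect_rate * detect_prob * c_prev
    corrective = defect_rate * (1.0 - detect_prob) * c_corr
    false_alarms = false_alarm_rate * c_false
    return preventive + corrective + false_alarms

ideal = expected_cost(detect_prob=1.0, false_alarm_rate=0.0)
none = expected_cost(detect_prob=0.0, false_alarm_rate=0.0)
almost = expected_cost(detect_prob=0.8, false_alarm_rate=0.1)
print("share of the possible saving realised at 80 % detection:",
      round((none - almost) / (none - ideal), 2))
```

In this toy bookkeeping, 80 % detection already recovers most of the saving; the paper's point is precisely that, once spare-parts stocking and replenishment lead times are optimized jointly, the picture changes drastically.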

15:00
Xingheng Liu (Norwegian University of Science and Technology, Norway)
Jørn Vatn (Norwegian University of Science and Technology, Norway)
Erosion state estimation for subsea choke valves considering valve openings
PRESENTER: Xingheng Liu

ABSTRACT. Choke valves are extensively used in the offshore oil industry, where they regulate the flow of hydrocarbon fluids from the oil wells and reduce the wellhead pressure. They are subject to continuous erosion that results from the impingement of solid particles in the hydrocarbon fluids. Since maintenance and inspections are costly for subsea choke valves due to the reduced accessibility, it is crucial to evaluate the erosion state of chokes accurately.

One health indicator of erosion is the difference between the theoretical and estimated valve flow coefficients (Cv), a relative measure of the efficiency at allowing fluid flow. Traditionally, the Cv deviation is fitted with a Gamma process. We show why this approach is unrealistic in practice before proposing a model that uses the historical valve openings and process parameters to calibrate raw Cv measurements. This allows us to estimate the erosion rate at different valve openings and reveal the "true" erosion state, which differs from the raw Cv. The least-squares method is used to estimate the baseline shape of the Cv deviation curve. We apply our method to Equinor's choke valve erosion data, showing that the new method, compared to traditional ones, gives a more accurate estimation of the erosion state, which can then be used to provide decision support for production and maintenance managers.
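The sketch below illustrates the opening-aware calibration idea in its simplest form: a least-squares baseline of Cv versus opening is fitted on data assumed to come from an unworn valve, and the erosion indicator is the deviation of later measurements from that baseline. The polynomial form, the synthetic data and all numbers are assumptions for illustration, not Equinor data or the authors' full model.

```python
import numpy as np

def fit_baseline(openings, cv_raw, degree=3):
    """Least-squares fit of the baseline Cv-versus-opening curve, using data
    from a period when the valve is assumed to be (nearly) unworn."""
    return np.polyfit(openings, cv_raw, degree)

def erosion_indicator(openings, cv_raw, baseline_coeffs):
    """Deviation of measured Cv from the opening-corrected baseline.

    Growing positive deviations indicate material loss in the choke trim;
    correcting for the opening avoids mistaking an opening change for erosion."""
    return cv_raw - np.polyval(baseline_coeffs, openings)

# Illustrative synthetic data (not field data)
rng = np.random.default_rng(3)
op = rng.uniform(20, 80, size=500)                 # valve opening, %
cv_clean = 0.002 * op**2 + 0.1 * op                # hypothetical unworn Cv curve
wear = np.linspace(0.0, 5.0, 500)                  # slowly accumulating erosion
cv_meas = cv_clean + wear + rng.normal(0, 0.5, 500)

coeffs = fit_baseline(op[:100], cv_meas[:100])     # early-life data as baseline
print("estimated current erosion:",
      round(erosion_indicator(op[-50:], cv_meas[-50:], coeffs).mean(), 2))
```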

14:00-15:20 Session 4H: S.01: Advances in Well Engineering Reliability and Risk Management: novel tools for reliability analyses
Chair:
Feliciano Silva (Petrobras, Brazil)
Location: CQ-105
14:00
Márcio Moura (Center for Risk Analysis, Reliability Engineering and Environmental Modeling (CEERMA), Federal University of Pernambuco, Brazil)
Eduardo Menezes (Center for Risk Analysis, Reliability Engineering and Environmental Modeling (CEERMA), Federal University of Pernambuco, Brazil)
Isis Lins (Center for Risk Analysis, Reliability Engineering and Environmental Modeling (CEERMA), Federal University of Pernambuco, Brazil)
Feliciano da Silva (CENPES - Research Center Leopoldo Américo Miguez de Mello - Petrobras, Brazil)
Marcos Vinicius Nóbrega (CENPES - Research Center Leopoldo Américo Miguez de Mello - Petrobras, Brazil)
Analysis of compromising operational conditions for intelligent completion using Digital Twin
PRESENTER: Eduardo Menezes

ABSTRACT. Intelligent completion (IC) has been implemented in new oil wells, especially in the pre-salt exploration areas. IC encompasses a whole set of technologies that enable more precise and controlled well production, with zonal control, pressure and temperature measurements, and predictability of production and maintenance actions. One of the most important components of IC is the interval control valve (ICV), used to select the production zone. This valve can be electrically or hydraulically controlled. A typical configuration uses three hydraulic ICVs in conjunction with a subsea control module (SCM), dubbed intelligent completion hydraulic directed (CI-HD). In wells with CI-HD, some operational problems have arisen with the ICVs, including spurious movement of the valves, undesired gas flux, and other abnormal conditions. In this context, this paper discusses the development of a digital twin for the CI-HD in pre-salt wells, describing the model, the assumed hypotheses, and preliminary results. This digital simulation model aims at being a virtual representation of the CI-HD encompassing its whole lifecycle and operational conditions. With the CI-HD digital twin, it will be possible to obtain the system’s responses to diverse operational inputs/conditions and, thus, evaluate system performance and potential failures.

14:20
July Macedo (Universidade Federal de Pernambuco, Brazil)
Caio Souto Maior (Universidade Federal de Pernambuco, Brazil)
Isis Lins (Universidade Federal de Pernambuco, Brazil)
Rafael Azevedo (Universidade Federal de Pernambuco, Brazil)
Márcio Chagas Moura (Universidade Federal de Pernambuco, Brazil)
Manoel Feliciano da Silva (Petrobras S.A., Brazil)
Marcos Vinícius Nóbrega (Petrobras S.A., Brazil)
A Bayesian prior distribution for novel on-demand equipment based on experts’ opinion: A case study in the O&G industry
PRESENTER: July Macedo

ABSTRACT. The operation of Oil and Gas (O&G) industries involves complex and novel equipment, for which reliability estimation is essential to allow forecasting costs, planning maintenance, and estimating system availability. However, especially for technologies under development, reliability data are frequently absent, scarce, or insufficient because test experiments are usually very costly. Alternatively, generic databases and expert opinions often provide valuable prior knowledge about such equipment. In the Bayesian framework, the prior knowledge about a system’s reliability is updated as new field and/or test data are gathered. This paper proposes an approach to define informative prior distributions for the reliability function of equipment that works on demand and is in standby mode most of the time. Specifically, we used experts’ opinions extracted through elicitation forms to estimate the occurrence probability of the failure mechanisms that may emerge during actuation and operation. We aggregated the elicited probabilities using population variability analysis, employing the maximum likelihood method. Thus, the proposed methodology makes it possible to consider how probability estimates vary among experts and to define a Bayesian informative prior distribution using only experts’ knowledge. Finally, we present a case study involving a novel large-diameter sliding sleeve valve to illustrate the applicability of our methodology.
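A common way to turn such an elicitation into a prior is to fit a population-variability distribution to the experts' probabilities by maximum likelihood; the sketch below does this with a Beta distribution, a frequent (but here assumed) choice for an on-demand failure probability. The elicited values are invented, and the aggregation details of the paper may differ.

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

def fit_beta_prior(elicited_probs):
    """Maximum-likelihood fit of a Beta(a, b) population-variability curve to
    failure probabilities elicited from several experts.  The fitted Beta can
    then serve as an informative prior for the on-demand failure probability."""
    p = np.clip(np.asarray(elicited_probs, dtype=float), 1e-6, 1 - 1e-6)

    def nll(params):
        a, b = np.exp(params)                  # keep both parameters positive
        return -beta.logpdf(p, a, b).sum()

    res = minimize(nll, x0=np.log([2.0, 20.0]), method="Nelder-Mead")
    return np.exp(res.x)

# Hypothetical elicitation results for one failure mechanism (per demand)
experts = [0.01, 0.03, 0.005, 0.02, 0.015]
a_hat, b_hat = fit_beta_prior(experts)
print(f"informative prior: Beta({a_hat:.2f}, {b_hat:.2f}), mean {a_hat/(a_hat+b_hat):.3f}")
```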

14:40
Rafael Azevedo (CEERMA, Brazil)
Marcio Das Chagas Moura (Federal University of Pernambuco, Brazil)
Isis Lins (Universidade Federal de Pernambuco, Brazil)
July Macêdo (UFPE, Brazil)
Manoel Feliciano da Silva Jr (Petrobras, Brazil)
Marcos Vinícius Nóbrega (Petrobras S.A., Brazil)
Caio Maior (CEERMA, Brazil)
The use of Weibull-GRP virtual age model for addressing degradation due to demand-induced stress in reliability analysis of on-demand systems
PRESENTER: Isis Lins

ABSTRACT. The reliability modeling of valves has long been a topic of major concern, especially when it comes to safety systems. Regarding reliability, these components normally have two main types of failure causes that contribute to the occurrence probability of a failure mode: (1) demand-caused and (2) exposure time-related failures. The reliability model of the former is normally associated with a demand failure probability ρ, while a failure rate λ is used for the latter. Some works have focused on explicitly addressing component degradation due to demand-induced stress and aging in those models. This paper proposes a new model for the failure rate that accounts for demand-induced stress and aging effects, based on the Weibull-GRP virtual age model. A case study is included, applying the model to a mechanical Formation Isolation Valve (FIV) to be installed in a Brazilian oil field.
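For orientation, the sketch below shows the building blocks such a virtual-age formulation typically rests on: a Weibull conditional survival given the current virtual age, and a Kijima type-II style update of that age, here extended with an assumed extra aging increment per demand to mimic demand-induced stress. The rejuvenation factor, the per-demand increment and the Weibull parameters are illustrative assumptions, not the model calibrated in the paper.

```python
import numpy as np

def conditional_survival(x, v, alpha, beta):
    """Weibull-GRP conditional survival beyond x, given virtual age v:
    S(x | v) = exp[(v/alpha)^beta - ((v + x)/alpha)^beta]."""
    return np.exp((v / alpha) ** beta - ((v + x) / alpha) ** beta)

def update_virtual_age(v, elapsed, n_demands, q=0.3, demand_aging=5.0):
    """Kijima type-II style virtual-age update.

    elapsed      : calendar (exposure) time since the last update
    n_demands    : number of actuations in that interval
    q            : rejuvenation factor of a repair (0 = as good as new, 1 = as bad as old)
    demand_aging : assumed equivalent aging, in time units, added by each demand"""
    return q * (v + elapsed + n_demands * demand_aging)

v = 0.0
for year_hours, demands in [(8760, 4), (8760, 6), (8760, 2)]:
    v = update_virtual_age(v, year_hours, demands)
print("virtual age after three yearly cycles:", round(v, 1), "h")
print("probability of surviving one more year:",
      round(conditional_survival(8760, v, alpha=40000, beta=2.0), 3))
```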

15:00
Danilo Colombo (Federal Fluminense University (UFF) / PETROBRAS, Brazil)
Gilson Brito Alves Lima (Federal Fluminense University (UFF), Brazil)
João Papa (São Paulo State University (UNESP), Brazil)
Leandro Aparecido Passos (University of Wolverhampton, UK)
Marcos Cleison Silva Santana (São Paulo State University (UNESP), Brazil)
A novel approach to well barriers survival analysis using machine learning
PRESENTER: Danilo Colombo

ABSTRACT. Avoiding accidents, oil spills, greenhouse gas emissions, and environmental pollution are significant concerns for any oil and gas operating company. As such, well integrity management and digitalization are essential to well engineers, and a primary challenge regards the reliability estimation of well barriers and its proper correlation with environmental and operational conditions. When applied to the engineering domain, survival analysis, aka reliability prediction, differs from other statistical problems in the presence of censored data, since the component's failure may not occur within the study's timeframe. Such a scenario is commonly encountered in well barrier management, since the barrier might be working correctly when data are collected or might be replaced with no failure reported. Simply ignoring censored data may bias the outcomes and compromise the safety barrier reliability estimation. In engineering, where the failure time needs to be estimated reasonably, it is common to model it using the so-called Accelerated Failure Time (AFT) approach. However, AFT models are parametric, restricting their application to specific situations. Other widely regarded approaches, such as the Kaplan-Meier (KM) estimator and Cox Regression (CR), are among the most used models for survival analysis. On the other hand, KM does not consider covariates in the model, and CR assumes the risks are proportional and that the relation between the covariates and the log risk is linear. The literature introduces several other models based on Cox Regression, but they do not relax the assumptions mentioned above. Alternatives to such classical approaches have their foundations in the machine learning paradigm. Attempts to incorporate learning capabilities in survival analysis are recent and primarily focus on medicine and biology; we are not aware of such applications in well barrier reliability analysis. Lack of data limits the application of deep learning in this context. This paper proposes a novel approach that can capture interactions among variables and non-linearities: a greedy algorithm composed of two regression models and a nonparametric strategy can handle the problem nicely. The first regression model captures the relationship between the covariates and the time to failure (TTF), and the second characterizes TTF variability with more flexibility than statistically driven methods. A modification of the loss function is proposed to account for the censored data and to balance it against the entire dataset (i.e., the non-censored data). Last but not least, a metaheuristic technique optimizes the two models by maximizing a likelihood function on the training data. We validate the proposed approach for the reliability estimation of downhole safety valves, one of the most important and must-have well barrier components. The results show that the proposed model is competitive and outperforms classical approaches, besides being more generic and customizable. We can either use the model to forecast the reliability of new valves or identify the most appropriate model and manufacturer according to the well features. It can also help to better understand how specific covariates influence failure so that laboratory tests can be further improved.
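The exact loss used by the authors is not given in the abstract; the sketch below only illustrates the general idea of modifying a regression loss so that right-censored records are penalised only when the model predicts failure earlier than the censoring time, with a weight that rebalances them against the observed failures. The function name, the squared-error form and the example data are assumptions.

```python
import numpy as np

def censored_squared_loss(t_pred, t_obs, observed, censored_weight=None):
    """Loss sketch handling right-censored records in a survival regression.

    For observed failures the usual squared error applies.  For censored
    records we only know the true failure time exceeds t_obs, so the model is
    penalised only when it predicts an earlier failure.  `censored_weight`
    rebalances the censored records against the rest of the dataset."""
    observed = np.asarray(observed, dtype=bool)
    if censored_weight is None:
        censored_weight = observed.sum() / max((~observed).sum(), 1)
    err_obs = (t_pred[observed] - t_obs[observed]) ** 2
    undershoot = np.maximum(t_obs[~observed] - t_pred[~observed], 0.0)
    return err_obs.sum() + censored_weight * (undershoot ** 2).sum()

# Illustrative call: three observed failures, two valves still working at 6 and 9 years
t_obs = np.array([2.0, 5.0, 7.0, 6.0, 9.0])
observed = np.array([1, 1, 1, 0, 0])
t_pred = np.array([2.5, 4.0, 7.5, 8.0, 7.0])
print(censored_squared_loss(t_pred, t_obs, observed))
```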

14:00-15:20 Session 4I: S.06 A: Safety and Reliability in Road and Rail Transportation: Design
Chair:
Vikram Pakrashi (University College Dublin, Ireland)
Location: CQ-107
14:00
Tianqi Sun (Norwegian University of Science and Technology (NTNU), Norway)
Jørn Vatn (Norwegian University of Science and Technology (NTNU), Norway)
A Markov-based Bridge Maintenance Optimization Model Considering User Costs
PRESENTER: Tianqi Sun

ABSTRACT. An efficient maintenance strategy is critical for the bridge network to maintain an acceptable performance level under limited financial resources. Bridge management systems (BMSs) have been developed to assist this task, with different mathematical maintenance models for degradation prediction and for evaluating the impacts of various intervention/maintenance strategies. However, they are usually based on pure maintenance models: the potential impacts on road users of different maintenance tasks are not taken into consideration. Therefore, maintenance strategies obtained from current BMSs cannot ensure an optimal level of service for road users. One approach to overcome this is to formally define such impacts as part of the BMSs. Given the discrete condition ratings for bridges in Norway, this paper proposes a Markov-based maintenance model considering the impact on road users. The maintenance model is based on a previously published Markov model where inspection intervals and due dates for the improvement measures depend on the current condition. A weakness of the existing model is that all repairs are perfect and restore the system to an as-good-as-new state. This paper presents an improved model considering different levels of repair and the situation in which the planned improvement level is not always reached. The impacts on road users are measured in monetary values, namely the road user cost (RUC), based on current practice in the Norwegian Public Roads Administration (NPRA) and literature findings.
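To make the modelling idea tangible, the sketch below propagates a discrete condition-rating distribution through an assumed yearly Markov transition matrix and accumulates both a condition-dependent agency cost and a road user cost (RUC) per state, the quantity on which different maintenance strategies would be compared. The transition probabilities and all cost figures are invented, and the inspection-interval and imperfect-repair logic of the paper is deliberately omitted.

```python
import numpy as np

# Condition ratings 1 (as new) .. 4 (poor); yearly degradation as a Markov chain.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

agency_cost = np.array([0.0, 1.0, 4.0, 15.0])   # yearly expected agency cost in state i
user_cost = np.array([0.0, 0.5, 2.0, 10.0])     # yearly RUC from closures / detours in state i

def expected_costs(p0, years):
    """Propagate the state distribution year by year and accumulate expected
    agency and road-user costs for the chosen (here: do-nothing) strategy."""
    p, total_agency, total_user = np.asarray(p0, float), 0.0, 0.0
    for _ in range(years):
        p = p @ P
        total_agency += p @ agency_cost
        total_user += p @ user_cost
    return total_agency, total_user

print(expected_costs([1.0, 0.0, 0.0, 0.0], years=20))
```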

14:20
Eivind H. Okstad (SINTEF Digital, Norway)
Ola Løkberg (SINTEF Digital, Norway)
Robert Bains (SINTEF Digital, Norway)
Evaluating non-functional qualities in railway by applying the quality triage method: A case study
PRESENTER: Eivind H. Okstad

ABSTRACT. Railway projects traditionally focus on success factors like cost-effective deliverables and the achievement of functional requirements from the infrastructure company’s or railway operator’s point of view. In addition, passenger safety is much emphasised as the main societal requirement for any means of transportation, and it is closely followed up by the regulatory authorities. There are other qualities as well that could benefit from attention in projects from different stakeholders and related work processes. Examples are security or cybersecurity, scalability, reliability, availability and sustainability. For instance, cybersecurity and security management become important issues in railway projects and operations as technology development and the implementation of new technology speed up. Such qualities have gained less attention in projects but are typically revealed later, after the system is put into service. Each of these qualities could be addressed by applying separate methods, but this might be a demanding approach to quality-requirement management. Therefore, the quality triage method (Brataas et al., 2020) was introduced as a simplified, low-demand approach to decision support. It applies ‘user stories’ as a basis for identifying quality risks and making multiple quality areas explicit. The original motivation for the quality triage method was experience from software (SW) development projects applying agile development, which have been prone to neglecting quality requirements (Alsaqaf et al., 2017). The quality triage method intends to meet these challenges in a way that makes it easier to balance and prioritise between different qualities in progress or regular project meetings. The present article argues that this way of thinking could be valuable in domains other than SW development as well. The case study presents results from testing the quality triage method in a railway-project environment. A light-rail company was contacted, and experience on cybersecurity issues was shared with the research team, along with how the railway company plans to deal with cybersecurity as a quality requirement in upcoming projects.

14:40
Abhimanyu Tonk (IRT Railenium, France)
Abderraouf Boussif (IRT Railenium, France)
Operational Design Domain or Operational Envelope: Seeking the suitable concept for autonomous railway systems
PRESENTER: Abhimanyu Tonk

ABSTRACT. Autonomous vehicles are the chauffeurs of transportation in the future. The path to such an omniscient self-driving end-state requires development on two fronts: functional (autonomous) capabilities and the ability to execute these capabilities during operations. From a safety point of view, the risk associated with the execution of any one of these autonomous capabilities in two different operational scenarios is not equivalent, making it harder to establish confidence in self-driving operations. Thus, to justify the overall operational safety of such autonomous driving systems, the Operational Design Domain (ODD) [1] is used as an instrument to define all the operational conditions (weather, time of day, infrastructure, etc.) within which all or a part of the autonomous capabilities (previsualized as per the Grade of Automation - GoA [2]) may be safely executed. The predefined ODD plays an essential role in the design, development and testing of self-driving transportation systems (cars, ships, trains, etc.) [3].

The concept of ODD was initially introduced in the autonomous vehicles domain and later standardized through the standard SAE J3016 (2016). The maritime industry introduced the so-called “Operational Envelope (OE)” concept as a measure to put operational constraints on functional capabilities [4]. The use of the term ODD was also debated before settling on OE [5]. The motivation for this evolution from ODD to OE originates in the diversity and dissimilarity of the operational interactions of a ship (sailing at sea) compared with a car (driving on a road), e.g., long journeys, larger obstacles detectable at greater distances, slow speed, etc. Also, the high cost-risk associated with ships necessitates continuous or close monitoring (through onshore control centers or on-board humans) in complex situations, which requires that the OE account for both human and automation capabilities and responsibilities [6].

Moving on to the railway domain, the latest attempts to introduce autonomous and remote train operations in the railway industry have once again raised the need for a specified operational context (and conditions) [7]. At first look, both ODD and OE seem to bring some additional advantages (over one another) to the railways, yet there might be several disadvantages as well. For instance, unlike the case of autonomous ships, an on-board human is not considered part of the safety control loop for autonomous trains (at least for GoA4). Moreover, the operational environment of trains is already highly restricted in comparison with the automotive or maritime industries, which might render either ODD or OE more suitable. Thus, considering these ambiguities, both concepts (ODD and OE) need to be discussed extensively in order to identify the appropriate one for the safe operation of autonomous trains. This is the main theme of this paper.

Firstly, we review and discuss the established definitions of autonomy in the three domains and their impacts on the implementation of autonomous technologies. Then, we discuss both the ODD and OE concepts in detail while identifying and comparing the aspects that might be advantageous or disadvantageous for autonomous train operations. Based on this discussion, we assess the necessity of a novel instrument (based on ODD, OE, a hybrid, or a completely independent concept) in the autonomous railway domain. Finally, after briefly reviewing the existing Grades of Automation in the railway domain, we conclude by highlighting the possible future work that may originate from the analysis provided in this manuscript.

References: [1] Tonk, A., Boussif, A., Beugin, J. and Collart-Dutilleul, S., 2021, September. Towards a Specified Operational Design Domain for a Safe Remote Driving of Trains. In ESREL 2021, 31st European Safety And Reliability Conference (p. 8p).

[2] IEC. IEC 62267:2009 Railway applications - Automated urban guided transport (AUGT) - Safety requirements, 2009 Test § (2009). Retrieved from https://webstore.iec.ch/publication/6681

[3] Thorn, E., Kimmel, S. C., Chaka, M., & Hamilton, B. A. (2018). A framework for automated driving system testable cases and scenarios (No. DOT HS 812 623). United States. Department of Transportation. National Highway Traffic Safety Administration.

[4] Rødseth, Ø.J. and Tjora, Å., 2014, May. A system architecture for an unmanned ship. In Proceedings of the 13th international conference on computer and IT applications in the maritime industries (COMPIT). Verlag Schriftenreihe Schiffbau, 2014 Redworth, UK.

[5] Rødseth, Ørnulf & Nordahl, Håvard. (2017). Definitions for Autonomous Merchant Ships. 10.13140/RG.2.2.22209.17760.

[6] Operational Design Domain for Cars versus Operational Envelope for Ships: Handling Human Capabilities and Fallbacks

[7] Lagay, Rémy, and Gemma Morral Adell. "The Autonomous Train: a game changer for the railways industry." In 2018 16th international conference on intelligent transportation systems telecommunications (ITST), pp. 1-5. IEEE, 2018.

15:00
Dana Prochazkova (Czech Technical University in Prague, Technicka 4, 160 00 Praha 6, Czechia)
Jan Procházka (VUT, Purkynova 464, 61200 Brno, Czechia)
Jana Martincova (VUT, Purkynova 464, 61200 Brno, Czechia)
Tomas Kertis (VUT, Purkynova 464, 61200 Brno, Czechia)
MEASURES FOR TUNNEL SAFETY MANAGEMENT
PRESENTER: Tomas Kertis

ABSTRACT. Tunnels on surface routes belong to the critical elements of traffic infrastructure, which is an important part of the critical infrastructure of each country. The article summarizes the requirements for tunnels on surface routes from the safety viewpoint. Each tunnel is represented as a socio-cyber-physical (technical) system, the structure of which is a system of systems. Based on real data on tunnel failures and accidents, a database of the risk sources that were the causes of tunnel failures was established. The list of causes of failures was compared with and supplemented by findings from research accessible in the scientific literature. The database contains data on 965 road tunnel failures and 53 case studies since the beginning of the 19th century; terrorist attacks in underground railways have been recorded since 1883. A critical analysis of data on failure impacts and response procedures yields: - the separation and display of risk sources using a fishbone diagram, - tools for the determination of the integral risk of tunnels on surface routes, which enable risk management towards safety in tunnel design and operation.

Based on knowledge and experience with the management of complex systems of a socio-cyber-physical nature, a generic tunnel safety management model was established using the principles of risk-based design and risk-based operation. Tunnels on roads are technical facilities that are excavated in various geological subsoils, from soil to hard rock. Therefore, tunnel safety begins with the location and is determined by: - the quality of the specifications (terms of reference), which must consider both the geotechnical conditions in the subsoil and the sources of other risks at the site, as well as the requirements of the expected operation, i.e. in particular the limits and conditions given by the material and structure used, - high-quality construction, - operation in accordance with the stipulated limits and conditions, as well as proper maintenance throughout the life cycle. In all cases, the requirements of valid norms and standards must be supplemented by knowledge obtained from all site-specific risk analyses and by measures for qualified integral risk management towards integral safety during the tunnel life cycle. The generic model for tunnel safety management contains the activities needed to ensure safety; the roles of all parties involved in safety; the tasks specified in the safety management system; and the items of the risk management process directed to safety and their order.

14:00-15:20 Session 4J: "Risk Blindspots and Hotspots in Public Understanding of Risk" Workshop organised by LRF Institute for the Public Understanding of Risk, National University of Singapore
Chair:
Olivia Jensen (LRF Institute for the Public Understanding of Risk, National University of Singapore, Singapore)
Location: CQ-007
14:00
Olivia Jensen (LRF Institute for the Public Understanding of Risk, National University of Singapore, Singapore)
Carolyn Lo (LRF Institute for the Public Understanding of Risk, National University of Singapore, Singapore)
Leonard Lee (LRF Institute for the Public Understanding of Risk, National University of, Singapore)
Risk Perception Gaps: A Conceptual Framework
PRESENTER: Olivia Jensen

ABSTRACT. This paper sets out a preliminary conceptual framework to explain gaps between professional risk assessments and public risk perceptions in relation to the same source of risk. The proposed framework incorporates three sets of factors that can affect the presence and severity of such gaps: factors related to differences in perspectives between professionals and lay people; characteristics specific to the risk domain; and societal level characteristics with asymmetrical impacts on professional and lay risk assessment. The paper also reports on initial feedback elicited from professionals in risk assessment and risk communication to strengthen the validity and applicability of the framework.

14:00-15:20 Session 4K: Nuclear safety: area and external event analysis
Chair:
Sebastian Martorell (Universitat Politècnica de València, Spain)
Location: CQ-010
14:00
Dae Il Kang (Korea Atomic Energy Research Institute, South Korea)
Young Hun Jung (Korea Atomic Energy Research Institute, South Korea)
Development of a Fire PSA Program for Korean Nuclear Power Plants
PRESENTER: Dae Il Kang

ABSTRACT. When the NUREG/CR-6850 methodology is applied to nuclear power plants (NPPs), the level of effort increases significantly because it requires a large amount of cable data and the modeling of spurious operation scenarios. In order to reduce errors and increase work efficiency, the use of computerized tools is inevitably required. KAERI (Korea Atomic Energy Research Institute) developed a fire PSA (Probabilistic Safety Assessment) program, ProFire-PSA (Program for Fire Event PSA), consisting of ProFire-PSA_Model and ProFire-PSA:Support. ProFire-PSA_Model was developed to help a PSA analyst identify and model fire-induced component failure modes and construct a one-top fire PSA model. ProFire-PSA:Support was developed for the estimation of fire ignition frequencies and the generation of fire scenarios. The key functions of ProFire-PSA_Model are the generation of fire events and logic to be modeled in an internal-event PSA model, and the generation of input files for the two domestic PSA programs, AIMS-PSA (for KAERI and the Korean regulatory body) and SAREX (for Korean NPP stakeholders and contractors). From the pilot study on the application of ProFire-PSA_Model to the fire-induced SLOCA (Small Loss Of Coolant Accident) scenarios of the Korean reference NPP, it was confirmed that reasonable quantification results could be obtained without a detailed circuit analysis.

14:20
Jaroslav Holy (UJV Rez, Czechia)
Stanislav Hustak (UJV Rez, Czechia)
Roman Aldorf (UJV Rez, Czechia)
Ladislav Kolar (UJV Rez, Czechia)
Jan Kubicek (UJV Rez, Czechia)
Milan Jaros (UJV Rez, Czechia)
Recent risk analysis of external hazards impact on Czech Nuclear Power Plants operation - lessons learned
PRESENTER: Milan Jaros

ABSTRACT. During the last decade, a number of specific analyses of external hazards have been made in the Living PSA projects for NPPs in the Czech Republic. The methodology of the analyses followed the recommended approaches, but it was found that new challenges appear which generate the need to go beyond traditional methods. These challenges may be related, for example, to (rare) data analysis, human reliability, specific features of modelling, the application of inputs from deterministic analysis, the integration of results from various risk contributors, the real occurrence of an event that was supposed to be very rare, etc. In addition, the processes of risk-oriented decision making based on the developed and employed PSA model have to fit the scope, level of detail, data, level of uncertainty and other attributes of the PSA models under development, review, and update.

The paper describes the main features of the development of the external hazards part of the PSA models in Czech PSA studies, the topics addressed, the challenges met, and some qualitative and quantitative results obtained. It focuses mainly on non-seismic natural hazards with a possible impact on plant sites in the Czech Republic, where an increasing risk potential can be indicated by more dynamic natural processes related to climate change (tornado, extreme wind, extremely high/low temperature, extreme snow).

14:40
Gilberto Francisco Martha Souza (Polytechnic School - University of São Paulo, Brazil)
Cesar Augusto Gabe (Polytechnic School - University of São Paulo, Brazil)
Methodology for Risk Assessment of Blackout on Marine Based Nuclear Reactors

ABSTRACT. Nuclear power can contribute significantly to maritime transportation. However, economic and regulatory issues discourage the deployment of nuclear-powered commercial shipping. There are several discussions regarding the economic feasibility of marine-based small modular reactors (SMRs), but no nuclear-powered commercial ship will be deployed if an acceptable level of safety is not demonstrated to regulatory committees. A risk-informed approach would be appropriate for licensing marine-based small modular reactors, providing a country-neutral method for reviewing safety plans. The risk-informed approach to the blackout accident makes it possible to explore the state of the art in dynamic reliability best-estimate modeling as well as a dose-based consequence analysis. The power loss accident on a marine-based SMR has a different risk profile from large-scale land-based reactors. There are SMRs equipped with passive and inherent safety features, but their application in the maritime context is not suitable. Moreover, the large operational feedback and technological readiness of Light Water Reactors (LWRs) based on active safety systems can significantly reduce the duration of deployment and licensing. This work proposes a design-neutral and dose-based methodology to assess the risk of blackout on a marine-based nuclear power plant at an early design stage. The methodology is based on a probabilistic safety analysis, including short-term equipment repair, and a dose exposure analysis in a post-accident scenario. The methodology is applied to a hypothetical pressurized water reactor, and the results are compared with a representative Generation II LWR (Surry). The methodology estimates a core damage frequency due to long station blackout of 2.24 x 10^-5 per reactor-year for the hypothetical reactor. NUREG-1150 estimates the Surry long station blackout core damage frequency at 8.2 x 10^-6 per reactor-year. Regarding the environmental dose exposure, the total effective whole-body dose at 3.2 km (in 24 hours) is 0.34 and 8.9 sieverts for the hypothetical reactor and Surry, respectively, considering no containment failure and no early large releases. The higher likelihood of blackout in the marine context is balanced by the lower radiological dose exposure.

15:00
John Hanna (US Nuclear Regulatory Commission, United States)
All Creatures Great & Small: A Brief Survey of the Impact of Flora/Fauna on Nuclear Power Plants

ABSTRACT. The US Nuclear Regulatory Commission (NRC) licenses and regulates the nation’s civilian use of radioactive materials to provide reasonable assurance of adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The impacts of Nuclear Power Plants (NPPs) on the environment, and specifically on neighboring flora/fauna, are considered and evaluated in the design/licensing process for these facilities. Some of these impacts have been analyzed in scientific articles, e.g., service water cooling systems affecting fish populations, seaweed, etc. But what do we do when the vector/threat points in the opposite direction and the environment poses a threat to a NPP? Flora and fauna have caused a number of safety-significant events and/or conditions at NPPs. This paper surveys the wide variety of biological challenges to our facilities and describes, where possible, the risk significance of those events and/or conditions. The current state of the art of Probabilistic Risk Assessment (PRA) modeling is briefly described and potential PRA modeling improvements are touched on. Potential operational and design enhancements that may mitigate these risk impacts - and which are described in other scientific papers - are referenced.

15:40-17:00 Session 5A: Economic Analysis in Risk Management
Chair:
Paolo Gardoni (University of Illinois at Urbana-Champaign, United States)
Location: LG-22
15:40
Arne Bang Huseby (Department of Mathematics, University of Oslo, Norway)
OPTIMIZING MULTIPLE REINSURANCE CONTRACTS

ABSTRACT. An insurance contract implies that risk is ceded from ordinary policy holders to companies. Companies do the same thing between themselves, and this is known as reinsurance. The problem of determining reinsurance contracts which are optimal with respect to some reasonable criterion has been studied extensively. Different contract types are considered, such as stop-loss contracts, where the reinsurance company covers risk above a certain level, and insurance layer contracts, where the reinsurance company covers risk within an interval. The contracts are then optimized with respect to some risk measure, such as value-at-risk (VaR) or conditional tail expectation (CTE). Recent works in this area include Lu et al (2013), Cheung et al (2014), Cong and Tan (2016), and Chi et al (2017). Huseby and Christensen (2020) considered the problem of minimizing VaR in the case of multiple insurance layer contracts and proved that the optimal solution must satisfy certain conditions. In the present paper we investigate this problem further and show that the optimal solution depends on the tail hazard rates of the risk distributions. If the tail hazard rates are decreasing, which is the case for heavy-tailed distributions like the lognormal and Pareto distributions, the optimal solution is balanced. That is, reinsurance contracts for identically distributed risks should be identical insurance layer contracts. However, if the tail hazard rate is increasing, which is the case for light-tailed distributions like truncated normal distributions, the optimal solution is typically not balanced. Even for identically distributed risks, some contracts should be insurance layer contracts, while others should be stop-loss contracts. In the limiting case, where the hazard rate is constant, i.e., when the risks are exponentially distributed, we show that a balanced solution is optimal. We also present an efficient importance sampling method for estimating optimal contracts.
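
As a point of reference for the contract types discussed above (standard textbook forms, not notation taken from the paper), a stop-loss contract and an insurance layer contract on a risk \(X\) pay
\[ I_{\mathrm{SL}}(X) = (X - d)_{+}, \qquad I_{\mathrm{layer}}(X) = \min\{(X - d)_{+},\, u - d\}, \]
where \(d\) is the retention and \(u > d\) the upper bound of the layer, and the retained loss \(Y\) after reinsurance is assessed through the value-at-risk
\[ \mathrm{VaR}_{\alpha}(Y) = \inf\{\, y : P(Y \le y) \ge \alpha \,\}. \]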

16:00
Kristina Rognlien Dahl (Department of Mathematics, University of Oslo, Norway)
Arne Bang Huseby (Department of Mathematics, University of Oslo, Norway)
Marius Helvig Havgar (Department of Mathematics, University of Oslo, Norway)
OPTIMAL REINSURANCE CONTRACTS UNDER CONDITIONAL VALUE-AT-RISK

ABSTRACT. An insurance contract implies that risk is ceded from ordinary policy holders to companies. However, companies do the same thing between themselves, and this is known as reinsurance. The problem of determining reinsurance contracts which are optimal with respect to some reasonable criterion has been studied extensively within actuarial science. Different contract types are considered, such as stop-loss contracts, where the reinsurance company covers risk above a certain level, and insurance layer contracts, where the reinsurance company covers risk within an interval. The contracts are then optimized with respect to some risk measure, such as value-at-risk (VaR) or conditional value-at-risk (CVaR). Recent works in this area include Lu et al (2013), Cheung et al (2014), Cong and Tan (2016), and Chi et al (2017). In the present paper we consider the problem of minimizing conditional value-at-risk in the case of multiple stop-loss contracts. Such contracts are known to be optimal in the univariate case, and the optimal contract is easily determined; see Cheung et al (2014). We show that the same holds in the multivariate case, both with dependent and independent risks. The results are illustrated with some numerical examples.
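
For completeness, the risk measure minimized here is, in its standard definition (not notation from the paper),
\[ \mathrm{CVaR}_{\alpha}(Y) = \frac{1}{1-\alpha} \int_{\alpha}^{1} \mathrm{VaR}_{u}(Y)\,\mathrm{d}u, \]
which, for continuous loss distributions, coincides with the conditional tail expectation \( \mathbb{E}\left[\,Y \mid Y > \mathrm{VaR}_{\alpha}(Y)\,\right] \).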

16:20
Joaquim Rocha dos Santos (University of São Paulo, Brazil)
Marcelo Ramos Martins (University of São Paulo, Brazil)
Danilo Taverna Martins Pereira de Abreu (University of São Paulo, Brazil)
Risk in Dynamic Decision Making – an application in the Oil and Gas Industry

ABSTRACT. In the oil and gas industry, investment decision-making has to consider the uncertainty of all relevant factors when dealing with the exploration and development of new or mature oil fields, due to the volume of capital invested. In addition to this uncertainty, there is an increasing need to innovate by implementing new technologies to maintain competitiveness, since oil companies have been operating with reduced profit margins due to their market characteristics. On the one hand, the insertion of these new technologies may bring higher profits that are highly desirable; on the other hand, their implementation may bring an even greater level of uncertainty, since the implementation of new technologies may fail, making the company lose money and return to old technologies, with delays in first oil. Implementing new technologies is a problem with a high level of uncertainty that develops over time – strong dynamics – considering that project managers may assess the development of the new technology and may interrupt the project and return to established ones. One situation of particular interest occurs when the decision-maker can assess the development of the project and may change his course of action after its evaluation. The possibility of changing the course of action can modify the risk profile of the alternatives and thus the expected value of each course of action. This decision problem may be analyzed through Dynamic Decision Analysis and can be addressed through Decision Analysis (DA) and simulation. Decision Analysis under uncertainty is an established paradigm well suited to problems with a high level of uncertainty. However, it offers little to assist in problems with high levels of dynamics because of the static character of its models – influence diagrams and decision trees. By contrast, while dynamic modeling is well suited to complex decision problems, policy selection using traditional dynamic models can be misguided when used in the context of uncertainty over time. It is therefore useful to have a tool that combines the ability of decision analysis to analyze uncertainties with the dynamic analysis of simulation models. One architecture for addressing such issues is the use of hybrid models combining Decision Analysis and dynamic simulation. This work presents an application of Dynamic Decision Making to an investment problem considering the use of new technologies to develop an oil field, comprising three main parts: a dynamic model, a decision analysis model, and a spreadsheet. The dynamic model uses the System Dynamics paradigm, whereas the decision analysis model uses a decision tree. Both models interchange data through the spreadsheet. After that, the article shows the simulation results and discusses their implications. The last part of the paper discusses its limitations, highlights the main points already achieved, and presents suggestions for continuing the research.

16:40
Mei Ling Fam (Singapore Institute of Technology, Singapore)
Zhi Yung Tay (Singapore Institute of Technology, Singapore)
Dimitrios Konovessis (University of Strathclyde, UK)
Techno-economic analysis for decarbonising of container vessels
PRESENTER: Mei Ling Fam

ABSTRACT. Objective: There are growing concerns about the effect of climate change on the environment. Based on the Fourth IMO Greenhouse Gas Study (Faber et al., 2021), the share of shipping emissions in global anthropogenic emissions has increased from 2.76% in 2012 to 2.89% in 2018. In addition, the pace of carbon intensity reduction has slowed since 2015, with average annual percentage changes ranging from 1 to 2%. The same report highlights that vessel operating speeds remain a key driver of emission trends. It is predicted that in 2050, 64% of the reduction in CO2 will be contributed by the use of fuel alternatives. Thus, the objective of this paper is to determine the most cost-effective option among alternative fuels in order to meet the decarbonising goals specified in the Initial IMO Strategy on Reduction of GHG Emissions from Ships.

Methodology: The Levelised Cost of Mobility (LCOM) index is used to compare the different options on a level playing field. This index comprises the CAPEX of the engines and tanks, the OPEX of the engines, the cost of the lost cargo space, the fuel cost and the CO2 cost. A Monte Carlo simulation is used to obtain the final unit of comparison, the LCOM expressed in Euros/1000 DWT-km. The values used are sourced from the literature or from a trained Artificial Neural Network (ANN) based on telemetry data of a 9000 TEU container vessel. The ANN from a recent previous study (Fam et al., 2021) is specifically used to predict the key factor of fuel consumption based on a vessel’s specific operating profiles.
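
A minimal sketch of how such a Monte Carlo comparison could be organised is given below; all cost figures, distributions and the fuel-consumption input are placeholder assumptions, not values from the study (the ANN surrogate is replaced by a simple random draw).

    import numpy as np

    rng = np.random.default_rng(42)
    N = 10_000  # Monte Carlo samples

    def lcom(capex, opex, lost_cargo, fuel_t, fuel_price, co2_t_per_t, co2_price,
             dwt, km_per_yr, years=25):
        """Levelised cost of mobility in EUR per 1000 DWT-km (illustrative formula)."""
        annual_cost = (capex / years + opex + lost_cargo
                       + fuel_t * fuel_price
                       + fuel_t * co2_t_per_t * co2_price)
        transport_work = dwt * km_per_yr / 1000.0  # 1000 DWT-km per year
        return annual_cost / transport_work

    # Uncertain inputs drawn from assumed distributions (placeholders)
    fuel_price = rng.normal(600.0, 80.0, N)      # EUR per tonne of alternative fuel
    co2_price = rng.uniform(50.0, 150.0, N)      # EUR per tonne of CO2
    fuel_use = rng.normal(60_000.0, 5_000.0, N)  # t/yr; stand-in for the ANN prediction

    samples = lcom(capex=120e6, opex=8e6, lost_cargo=2e6, fuel_t=fuel_use,
                   fuel_price=fuel_price, co2_t_per_t=0.0, co2_price=co2_price,
                   dwt=110_000, km_per_yr=150_000)

    print(f"LCOM median {np.median(samples):.2f}, "
          f"P5-P95 {np.percentile(samples, 5):.2f}-{np.percentile(samples, 95):.2f} EUR/1000 DWT-km")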

Results: The expected results are that the LCOM values provide an indication of the cost that ship owners have to bear when considering alternative fuels, and of how policies may be invoked to make alternative fuels economically competitive with mineral fuels. Finally, given that vessels greater than 5000 gross tonnes have had to install fuel consumption sensors since 1 January 2019, this paper presents a framework for how telemetry data can be incorporated into a Machine Learning pipeline that can help answer specific business questions.

References: Faber, J., et al., 2021. Fourth IMO Greenhouse Gas Study, International Maritime Organization. Fam, M.L., Tay, Z.Y., Konovessis, D., 2021. An Artificial Neural Network based decision support system for cargo vessel operations, in: Bruno Castanier, Marko Cepin, David Bigaud, and C.B. (Eds.), Proceedings of the 31st European Safety and Reliability Conference. Research Publishing, Singapore, Angers, pp. 3391–3399. https://doi.org/10.3850/978-981-18-2016-8 758-cd

15:40-17:00 Session 5B: Resilience: quantitative modeling
Chair:
Matteo Broggi (University of Hannover, Germany)
Location: CQ-008
15:40
Nicola Tamascelli (Norwegian University of Science and Technology, Norway)
Alessandro Dal Pozzo (University of Bologna, Italy)
Yiliu Liu (Norwegian University of Science and Technology, Norway)
Valerio Cozzani (University of Bologna, Italy)
Nicola Paltrinieri (Norwegian University of Science and Technology, Norway)
Integration Between Data-Driven Process Simulation Models and Resilience Analysis to Improve Environmental Risk Management in the Waste-to-Energy Industry

ABSTRACT. Municipal Solid Waste Incineration plants must comply with stringent emissions standards. Flue gas treatment technologies are essential to ensure compliance and protect human health and the environment. Although the most recent research has focused on estimating the risk for human health and comparing different gas treatment strategies, few efforts have been directed toward the definition of a thorough methodology for identifying critical scenarios and evaluating safety barriers. In this context, this study aims at filling this knowledge gap and investigating how traditional hazard identification techniques and novel approaches (data-driven process simulation models and Resilience analysis) may be used to (i) identify critical events that may lead to an overrun of emission limits, (ii) identify additional safety barriers that may prevent/mitigate such events, (iii) simulate the system behavior with and without additional safety barriers, and (iv) quantify the gain in performance and resilience and support decision-making. The methodology has been tested on a single-stage Dry Sorbent Injection (DSI) system. Actual data from a waste incineration plant have been used to develop the data-driven model. The results suggest that the method is particularly suited for evaluating and comparing design alternatives in industrial facilities where field tests are impractical or dangerous due to strict regulations and the inherent dangerousness of operations.

16:00
Tetsushi Yuge (National Defense Academy, Japan)
Yasumasa Sagawa (National Defense Academy, Japan)
Natsumi Takahashi (National Defense Academy, Japan)
Operational resilience of network considering common-cause failures
PRESENTER: Tetsushi Yuge

ABSTRACT. In modern society, it is indispensable to ensure the reliability of networks and infrastructure such as communication networks and electric power networks. Although reliability techniques such as structural redundancy have been adopted to deal with the problem, large-scale disasters that break the redundancy and cause great damage to networks have occurred frequently in recent years. For such damage, it is strongly required to control the impact and recover quickly after disruption.

We studied a method for evaluating networks by focusing on resilience. Resilience in this study is an index that comprehensively evaluates four properties of the evaluation target, known as the "R4 framework": robustness, redundancy, resourcefulness, and rapidity. Most previous studies on resilience are deterministic analyses of the occurrence of failures in the evaluation target and the scale of the damage. In addition, the analyses have mainly been case studies tailored to the situation of each system. Even when stochastic recovery is considered, simulation is adopted to analyze resilience, and analysis using the stochastic processes necessary for resilience evaluation of general systems has not been established. This is due to the difficulty of stochastically modelling the occurrence of large-scale disasters, of linking component failures to system performance, and of modelling recovery, in which various factors exist, as a stochastic process.

In this study, we discuss the resilience of networks based on graph theory. The target network is an electric power network in which the performance is measured by the ratio of connected nodes, i.e., nodes (vertices) for which a path from the source node exists. We also assume that a disaster manifests itself as the simultaneous failure of edges caused by common-cause failures. A Marshall-Olkin type shock model is incorporated to model the common-cause failures, where external shocks originating from shock sources occur at random times. For simplicity, no node failures occur. For the restoration, under the constraint that resources are limited, failed edges are repaired one by one. As the repair strategy, the order of repair of several failed edges is determined by giving priority to the edge whose repair yields the largest increase in system performance. We then propose two resilience measures, "operational resilience" and "resilience in the recovery phase". Operational resilience evaluates the resilience of the network over the entire operational period, including the ability to maintain 100% network performance. The resilience in the recovery phase, on the other hand, reflects the resilience during the recovery period within the entire operational period. Because the time duration with 100% performance is dominant in most networks, the difference in operational resilience between networks or repair strategies is relatively small; the resilience in the recovery phase is a measure focused on performance during the recovery phase. Together, these two measures make it possible to evaluate network resilience adequately. Both are derived by a Markov process under the assumption that the occurrence rate of common-cause failures and the repair rate are constant. Since the method is based on a multidimensional Markov analysis with the number of edges as the dimension, the number of states increases exponentially with the network size, and the calculation becomes correspondingly difficult. As a countermeasure, the number of states is reduced by a one-dimensional Markov process that considers only the number of failed edges, not their locations. To compensate for the information lost by this reduction, the expected number of operational nodes, weighted by the system performance at each failure location, is considered. We verified the applicability and accuracy of the approximation method with numerical examples and confirmed that the approximation error, compared with the results of the exact method or Monte Carlo simulation, was relatively small.
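
The reduced one-dimensional model described above could, in its simplest form, look like the following sketch; the shock sizes, rates and performance values are assumptions for illustration, not the networks studied in the paper.

    import numpy as np
    from scipy.linalg import expm

    E = 10                              # number of edges in the network
    shock_rates = {1: 0.05, 3: 0.01}    # assumed rates of common-cause shocks failing 1 or 3 edges
    mu = 0.5                            # repair rate, one edge repaired at a time (assumed)

    # Expected fraction of connected nodes given k failed edges; in the full method this
    # would come from averaging the performance over the possible failure locations.
    perf = np.linspace(1.0, 0.2, E + 1)

    # Generator of the continuous-time Markov chain on the number of failed edges
    Q = np.zeros((E + 1, E + 1))
    for k in range(E + 1):
        for size, rate in shock_rates.items():
            Q[k, min(k + size, E)] += rate      # a shock fails 'size' additional edges
        if k > 0:
            Q[k, k - 1] += mu                   # one repair in progress
        Q[k, k] -= Q[k].sum()

    p0 = np.zeros(E + 1)
    p0[0] = 1.0                                 # start with all edges working
    times = np.linspace(0.0, 365.0, 200)
    expected_perf = [p0 @ expm(Q * t) @ perf for t in times]

    # Time-averaged expected performance over the operational period
    print("operational resilience (approx.):", float(np.mean(expected_perf)))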

16:20
Tobias Demmer (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center (DLR), Germany)
Daniel Lichte (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center (DLR), Germany)
Kai-Dietrich Wolf (Institute for Security Systems, University of Wuppertal, Germany)
Jens Kahlen (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center (DLR), Germany)
Towards the Prediction of Resilience: An Equation-based Resilience Representation
PRESENTER: Tobias Demmer

ABSTRACT. The performance curve of a system during a disruption is widely used in the literature as an illustration of the system’s resilience capabilities, especially in socio-technical works. To improve the resilience of a system, an important step is to develop methods and techniques to properly quantify relevant resilience metrics. Despite its importance, no final consensus has been reached regarding the mathematical definition of the concept of resilience. Against this backdrop, this work presents an analytic equation that fits the whole evolution of the system’s performance curve during a disruption. This enables a decision maker to determine model parameters that are directly linked to the system’s resilience capabilities. It can additionally serve as a basis for predicting resilience curves in future works. We propose to use two sigmoid functions to represent the resilience of a generic system.
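
One plausible form of such a two-sigmoid representation (the exact parameterisation used by the authors may differ) is
\[ Q(t) = Q_{0} - \frac{\Delta Q}{1 + e^{-(t - t_{d})/\tau_{d}}} + \frac{\Delta Q_{r}}{1 + e^{-(t - t_{r})/\tau_{r}}}, \]
where \(Q_{0}\) is the nominal performance, \(\Delta Q\) the depth of the performance drop, \(\Delta Q_{r} \le \Delta Q\) the recovered performance, \(t_{d}\) and \(t_{r}\) locate the disruption and recovery phases, and \(\tau_{d}\), \(\tau_{r}\) control how quickly performance is lost and restored; fitting these parameters to an observed performance curve directly yields interpretable robustness and rapidity metrics.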

16:40
Vasily Lubashevskiy (Tokyo International University, Japan)
Assessment of satisfactory recovery level after disaster using probabilistic modeling of residents’ behavior: case of residential electricity demand

ABSTRACT. Most countries all over the globe nowadays rely on the service industry, which is very fragile and depends on the social and infrastructural parts of cities. This is why the resilience of urban systems is crucial for today's societies' safety and economic security. There are various approaches enabling the recovery management of socio-technical systems. Some of them are based on heuristic algorithms, integral process optimization, a predetermined set of responses, and others, but all of these methods of recovery management require a defined level of system functionality up to which the system must be restored. Moreover, the separation of the restoration process into short-term and long-term recovery phases implies that the number of satisfactory levels is more than one. Indeed, overvaluation of the satisfactory level during short-term recovery leads to overuse of recovery resources and human resources, which are limited, and, as a result, makes the recovery process less efficient. Undervaluation of the satisfactory level brings the system to an under-recovered state, which may cause the system to relapse into a critical condition. Since each disaster is a unique event, one of the critical problems of satisfactory-level assessment is the lack of information and records characterizing the level of critical/minimal/functional demands for infrastructure and services. The number of residents residing in the affected area who did not evacuate, changes in their consumption and living behavior, the meteorological conditions, the season, and other factors make previous records irrelevant. The purpose of the present research is to demonstrate how the satisfactory recovery level may be assessed using the available information about the number and types of residents remaining in the area, data from governmental statistical bureaus, and probabilistic behavior modeling, on the example of residential demand for electricity in Japan, which may be extended and applied to the demand assessment for any critical lifeline. Modeling of electricity demand in the residential area starts from the analysis of residents, their behavior, and household compositions, which are taken from the governmental statistical bureau, in the case of the present work from the Statistics Bureau of Japan, Ministry of Internal Affairs and Communications. The collected data were used to simulate each occupant's behavior, taking into account such factors as gender, age, working hours, the necessity to contribute to child- or elderly-care, and others. This simulation is repeated for each household multiple times, and the result is averaged to smooth the resulting pattern and remove the noise caused by the low resolution of the statistical data. The aggregated activity pattern is analyzed through the prism of the corresponding electric appliance usage and its electricity consumption, including both operating and standby power consumption regimes. As a result of the modeling, the detailed residential electricity demand for a representative household is constructed and may be analyzed. The first point that attracts attention is the composition of the demand: electricity demand for heating/cooling of the space, for cooking activities, for lighting, and others. This information may be used to determine the minimum satisfactory level of electricity supply in the affected area, taking into account the changed number of residents and their adapted consumption behavior.
Such information is crucial for the efficient management of the short-term recovery of the residential sector. Second, modeling the future consumption behavior and making assumptions about the after-recovery population of the residential sector make it possible to determine the satisfactory level for the long-term recovery of the affected area, including nadirs and peak demands and daily, monthly, and yearly integral demands for electricity, information that is important not only for recovery but also for the strategic planning of infrastructure restoration and development after the disaster.

15:40-17:00 Session 5C: S.27: Transfer Learning methods for Prognostics and Health Management
Chair:
Piero Baraldi (Politecnico di Milano, Italy)
Location: CQ-006
15:40
Guo Shi (Department of Management Science, University of Strathclyde, UK)
Bin Liu (Department of Management Science, University of Strathclyde, UK)
Lesley Walls (Department of Management Science, University of Strathclyde, UK)
Extending the applicability of deep learning algorithms to predict system failure with limited observed data
PRESENTER: Guo Shi

ABSTRACT. Deep learning (DL) algorithms, such as deep neural networks, can be used to predict the failure of systems and have shown advantages over classical time-series prediction methods. However, the performance of DL algorithms depends on the size of the data set, with a risk of overfitting for smaller data sets. To extend the applicability of DL algorithms to failure prediction with such relatively small datasets, we propose to use data augmentation (DA) methods to increase the data volume by effectively generating artificial data. Unlike existing studies that simply mix synthetic and real data without considering the selection of synthetic samples or the weight of each sample, we propose a novel method to automatically generate, select and reweight synthetic data to improve the prediction accuracy. After generating a collection of time-series data with multiple DA methods, we develop an influence function to select the effective synthetic data and then reweight the selected samples using the gradient descent method. To improve the accuracy of DL algorithms with the mixture of synthetic and real data, we pre-train the DL models drawing upon the idea of transfer learning and then use the real data set to adjust the model parameters with a small learning rate. To test the effectiveness of the proposed method, we describe a case study that involves predicting the value of a health indicator in a real wastewater treatment plant, with the Long Short-Term Memory (LSTM) model as the baseline, where the system health indicator data are collected daily. Root mean square error and mean absolute percentage error are used to evaluate the prediction accuracy of the models. Compared with classical forecasting models, the enhanced LSTM shows better performance in system failure prediction.
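
A stripped-down sketch of the mix-then-fine-tune idea is shown below (Keras, with placeholder arrays standing in for the plant data); the influence-function selection and gradient-based reweighting steps of the proposed method are omitted here, and the augmentation is a simple jitter-and-scale scheme used only for illustration.

    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)

    def augment(windows, n_copies=5, sigma=0.03):
        """Generate synthetic time-series windows by jittering and magnitude scaling."""
        out = []
        for _ in range(n_copies):
            jitter = rng.normal(0.0, sigma, windows.shape)
            scale = rng.normal(1.0, sigma, (windows.shape[0], 1, 1))
            out.append(windows * scale + jitter)
        return np.concatenate(out, axis=0)

    # Placeholder data: 200 windows of 30 daily health-indicator values, one feature
    x_real = rng.random((200, 30, 1)).astype("float32")
    y_real = rng.random(200).astype("float32")
    x_syn, y_syn = augment(x_real), np.tile(y_real, 5)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(30, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),  # next value of the health indicator
    ])

    # Pre-train on the mixture of real and synthetic data
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    model.fit(np.concatenate([x_real, x_syn]), np.concatenate([y_real, y_syn]),
              epochs=20, batch_size=32, verbose=0)

    # Transfer-learning step: fine-tune on the real data only with a small learning rate
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    model.fit(x_real, y_real, epochs=10, batch_size=32, verbose=0)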

16:00
Hyojin Kim (Chosun university, South Korea)
Jonghyun Kim (Chosun university, South Korea)
Long-Term Prediction and Uncertainty Estimation for Multiple Parameters Using BiLSTM based CVAE with Attention Mechanism
PRESENTER: Hyojin Kim

ABSTRACT. The correct situation awareness (SA) of operators is important in managing nuclear power plants (NPPs), particularly in accident-related situations. Among the three levels of SA suggested by Endsley, Level 3 SA (i.e., projection of the future status of the situation) is a challenging task because of the complexity of the NPP as well as the uncertainty of an accident. To support the operator’s prediction, this study suggests an algorithm that can predict the multivariate, long-term behavior of plant parameters for 2 h ahead in 120 steps, as well as provide the uncertainty of the prediction. The algorithm applies bidirectional long short-term memory (BiLSTM), an attention mechanism, and a conditional variational autoencoder (CVAE). The BiLSTM and attention mechanism enable the algorithm to predict the precise long-term trend of parameters and increase the accuracy of the prediction. The CVAE is utilized to provide uncertainty information for the network prediction. The algorithm is trained, optimized, and tested using a compact nuclear simulator for a Westinghouse 900 MWe NPP. The mean absolute percentage error of the test results is 2.7770%, which shows that the proposed algorithm accurately predicts 120 steps of multiple parameters using a single network. Hence, this algorithm can be applied to an operator support system to improve prediction for the operator during emergency situations in NPPs.

16:20
Bingsen Wang (Politecnico di Milano, Italy)
Piero Baraldi (Politecnico di Milano, Italy)
Ahmed Shokry (Center for Applied Mathematics, Ecole Polytechnique, France)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Comparison of CNN and LSTM-based Domain Adaptation Method for the Prediction of the Remaining Useful Life of Industrial Equipment
PRESENTER: Bingsen Wang

ABSTRACT. The development of deep learning-based methods for fault prognostics requires: i) the availability of historical run-to-failure data for model training; ii) similarity between the distributions of the test data, to which the model is applied, and the training data, with which the model is calibrated. However, these two conditions are not met in several industrial applications, where failure data are rare and operating conditions change in time or from one component to another, causing differences in the distributions of the data used for training and testing. In this work, we consider the common situation in which the data for model training come from a source domain containing few run-to-failure trajectories, whereas the target domain, to which the model is applied, is constituted by in-field data collected from partially degraded components working under different operating conditions. To properly address this problem, we develop a Domain Adaptation (DA) method for the prediction of the components’ Remaining Useful Life (RUL). Two different types of encoders are considered for extracting features able to capture the time-dependent behavior of the monitored signals: one based on a Long Short-Term Memory (LSTM) neural network and the other on a Convolutional Neural Network (CNN). Then, the Maximum Mean Discrepancy (MMD) metric is used to measure the discrepancy between the distributions of the data in the source and target domains, and eventually as a loss function to obtain a domain-invariant feature space. Finally, a Fully-Connected Layers Network (FCLN) is applied for the prediction of the RUL of the components. The data of the Aramis Data Challenge are used to compare the two approaches.
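
The MMD term used to align the two feature distributions can be written down compactly; the following is a plain NumPy sketch of the standard (biased, V-statistic) estimate with an RBF kernel, not the authors' implementation.

    import numpy as np

    def mmd_rbf(feat_source, feat_target, gamma=1.0):
        """Biased MMD^2 estimate between two batches of encoded features, shapes (n, d) and (m, d)."""
        def rbf(a, b):
            sq_dist = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * sq_dist)

        return (rbf(feat_source, feat_source).mean()
                + rbf(feat_target, feat_target).mean()
                - 2.0 * rbf(feat_source, feat_target).mean())

    # In training, this quantity is added to the RUL regression loss so that the
    # LSTM or CNN encoder is pushed towards a domain-invariant feature space, e.g.:
    #   loss = mse(rul_pred, rul_true) + lam * mmd_rbf(features_source, features_target)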

16:40
Zhen Chen (Shanghai Jiao Tong University; Politecnico di Milano, China)
Lanxiang Liu (Harbin Institute of Technology; Politecnico di Milano, China)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Ershun Pan (Shanghai Jiao Tong University, China)
Collaborative Kernel-based Nonlinear Degradation Modeling with Transfer Learning for Remaining Useful Life Prediction
PRESENTER: Zhen Chen

ABSTRACT. A novel nonlinear collaborative modeling method for remaining useful life (RUL) prediction is proposed. This method uses a kernel-based Wiener process (KWP) model, which formulates a nonlinear drift function as a weighted combination of kernel functions. Compared with existing Wiener process models, this kind of modeling method can characterize the non-linearity of degradation more accurately and flexibly. To address the problem of error accumulation and lack of data in long-term prediction, a transfer learning scheme based on the KWP models is developed by leveraging multiple historical degradation trends from different units to collaboratively model the degradation process of interest with limited data. Positive transfer learning is realized by introducing cross-correlations into the drift functions, yielding more robust and accurate results than those obtained by constructing multiple individual models, one for each unit. The unknown model parameters are estimated by a Bayesian algorithm. Then, based on the KWP model, a closed-form expression of the RUL distribution is derived for uncertainty quantification. An online framework is also proposed to iteratively predict the RUL. Finally, the proposed method is verified on lithium-ion battery datasets and compared to existing methods. The outcomes demonstrate the effectiveness and superiority of the proposed method for RUL prediction.
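
A generic form consistent with this description (the paper's exact notation may differ) writes the degradation level as a Wiener process whose drift is a weighted kernel expansion,
\[ X(t) = X(0) + \int_{0}^{t} \mu(s;\mathbf{w})\,\mathrm{d}s + \sigma B(t), \qquad \mu(t;\mathbf{w}) = \sum_{j=1}^{J} w_{j}\, k(t, c_{j}), \]
where \(k(\cdot, c_{j})\) are kernel functions centred at \(c_{j}\), \(\mathbf{w}\) collects the weights shared or correlated across units in the collaborative scheme, \(\sigma\) is the diffusion coefficient and \(B(t)\) is standard Brownian motion; the RUL is then the first-passage time of \(X(t)\) to the failure threshold.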

15:40-17:00 Session 5D: S.12: Dynamic risk assessment and emergency techniques for energy system I
Chairs:
Huixing Meng (Beijing Institute of Technology, China)
Mimi Zhang (Trinity College Dublin, The University of Dublin, Ireland)
Location: CQ-106
15:40
Jan Soedingrekso (Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Germany)
Tanja Eraerds (Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Germany)
Martina Kloos (Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Germany)
Jörg Peschke (Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Germany)
Josef Scheuer (Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Germany)
Cluster Analysis on Dynamic Event Trees using the Restructured Software Tool MCDET
PRESENTER: Jan Soedingrekso

ABSTRACT. The use of dynamic event trees within probabilistic safety analyses provides insights into the effects of uncertainties on time-dependent processes for complex systems. The approach thereby overcomes the limitations of a classical probabilistic safety analysis (PSA) with a predefined fixed order of events. The effects of the high-dimensional parameter space induced by state and time variations of events can be simulated and represented using a Monte Carlo approach in combination with the dynamic event tree simulation. This, however, leads to large samples of event trees and time-dependent scenarios, requiring machine learning algorithms to analyze the amount of data produced. The first step of such a data analysis is a selection of relevant features, e.g. time sequences of parameters, in order to reduce the dimensionality and the redundancy of the information. In a second step, an unsupervised classification is applied to group the different scenarios into several clusters, which can then be further analyzed. Parameterizing these clusters can provide further insights into the influence of uncertainties on the PSA results. The software tool MCDET (Monte Carlo Dynamic Event Tree) has recently been restructured and modularized into a Python-based tool with further enhancements, including generic feature selection and cluster identification algorithms for the post-processing. In this contribution, recent developments of MCDET, including the software restructuring and the data analysis tools, are presented. In addition, case studies for a simplified tank overflow and a steam generator tube rupture scenario are shown.
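
The two post-processing steps described (feature reduction of the time sequences, then unsupervised grouping of scenarios) could be prototyped as in the following scikit-learn sketch; the data, component counts and number of clusters are hypothetical, and MCDET's own algorithms are not reproduced here.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Stand-in for MCDET output: one monitored parameter (e.g. tank level) per sampled
    # dynamic-event-tree scenario, shape (n_scenarios, n_timesteps)
    scenarios = rng.normal(size=(500, 200)).cumsum(axis=1)

    # Step 1: feature selection / dimensionality reduction of the time sequences
    features = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(scenarios))

    # Step 2: unsupervised classification into clusters for further analysis
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
    for c in range(4):
        print(f"cluster {c}: {(labels == c).sum()} scenarios")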

16:00
Tarannom Parhizkar (University of California, Los Angeles, United States)
Saeed Nozhati (University of California, Los Angeles, United States)
Ali Mosleh (University of California, Los Angeles, United States)
Jon Eric Thalman (Pacific Gas and Electric Company, California, USA, United States)
WILDFIRE RISK ASSESSMENT AND MANAGEMENT OF POWER GRIDS

ABSTRACT. Wildfire events have been growing in frequency and intensity in California in recent years. They not only threaten public safety but can result in billions of dollars in direct and indirect damages for single events. This presentation provides an integrated methodology for the assessment and management of risks due to wildfires caused by different drivers, including equipment and vegetation failures in electrical distribution lines. The presentation offers a scenario-based approach that is rooted in a fundamental and popular risk theory and forms the basic platform for the integration of techniques and models needed to identify the wildfire risk scenarios and quantify their probabilities. The backbone of the framework is the Hybrid Causal Logic (HCL) approach, a multi-layered structure that integrates event sequence diagrams (ESDs), fault trees (FTs), and Bayesian Networks (BNs) and allows for the inclusion of computer simulation models for phenomenological events such as fire spread. The method is applied to an electric power grid composed of transmission and distribution lines over a vast territory, subject to local characteristics that need to be considered in the models for fire initiation and spread, and in terms of other risk factors such as the ability to evacuate communities threatened by an approaching fire. As is the case with nearly all quantitative risk assessments of complex open systems, there are significant uncertainties due to limitations of some critical data and of the state of knowledge about key phenomena, and uncertainties stemming from needed simplifications and approximations. The proposed methodology covers this important aspect in a formal and systematic way.

16:20
Mingjun Yin (Beijing Institute of Technology, China)
Huixing Meng (Beijing Institute of Technology, China)
Xu An (Beijing Institute of Technology, China)
PERT-based emergency response program for fire accidents of electric vehicles
PRESENTER: Mingjun Yin

ABSTRACT. Electric vehicles equipped with lithium-ion batteries are playing a key role in daily life. Accidents related to electric vehicles usually generate significant losses. Hence, an efficient emergency response program can reduce the losses in accidents. In this paper, we employ the program evaluation and review technique (PERT) to depict the emergency response procedure for a fire accident of an electric vehicle in a parking lot. By considering the logical and sequential relationships, we develop the corresponding PERT model. Subsequently, based on the expected completion time of the emergency response and the working duration of each activity, the time parameters of the PERT model are obtained. Eventually, we obtain the critical path and the probability of completing the emergency response under different time constraints.
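
To make the PERT mechanics concrete, the sketch below computes expected activity durations, the critical path and the probability of meeting a time constraint for a hypothetical activity network; the activities, durations and precedence relations are illustrative assumptions, not the ones analysed in the paper.

    import math

    # (optimistic, most likely, pessimistic) durations in minutes -- assumed values
    activities = {
        "alarm": (1, 2, 4), "evacuate": (5, 8, 15), "call_brigade": (1, 2, 3),
        "suppress_fire": (10, 15, 30), "cool_battery": (20, 30, 60),
    }
    predecessors = {
        "alarm": [], "evacuate": ["alarm"], "call_brigade": ["alarm"],
        "suppress_fire": ["call_brigade"], "cool_battery": ["suppress_fire"],
    }

    te = {a: (o + 4 * m + p) / 6 for a, (o, m, p) in activities.items()}   # expected durations
    var = {a: ((p - o) / 6) ** 2 for a, (o, m, p) in activities.items()}   # variances

    earliest = {}
    def finish(a):  # earliest finish time (forward pass on the acyclic network)
        if a not in earliest:
            earliest[a] = te[a] + max((finish(q) for q in predecessors[a]), default=0.0)
        return earliest[a]

    end = max(activities, key=finish)          # last activity on the longest path
    critical = [end]
    while predecessors[critical[-1]]:          # the predecessor with the largest earliest
        critical.append(max(predecessors[critical[-1]], key=finish))  # finish lies on the longest path
    critical.reverse()

    mean_t = finish(end)
    sigma = math.sqrt(sum(var[a] for a in critical))
    target = 60.0                              # required completion time (minutes, assumed)
    prob = 0.5 * (1.0 + math.erf((target - mean_t) / (sigma * math.sqrt(2.0))))
    print(f"critical path {critical}, E[T] = {mean_t:.1f} min, P(T <= {target:.0f} min) = {prob:.2f}")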

16:40
Lunhu Hu (School of Reliability and Systems Engineering, Beihang University, China)
Xing Pan (School of Reliability and Systems Engineering, Beihang University, China)
Rui Kang (School of Reliability and Systems Engineering, Beihang University, China)
A Simulation Method for Dynamic Risk Assessment of Uncertain Random System
PRESENTER: Lunhu Hu

ABSTRACT. An uncertain random system (URS) is a system in which aleatory uncertainty and epistemic uncertainty are expressed by probability and uncertain measure, respectively. Much recent research has stated that the URS is representative of almost every practical system and has proposed many analytical methods for URS regarding topics such as reliability analysis, risk assessment, and lifetime prediction. However, at present, there is no method specialized for the dynamic risk assessment of URS. This paper proposes a simulation method for the dynamic risk assessment of URS, in which the discrete evolution of the system state and the continuous evolution of process variables are integrated. The main attention is given to the development of an algorithm for addressing the joint propagation of probability and uncertain measure in dynamic accident scenarios of a URS. A hypothetical dynamic system is analysed as a case study, and the result shows the effectiveness of the proposed method. This work can be seen as pioneering work for the dynamic risk assessment of URS. Several improvements can be brought to the proposed method, such as integrating repairs and human errors into its evaluation process, which are expected to be addressed in our future studies.

15:40-17:00 Session 5E: Structural Reliability II
Chair:
Jana Markova (Czech Technical University in Prague, Klokner Institute, Czechia)
Location: LG-20
15:40
Meng-Ze Lyu (College of Civil Engineering, Tongji University, China)
Jian-Bing Chen (College of Civil Engineering, Tongji University, China)
First-passage reliability analysis for high-dimensional nonlinear systems via the physically driven GE-GDEE
PRESENTER: Meng-Ze Lyu

ABSTRACT. First-passage reliability assessment of engineering structures under disastrous stochastic dynamic excitations is of paramount importance for the performance-based decision-making of structural design. However, it is still a great challenge due to the coupling of nonlinearity and randomness in high-dimensional systems. In the present paper, a globally-evolving-based generalized density evolution equation (GE-GDEE) is derived in terms of only one or two response quantities of interest in a high-dimensional nonlinear system. The established GE-GDEE is just a one- or two-dimensional partial differential equation (PDE) with respect to the transient probability density function (PDF) of the quantities of interest. The effective drift coefficient(s) in the GE-GDEE represents the physically driving force for the evolution of the PDF in the global sense, and can be identified mathematically as the conditional expectation of the original drift coefficient(s) in the high-dimensional equation of motion. For this reason, the proposed approach can be called the physically driven GE-GDEE. Some representative deterministic analyses of the underlying physical system can be performed to provide data for the identification of the effective drift coefficient(s), and then the GE-GDEE can be solved numerically. For the purpose of first-passage reliability, the GE-GDEE with respect to the absorbing boundary processes (ABPs) can be established to obtain the remaining PDF in the safe domain and, further, the time-variant first-passage reliability. A numerical example is presented to verify the efficiency and accuracy of the approach.
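
In the one-dimensional case described above, the GE-GDEE for a single quantity of interest \(Z(t)\) takes the form
\[ \frac{\partial p_{Z}(z,t)}{\partial t} = -\frac{\partial}{\partial z}\big[ a_{\mathrm{eff}}(z,t)\, p_{Z}(z,t) \big], \qquad a_{\mathrm{eff}}(z,t) = \mathbb{E}\big[\dot{Z}(t) \,\big|\, Z(t) = z\big], \]
where \(p_{Z}(z,t)\) is the transient PDF of \(Z(t)\) and the effective drift \(a_{\mathrm{eff}}\) is the conditional expectation of the original high-dimensional drift, identified from a set of representative deterministic analyses of the physical system.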

16:00
Francesca Turchetti (University of Strathclyde, UK)
Enrico Tubaldi (University of Strathclyde, UK)
Edoardo Patelli (University of Strathclyde, UK)
Paolo Castaldo (Politecnico di Torino, Italy)
Deborah Di Pilato (Politecnico di Torino, Italy)
Christian Málaga-Chuquitaype (Imperial College London, UK)
Damage assessment of bridge piers subjected to multiple earthquakes: Markov model vs regression models

ABSTRACT. A large percentage of the world’s infrastructure is located in earthquake-prone regions, where it is subjected to repeated seismic excitations during its design life. Multiple earthquakes can, over a long period of time, result in a progressive reduction of structural capacity, and this can eventually lead to the collapse of the structure with a devastating impact in terms of human lives and economic losses. The problem of damage accumulation under repeated events close in time has been experienced several times in the past, for example during the Umbria-Marche earthquake sequence of September 1997 and the Christchurch (New Zealand) sequence starting in September 2010. In both cases, the weakening of structural capacity following a main shock led to collapse under the following, less intense aftershocks. The life-cycle analysis of civil constructions has taken a central role in research and practice over the last years. Moreover, due to the variability involved in the estimation of the destructive potential of future events, the seismic risk assessment of critical infrastructure, such as bridges, has to be carried out with the aid of adequate probabilistic models able to best predict future scenarios and account for the high uncertainties involved in the analysis. This study aims to review, discuss and compare two recently developed methodologies for the prediction of damage accumulation in structures subjected to multiple earthquakes within their lifetime. In particular, a method based on a probabilistic seismic demand model (PSDM) and a Markov-chain-based approach are considered. In order to assess these methods, a simulation-based approach for evaluating multiple stripe analysis is employed; it provides a reference solution against which the other methods are compared. A stochastic earthquake hazard model is considered for generating sample sequences of ground motion records that are then used to estimate the probabilistic distribution of the damage accumulated during the time interval of interest. Besides evaluating the effectiveness of each approach, some possible improvements of the cumulative demand model are tested. The comparison between these methodologies is carried out by examining two reinforced concrete (RC) bridge models with single piers of different heights, and the Park-Ang damage index (1985) is used to describe the damage accumulation. The results demonstrate the importance of considering the possible occurrence of multiple shocks when estimating the performance of structures, and highlight the strengths and drawbacks of the investigated methodologies.
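
In generic terms (the precise formulation in the compared studies may differ), the Markov-chain approach tracks a discretised damage state \(D_{n}\) (e.g. a binned Park-Ang index) after the \(n\)-th event through
\[ \mathbf{p}_{n} = \mathbf{p}_{0}\,\mathbf{P}^{n}, \qquad P_{ij} = \Pr\big(D_{n+1} = j \mid D_{n} = i\big), \]
with the collapse state absorbing, so that the probability of collapse within a given time window follows by weighting \(\mathbf{p}_{n}\) with the probability distribution of the number of events occurring in that window.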

16:20
Seungjun Lee (Ulsan National Institute of Science and Technology (UNIST), South Korea)
Jaebeom Lee (Korea Research Institute of Standards and Science (KRISS), South Korea)
Young-Joo Lee (Ulsan National Institute of Science and Technology (UNIST), South Korea)
An efficient method employing sequentially-updated surrogate model for repeated finite element reliability analysis
PRESENTER: Seungjun Lee

ABSTRACT. As risk-informed design and maintenance are used in various ways for structures, it is often necessary to perform finite element reliability analysis (FERA) repeatedly with various values of load intensity. Although the first-order reliability method (FORM) has been widely adopted to perform FERA efficiently, the computational cost can still be large, particularly when the target structure is nonlinear or complicated. In this research, a new method is proposed to perform FORM-based FERA more efficiently. In the proposed method, a kriging-based surrogate model is constructed using the results of prior FORM analyses, and the surrogate model allows the next FORM analysis to start from predicted optimal values of the random variables. This can reduce the computational cost significantly compared with the conventional method, in which the FORM analysis generally starts from the mean values of the random variables. In addition, as the FORM-based FERA is repeated with multiple load intensity values, the surrogate model is sequentially updated and further optimized starting values can be provided. The proposed method was applied to calculate flood fragility estimates for a bridge with various values of water velocity, and it was observed that the analysis cost was less than 60% of the original one, with a similar level of accuracy in the analysis results.
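A minimal sketch of the core idea, under several simplifying assumptions: a scalar load intensity, a toy limit-state function, a bare-bones HL-RF iteration standing in for a full FORM analysis, and a scikit-learn Gaussian-process (kriging) surrogate mapping intensity to the previously found design points. None of the function names or parameter values below come from the paper.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def run_form(g, u0, h=1e-6, tol=1e-8, max_iter=100):
        """Very small HL-RF iteration with finite-difference gradients (illustrative only)."""
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            grad = np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(u.size)])
            u_new = (grad @ u - g(u)) / (grad @ grad) * grad
            if np.linalg.norm(u_new - u) < tol:
                return u_new
            u = u_new
        return u

    def limit_state(u, s):
        """Toy limit state in standard normal space, parameterised by load intensity s."""
        return 4.0 - 0.4 * s - (0.9 * u[0] + 0.5 * u[1] + 0.2 * u[2])

    n_dim = 3
    surrogate = GaussianProcessRegressor()
    intensities, design_points = [], []

    for s in np.linspace(1.0, 5.0, 9):
        if len(intensities) < 2:
            u0 = np.zeros(n_dim)                        # early analyses start from the mean point
        else:
            u0 = surrogate.predict(np.array([[s]]))[0]  # later analyses start from a predicted design point
        u_star = run_form(lambda u: limit_state(u, s), u0)
        intensities.append([s])
        design_points.append(u_star)
        surrogate.fit(np.array(intensities), np.array(design_points))  # sequential surrogate update
        print(f"s = {s:.2f}  beta = {np.linalg.norm(u_star):.3f}")

In a real FERA, each evaluation of the limit-state function would involve a nonlinear finite element run, which is why good starting points pay off.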

16:40
Morten Gustavsen (Institute for Energy Technology, Norway)
Lucas Stephane (Institute for Energy Technology, Norway)
Ole Jakob Ottestad (Norwegian Nuclear Decommissioning, Norway)
Robert Ganz (Cowi Norway, Norway)
Situated risk integration with 3D-BIM: design and user evaluation insights
PRESENTER: Morten Gustavsen

ABSTRACT. The recent adoption at scale of Building Information Modeling (BIM) across multiple industries and the expansion of the number of BIM dimensions require various improvements. Two main BIM aspects that currently require improvement are: (1) risk and safety integration with BIM, and (2) end-of-life (EOL) integration with BIM. While the latter aspect is usually related to a specific stage in the asset lifecycle, the former is intertwined with all BIM dimensions. Indeed, BIM data should be enhanced with risk and safety management to optimize the protection of the stakeholders involved, i.e., owners and contractors as well as public end users.

Therefore, our main research question is: how can 3D-BIM be enhanced with user-friendly risk management features for the construction and deconstruction phases? The research question was answered by developing and evaluating a software prototype that integrates a 3D-BIM environment for visualization with matrix-based risk assessment aligned with Common Safety Methods for Risk Assessments. Furthermore, the project conducted two user evaluations with separate teams of experienced risk managers. For the deconstruction and decommissioning case, a team with a nuclear background from the Halden Reactor performed risk assessments using the software for a practical nuclear decommissioning case. The second evaluation was conducted with a team of experienced Reliability, Availability, Maintainability, and Safety (RAMS) experts performing risk assessments based on the ongoing Nygårdstangen-Bergen-Fløen railway project.

Lessons learned from IFE’s research on integrating safety with nuclear power plant control room 3D design were generalized to transportation and nuclear decommissioning industry use-cases. The overarching concept of integrating safety and risk information repositories with the 3D BIM environment was tailored accordingly from a user-centered perspective, following the design thinking stages. In the transportation use-case, RAMS management was integrated with BIM and 3D geological models, enabling safety-case managers to anchor and work with RAMS information at specific locations in the 3D environment. In the nuclear decommissioning EOL use-case, hazard identification, risk assessment and risk-reducing measures were integrated with BIM models, enabling safety-case managers to anchor and work with risk-related information directly in the 3D model. Technical integration provides novel features both to safety-case managers and to the organizational workflow in terms of teamwork, communication, sharing and traceability. As such, it is focused on enhancing shared and situated situation and risk awareness at scale and optimizing the overall working processes. The risk-BIM integrated 3D tools were evaluated with expert users in both transportation and nuclear decommissioning in terms of situation awareness, risk awareness and usability, and the results from our initial expert-user samples are very encouraging. Future applications are foreseen to be included in current work processes and to contribute at a larger scale to the BIM community. This research was sponsored by the Research Council of Norway in partnership with Bane NOR, the Norwegian Public Roads Administration, Cowi, Multiconsult, and by the Norwegian Nuclear Decommissioning agency.

15:40-17:00 Session 5F: Human Factors and Human Reliability: HF in Crisis & Emergency response
Chair:
Scott MacKinnon (Chalmers University, Sweden)
Location: LG-21
15:40
Jaehyun Kim (Chosun University, South Korea)
Jonghyun Kim (Chosun University, South Korea)
Development of a method for assessing the reliability of emergency response organizations in Korea
PRESENTER: Jaehyun Kim

ABSTRACT. In Korean nuclear power plants (NPPs), an emergency response organization (ERO) is established when an event that has the potential of releasing radiation occurs. The ERO consists of many sub-organizations, such as the regulatory body, local government, hospitals, fire stations, and the utility’s organizations, including the technical support center (TSC) and the emergency operation facility (EOF). Although the reliability of the ERO is crucial for reducing the radiological risk to the public in an accident, methods to assess the reliability of EROs in NPPs have not been developed so far.

This study aims to develop a method for assessing the reliability of the ERO in Korean NPPs, based on the concept of resilience. First, the study identified the factors contributing to ERO reliability through a literature survey on resilience and resilience engineering. The contributing factors were then evaluated and modified using the Delphi method, in which twenty subject matter experts and members of Korean EROs participated. Through the literature survey and the Delphi method, a hierarchical structure of contributing factors was developed. Next, the relative importance of the contributing factors was evaluated using the Analytic Hierarchy Process (AHP). Finally, quantitative measures to evaluate the lowest-level factors were also suggested.
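As a small illustration of the AHP step, the snippet below derives relative importance weights from a pairwise comparison matrix via the principal eigenvector; the matrix values and the three-factor structure are invented for the example, not taken from the study.

    import numpy as np

    # Illustrative pairwise comparison matrix for three contributing factors
    # (entry [i, j] is the judged importance of factor i relative to factor j).
    A = np.array([
        [1.0, 3.0, 5.0],
        [1 / 3, 1.0, 2.0],
        [1 / 5, 1 / 2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                       # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                          # normalised importance weights

    ci = (eigvals.real[k] - len(A)) / (len(A) - 1)    # consistency index of the judgements
    print("weights:", weights, "CI:", round(ci, 3))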

16:00
Cyril Orengo (IMT Mines Alès, France)
Florian Tena-Chollet (IMT Mines Alès, France)
Sophie Sauvagnargues (IMT Mines Alès, France)
Medium-sized city pedestrian evacuation in a flood context: simulation using an agent based model.
PRESENTER: Cyril Orengo

ABSTRACT. Background. When disaster mitigation cannot rely on civil engineering structures, authorities need to involve the population in ensuring its own safety. Civilian evacuation can be an efficient response to avoid casualties (CEPRI, 2014). Possibly involving thousands of people, it requires intensive exercising. Launching an evacuation exercise at the scale of a city can be challenging in terms of logistics. In addition, it goes along with high levels of uncertainty, due to the complexity of the hazards, which makes accurate measurement of spatial extent and severity difficult, and due to heterogeneous population compliance with instructions. However, the rise of artificial intelligence technology allows numerical simulation of the process (Banos, 2010) and the implementation of various scenarios to study the impact of these human and contextual factors on an evacuation process.

Aims. Enhancing knowledge of the evacuation process is still a challenging topic. Because it is hard to exercise in real conditions, we aim to propose a numerical simulation of the process. This study aims to simulate the egress of several thousand civilians, travelling on foot, out of a risk area, and to study the impact of various hazard and population behaviour scenarios on evacuation time. To that end, a multi-agent system framework is used, allowing the implementation of intelligent agents (Chaib-draa, 2001; Drogoul, 1993).

Methods. Using a multi-agent simulation framework (GAMA), the environment is integrated from a geographic database. The simulation relies on Dirk Helbing's social force model (Helbing & Molnár, 1995), rendering realistic pedestrian behaviour. Several event scenarios will be applied to the simulation.

Results. The case study is the Agglomération d’Alès, threatened by a flood wave that would be caused by a failure of the nearby dam of Sainte-Cecile d’Andorge. Total evacuation time will be compared across all scenarios, seeking to highlight the determinant human and contextual factors in the evacuation process.

Conclusion. This work intends to help crisis management organisations consider the possibility of launching an evacuation involving thousands of people. Outputs of the simulation show the impacts of several human and contextual facets on evacuation times.
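As a rough illustration of the pedestrian dynamics underlying such a simulation (the study itself uses the GAMA platform; this is only a minimal Python sketch of the social force idea with invented parameter values), each agent is driven towards its exit at a desired speed while being repelled by nearby agents:

    import numpy as np

    def social_forces(pos, vel, goals, v0=1.3, tau=0.5, A=2.0, B=0.3):
        """Driving force towards each agent's goal plus exponential repulsion from neighbours."""
        forces = np.zeros_like(pos)
        for i in range(len(pos)):
            direction = goals[i] - pos[i]
            direction /= np.linalg.norm(direction)
            forces[i] = (v0 * direction - vel[i]) / tau           # relaxation towards desired velocity
            for j in range(len(pos)):
                if j == i:
                    continue
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d)
                forces[i] += A * np.exp(-dist / B) * d / dist     # repulsion from agent j
        return forces

    # One explicit Euler integration of a handful of agents walking towards an exit.
    pos = np.array([[0.0, 0.0], [1.0, 0.2], [0.5, -0.4]])
    vel = np.zeros_like(pos)
    goals = np.array([[50.0, 0.0]] * 3)
    dt = 0.1
    for _ in range(100):
        vel += dt * social_forces(pos, vel, goals)
        pos += dt * vel

In the full model, the evacuation time per scenario would be the instant at which the last agent leaves the risk area.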

16:20
Sean Loughney (Liverpool John Moores University, UK)
Kenneth Ngwoke (Liverpool John Moores University, UK)
Serdar Yildiz (World Maritime University, Sweden)
Jin Wang (Liverpool John Moores University, UK)
Özkan Uğurlu (Ordu University, Turkey)
INVESTIGATION AND EVALUATION OF MARINE ACCIDENTS IN TERMS OF GROUNDING AND CONTACTS/COLLISIONS IN THE ENGLISH CHANNEL UTILISING THE HFACS APPROACH
PRESENTER: Sean Loughney

ABSTRACT. World shipping and maritime activities are largely conducted in a complex and perilous environment. Marine accidents, undoubtedly the most serious type of ship mishap, not only cause huge economic losses to shipping companies but are also frequently accompanied by casualties and ecological damage. Therefore, it is crucial to identify the relationship between the main factors and casualties of marine accidents for the purpose of mitigating such occurrences. This research is concerned with accident investigation and evaluation in terms of grounding, sinking and collision incidents in the English Channel. The Human Factors Analysis and Classification System (HFACS) is applied to collision/contact and grounding accidents in the English Channel, particularly the Dover Strait. The period for accident data gathering and analysis is 2004 to 2020, during which 17 grounding and 10 collision/contact incidents occurred in the study area. The results show that the HFACS structure is compatible with collision/contact and grounding accidents and identify that Errors (under Unsafe Acts) and Substandard Team Members (under Preconditions for Unsafe Acts) account for the majority of accident causes.

16:40
Eric Rigaud (Mines Paristech, France)
Laurentiu-Marian Neagu (University Politehnica of Bucharest, Romania)
Improving resilience performance through Intelligent Tutoring System
PRESENTER: Eric Rigaud

ABSTRACT. The Resilience Engineering perspective on safety management provides concepts, models, and tool prototypes aimed at enhancing organizations' capacity to respond to the diversity of situations that may arise. One current challenge is the appropriation of the Resilience Engineering perspective and its associated tools by organizations. An Intelligent Tutoring System (ITS) is a self-paced, learner-oriented, highly adaptable, and interactive learning environment. It aims to provide digital functionalities to enhance the student learning experience and the overall learning process. Adaptive algorithms aim to provide immediate personalized guidance or feedback by adapting to each student's knowledge, learning abilities, mood, emotion, and needs. They also manage teaching strategies and diagnose student learning throughout the teaching process. The design of a resilience engineering curriculum, with dedicated methods and tools to facilitate its appropriation by organizations, can be a way to facilitate the development of organizational resilience. The objective of the paper is to contribute to the definition and design of a resilience engineering training framework. A resilience engineering-based model of sociotechnical resilience skills is first presented. Then, a dedicated resilience competence development project management approach is illustrated, and finally the requirements of a dedicated intelligent tutoring system are presented. The first section of the paper describes a set of essential skills resulting from the analysis of the resilience engineering literature. The second section presents a dedicated competence framework. The third section discusses challenges associated with the development of an Intelligent Tutoring System dedicated to Resilience Engineering.

15:40-17:00 Session 5G: Prognostics and System Health Management II: Neural Networks
Chair:
Bruno Castanier (Université d'Angers, France)
Location: CQ-009
15:40
Fabian Mauthe (Hochschule Esslingen - University of Applied Sciences, Germany)
Marcel Braig (Hochschule Esslingen - University of Applied Sciences, Germany)
Peter Zeiler (Hochschule Esslingen - University of Applied Sciences, Germany)
Performance Evaluation of Neural Network Architectures on Time Series Condition Data for Remaining Useful Life Prognosis Under Defined Operating Conditions
PRESENTER: Fabian Mauthe

ABSTRACT. The prognosis of the remaining useful life is one of the key tasks in prognostics and health management. This paper compares the performance of currently very popular neural network architectures in time series forecasting for remaining useful life prognosis under defined operating conditions. These include long short-term memory networks, gated recurrent unit networks, as well as temporal convolutional networks. In addition, feedforward neural networks and one-dimensional convolutional neural networks are considered. Furthermore, metrics for the performance evaluation of remaining useful life prognosis are reviewed in this paper. Both established metrics and new metrics specially adapted to remaining useful life prognosis are used for the performance evaluation of the neural network architectures. Through the specific metrics, the requirements of the prognosis task are considered more strongly. The evaluation of the networks for remaining useful life prognosis in this paper complements previous general assessments regarding the suitability of the approaches for time series forecasting. As it turns out, there are significant performance differences between the architectures.
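To make one of the compared architecture families concrete, the sketch below is a minimal Keras LSTM regressor mapping fixed-length windows of condition-monitoring data to a remaining useful life value; the window length, channel count, and layer sizes are arbitrary choices for illustration, not the configurations evaluated in the paper.

    import numpy as np
    from tensorflow import keras

    window, n_sensors = 30, 14          # illustrative window length and sensor-channel count

    model = keras.Sequential([
        keras.Input(shape=(window, n_sensors)),
        keras.layers.LSTM(64, return_sequences=True),
        keras.layers.LSTM(32),
        keras.layers.Dense(1, activation="relu"),   # remaining useful life is non-negative
    ])
    model.compile(optimizer="adam", loss="mse")

    # Dummy data with the expected shapes; in practice these would be sliding windows
    # over run-to-failure trajectories, with the true remaining useful life as target.
    X = np.random.rand(256, window, n_sensors).astype("float32")
    y = (100 * np.random.rand(256, 1)).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)

A gated recurrent unit or temporal convolutional variant would differ mainly in the layers used in place of the LSTM blocks.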

16:00
Chenyang Lai (Politecnico di Milano, Italy)
Piero Baraldi (Politecnico di Milano, Italy)
Ibrahim Ahmed (Politecnico di Milano, Italy)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Alejandro Del Cueto (BSH Electrodomésticos España, Spain)
Javier Gil (BSH Electrodomésticos España, Spain)
Sergio Llorente (BSH Electrodomésticos España, Spain)
Monitoring Degradation of Insulated Gate Bipolar Transistors in Induction Cooktops by Artificial Neural Networks
PRESENTER: Chenyang Lai

ABSTRACT. Insulated Gate Bipolar Transistors (IGBTs) are among the most critical components of the inverters of induction cooktops. Their degradation is mainly caused by the thermal stress to which they are subjected. Since the thermal stress is proportional to the IGBT case temperature, this work develops a method for predicting the case temperature of IGBT modules using signals monitored during in-field operation of induction cooktops. The main challenge to be addressed is that induction cooktops are typically used under variable, user-dependent settings, which generate very different evolutions of the temperature profiles. The proposed method is based on the selection of the measured signals to be used as input of the prediction model through a wrapper feature selection approach which employs a multi-objective genetic algorithm (MOGA). Then, an Artificial Neural Network (ANN) is used to predict the case temperature. The proposed method has been verified using real data collected by BSH Electrodomésticos España (BSHE) in laboratory tests. The obtained results show that the developed ANN model is able to provide accurate estimations of the case temperature, which forms the basis of the condition monitoring of induction cooktop IGBTs.
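A heavily simplified sketch of the wrapper selection idea follows: an exhaustive search over small signal subsets, each scored by the cross-validated error of a plain scikit-learn ANN, standing in for the multi-objective genetic algorithm used in the paper (which would additionally trade accuracy against the number of selected signals). Data, dimensions, and hyperparameters are dummy values.

    import numpy as np
    from itertools import combinations
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((300, 8))                                   # dummy monitored signals
    y = 40 * X[:, 0] + 20 * X[:, 3] + rng.normal(0, 1, 300)    # dummy case-temperature target

    def subset_error(idx):
        """Wrapper evaluation: cross-validated error of an ANN trained on a candidate signal subset."""
        ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
        return -cross_val_score(ann, X[:, list(idx)], y, cv=3,
                                scoring="neg_mean_absolute_error").mean()

    candidates = [idx for k in (2, 3) for idx in combinations(range(X.shape[1]), k)]
    best = min(candidates, key=subset_error)
    print("selected signal indices:", best)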

16:20
Fatemeh Hosseinpour (Politecnico di Milano, Italy)
Ibrahim Ahmed (Politecnico di Milano, Italy)
Piero Baraldi (Politecnico di Milano, Italy)
Mehdi Behzad (Sharif University of Technology, Iran)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Horst Lewitschnig (Infineon Technology Austria AG, Siemensstrasse 2, 9500 Villach, Austria)
An Unsupervised Method for Anomaly Detection in Multi-Stage Production Systems Based on LSTM Autoencoders

ABSTRACT. In multi-stage production systems, products are manufactured on a lot basis through several processing steps, possibly involving various machines in parallel. In case of production of defective items, it is necessary to identify the production step responsible for the problem so that proper countermeasures can be taken. In this context, the objective of the present work is to develop a model for the detection of anomalies in the operation of a machine of a multi-stage production system. The main difficulties to be addressed are the lack of labeled data collected while anomalies are occurring in the considered production stage, and the large number of monitored signals in the system which can be considered for the detection. We then formulate the anomaly detection problem as unsupervised classification of multi-dimensional time series and propose an approach which consists of: (a) a model for the reconstruction of the time series, utilizing Deep Long Short-Term Memory (DLSTM) autoencoders, to capture the highly non-linear dynamics of the signals; and (b) the definition of an abnormality indicator based on the residuals, i.e., the differences between the measured and the reconstructed signal values. The proposed method is verified considering benchmark data from a plasma etching machine used in the semiconductor manufacturing industry.
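A minimal sketch of the two ingredients described above, an LSTM autoencoder reconstructing multivariate windows and a residual-based abnormality indicator thresholded on normal data, is given below; layer sizes, window length, and the dummy data are illustrative only, not the architecture tuned in the paper.

    import numpy as np
    from tensorflow import keras

    window, n_signals = 50, 12          # illustrative window length and signal count

    autoencoder = keras.Sequential([
        keras.Input(shape=(window, n_signals)),
        keras.layers.LSTM(32),                                    # encoder
        keras.layers.RepeatVector(window),
        keras.layers.LSTM(32, return_sequences=True),             # decoder
        keras.layers.TimeDistributed(keras.layers.Dense(n_signals)),
    ])
    autoencoder.compile(optimizer="adam", loss="mse")

    # Train on windows collected during normal operation only.
    X_normal = np.random.rand(512, window, n_signals).astype("float32")
    autoencoder.fit(X_normal, X_normal, epochs=2, batch_size=64, verbose=0)

    def abnormality_index(x):
        """Mean squared reconstruction residual of one window; large values flag anomalies."""
        return float(np.mean((x - autoencoder.predict(x[None, ...], verbose=0)[0]) ** 2))

    # Threshold set from the distribution of the indicator on normal data.
    threshold = np.percentile([abnormality_index(x) for x in X_normal[:100]], 99)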

16:40
Tiago G. Rosa (University of São Paulo, Brazil)
Arthur H. A. Melani (University of São Paulo, Brazil)
Fabio N. Kashiwagi (University of São Paulo, Brazil)
Miguel A. C. Michalski (University of São Paulo, Brazil)
Gilberto F. M. Souza (University of São Paulo, Brazil)
Gisele M. O. Salles (Companhia Paranaense de Energia - COPEL, Brazil)
Emerson Rigoni (Federal University of Technology – Parana, Brazil)
Data Driven Fault Detection in Hydroelectric Power Plants Based On Deep Neural Networks

ABSTRACT. Condition-based maintenance (CBM), whose primary objective is to identify upcoming equipment failure so that maintenance is proactively scheduled only when necessary, has been increasingly used in the industrial sector to improve asset reliability and safety and to increase overall system availability. Critical to the application of CBM, fault detection methods have been extensively studied, but industrial applications in complex rotating machines are still at an early stage of development. However, with the increasing presence of sensors in industrial plants and the increasing ease of storing and managing monitoring data, the feasibility of applying the CBM strategy in the industrial context has risen significantly. Recently, deep learning-based techniques for fault diagnosis have gained a lot of attention due to their versatility and efficiency in extracting features from monitored data. Deep neural networks (DNN), in particular, have been increasingly applied in fault detection due to their ability to perform sensor data fusion, i.e., to combine different monitored variables with the aim of increasing the accuracy of the detection results. In this paper, a set of fault detection methods that use variations of autoencoder-based DNNs was implemented on simulated data that emulates the behavior of a generating unit of a hydropower plant. These variations comprise the modulation of different hyperparameters, numbers, and types of layers, such as dense, long short-term memory (LSTM) and convolutional neural network (CNN) layers. The use of advanced abnormality detection techniques for this kind of machinery, especially deep learning-based ones, has not been explored as much as for assets of other power generation modalities. Hence, this study aims to investigate the feasibility and compare the performance of each of the proposed methods in order to select potential candidates to be implemented in real operational scenarios.

15:40-17:00 Session 5H: S.01: Advances in Well Engineering Reliability and Risk Management: data collection and quantitative methods
Chair:
Everton Lima (Petrobras, Brazil)
Location: CQ-105
15:40
Everton Lima (Petrobras S/A, Brazil)
Lucas Carvalho (Petrobras S/A, Brazil)
Beldo Macedo (Petrobras S/A, Brazil)
Guilherme Naegeli (Petrobras S/A, Brazil)
Danilo Colombo (Petrobras S/A, Brazil)
Methodology for subsea component reliability data collection according to international references: learnings and challenges.
PRESENTER: Everton Lima

ABSTRACT. Reliability data for subsea components and systems are of fundamental importance for the integrity, availability and, consequently, the profitability of the offshore oil and gas industry. Especially for well and subsea equipment and systems engineering, this endeavour starts at the very beginning of the conceptual design phase. To assure that an asset will perform according to its design intentions, engineers and technicians must estimate the behavior of an asset which does not yet exist. This is done by means of calculations and simulations of material strength and its resistance to severe environmental conditions, of flow assurance and, of course, of reliability, availability, and maintainability along the lifecycle. Nowadays there are some very important sources of reliability data for use in reliability analysis, for example WellMaster and the OREDA JIP, which was the basis for the ISO 14224 standard. This standard was first issued in 1999, defining taxonomies and information levels within the related disciplines required to make reasonable and reliable comparisons, as a kind of metric. The operating companies, however, for many different reasons, in many cases do not have data collected according to international standards and formats, adopting different assumptions instead. The main objective of this paper is to present a methodology based on the lessons learned, and the main challenges, related to the data survey for the subsea component “Flowline”. After the Introduction, where a brief description of the problem and the proposed solution is presented, the first section describes the core definitions of the standard and the database requirements. The second section presents the exercise made as the first step to track the information interrelations in the different software tools available, formatting the information according to the ISO 14224 standard and comparing it with the OREDA requirements to achieve a quality assurance level. In the next section the authors present a view of what could be improved in this process, including possible suggestions for information system improvements. The paper closes by presenting the efforts being made to build a powerful tool which extracts information in natural language, available in reports, to be later converted into the ISO and OREDA formats with expert support, using spreadsheets and written documents in PDF format. The message in this paper is useful for those operators and technicians who must obtain reliable data to estimate design and operation indicators for subsea equipment and components to support safety and efficiency.

16:00
João Mateus Santana (Federal University of Pernambuco, Brazil)
Caio Maior (Federal University of Pernambuco, Brazil)
Isis Lins (Federal University of Pernambuco, Brazil)
Márcio Moura (Federal University of Pernambuco, Brazil)
Rafael Azevedo (Federal University of Pernambuco, Brazil)
Eduardo Menezes (Federal University of Pernambuco, Brazil)
David Martins (Petrobras, Brazil)
Feliciano Da Silva (Petrobras, Brazil)
Marcus Vinicius Magalhães (Petrobras, Brazil)
Reliability-based Guidelines for Elaborating Technical Specifications of New Technologies

ABSTRACT. New technologies pose many challenges due to the various uncertainty factors commonly present, such as new designs, new and unknown environments, the introduction of interfaces, and new failure modes. The formulation of technical specifications is not necessarily elaborated by a professional with a reliability engineering background, but often by someone with a functional perspective. Such situations may result in technical specifications that do not consider the tests and analyses required to suitably assess equipment reliability throughout its development. This work proposes a set of guidelines that support the creation of technical specifications focused on reliability. These guidelines aim to serve as a basis for the elaboration of technical specifications with a reliability perspective, while not relying on the writer's ability and/or knowledge of reliability engineering. This is achieved by defining the reliability requirements of the new technology, encouraging the use of all available data, information, and documents that could shed light on equipment reliability, and suggesting approaches for choosing adequate testing requirements for the technology. This work also proposes a minimal set of test requirements and tools to be used at each technology readiness level. By applying this set of guidelines when creating technical specifications, we believe that development efforts can more efficiently assess and guarantee equipment reliability, thus reducing failures, accidents, and rework.

16:20
Rune Vikane (University of Stavanger, Norway)
Jon Tømmerås Selvik (University of Stavanger, Norwegian Research Centre, Norway)
Eirik Bjorheim Abrahamsen (University of Stavanger, Norway)
On the new acceptance criteria in NORSOK D-010 for plug and abandonment of wells
PRESENTER: Rune Vikane

ABSTRACT. As the Norwegian Continental Shelf matures, numerous wells will require permanent plug and abandonment (PP&A). Norwegian petroleum regulations outline how to ensure an acceptable level of well leakage risk post PP&A and refer repeatedly to NORSOK D-010. NORSOK D-010 is a national standard which contains requirements, recommendations and examples of acceptable approaches to well PP&A. The standard requires that “permanently abandoned wells shall be plugged with an eternal perspective…”, and a new inclusion in the standard is a criterion which quantifies the acceptable level of permeability for permanent well barrier materials. A key question is whether the new acceptance criteria in the standard are appropriate from a risk management perspective. To qualify as appropriate, the acceptance criteria should be precise, evaluable, approachable, motivating and logically consistent. This forms the basis for an evaluation of the appropriateness of key acceptance criteria in NORSOK D-010. The evaluation indicates that there are issues related to precision and to whether the criteria sufficiently motivate continuous improvement. The evaluation also indicates that the NORSOK D-010 standard contains examples of acceptable approaches to PP&A which may not fully comply with the requirements found in the standard. We conclude that the 2021 revision of NORSOK D-010 improves on previous revisions, and we lean towards concluding that the criteria are appropriate from a risk management perspective. Key issues should, however, be addressed in future revisions.

16:40
Andressa Nicolau (Federal University Of Rio de Janeiro, Brazil)
Maximiano Martins (Federal University Of Rio de Janeiro, Brazil)
Paulo Fernando Frutuoso E Melo (Federal University Of Rio de Janeiro, Brazil)
Marcelo Martins (Engineering Department, University of Sao Paulo, Brazil)
Adriana Schleder (Department of Industrial Engineering, Brazil)
Leonardo Barros (Research and Development Center of Petrobras, Brazil)
Rene Thiago Orlowski (Research and Development Center of Petrobras, Brazil)
Component Criticality Rating of a Subsea Manifold using FMECA

ABSTRACT. A subsea manifold is a metallic structure fixed to the seabed, in which valves and accessory equipment are installed that make it possible to couple Wet Christmas Trees and other production systems (such as the Pipeline End Manifold, the Pipeline End Termination and risers); it is required during the completion and production of oil and gas wells. The failure of a subsea manifold can cause serious damage to the environment and to human health, as well as stop production, causing large financial losses, so ensuring its integrity is extremely important for the sector. In this article, a Failure Mode, Effects and Criticality Analysis (FMECA) is presented for the criticality classification of the components of a typical subsea manifold cluster, Pipeline End Manifold and Pipeline End Termination. The final effect of equipment failure in the manifold was evaluated in terms of production interruption, production reduction, rupture, environmental pollution, plugging, leakage and structural collapse. Different means of detection were considered. The results of this study provide information about the weaknesses of the subsea manifold equipment. It is expected that this paper can contribute to future analyses of similar equipment and to future work involving risk analysis and inspection and maintenance planning.
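As a generic illustration of how an FMECA criticality ranking can be tabulated (the components, failure modes and scores below are invented placeholders, not the study's ratings), severity, occurrence and detectability scores can be combined into a risk priority number:

    # Illustrative failure modes: (component, failure mode, severity, occurrence, detectability),
    # each scored on a 1-10 scale.
    failure_modes = [
        ("isolation valve", "fails to close on demand", 9, 3, 4),
        ("connector",       "external leakage",         8, 2, 3),
        ("choke",           "erosion / reduced flow",   5, 4, 2),
    ]

    ranked = sorted(
        ((comp, mode, s * o * d) for comp, mode, s, o, d in failure_modes),
        key=lambda row: row[2],
        reverse=True,
    )
    for comp, mode, rpn in ranked:
        print(f"{comp:16s} {mode:28s} RPN = {rpn}")

The multiplicative RPN is only one common convention; the study's own criticality classification is based on the evaluated final effects and means of detection described in the abstract.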

15:40-17:00 Session 5I: S.06 B: Safety and Reliability in Road and Rail Transportation: Users' perceptions
Chair:
Vikram Pakrashi (University College Dublin, Ireland)
Location: CQ-107
15:40
Ajeni Ari (Technological University Dublin, Ireland)
Joseph Mietkiewicz (Hugin, Denmark)
Maria Chiara Leva (Technological University Dublin, Ireland)
Lorraine D'Arcy (Technological University Dublin, Ireland)
Mary Kinahan (Technological University Dublin, Ireland)
Women perception of personal safety on Public Transport in Ireland
PRESENTER: Ajeni Ari

ABSTRACT. The last four decades are bookended by reactions to well-publicized world events concerning women's safety, and discussions surrounding fear, safety, and the threat of sexual violence toward women are at the heart of the literature on gender in transport [Stanko, 1993; Levy, 2013; Lewis 2018]. Women report a more prominent level of fear and concern for safety than men in their use of public transport. With respect to environmental interactions and mobility, concerns for personal safety manifest as behavioural changes in when, how and why women interact with public transport (PT). Harassment and violence toward women are an endemic issue, with heightened concerns particularly linked to night hours, transport culture, security (technological and human) and system design that promotes feelings of isolation and/or vulnerability [Gekoski, et al, 2015; Easton & Smith, 2003]. There is a clear need for an environment that provides convenience and comfort while transiting on public transport, yet this is undermined by the need women feel to protect themselves in their travel practices. Second-nature (precautionary) behaviours, or changes of conduct or mode (to a private vehicle), are adopted to mitigate risk as much as possible, driven by fears of perceived or possible events, even by women who have not actually experienced some form of harassment or antisocial behaviour against them [Stanko 1993; Easton & Smith, 2003; Reid & Konrad 2004; Smith & Clarke, 2000; Dhillon & Bakaya, 2014; Stark 2018]. This paper aims to showcase the current perceptions of women as users of rail transport. The study investigates women's views of safety while accessing public transport and the relationships between women's perceptual conditioning and travel choice. Further research will analyse gender differences in mobility in view of safety and security, highlighting factors that affect and discriminate against the use of public transport. To further inform the research, the literature review outlines the state of the art on safety and public transport, psychological aspects of gender and safety, and women's mobility and accessibility.

16:00
Isabelle Roche Cerasi (SINTEF Community, Norway)
Trond Foss (SINTEF Community, Norway)
Hampus Karlsson (SINTEF Community, Norway)
Dagfinn Moe (SINTEF Community, Norway)
Safe circulation of pedestrians and cyclists at roadworks.

ABSTRACT. To achieve climate and transport political goals, urban areas should become more attractive to pedestrians and cyclists. In Norway, when the weather is clement, the number of roadworks increases greatly, disrupting the circulation of vulnerable road users (VRUs). Performed on behalf of the Norwegian Public Roads Administration (NPRA), this study was carried out to examine the current roadwork practices and to identify new measures that can be implemented to ensure the safe circulation of pedestrians and cyclists at roadworks. This study is based on interviews with stakeholders and an examination of the current national regulations. The results highlight a general lack of knowledge among stakeholders regarding the signage and markings that provide clear and useful information to pedestrians and cyclists on how to navigate quickly and safely through streets disrupted by roadworks. We also examined the behaviour of drivers, workers and vulnerable road users with cameras at a particular roadwork in the municipality of Trondheim in Norway. In general, passing by roadworks should be easy, comfortable, safe and intuitive. However, poor accessibility to quick and safe routes in all directions for pedestrians and cyclists is far too common. Temporary routes also lack signage and safety equipment, and hence conflicts between road users occur. Inappropriate signage greatly affects the circulation of pedestrians and cyclists: when they seek information to understand where to walk or cycle, the resulting confusion creates risky situations and alters the flow of traffic of both motorised vehicles and VRUs. Generally speaking, VRUs should be guided through corridors when no other safe alternative route is available. The width of temporary corridors should always be adapted to the number and movement of VRUs. Clear separation between VRUs and motorised traffic should also be realised, even if this entails reducing the space available for car traffic. Moreover, the potential dangers of using prohibited shortcuts should be explained to VRUs, although it is recommended to prevent shortcuts from being obvious to them. This is because whenever they find an opportunity for a shortcut, they often take it or follow others who take it. Overall, temporary routes should be similar to and as safe as the pavement that is closed because of the roadwork. If the route deviates too much from the original one, it is likely that a number of VRUs will choose to ignore the instructions and cross the roads outside the crossings. Hence, acceptable deviation conditions should be defined in the guidelines provided to the contractors. In case of long deviations, the measures implemented should prevent the VRUs from taking shortcuts. In general, contractors should be knowledgeable regarding the needs of road users and the measures that provide safety and facilitate the circulation of road users. They should also be aware of the challenges associated with travelling from one point to another, as well as the human attentional mechanisms that play key roles in understanding the environment: where one is and where one is going. They should also be knowledgeable regarding what to look for at work sites in order to create convenient routes for VRUs and place redirection signs with arrows at strategic locations. Moreover, solutions should be presented to prevent VRUs from crossing roads outside pedestrian crossings. In addition, new temporary crossings and cycle paths should be envisaged even if they are expensive.
This paper also presents the action plan provided to the NPRA to improve the practices applied at roadworks on the basis of the stakeholders’ roles and responsibilities. We highlight the need for improving the whole process related to roadworks, including the tender documentation, warning plan application and roadwork planning and execution. It was also observed that the lack of clear requirements and regulations led to tasks not being prioritised at roadworks, such as controlling the level of compliance with regulations. Consequently, some deficiencies are often overlooked. Therefore, tenders should specify new requirements and safety measures related to the safe circulation of VRUs, which should be priced on equal terms by the entrepreneurs offering their services. Generally, handbook N301, published by the Road Directorate of the NPRA, provides the national requirements regarding roadworks. This manual contains information on how roadworks should be planned and executed, as well as which safety and warning instructions should be observed. Handbook N301 should provide the best practices with examples of safe signage for pedestrians and cyclists and should also describe the potential impacts of inconvenient temporary routes. User-friendly digital solutions for creating diagrams should be made available to contractors to allow them to easily position recommended signs in the roadwork warning sign plan. Traffic data should also be included in the diagrams (direct data integration from the national road database), as well as information about the circulation of VRUs in the area, if available. Data on the common routes or bus stops used by the VRUs, or aggregated data for predicting their movement in the area, should be made available through digital solutions. Guidelines and technical support regarding which signs and markings should be included in the sign plan, based on the contextual traffic and roadwork conditions, should also be made available. In addition, the roadwork logbook and any changes in it should be better digitalised and made accessible to all stakeholders. Finally, the roadwork warning training course should include sessions focussing on the measures required to ensure the safe circulation of VRUs under different roadwork and traffic situations. It is also worth mentioning that the regulations outlined in the Construction Client Regulations and Sign Regulation should be improved by including the description of the requirements regarding the responsibilities and competences of the stakeholders.

16:20
Stefania Rasulo (Nord University; Norwegian University of Science and Technology (NTNU), Norway)
Gunhild Birgitte Sætren (Nord University, Norway)
Audrey L. H. van der Meer (Norwegian University of Science and Technology (NTNU), Norway)
Speed perception and its implications in road safety: a high-density EEG study
PRESENTER: Stefania Rasulo

ABSTRACT. Due to their limited experience of traffic scenarios and their immature perception-action responses, children are particularly vulnerable to road accidents. In fact, according to the World Health Organization (2013), traffic accidents are the second most frequent cause of death in children between 10 and 14 years worldwide. In addition, at the age of 12, pedestrians are most at risk of accidents, since an adult-level response to visual motion does not occur until 16-18 years of age.

To properly assess potential traffic hazards, an accurate perception of speed is crucial. In fact, speed is a key factor in both road traffic injuries and deaths: in places where the average speed limit exceeds 40 km/h, the risk of accidents involving child pedestrians is nearly three times higher.

We carried out this study to assess whether the perception of speed differs between children and adults. Evoked and oscillatory brain activity was investigated in response to forward visual motion at three different ecologically valid driving speeds of 30, 45 and 60 km/h, simulated through an optic flow pattern consisting of a virtual road with moving poles on either side of it. We tested a total of 24 participants, divided into two groups: 12 adolescents aged 12 years and 12 young adults.

Adolescents and adults displayed similar N2 latencies for low, medium, and high speed of around 260 ms, with no significant difference between groups, in line with previous findings (Rasulo, Vilhelmsen, van der Weel and van der Meer, 2020). However, 12-year-olds displayed slightly longer latencies than adults, and were not able to distinguish between motion speeds, suggesting that myelination is still going on in the adolescent brain.

In addition, further post-hoc analyses found a speed effect in the adult group, where low speed had a significantly shorter latency than medium speed. This effect was not present in the adolescent group, suggesting that in a traffic scenario, adolescents may not be able to properly evaluate the danger associated with a car approaching at 30 or at 45 km/h.

Our findings suggest that motion speed perception is still not fully developed in adolescence, and this emphasizes the need for traffic injury prevention interventions for school children.

16:40
An-Magritt Kummeneje (SINTEF Community, Norway)
Isabelle Roche-Cerasi (SINTEF Community, Norway)
Change in self-reported cycling habits, safety assessments, and accident experience in Norway over the last decade

ABSTRACT. Increasing the number of cyclists and pedestrians is a national climate and political goal prioritised by the Norwegian authorities. However, cycling facilities, weather and traffic safety conditions are important factors influencing bicycle commuting. During the last ten years, SINTEF carried out several studies for the Norwegian Public Roads Administration to map the use of the bicycle for transportation at the municipal, regional or national level. In the present study, we examine the 10-year results and give a general overview of the progress made to date.

The main objectives are to examine the differences in cycling habits, perceptions, and experience and to compare the development of various aspects related to bicycle use over the last decade. Bicycle accidents are underreported in the official traffic safety statistics, and these figures reflect neither how accident risks are perceived by the population nor how important perception is for understanding behaviour. Data were collected through several surveys (2010, 2014, 2017, 2018) at the municipal, regional or national level.

The respondents were asked whether they cycle and in which periods of the year they do so. Winter cycling can be perceived as physically demanding in Norway, as temperatures and weather change greatly over the seasons. The geographic location of the municipalities and the available cycling facilities are therefore decisive factors for bicycle use. In that context, we selected and compared two municipalities with similar geographic location and population. The results showed that the share of bicycle use or commuting is not as high as expected when examining the evolution over the last decade. One of the explanations could be the rise of other green modes, such as electric cars and city electric scooters, which slows down the uptake of cycling. Other explanations are also discussed in the paper.

The respondents were asked about their accident experiences and accident types. The results showed differences between municipalities and with the official statistics. The shares of accidents and accident types were examined over time for the two selected municipalities and compared to the national data. The share of self-reported cycling accidents was found to be half of the share reported in the national survey. The results are discussed, and more locally adapted traffic safety interventions are proposed.

The respondents were asked how well or poorly adapted to cycling they thought the cycling facilities are in their municipality and for the actual routes they used. In addition, those who had not cycled in the last year or cycled less than 3-4 days were asked whether it would be relevant for them to cycle more with some conditions in place. Better infrastructure for cycling was the most popular answer. The results are discussed and showed that other missing conditions such as secure parking infrastructure may vary according to the municipal context. This study showed the importance for the municipalities to be aware of the level of bicycle commuting in their own municipality and to understand the perceived barriers to bicycle use.

The present paper proposes effective measures for encouraging cycling in seasonal periods when the weather is not considered nice, clear, and cooperative. This paper is of great interest to municipalities and transport authorities focusing on increasing the number of cyclists and cycle-friendly facilities, improving traffic safety for cyclists, and developing pro-cycling interventions and policies.

15:40-17:00 Session 5J: S.10: Human-Robot collaboration: The New Scenario for a Safe Interaction I
Chair:
Mario Di Nardo (Università di Napoli, Italy)
Location: CQ-007
15:40
Mario Di Nardo (Università di Napoli "Federico II", Italy)
Liberatina Carmela Santillo (Università di Napoli "Federico II", Italy)
Silvia Carra (Italian Workers' Compensation Authority, Italy)
Luigi Monica (Italian Workers' Compensation Authority, Italy)
Sara Anastasi (Italian Workers' Compensation Authority, Italy)
Maryam Gallab (MIS-LISTD Laboratory, Computer Science Department, Rabat, Morocco)
THE MAINTENANCE IN INDUSTRY 4.0 : ASSISTANCE AND IMPLEMENTATION
PRESENTER: Mario Di Nardo

ABSTRACT. In recent years we have been witnessing the fourth industrial revolution, summarised in the label 4.0. This revolution includes both the development of existing technologies and new ones. The basis of the new model is the implementation of sensor systems, which must be more precise and efficient in order to be combined with the new technology. The new technology is based on "big data", the "internet of things", "neural networks" and "augmented reality". Maintenance and the human factor retain a fundamental role. As in any industrial model, ordinary and extraordinary maintenance cannot be ignored; in this paper, maintenance will be defined as a hybrid between "preventive" and "on condition", taking the positive aspects of both. An implementation in Industry 4.0 through new human-machine interfaces (supported by augmented reality) is proposed, which will also allow remote interventions. All this is connected to the role of people in Industry 4.0, which, at first glance, seems to be decreasing, but which in reality demands new and ever greater skills. With this in mind, the new ways of implementing maintenance optimise organisational processes.

16:00
Jing Wu (Technical University of Denmark, Denmark)
Xinxin Zhang (Technical University of Denmark, Denmark)
A teaching framework for safety and reliability of robotic and automation systems
PRESENTER: Jing Wu

ABSTRACT. With the development of digitalization and automation, robotic and automation systems are designed and widely used in industry for greater efficiency, increased product quality, and improved safety and security, even in relation to human health. However, accidents involving robotic and automation systems may still happen. On the one hand, emerging digital and automation technology creates new safety challenges. On the other hand, engineers should embrace safety engineering knowledge to achieve safer designs and avoid misuse in foreseeable hazardous scenarios in industry. There is, however, a knowledge gap between education and industry for engineering students. Students should be prepared during their studies to improve their safety awareness and solve such realistic safety problems, which are closely related to robotic and automation systems. This paper presents these demands from the perspectives of a standardization organization, robotics designers and users, consultancy, and students. A teaching framework is developed by the authors. In the framework, concerning robotic and automation systems, there are five topics: safety requirements from legislation and regulations, safety and reliability basics, safety analysis methods, reliability methods, and risk assessment. The teaching framework aims to teach students systematic methods and tools for identifying, analyzing, and managing reliability and safety issues in robotic and automation systems, so as to prepare them to solve new safety problems and take up the legal, societal, and ethical aspects in the era of Industry 4.0.

16:20
Giovanni Luca Amicucci (INAIL, Italy)
Fabio Pera (INAIL, Italy)
Ernesto Del Prete (INAIL, Italy)
Risk assessment of collaborative robotic applications

ABSTRACT. Robotic applications are divided into tasks, according to a sequence aimed at achieving maximum production efficiency. Some of these tasks, which allow interaction between operator and robot within the same "collaborative workspace", are called "collaborative tasks". During collaborative tasks there is a hazard of collisions between the operator and the robot. The risk associated with this hazard varies depending on the application and depends particularly on the speed and characteristics of the part of the robot that comes into contact with the operator. The safety-related part of the control system performs safety functions which, together with other protective measures, reduce the probability of occurrence of the hazard or the severity of its consequences. The robotic cell integrator must carry out a detailed analysis of the risks associated with each specific application for which the robotic cell is used. The robotics sector standards (ISO 10218-1 and ISO 10218-2) provide a default value for the target failure measure of almost all the typical safety functions of a robot (PL d, according to ISO 13849-1, or SIL 2, according to IEC 62061) and an adequate architecture (category 3, according to ISO 13849-1, or HFT = 1, according to IEC 62061). However, the aforementioned standards recognize that some riskier applications may require more restrictive target failure measures (PL e or SIL 3). Where such a possibility exists, specific risk assessments based on the estimation of the risk parameters required by ISO 12100 are also permitted. ISO 12100 uses four such parameters: alongside the severity of harm, it provides three parameters for assessing the probability of occurrence of that harm, namely the duration of the operator's exposure to the hazard, the frequency of occurrence of the hazardous event, and the possibility of avoiding the hazard or limiting the harm. When the robot cell is connected to a network, cybersecurity hazards also have to be taken into account, and a series of measures have to be implemented, such as firewalls, the creation of a recovery plan, and a continuous lifecycle approach to software threats and vulnerabilities. This work provides indications for the application of risk assessment to collaborative robotic applications.

16:40
Luigi Monica (Italian Workers’ Compensation Authority (INAIL), Italy)
Sara Anastasi (Italian Workers’ Compensation Authority (INAIL), Italy)
Marianna Madonna (Italian Workers’ Compensation Authority (INAIL), Italy)
Mauro Platania (Italian Workers’ Compensation Authority (INAIL), Italy)
Mario Di Nardo (Department of Materials Engineering and Operations Management, Faculty of Engineering, Italy)
Implications of cybersecurity on safety of new machineries to mitigate Human-Robot collaboration risks
PRESENTER: Luigi Monica

ABSTRACT. In continuity with the previous regime, the new proposal for a regulation of the European Parliament and the Council on machinery products foresees the obligation for the manufacturer to carry out a risk assessment on machines in the design and construction phase, according to the structured and iterative process described in the ISO 12100 standard, which also sets out the aspects to be taken into account to identify all hazards and the associated risks. However, the rapid technological development that characterises the fourth industrial revolution leads to machines being equipped with increasingly sophisticated sensors and technologies, which calls for new assessments of the safety aspects related to Human-Robot collaboration. In particular, the risks related to new digital technologies include those provoked by malicious third parties that impact the safety of machinery products. In fact, with the increased connectivity and use of standard communication protocols that come with Industry 4.0, the need to protect machinery from cybersecurity threats increases dramatically. In this respect, machinery manufacturers should be required to adopt proportionate cybersecurity measures to protect the safety of the machinery product. Therefore, this paper aims to present the risk assessment process, proposing well-suited techniques for each of its phases and highlighting the new safety aspects related to cybersecurity that must be taken into account to mitigate Human-Robot collaboration risks.

15:40-17:00 Session 5K: Nuclear Industry: Small Modular Reactors
Chair:
Curtis Smith (Idaho National Laboratory, United States)
Location: CQ-010
15:40
Robbie Houldey (University of Bristol, UK)
Lavinia Raganelli (Corporate Risk Associates, UK)
John May (University of Bristol, UK)
Garth Rowlands (Corporate Risk Associates, UK)
Exploring site risk for a Multi Unit SMR Site

ABSTRACT. The UK government is keen to push for the development of Small Modular Reactors (SMRs) and Advanced Modular Reactors. The IAEA defines SMRs as reactors with an output of less than 300 MWe. SMRs are envisioned to present some advantages over traditionally sized stations: shorter construction time, modularity, and lower initial capital cost. SMRs could be placed in clusters on existing nuclear sites (for example to replace decommissioned or never-built nuclear power stations). Placing SMRs in a cluster with a total generated electricity equivalent to that of a new-generation reactor will lead to a different risk profile from that of a single large reactor. The purpose of this study was to develop a method for evolving risk from a single unit on an m-unit site to a site risk for an n-unit site, where n > m. This was achieved by accounting for various unit dependencies, including system cross-ties and common-cause failure, whilst also grouping risk based on the number of reactors an initiating event (IE) would impact following its occurrence. A selection of input parameters was varied, such as the ratio of core damage frequency (CDF) from conditional IEs to CDF from independent IEs, the probability of interaction between units, the extent of system redundancy, and the probability of component failure due to common-cause IEs. The output was the site CDF (SCDF) for an n-unit site, given an input CDF for one unit on an m-unit site. An exponential relationship between SCDF and unit number was found, with the rate of change being determined by the extent of dependency between the units. Furthermore, it was found that the exponential behaviour was driven by conditional IEs which impact anywhere between one and all units on the site. A key finding was that, for a 15-unit site, the least conservative estimate of risk (limited unit dependencies) was still nearly 4 times larger than the risk estimated if each reactor is assessed in isolation, which is how most PSAs currently assess core damage frequency.
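A toy numerical sketch of why per-unit assessment understates site risk is given below: independent initiating events scale linearly with the number of units, while conditional (shared, cross-tie, or common-cause) initiating events add a contribution for every interacting pair of units. All parameter values are invented and chosen only to make the multi-unit contribution visible; this simple pairwise model does not reproduce the paper's fuller dependency treatment or its exponential fit.

    def site_cdf(n_units, cdf_single=1e-5, cond_ratio=0.5, p_interaction=0.8):
        """Illustrative site CDF: linear term for independent IEs plus a term that
        grows with the number of interacting unit pairs for conditional IEs."""
        independent = n_units * cdf_single
        pairs = n_units * (n_units - 1) / 2
        conditional = cond_ratio * cdf_single * p_interaction * pairs
        return independent + conditional

    for n in (1, 4, 8, 15):
        isolated = n * 1e-5
        print(f"n = {n:2d}  SCDF = {site_cdf(n):.2e}  ratio vs isolated units = {site_cdf(n) / isolated:.2f}")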

16:00
Kilyoo Kim (Korea Atomic Energy Research Institute, South Korea)
Sangbaik Kim (Korea Atomic Energy Research Institute, South Korea)
Seokjung Han (Korea Atomic Energy Research Institute, South Korea)
Omar Natto (K.A. CARE (King Abdullah City for Atomic and Renewable Energy), Saudi Arabia)
A Study of Emergency Planning Zone Determination for a Korean Small Modular Reactor considering US and IAEA Criteria
PRESENTER: Kilyoo Kim

ABSTRACT. In Korea, an SMR called ‘SMART-100’ was developed, and its export to Saudi Arabia has been discussed. To prepare for the case of constructing the SMART-100 in Saudi Arabia, a joint study between Korea and Saudi Arabia is being performed to determine the emergency planning zone (EPZ) for the SMART-100 in Saudi Arabia. In the joint study, recent US and IAEA criteria for determining the EPZ were studied, and the results are presented in this paper.

In the US criteria, NUREG-0396 [1] remains the backbone for the EPZ of SMRs as well as for that of commercial large nuclear reactors, and for the SMR EPZ case, DG-1350 [2] was issued to accept a scalable EPZ and the aggregation of accident sequence frequencies. In the NuScale SMR Design Certification (DC), NuScale clarified the ambiguous terms ‘more severe’ and ‘less severe’ accident in the NUREG-0396 EPZ criteria, according to the NEI guidance [3]. To calculate the EPZ distance of SMART-100 according to the recent US criteria, i.e., the NuScale DC approach, we used the following assumptions:

1) The frequencies of accident sequences are aggregated;
2) Severe accidents with intact containment are the ‘less severe’ accidents of NUREG-0396;
3) Severe accidents with containment failure or containment bypass are the ‘more severe’ accidents of NUREG-0396.

By using the above assumptions, the EPZ distance of SMART-100 can be reduced to within 1 km.

Recently, DG-1350 [2] became Regulatory Guide 1.242 [4], which also clarifies the ambiguous terms, although in a different fashion. However, the EPZ distance derived from the NuScale approach does not change even when the Regulatory Guide 1.242 interpretation of these terms is used in the SMART-100 case.

After the Chernobyl accident, the IAEA developed new EPZ criteria, including the PAZ (precautionary action zone) and the UPZ (urgent protective action planning zone), with dose criteria different from those of NUREG-0396. The main IAEA EPZ dose criteria are given in Reference [5]. If we calculate the PAZ and UPZ distances by using the IAEA dose criteria and the aggregation of accident sequence frequencies, the UPZ distance is also within 1 km, and the PAZ is negligibly short, lying within the site boundary. Thus, we may say that the PAZ for an SMR is negligible, just as no PAZ is defined for a test reactor in the current IAEA EPZ criteria. Therefore, for the SMART-100 case, the EPZ distance is similarly within 1 km under either the US or the IAEA criteria, and the PAZ is meaningless since it lies within the site boundary. Similar EPZ results would likely be derived for other SMRs. In this paper, a multi-module EPZ is also discussed.
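As a purely illustrative companion to the aggregation approach described above, the short sketch below shows one way an EPZ distance could be screened by summing the frequencies of hypothetical release categories whose dose at a given distance exceeds a criterion. Every category, frequency, dose value and threshold here is a made-up placeholder, not SMART-100 data or the authors' calculation.

# Illustrative sketch only: turning aggregated accident-sequence frequencies and
# dose-versus-distance results into a screening EPZ distance. All numbers below
# (release categories, frequencies, doses, criteria) are placeholder assumptions.

# Hypothetical release categories: (frequency per reactor-year, dose in mSv at 0.5 km)
release_categories = [
    (1e-5, 8.0),    # severe accident, containment intact ("less severe")
    (1e-7, 120.0),  # containment failure ("more severe")
    (5e-8, 200.0),  # containment bypass ("more severe")
]

DOSE_CRITERION_MSV = 10.0    # assumed early-phase protective action dose criterion
FREQUENCY_THRESHOLD = 1e-6   # assumed aggregated exceedance-frequency threshold

def dose_at(distance_km, dose_at_half_km):
    """Crude inverse-square dilution of dose with distance (placeholder physics)."""
    return dose_at_half_km * (0.5 / distance_km) ** 2

def epz_distance(categories, step_km=0.1, max_km=10.0):
    """Smallest distance where the aggregated frequency of exceeding the dose
    criterion falls below the assumed frequency threshold."""
    d = step_km
    while d <= max_km:
        exceedance = sum(f for f, dose in categories
                         if dose_at(d, dose) > DOSE_CRITERION_MSV)
        if exceedance < FREQUENCY_THRESHOLD:
            return d
        d += step_km
    return max_km

print(f"Toy EPZ distance: {epz_distance(release_categories):.1f} km")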

REFERENCES. [1] U.S. NRC, "Planning Basis for the Development of State and Local Government Radiological Emergency Response Plans in Support of Light Water Nuclear Power Plants," NUREG-0396/EPA 520/1-78-016, December 1978. [2] U.S. NRC, Draft Regulatory Guide DG-1350, "Performance-Based Emergency Preparedness for Small Modular Reactors, Non-Light-Water Reactors, and Non-Power Production or Utilization Facilities," May 12, 2020 (ML18082A044). [3] NEI, "Proposed Methodology and Criteria for Establishing the Technical Basis for Small Modular Reactor Emergency Planning Zone," December 2013. [4] U.S. NRC, "Pre-Decisional, Final Rule: Regulatory Guide 1.242, 'Performance-Based Emergency Preparedness for Small Modular Reactors, Non-Light-Water Reactors, and Non-Power Production or Utilization Facilities'," October 15, 2021 (ML21285A035). [5] IAEA, "Actions to Protect the Public in an Emergency due to Severe Conditions at a Light Water Reactor," May 2013.

16:20
Shahen Poghosyan (IAEA, Austria)
Dennis Henneke (GE Hitachi Nuclear Energy, United States)
Probabilistic Safety Assessments for Small Modular Reactors: unique considerations and challenges
PRESENTER: Shahen Poghosyan

ABSTRACT. The continuously increasing interest of Member States in small modular reactors (SMRs) is accompanied by various efforts aimed at the safety assessment of SMR designs. As with traditional NPPs, probabilistic safety assessment (PSA) has proven to be a systematic and powerful tool for understanding the plant risk profile and improving an SMR design. The development and use of PSA for SMRs in general follow existing approaches, which are well developed and successfully applied for traditional NPPs. However, in the context of SMRs, unique considerations bring new challenges involving particular PSA modelling issues and concepts. These considerations are related to various factors inherent to SMR designs, such as modularity, use of innovative technologies and new fuel concepts, as well as the lack of operational data.

This paper describes the considerations that are unique to SMRs in terms of the implementation of PSA tasks and explores potential challenges in the PSA development process. In addition, the paper explores potential applications of PSA for SMRs. The paper is based on insights gained from an ongoing IAEA initiative on the development of an IAEA Safety Report on safety analysis for SMRs.

16:40
Dana Prochazkova (Czech Technical University in Prague, Czechia)
Jan Procházka (Czech Technical University in Prague, Czechia)
Vaclav Dostal (Czech Technical University in Prague, Czechia)
Generic Model for Safety Management of Power Plants with Small Modular Reactors
PRESENTER: Dana Prochazkova

ABSTRACT. Power plants with small modular reactors (SMRs) are increasingly used in practice, as they are cheaper compared to large nuclear power plants and their emergency planning area is smaller. Like all technical installations, they are threatened by risks caused by harmful phenomena: those arising in the locality in which they are located; those originating in the technical design of the components, their interconnections and their wear over time; those associated with the human factor, in particular in design and operation control; and, last but not least, those stemming from the limited ability of humans to anticipate sudden changes in the development of the world. Power plants with SMRs are, therefore, critical objects, and it is necessary to manage not only their nuclear safety but also their integral (overall) safety, because integral safety ensures the safety and development of human society in the area of their use (the cost of their operation must be acceptable to society). Practice and research on technical installations have shown that in about 80% of cases the causes of their incidents, accidents and failures are a combination of harmful phenomena occurring within a short time interval, so it is not enough to manage only partial risks; it is also necessary to monitor the integral risk. Integral safety is not limited to unilateral solutions to problems such as repression, but deals with situations affecting a certain level of safety through the so-called safety chain, which consists of the following parts: proactivity (elimination of structural causes of uncertainties that undermine safety, i.e. threaten security and sustainable development); prevention (elimination, where possible, of the direct causes of an uncertain situation violating the existing safety); preparedness (to deal with a situation in which safety is damaged); repression (response, to manage safety damage and stabilize the situation); and recovery (to ensure conditions for the restoration and growth of security). Based on current knowledge and experience with the safe operation of critical technical facilities, a generic model for managing the integral safety of power plants with SMRs is created, based on the principles of risk-based design and risk-based operation. Integral safety respects the systemic understanding of the monitored object and its changes in time and space. It is based on a systemic, proactive and strategically targeted approach. It is understood as an emergent property of an object on which the existence of the object depends, i.e. it is the most hierarchically determining property of an object. It is a set of measures and activities that, considering the nature of the critical object understood as a system of systems and all possible risks and threats, aims to ensure the functioning of the elements, links and flows of the object so that under no circumstances do they fail in a way that endangers the object itself or its surroundings. The generic model of the SMR power plant includes: definition of the objective and focus of safety management; description of accidents and failures; proposals for risk management decision-making; discussion of the package of measures and activities with key actors; and monitoring principles and lessons learned for applying corrections.
SMR power plant safety management includes: the concept of increasing safety; the definition of safety-related roles and their tasks; a risk management process for the benefit of safety; a system for operational risk management decision support, including a value scale to determine the level of risk that the SMR plant poses to its surroundings and a value scale to determine the degree of contribution of the SMR power plant to its surroundings; the division of responsibilities; and safety documentation.

17:00-17:45 Session 6: Plenary Session: Hybrid operational digital twins for complex systems: Fusing physics-based and deep learning algorithms for fault diagnostics and prognostics (Prof. Olga Fink, Laboratory of Intelligent Maintenance and Operations Systems, EPFL)

Plenary session: Hybrid operational digital twins for complex systems: Fusing physics-based and deep learning algorithms for fault diagnostics and prognostics (Olga Fink, Laboratory of Intelligent Maintenance and Operations Systems, EPFL)

Chair:
Simon Wilson (Trinity College Dublin, Ireland)
Location: CQ-006