ESREL 2022: 32ND EUROPEAN SAFETY AND RELIABILITY CONFERENCE (ESREL) - DUBLIN 2022
PROGRAM FOR WEDNESDAY, AUGUST 31ST

08:30-09:30 Session 14: Plenary session: Neuroergonomics and the changes in understanding and assessing Human Factors (Prof. Frederic Dehais, Neuroergonomics, Human Factors Lab, DCAS) & EEG and human performance: learning by doing (Dr. Ivan Gligorijevic, MbrainTrain)

Plenary session:

Neuroergonomics and the changes in understanding and assessing Human Factors, Prof. Frederic Dehais, Neuroergonomics, Human Factors Lab, DCAS

&

EEG and human performance: learning by doing, Dr. Ivan Gligorijevic, MbrainTrain

Chair:
Maria Chiara Leva (Technological University Dublin, Ireland)
Location: CQ-006
09:30-10:50 Session 15A: Panel Session: The International Workshop on Autonomous Systems Safety (IWASS)

Special final panel for the International Workshop on Autonomous Systems Safety (IWASS) 2022 joint session within ESREL 2022

Chairs:
Marilia Ramos (University of California Los Angeles, United States)
Christoph Thieme (SINTEF Digital, Norway)
Location: CQ-009
09:30-10:50 Session 15B: Functional Resonance Analysis Method
Chair:
Boris Petrenj (Politecnico di Milano, School of Management, Italy)
Location: CQ-008
09:30
Mariachiara Piraina (Politecnico di Milano, Italy)
Paolo Trucco (Politecnico di Milano, Italy)
Federico Sciuto (Politecnico di Milano, Italy)
Applying Functional Resonance Analysis Method to emergency management capability assessment in the context of interdependent critical infrastructure

ABSTRACT. The disruption of Critical Infrastructure (CI) systems is potentially critical for national security, the economy, health and safety. Due to the inherent complexity of interdependent CI systems, effective emergency response is not simple. It is fundamental to make CI systems resilient enough to deal with different types of threats and avoid possible service interruptions. Among the different Emergency Management (EM) approaches, capability-based EM is nowadays advocated by researchers and authorities to promote higher systemic resilience. However, this approach can be further developed to fit the peculiarities and needs of managing emergencies when interdependent complex CI systems are involved. Assessing EM capabilities therefore becomes relevant, as it favours the detection and improvement of the critical aspects that can be leveraged to enhance resilience. This work proposes a novel approach to assess the operational capabilities needed to recover CI systems from disruptions, thus enhancing their resilience against unexpected events. The Functional Resonance Analysis Method (FRAM) is suggested as a powerful tool to model systems and their connections (Hollnagel, 2012). However, it has rarely been used to assess EM systems and, most of all, it has never been applied to large and complex systems that include many organizations and hundreds of functions. This work presents a pilot application of an enhanced FRAM-based methodology where organizations' EM capabilities (i.e. an organization's ability to do something) are modelled as FRAM functions. The approach is presented by means of a realistic case related to the SICt project, a cooperation project carried out between Italy and Switzerland with the aim of increasing the cross-border resilience capabilities of the transportation infrastructure (road and rail). Interviews were conducted with key informants and progressively coded so that the capabilities of each organization were identified. The MyFRAM software (Patriarca et al., 2018b) was used to model and assess these capabilities. These were then analysed through the Resilience Analysis Matrix (RAM) (Hosseinnia et al., 2019; Patriarca et al., 2018a), which provided some first insights into the interactions among actors, representing the 270 identified inter-functional links in a square matrix. Furthermore, with the use of the FRAM Model Visualiser (FMV) (Hill and Hollnagel, 2016), all 91 capabilities were displayed along with their connections. However, it emerged that the representation was overcrowded with information and was almost unreadable and difficult to exploit. To rationalize the results, the ROC (Rank Order Clustering) technique was applied, obtaining homogeneous clusters to be further analysed (Gu et al., 2019). Some new indicators were introduced to provide valuable information on the resilience properties of single clusters and of the overall system. The application of the enhanced FRAM approach brought positive outcomes for the understanding and improvement of this complex system. It was possible to obtain an overview of the organizational structure of the stakeholders involved, identifying their EM capabilities. This made it possible to determine how organizations are mutually involved in performing a task or in providing specific capabilities. Moreover, the results obtained from the analysis of the clusters offered insights into the possibility of leveraging some capabilities to improve the overall system resilience.

References
Gu, X., Angelov, P., Zhao, Z., 2019. A distance-type-insensitive clustering approach. Applied Soft Computing 77, 622-634.
Hill, R., Hollnagel, E., 2016. Instructions for use of the FRAM Model Visualiser (FMV). Advances in Human Aspects of Transportation, Springer, 399-411.
Hollnagel, E., 2012. FRAM: The Functional Resonance Analysis Method: Modelling Complex Socio-technical Systems.
Hosseinnia, B., Khakzad, N., Patriarca, R., Paltrinieri, N., 2019. Modeling Risk Influencing Factors of Hydrocarbon Release Accidents in Maintenance Operations using FRAM. 2019 4th International Conference on System Reliability and Safety (ICSRS 2019), 290-294. https://doi.org/10.1109/ICSRS48664.2019.8987694
Patriarca, R., Del Pinto, G., Di Gravio, G., Costantino, F., 2018a. FRAM for Systemic Accident Analysis: A Matrix Representation of Functional Resonance. International Journal of Reliability, Quality and Safety Engineering 25. https://doi.org/10.1142/S0218539318500018
Patriarca, R., Di Gravio, G., Costantino, F., 2018b. MyFRAM: An open tool support for the functional resonance analysis method. 2017 2nd International Conference on System Reliability and Safety (ICSRS 2017), 439-443. https://doi.org/10.1109/ICSRS.2017.8272861
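
To illustrate the Rank Order Clustering step mentioned in the abstract, the sketch below applies ROC to a small binary coupling matrix in Python. The matrix size and values are hypothetical placeholders, not data from the SICt case; the abstract's 91-capability matrix would be processed in the same way, although the power-of-two ranking used here would need a big-integer or lexicographic variant for matrices that large.

import numpy as np

def rank_order_clustering(matrix, max_iter=50):
    """Rank Order Clustering (King, 1980) on a small binary coupling matrix.

    Rows and columns are iteratively reordered by the decimal value of their
    binary patterns until the ordering is stable; clusters then appear as
    blocks along the diagonal. Intended for small illustrative matrices only
    (the power-of-two weights overflow int64 beyond ~60 rows/columns).
    """
    m = np.asarray(matrix, dtype=int)
    row_idx, col_idx = np.arange(m.shape[0]), np.arange(m.shape[1])
    for _ in range(max_iter):
        # Rank rows: the leftmost column is the most significant binary digit.
        col_weights = 2 ** np.arange(m.shape[1])[::-1]
        row_order = np.argsort(-(m @ col_weights), kind="stable")
        m, row_idx = m[row_order], row_idx[row_order]
        # Rank columns: the topmost row is the most significant binary digit.
        row_weights = 2 ** np.arange(m.shape[0])[::-1]
        col_order = np.argsort(-(row_weights @ m), kind="stable")
        m, col_idx = m[:, col_order], col_idx[col_order]
        if (row_order == np.arange(m.shape[0])).all() and \
           (col_order == np.arange(m.shape[1])).all():
            break
    return m, row_idx, col_idx

# Hypothetical 5x5 binary coupling matrix between capabilities.
coupling = [[1, 0, 1, 0, 0],
            [0, 1, 0, 1, 1],
            [1, 0, 1, 0, 0],
            [0, 1, 0, 1, 0],
            [0, 1, 0, 0, 1]]
clustered, rows, cols = rank_order_clustering(coupling)
print(clustered)
print("row order:", rows, "column order:", cols)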

09:50
Jefferson Santos de Oliveira (INSTITUTO TECNOLÓGICO DE AERONÁUTICA, Brazil)
Marcelo Vitor José Alves (INSTITUTO TECNOLÓGICO DE AERONÁUTICA, Brazil)
Leandro Sette Linhares de Azevedo (INSTITUTO TECNOLÓGICO DE AERONÁUTICA, Brazil)
Moacyr Machado Cardoso Junior (INSTITUTO TECNOLÓGICO DE AERONÁUTICA, Brazil)
Ligia Maria Soto Urbina (INSTITUTO TECNOLÓGICO DE AERONÁUTICA, Brazil)
Application of Functional Resonance Analysis Method to identify variabilities in the maintenance process of rotary-wing aircraft engines

ABSTRACT. This article aims to investigate the variability in the maintenance of rotary-wing aircraft engines by applying the Functional Resonance Analysis Method (FRAM). Applying the four steps of the method, the process used by the engine maintenance service section of the Brazilian Army's Aviation Maintenance and Supply Battalion was analyzed. In the research, a FRAM model of the workshop was obtained and its potential variability was identified. Based on this information, possible unwanted results of the maintenance process arising from the occurrence of functional resonances within the model were analyzed. Finally, proposals were made to control the consequences of the uncontrolled variability found during the research.

The aircraft maintenance process is an underexplored sector regarding continuous improvement and the application of Resilience Engineering. Recent bibliometric analyses show that very little has been applied in this sector. Having identified this opportunity, it was decided to apply FRAM to an aeronautical engine maintenance process in order to identify improvement points that reduce the risk of unwanted events, optimize equipment downtime and, consequently, reduce maintenance cost.

To perform the FRAM, Hollnagel (2012) presents four steps, in addition to a preliminary step before the application of the method, which is defining whether the analysis concerns the investigation of an event that has already occurred or a risk assessment focused on something that may occur in the future. Applying FRAM to the maintenance process of aeronautical engines of the Brazilian Army, the preliminary analysis established that the method would be used to investigate the variability of the engine workshop in the maintenance of the Arriel 1D1 engines used in rotary-wing aircraft.

In the first step, to draft the diagram of the tasks performed by the engine maintenance section, the researchers conducted interviews with military personnel of the Army Aviation Maintenance and Supply Battalion who work on the engines of rotary-wing aircraft. From these interviews, the flow of tasks performed in the engine workshop was mapped and the six relevant components (input, output, preconditions, resources, time, and control) that impacted the variability of each function were identified.

The FRAM model obtained was drawn with the help of the FRAM Model Visualizer (FMV) software, version 2.1.6, which details the relationships among the 22 tasks mapped by the researchers within the maintenance process of the Arriel 1D1 engine. Among them, seventeen were classified as human tasks and five as organizational tasks. No technological tasks were identified in this process.

In the second step, after the elaboration of the FRAM model, an investigation was carried out to identify the potential variability of the mapped tasks, noting that most of the functions identified in the previous step were human actions and that such functions are characterized by high variability.

In the third step, the analysis of the aggregation of variability, the dependencies between the functions were analyzed, identifying the expected and unexpected connections between them. The expected connections are those that refer to the normal functioning of the system, leading to the expected results under the desired conditions, whereas the unexpected ones are those that occur in certain situations even though they should not exist (Almeida, 2008). Two undesired results of the triggering of functional resonances within the system were then identified. The first would be the occurrence of a maintenance failure and the need for rework, with the engine returning to the engine section without having been used. The second failure that could trigger a functional resonance would be the loss of the mechanics' license to perform maintenance on the engines, causing considerable delay to, or even interruption of, the engine maintenance process.

The fourth and last step consisted of analyzing and proposing ways to manage the variability that affects the performance of the mapped maintenance process, identified through the functional resonances found in the previous steps. For Patriarca (2017), this variability of performance can lead to positive and negative results, and the most fruitful strategy is to amplify the positive effects, that is, to facilitate their occurrence without losing control of the activities, and to dampen the negative effects, eliminating or preventing them. Some proposals include organizational and procedural changes and the allocation of financial resources for qualification courses.

The analysis of the results of this work, from the model built through the identification of variabilities to the proposal of ways to manage them, shows the potential of FRAM for applications in aeronautical maintenance. The methodology facilitated a valuable analysis of the process carried out within Army Aviation, with potential to be applied in other aviation engine maintenance centers. This is exemplified by the systematic analysis of the daily performance of the aeronautical engine maintenance section of Army Aviation, in which points that had until now been underestimated were highlighted. This was possible due to the potential of this method as a tool for analyzing and understanding complex systems, such as the aeronautical maintenance process.

10:10
Vinicius Bigogno-Costa (Aeronautics Institute of Technology, Brazil)
Moacyr Machado Cardoso Junior (Aeronautics Institute of Technology, Brazil)
Tarcísio Abreu Saurin (DEPROT/UFRGS (Industrial Engineering and Transportation Department), Federal University of Rio Grande do Sul, Brazil)
Tor Olav Grøtan (SINTEF Digital, Safety Research, Norway)
Using the Functional Resonance Analysis Method for modelling social interactions in socio-technical systems: an exploratory study

ABSTRACT. The Functional Resonance Analysis Method (FRAM) has been widely adopted as a Resilience Engineering approach for modelling work-as-done in socio-technical systems (STSs). FRAM allows the identification of the functions and variabilities in a system of interest, as well as the analysis of variability propagation and the development of means for variability management. STSs are composed of multi-layered and interwoven networks that involve a myriad of interactions of information, materials, and equipment, among others. However, FRAM does not capture the social connections that emerge from the functional connections in STSs, especially in systems reliant on teamwork, or their implications for system resilience. Tracking functional and social couplings together would allow an enhanced understanding of sources of variability by approaching these intricate, interwoven networks jointly and assessing how both sets of couplings can mutually affect each other. This paper, as part of a larger research project for jointly addressing the social and functional dimensions of system performance, takes a first step towards bridging this gap in FRAM by proposing a method for converting the FRAM work-as-done representation into the social network of the actors in the system. The resulting social network is a directed, weighted network, with its nodes corresponding to the actors and its edges to their upstream-downstream couplings, weighted by the number of functional couplings. A case study carried out in a product development system, hosted by a public technology-development facility in Brazil, exemplifies the use of the method. The case study involves 26 people performing functions to achieve the project's objective of developing two hardware-software systems and an application suite in a dynamic environment. Results identified the actors who are key to the system performance, and those most closely connected to each other.
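
A minimal Python sketch of the kind of conversion the abstract describes: a hypothetical assignment of FRAM functions to actors and a hypothetical list of upstream-downstream couplings are projected onto a directed, weighted actor network. The function names, actors, and couplings are illustrative only and do not reproduce the paper's case study.

from collections import defaultdict

# Hypothetical work-as-done model: which actor performs each FRAM function,
# and the upstream -> downstream couplings between functions.
actor_of = {
    "specify requirements": "systems engineer",
    "design firmware": "developer A",
    "integrate hardware": "developer B",
    "run acceptance test": "test engineer",
}
functional_couplings = [
    ("specify requirements", "design firmware"),
    ("specify requirements", "integrate hardware"),
    ("design firmware", "integrate hardware"),
    ("design firmware", "run acceptance test"),
    ("integrate hardware", "run acceptance test"),
]

# Project the functional network onto the actors: one directed edge per pair
# of actors, weighted by the number of functional couplings between them.
social_edges = defaultdict(int)
for upstream_fn, downstream_fn in functional_couplings:
    src, dst = actor_of[upstream_fn], actor_of[downstream_fn]
    if src != dst:                      # ignore couplings internal to one actor
        social_edges[(src, dst)] += 1

for (src, dst), weight in sorted(social_edges.items()):
    print(f"{src} -> {dst}  (weight {weight})")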

10:30
Huan-Huan Cui (School of Safety Engineering, Beijing Institute of Petrochemical Technology, China)
Renyou Zhang (School of Safety Engineering, Beijing Institute of Petrochemical Technology, China)
Zikai Chen (School of Safety Engineering, Beijing Institute of Petrochemical Technology, China)
Ran Cong (School of Safety Engineering, Beijing Institute of Petrochemical Technology, China)
Cheng Wang (School of Safety Engineering, Beijing Institute of Petrochemical Technology, China)
Bayan Nuerlanhan (School of Safety Engineering, Beijing Institute of Petrochemical Technology, China)
A Safety-II approach for safety management during conducting a chemical-related experiment in laboratory
PRESENTER: Renyou Zhang

ABSTRACT. The safety of experiments in chemical-related laboratories is a significant emerging topic of concern. However, compared to the development of research on industrial safety, the work on the safety of experiments in laboratories is lagging behind. As young researchers and students are often highly involved in laboratory experiments, once an accident happens, it is highly likely to cause injury or even fatality. The loss of these talents is a huge loss to society and to their families, so particular efforts should be made. This study provides a Safety-II approach for finding the crucial factors for managing and improving safety during a chemical-related experiment in laboratories. Unlike the traditional Safety-I method, which only considers what can trigger an accident, the Safety-II approach considers not only what goes wrong but also what goes right. A typical high-risk experiment is selected as the case, and the Safety-II based Functional Resonance Analysis Method (FRAM) is adopted to identify the crucial factors for managing and ensuring experiment safety. The result indicates that the Safety-II approach (FRAM) is an effective way to comprehensively model the selected experiment and to identify the key factors for guaranteeing safety when conducting a chemical-related experiment in the laboratory.

09:30-10:50 Session 15C: S.14: Digital twin: recent advancements and challenges for dealing with uncertainty and bad data I
Chair:
Marco De Angelis (University of Liverpool, UK)
Location: CQ-007
09:30
Marco Behrendt (Leibniz University Hannover, Germany)
Marco de Angelis (University of Liverpool, UK)
Liam Comerford (University of Liverpool, UK)
Michael Beer (Leibniz University Hannover, Germany)
Assessing the severity of missing data problems with the interval discrete Fourier transform algorithm
PRESENTER: Marco Behrendt

ABSTRACT. The interval discrete Fourier transform (DFT) algorithm can propagate signals carrying interval uncertainty in polynomial time. By computing the exact theoretical bounds on signals with missing data, the algorithm can be used to assess the worst-case scenario in terms of maximum or minimum power, and to provide insights into the amplitude spectrum bands of the transformed signal. The uncertainty width of the spectrum bands can also be interpreted as an indicator of the quality of the reconstructed signal. This strategy must, however, assume upper and lower values for the missing data present in the signal. While this may seem arbitrary, there are a number of existing techniques that can be used to obtain reliable bounds in the time domain, for example Kriging regressors or interval predictor models. Alternative heuristic strategies based on variable (as opposed to fixed) bounds can also be explored, thanks to the flexibility and efficiency of the interval DFT algorithm. This is illustrated by means of numerical examples and sensitivity analyses.
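
As a rough illustration of the idea (not the authors' exact polynomial-time algorithm), the sketch below bounds the amplitude spectrum of a signal whose missing samples are replaced by an assumed interval. Because the real and imaginary parts of each Fourier coefficient are linear in the samples, their bounds are exact; combining them into amplitude bounds ignores their dependence, so the resulting amplitude intervals are rigorous but generally wider than the exact bounds discussed in the abstract.

import numpy as np

def interval_dft_bounds(lo, hi):
    """Rigorous (but generally conservative) bounds on the DFT amplitude
    spectrum of an interval-valued signal x_n in [lo_n, hi_n]."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    n = len(lo)
    idx = np.arange(n)
    amp_lo, amp_hi = np.zeros(n), np.zeros(n)
    for freq in range(n):
        c = np.cos(2 * np.pi * freq * idx / n)     # coefficient of Re(X_k)
        s = -np.sin(2 * np.pi * freq * idx / n)    # coefficient of Im(X_k)
        # Exact bounds on the (linear) real and imaginary parts.
        re_lo = np.where(c >= 0, c * lo, c * hi).sum()
        re_hi = np.where(c >= 0, c * hi, c * lo).sum()
        im_lo = np.where(s >= 0, s * lo, s * hi).sum()
        im_hi = np.where(s >= 0, s * hi, s * lo).sum()
        # Interval square: zero is the minimum if the interval straddles zero.
        re_sq_lo = 0.0 if re_lo <= 0 <= re_hi else min(re_lo**2, re_hi**2)
        im_sq_lo = 0.0 if im_lo <= 0 <= im_hi else min(im_lo**2, im_hi**2)
        amp_lo[freq] = np.sqrt(re_sq_lo + im_sq_lo)
        amp_hi[freq] = np.sqrt(max(re_lo**2, re_hi**2) + max(im_lo**2, im_hi**2))
    return amp_lo, amp_hi

# Signal with two missing samples replaced by an assumed interval [-1, 1].
x = np.sin(2 * np.pi * 0.1 * np.arange(32))
lo, hi = x.copy(), x.copy()
lo[[5, 20]], hi[[5, 20]] = -1.0, 1.0
print(interval_dft_bounds(lo, hi))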

09:50
Enrique Miralles-Dolz (University of Liverpool / Culham Centre for Fusion Energy, UK)
Ander Gray (University of Liverpool, UK)
Marco de Angelis (University of Liverpool, UK)
Edoardo Patelli (University of Strathclyde, UK)
Interval-Based Global Sensitivity Analysis for Epistemic Uncertainty

ABSTRACT. The objective of sensitivity analysis is to understand how the input uncertainty of a mathematical model contributes to its output uncertainty. In the context of a digital twin, sensitivity analysis is of paramount importance for the automatic verification and validation of physical models, and for the identification of parameters which require more empirical investment. Yet, sensitivity analysis often requires making assumptions, e.g., about the probability distribution functions of the input factors or about the model itself, or relies on surrogate models for evaluating the sensitivity, which introduce further assumptions.

We present a non-probabilistic sensitivity analysis method which requires no assumptions about the input probability distributions: the uncertainty in the input is expressed in the form of intervals, and the width of the output interval is employed as the only measure. We use the Ishigami function as a test case to show the performance of the proposed method, and compare it with Sobol' indices.

 

Full paper available here: https://rpsonline.com.sg/rps2prod/esrel22-epro/pdf/S14-04-180.pdf
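
The following sketch illustrates the general idea of interval-based sensitivity on the Ishigami function: the output interval width is approximated by dense sampling over the input box, and each input is then pinched to a point to measure the resulting reduction in width. The input box [-pi, pi]^3, the pinching value 0, and the sampling-based width estimate are assumptions made for illustration; the paper's method may compute and use the output interval differently.

import numpy as np

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(0)
n = 200_000
box = [(-np.pi, np.pi)] * 3           # interval (epistemic) inputs

def output_width(pinned=None):
    """Approximate the output interval width by dense sampling of the box.
    `pinned` maps an input index to a fixed value (the 'pinched' input)."""
    samples = [rng.uniform(lo, hi, n) for lo, hi in box]
    if pinned is not None:
        idx, value = pinned
        samples[idx] = np.full(n, value)
    y = ishigami(*samples)
    return y.max() - y.min()

base = output_width()
for i in range(3):
    reduced = output_width(pinned=(i, 0.0))
    print(f"x{i + 1}: relative width reduction = {1 - reduced / base:.2f}")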

10:10
Krasymyr Tretiak (The University of Liverpool, UK)
Scott Ferson (The University of Liverpool, UK)
Non-probabilistic regression analysis under interval data in the dependent variables
PRESENTER: Krasymyr Tretiak

ABSTRACT. We propose a new iterative method using machine learning algorithms to fit an imprecise linear regression model to data that consist of intervals rather than point values. The method seeks parameters for the optimal model that minimize the mean squared error between the actual and predicted interval values of the dependent variable. The method incorporates a first-order gradient-based optimization to minimize the loss function and interval analysis computations to model the measurement imprecision of the data. Interval operations provide results that are rigorous in the sense that they bound all possible answers. The results are best-possible when they cannot be any tighter without excluding some possible answers. When a mathematical expression includes multiple instances of an interval variable, failing to account for their perfect dependence can lead to results that are not best-possible and wider than they could be, which is called the dependency problem. In this case, the obtained results undergo significant inflation of the interval bounds. Our method is designed in such a way as to reduce the effect of repeated variables on each iteration and overcome the dependency problem in an interval-valued extension of the real function by applying a second-order inclusion monotone approximation. The method captures the relationship between the explanatory variables and a dependent variable by fitting an imprecise regression model, which is linear with respect to unknown parameters. We consider the explanatory variables to be precise point values, and the dependent variables are presumed to have uncertainty, which is characterized only by interval bounds without any probabilistic information. Thus, the imprecision is modeled non-probabilistically even if the scatter of dependent values is modeled probabilistically by homoscedastic Gaussian distributions. The proposed iterative method estimates the lower and upper bounds of the expectation region, which depicts the envelope of all possible precise regression lines obtained by ordinary regression analysis based on any configuration of bivariate scalar points from x-values and the respective intervals. The expectation region is the analogue of the point prediction from a simple linear regression, which others have variously called 'identification region' and 'upper and lower bounds on predictions'. We verify the proposed iterative method by probabilistic Monte Carlo simulations and compare it to another approach to reliable regression.
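
A toy sketch of the general idea of fitting interval-valued responses by gradient descent is shown below: it fits a common slope with an interval intercept and minimizes the mean squared error between predicted and observed interval endpoints. It deliberately ignores the dependency problem and the second-order inclusion monotone approximation described in the abstract, and all data and parameter choices are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)                             # precise explanatory variable
y_mid = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)
half_width = rng.uniform(0.5, 1.5, x.size)
y_lo, y_hi = y_mid - half_width, y_mid + half_width    # interval-valued response

# Parameters of a simple imprecise linear model: one slope plus an interval
# intercept [c_lo, c_hi], giving band predictions [a*x + c_lo, a*x + c_hi].
a, c_lo, c_hi = 0.0, 0.0, 0.0
lr = 5e-3
for _ in range(5000):
    err_lo = a * x + c_lo - y_lo
    err_hi = a * x + c_hi - y_hi
    # Gradients of mean((err_lo)^2 + (err_hi)^2) w.r.t. the three parameters.
    grad_a = 2 * np.mean((err_lo + err_hi) * x)
    a -= lr * grad_a
    c_lo -= lr * 2 * np.mean(err_lo)
    c_hi -= lr * 2 * np.mean(err_hi)

c_lo, c_hi = min(c_lo, c_hi), max(c_lo, c_hi)          # keep a valid interval
print(f"slope = {a:.2f}, intercept interval = [{c_lo:.2f}, {c_hi:.2f}]")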

10:30
Ander Gray (United Kingdom Atomic Energy Authority, UK)
Marcelo Forets (Universidad de la República, Montevideo, Uruguay, Uruguay)
Christian Schilling (Aalborg University, Denmark)
Luis Benet (Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México (UNAM), Mexico)
Scott Ferson (University of Liverpool, UK)
Rigorous time evolution of p-boxes in non-linear ODEs
PRESENTER: Ander Gray

ABSTRACT. We combine reachability analysis and probability bounds analysis, allowing multivariate p-boxes to be specified as the initial states of a dynamical system in addition to intervals. In combination, the methods allow the temporal evolution of p-boxes to be rigorously computed, and they give interval probabilities for formal verification problems, also called failure probability calculations in reliability analysis. The methodology places no constraints on the input probability distribution or p-box and can handle dependencies generally in the form of copulas.

09:30-10:50 Session 15D: Cybersecurity
Chair:
Ralf Mock (Step Commerce AG, Switzerland)
Location: CQ-106
09:30
Jan Prochazka (Brno University of Technology, Czechia)
Petr Novobílský (Q-media, Czechia)
Dana Prochazkova (Brno University of Technology, Czechia)
Cybersecurity Design for Railway Products
PRESENTER: Jan Prochazka

ABSTRACT. The use of new communication and control technologies leads to the rise of new risks. These risks need to be addressed within cyber-physical systems because they often originate in cyberspace and have their impact in physical space. At the same time, new technologies provide us with tools and procedures to deal with these risks. An example is the IEC 62443 (2019) standard, which contains a set of tools and procedures to ensure the cybersecurity of control systems in the development and use phases of the product, from the point of view of the system and of its individual parts. There are many new possibilities in the field of cybersecurity, and they can be implemented with different levels of security. When choosing them, it is therefore necessary to take into account efficiency and sustainability during operation as well as the economic aspects. In connection with the IEC 62443 standard, we can combine a suitable security concept with a so-called security vector or (cyber)security design. The cybersecurity design assigns values to the individual chapters of cybersecurity in relation to the addressed system or product. Design determination is specific to different types of cyber-physical systems. For example, railway infrastructure is governed by a number of its own technical standards, many of which can also have an impact on the process of determining the cybersecurity design. The TS 50701 (2020) standard addresses cybersecurity on the railway and is an ideal basis for cybersecurity design. The methodology for determining the cybersecurity design must respect the technical standards used on the railway (Ciancabilla et al., 2021). However, it must also include other tools that manage risks in accordance with the requirements of the amendment to ISO 9001 (2015). Important tools are the determination of the product context within the system, the entire risk management process (identification, analysis, measures, implementation), the process of selecting requirements, and the security design report. The article deals with the methodology of determining the cybersecurity design of a product intended for railway infrastructure based on the mentioned standards and processes, from the product context in the system, through risk analysis, to the selection and fulfilment of requirements.

References
Ciancabilla A., Magnanini G., Sperotto F., Amato D. (2021). Application of FprTS 50701, ENISA-ERA Conference: Cybersecurity in Railways.
IEC 62443 (2019). Security for Industrial Automation and Control Systems. International Electrotechnical Commission / International Society of Automation, IEC and ISA.
ISO 9001 (2015). Quality management systems. International Organization for Standardization.

09:50
Jon-Martin Storm (University of Oslo, Norway)
Janne Hagen (University of Oslo, Norway)
Sigrid Haug Selnes (University of Oslo, Norway)
How has cybersecurity regulation altered the electric power industry's use of intrusion detection systems?
PRESENTER: Jon-Martin Storm

ABSTRACT. In 2011, Stuxnet became a game-changer concerning Industrial Control Systems security. By manipulating process system software, attackers changed the physical processes of enriching uranium at the Natanz uranium enrichment facility in Iran. Since then, the methodological sophistication of cyber-attacks has evolved, and the need for intrusion detection capability has grown accordingly. Critical infrastructures, like the energy sector, are operated with the support of industrial control systems, and there are urgent demands for securing these systems. Governmental regulations, supervision, and reviews are measures to enforce cybersecurity in critical infrastructures. However, how effective such regulation is has not been well studied. This paper examines the impact of prescriptive statutory requirements for cybersecurity in the Norwegian electric power system. For about ten years, the Norwegian government has enforced detailed cybersecurity regulations for electric utilities. The study examines and discusses the change in the industry's implementation of cybersecurity measures, emphasizing intrusion detection systems. Based on audit reports and survey data, the study reveals an improvement in the implementation of intrusion detection systems during the last ten years.

10:10
Pierre-Marie Bajan (IRT SystemX, France)
Martin Boyer (IRT SystemX, France)
Anouk Dubois (IRT SystemX, France)
Jerome Letailleur (IRT SystemX, France)
Kevin Mantissa (IRT SystemX, France)
Yohann Petiot (IRT SystemX, France)
Jeremy Sobieraj (IRT SystemX, France)
Mohamed Tlig (IRT SystemX, France)
Illustration of Cybersecurity and Safety co-engineering using EBIOS RM and IEC 61508

ABSTRACT. Nowadays, complex Cyber-Physical Systems are becoming widespread. In this context, risk analyses represent a persistent challenge, both in Functional Safety and in Cybersecurity. While those two domains have always been independent, this independence is now questioned. Indeed, while Functional Safety has benefited from decades of feedback and a mature normative environment, the emergence of Cybersecurity risks with a potential impact on Safety analyses - consider, for example, "killwares" - acts as a serious incentive to evolve the conventional methods and risk culture. Cybersecurity naturally takes a bigger place in the context of highly connected industries. However, it suffers from the youth of its knowledge and/or a lack of feedback regarding the industrial normative environment compared with Functional Safety. This has led several works to consider the pertinence of making Cybersecurity and Functional Safety experts work together. Those works have identified potential synergies through the proposal of diverse hybrid methods. For example, the SAHARA (Security-Aware HAzard and Risk Analysis) methodology brings together the Safety HARA (Hazard Analysis and Risk Assessment) and the Cybersecurity STRIDE methods to enrich the number of potential system failures, while the SSM (Six-Step Model) method makes it possible to determine links between Safety and Cybersecurity aspects of the system. The objective of this article is to define the potential links between Functional Safety and Cybersecurity. First, we brought together Functional Safety and Cybersecurity teams to exchange on their respective methods and processes. Based on their respective expertise, they agreed on various points of divergence and convergence. Then, we worked on the Cybersecurity EBIOS RM risk analysis method (recommended by the French cybersecurity authority) and determined how it could fit into a Cybersecurity / Safety methodology. Finally, from these discussions, we applied this methodology to simple use cases in order to determine the benefits and limits of Cyber / Safety co-engineering. From this work, we then recommend processes and improvements enabling a proper collaboration between both teams. New synergies and tentative processes have been deduced, illustrating the interrelations between Cybersecurity and Safety activities. Those case studies show that Cybersecurity can provide inputs to Safety. The first case study is the Safe Remote Control (SRC), used in robotics and the heavy equipment industry; it is a simple use case with a mature Safety background. Through a risk analysis, the Cybersecurity team must provide additional control measures in cooperation with the Safety team. We started by analyzing the feared events identified by Safety for the SRC. Using the EBIOS RM methodology, the Cybersecurity team identified potential attacker profiles along with cybersecurity assets and vulnerability points associated with the SRC. They proposed dedicated measures to improve the system resilience. Those measures were then checked by the Safety team to ensure their coherence with the SRC safety principles. For example, the application of security patches must comply with Safety constraints: security patches must be verified in order not to introduce new safety-related risks. After further discussions, the new cybersecurity measures can then be integrated by the Safety team and included in the Safety requirements. The second case study is a completed Safety HARA analysis of an autonomous system. The goal of the Cybersecurity team was to identify, in a subsequent review, new cybersecurity-related scenarios that could challenge the scoring of the Safety feared events. This use case has two steps. In the first step, we present the Cybersecurity team with the HARA methodology for scoring Safety feared events. It is based on the IEC 61508 standard, with the SIL score derived from the parameters C (consequence), P (potential to avoid the risk), F (frequency of exposure), and W (probability of occurrence of the feared situation). The Cybersecurity team then gave the Safety team feedback on the expected impacts on those parameters. For example, Cybersecurity can impact F if we consider that attackers can execute attacks mimicking a failure whenever there is a remote connection; this is more far-reaching than the Safety hypothesis of considering a stressful environment that can induce a particular failure. In the second step, we present the SIL scores of an actual Safety use case and ask the Cybersecurity team to review them. This resulted in several scenarios getting worse SIL scores because of the cybersecurity impact. It raised the question of how to handle a change of SIL score due to cybersecurity inputs: do we apply additional Safety measures to cover a degraded SIL score, or can we keep the original SIL score by applying the necessary cybersecurity measures? This activity has reinforced the need for continuous Cybersecurity collaboration in Safety studies, from the beginning of the analysis until its completion.

This work has shown the potential for convergence between both teams. It offers multiple perspectives, ranging from the inclusion of the Cybersecurity team in the risk assessment activity conducted by the Safety team, to the emergence of standards combining both Safety and Cybersecurity in a development process. Existing standards are starting to acknowledge this matter: the automotive Functional Safety standard ISO 26262 recommends, in its 2018 edition, activities to coordinate with Cybersecurity, and a recent automotive Technical Report, ISO/TR 4804, mentions both Functional Safety and Cybersecurity aspects. However, there is no standard offering concrete recommendations to link Functional Safety and Cybersecurity.

The main idea of this approach is to define the main points of convergence and divergence between the Cybersecurity and Safety teams. It gives a perspective for envisioning whether this can be integrated within a more global methodology, with both Safety and Cybersecurity collaborating at each step of the design lifecycle. However, challenges remain, from the need to drive cultural changes in both teams to incompatibilities between some Safety / Cybersecurity principles, such as vocabulary or corrective measures. It is one of the objectives of our work to highlight these hard points, which are sometimes missing in the current state of the art on Safety and Cybersecurity co-engineering.
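
To make the parameter discussion in this abstract concrete, the sketch below uses a deliberately simplified, hypothetical risk-graph lookup keyed on the C, F, P, and W parameters; the table values are illustrative only and do not reproduce IEC 61508. It shows how raising the exposure parameter F, as in the abstract's remote-attack argument, can change the required SIL.

# Hypothetical, simplified risk-graph lookup in the spirit of the C/F/P/W
# scoring discussed above; the mapping below is illustrative only and is
# not taken from the IEC 61508 tables.
RISK_GRAPH = {
    # (C, F, P, W) -> required SIL (toy values)
    ("C2", "F1", "P2", "W2"): 1,
    ("C3", "F1", "P2", "W2"): 2,
    ("C3", "F2", "P2", "W2"): 3,
    ("C3", "F2", "P2", "W3"): 4,
}

def required_sil(c, f, p, w):
    """Return the required SIL for a feared event, or None if not tabulated."""
    return RISK_GRAPH.get((c, f, p, w))

# Safety-only assessment of a feared event.
baseline = required_sil("C3", "F1", "P2", "W2")

# Cybersecurity review: an attacker can mimic the failure whenever a remote
# connection exists, so the frequency-of-exposure parameter F is raised.
with_cyber = required_sil("C3", "F2", "P2", "W2")

print(f"SIL without cyber input: {baseline}, with cyber input: {with_cyber}")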

10:30
Tamara Oueidat (University Grenoble Alpes - G-SCOP Laboratory, France)
Jean-Marie Flaus (University Grenoble Alpes - G-SCOP Laboratory, France)
François Masse (INERIS - Direction des risques accidentels, France)
A new way to automatically generate attack scenarios and combine them with safety risks in the same analysis
PRESENTER: Tamara Oueidat

ABSTRACT. The digitization of critical industries (chemical industry, energy production and storage), through the integration of new technologies into control-command systems or their interconnection with supervision systems and office networks [1], makes them more vulnerable to cyberattacks that can affect system safety and functionality. Therefore, cybersecurity has become a critical subject in industrial systems and should be analyzed together with safety risks in the same analysis [2]. For these reasons, a large number of risk analysis approaches have been developed to combine safety and cybersecurity risks [3]; they are interesting but present some limits, whether in the level of detail of the analysis, in the system modeling, in the attack scenario definition stage, or in their applicability to control systems and the security of industrial installations.

We propose a new risk analysis approach. This approach is based on specific data of the industrial installation and on knowledge bases to model the system (physical and software components and their functionalities), to search for vulnerabilities, and to generate the attack scenarios. It aims to simplify the steps of searching for vulnerabilities and attack scenarios by using guides and by automating the scenario generation step. The approach is composed of the following steps: identification of undesirable events, which can have a safety or a cybersecurity origin; system modeling (components with a list of attributes); identification of the vulnerabilities existing in the industrial system; generation of attack scenarios from meta-models; and combination of the two types of risks. In this article we concentrate on the attack scenario generation step, which relies on a new meta-model to define the possible scenarios on an industrial site; the scenarios are then generated algorithmically. The originality of this approach is to simplify the analysis steps of existing approaches by exploiting knowledge bases, meta-models, and automation, which makes the approach simpler to apply, less time-consuming, and accessible to users who are not cybersecurity experts.
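
The sketch below illustrates, with hypothetical component names, attributes, vulnerabilities, and connections, how attack scenarios can be generated algorithmically from a simple system model by chaining vulnerable components from an entry point to the target of an undesirable event. It is a simplified stand-in for the meta-model-based generation described in the abstract, not the authors' implementation.

import itertools
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    attributes: dict
    vulnerabilities: list = field(default_factory=list)

# Hypothetical installation model and knowledge-base-derived vulnerabilities.
components = {
    "engineering_workstation": Component(
        "engineering_workstation",
        {"os": "windows", "remote_access": True},
        ["phishing", "unpatched_os"]),
    "scada_server": Component("scada_server", {"os": "windows"},
                              ["weak_credentials"]),
    "plc": Component("plc", {"protocol": "modbus"}, ["unauthenticated_write"]),
}
connections = {"engineering_workstation": ["scada_server"],
               "scada_server": ["plc"]}

def generate_scenarios(entry, target, path=None):
    """Enumerate attack scenarios: a path of components from entry to target,
    with one exploited vulnerability chosen per component on the path."""
    path = (path or []) + [entry]
    if not components[entry].vulnerabilities:
        return []                       # the chain breaks on a hardened component
    if entry == target:
        vuln_choices = [components[c].vulnerabilities for c in path]
        return [list(zip(path, choice))
                for choice in itertools.product(*vuln_choices)]
    scenarios = []
    for nxt in connections.get(entry, []):
        if nxt not in path:
            scenarios += generate_scenarios(nxt, target, path)
    return scenarios

# Undesirable event: unauthorized write to the PLC (safety impact).
for scenario in generate_scenarios("engineering_workstation", "plc"):
    print(" -> ".join(f"{comp}:{vuln}" for comp, vuln in scenario))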

09:30-10:50 Session 15E: Component reliability models
Chair:
Stefan Bracke (University of Wuppertal, Germany)
Location: LG-20
09:30
Alicia Puls (Bergische Universität Wuppertal, Chair of Reliability Engineering and Risk Analytics, Germany)
Stefan Bracke (Bergische Universität Wuppertal, Chair of Reliability Engineering and Risk Analytics, Germany)
Comparative study on multivariate trend analysis by the example of traction batteries in the usage phase
PRESENTER: Alicia Puls

ABSTRACT. In the engineering context, the analysis of trends plays a crucial role due to increasing complexity in functionality and product variety. Using data analytics, degradation or different operating states can be detected [1]. For complex products, overarching trends are often not trivially identifiable; therefore, a gain of knowledge can be obtained with multivariate approaches [2]. This paper focuses on multivariate trend analysis using the example of traction batteries in the usage phase. A comparative study with real data including the variables voltage, current, temperature, and state of charge (SoC) is conducted. The SoC is an important parameter for traction batteries, providing information on operating states and battery level; it is also an indicator of degradation [3], [4]. As the SoC is a computed and not clearly defined quantity, there are several uncertainties in addition to the measurement-related factors. Therefore, in conjunction with a method comparison, this paper presents an approach to validate the SoC development with trend analysis.

A concept for a comparative study including data exploration, comparison and combination of different methods, and validation is set up and demonstrated on two use cases with different characteristics such as value and scattering range or correlation between the variables. Various trend analysis approaches are applied and compared, in particular statistical hypothesis testing for trend as well as change point detection. Both univariate and multivariate trend tests are applied to point out the added value of multivariate approaches. The considered trend tests are the nonparametric Cox-Stuart trend test [5] and the Mann-Kendall trend test [6], [7] for univariate time series, and the nonparametric extension of the Mann-Kendall trend test [8] for multivariate time series. Four different change point detection algorithms are used: the hierarchical clustering approaches e.agglo [9] and e.divisive [9], as well as the dynamic programming and pruning approaches e.cp3o_delta [10] and ks.cp3o_delta [10]. Quantified by criteria such as sensitivity and accuracy, the most appropriate algorithm is identified. Concluding, a decision scheme for method selection is developed, considering the results of the comparative study and the characteristics of the two use cases.

References
[1] ŞEN, Z., 2017. Innovative Trend Methodologies in Science and Engineering. Cham: Springer International Publishing; Imprint: Springer. ISBN 978-3-319-52337-8.
[2] HÄRDLE, W. and L. SIMAR, 2015. Applied multivariate statistical analysis. 4th ed. Heidelberg: Springer. ISBN 978-3-662-45170-0.
[3] HAUBROCK, A., 2011. Degradationsuntersuchungen von Lithium-Ionen Batterien bei deren Einsatz in Elektro- und Hybridfahrzeugen. ISBN 978-3-86955-831-8.
[4] HUYNH, P.-L., 2016. Beitrag zur Bewertung des Gesundheitszustands von Traktionsbatterien in Elektrofahrzeugen. Springer Fachmedien Wiesbaden, Wiesbaden. ISBN 978-3-658-16561-1. doi:10.1007/978-3-658-16562-8.
[5] COX, D.R. and A. STUART, 1955. Some Quick Sign Tests for Trend in Location and Dispersion. Biometrika, 42(1/2), 80-95. ISSN 0006-3444. doi:10.2307/2333424.
[6] MANN, H.B., 1945. Nonparametric Tests Against Trend. Econometrica, 13(3), 245-259. ISSN 0012-9682. doi:10.2307/1907187.
[7] KENDALL, M.G., 1948. Rank correlation methods. Griffin.
[8] LETTENMAIER, D.P., 1988. Multivariate nonparametric tests for trends in water quality. Journal of the American Water Resources Association, 24(3), 505-512. ISSN 1093-474X. doi:10.1111/j.1752-1688.1988.tb00900.x.
[9] MATTESON, D.S. and N.A. JAMES, 2014. A Nonparametric Approach for Multiple Change Point Analysis of Multivariate Data. Journal of the American Statistical Association, 109(505), 334-345. ISSN 0162-1459. doi:10.1080/01621459.2013.849605.
[10] ZHANG, W., N.A. JAMES and D.S. MATTESON, 2017. Pruning and Nonparametric Multiple Change Point Detection. 2017 IEEE International Conference on Data Mining Workshops (ICDMW), 288-295. doi:10.1109/ICDMW.2017.44.
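
For reference, the univariate Mann-Kendall test cited above ([6], [7]) can be sketched in a few lines of Python; the SoC-like series used here is synthetic, and the multivariate extension [8] and the change point algorithms [9], [10] are not reproduced.

import numpy as np
from math import erfc, sqrt

def mann_kendall(x):
    """Univariate Mann-Kendall trend test (no tie correction).

    Returns the S statistic and the two-sided p-value of the normal
    approximation; a small p-value indicates a monotonic trend.
    """
    x = np.asarray(x, float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    return s, erfc(abs(z) / sqrt(2))    # two-sided p-value

# Hypothetical SoC-like series: slow downward drift plus noise.
rng = np.random.default_rng(0)
soc = 95 - 0.02 * np.arange(500) + rng.normal(0, 0.5, 500)
print(mann_kendall(soc))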

09:50
Thomas Gwosch (Karlsruhe Institute of Technology, Germany)
Julian Peters (Karlsruhe Institute of Technology, Germany)
Johanna Pehlivan (Karlsruhe Institute of Technology, Germany)
Christian Naber (Karlsruhe Institute of Technology, Germany)
Reliability Model of an Automatically Switching Radon Exposimeter for System Design Evaluation
PRESENTER: Thomas Gwosch

ABSTRACT. The reliability of technical systems that measure safety-relevant or legally relevant values is of great importance. One such device is an automatically switching radon exposimeter, which is the focus of the reliability assessment in this contribution. In certain areas it is legally required to measure the radon dose to persons, since radon exposure can lead to health risks such as lung cancer. An early evaluation of reliability helps to prevent costly iterations in the production phase. The problem is that reliability data are hard to obtain in early stages of development, when most of the components are not yet fixed. Therefore, in this contribution the reliability is assessed by using available generic failure rate data for device classes similar to the ones used in the radon exposimeter. Together with methods such as reliability block diagrams (RBD) and failure mode and effects analysis (FMEA), the failure rates and mean times to failure (MTTF) are calculated. Critical components are identified and improvements such as redundancy are introduced to improve the MTTFs. This helps product developers to avoid flaws and reduce the risk of loss of function at an early stage, avoiding structural changes later in the process.
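
A minimal sketch of the kind of early-stage estimate the abstract describes: generic constant failure rates are summed for a series structure, an MTTF is derived, and the effect of duplicating one critical component is approximated. The component list, failure rates, and the constant-rate treatment of the redundant pair are hypothetical simplifications, not the exposimeter's actual reliability model.

# Hypothetical generic failure rates, given in failures per 1e6 hours.
PER_1E6_H = 1e-6
components_series = {            # all in series: any failure stops the device
    "microcontroller": 0.8,
    "radon_sensor": 2.5,
    "switching_unit": 1.2,
    "battery_management": 0.6,
}

# Series RBD with constant (exponential) rates: rates add, MTTF = 1 / lambda.
lambda_series = sum(components_series.values()) * PER_1E6_H
print(f"series MTTF: {1 / lambda_series:,.0f} h")

# Improvement: duplicate the most critical component (active 1-out-of-2
# redundancy). For two identical exponential units the subsystem MTTF is
# 1/l + 1/(2l) = 1.5/l; treating that as an equivalent constant rate l/1.5
# is a coarse approximation used here only for a rough comparison.
lam_sensor = components_series["radon_sensor"] * PER_1E6_H
lambda_redundant = lambda_series - lam_sensor + lam_sensor / 1.5
print(f"MTTF with redundant sensor (approx.): {1 / lambda_redundant:,.0f} h")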

10:10
Franz-Georg Neupert (University of Wuppertal, Germany)
Stefan Bracke (University of Wuppertal, Germany)
New two-phase distribution failure model for describing failures caused by a change of material property

ABSTRACT. Failure symptoms of complex technical products are often caused by mixed failure root causes. In reliability engineering, the use of a single Weibull distribution (parameters: threshold t0, shape b, characteristic life T) is the state of the art for describing a single failure mode. In the case of a mixed failure root cause, a mixture of different Weibull distributions has to be considered. The following failure models are state of the art, cf. (Meyer 2003) and (VDA 2016): competing failure model, mixed population failure model, partial population failure model, general failure model, and mixture distribution model. A failure behavior caused by a change of material property (which belongs to the category of mixed failure root causes) is not considered by the mentioned models. The following examples are typical of failure behaviors caused by a change of material property: (a) Corrosion reduces the lubricity of bearings, which influences their lifetime. (b) Some synthetic materials change their structure under load cycles, which finally leads to the failure behavior. In both cases the failures occur in two subsequent steps (two phases). However, the corrosion (a) itself or the change of the synthetic material structure (b) itself are not failures in the sense of the considered failure model. From a certain point in time, they are the initiator of a significant development of the resulting failure behavior, which is visible as a right-curved course of the cumulative probability in a Weibull probability plot. Until now, a single Weibull distribution with the third parameter (the threshold parameter t0) or a lognormal distribution has often been applied in reliability engineering for such a mixed failure root cause. However, these distribution models do not explain the observed failure behavior, because the change of material property itself is Weibull distributed. On the other hand, the task of a mathematical model is precisely to describe the law of the observed failure behavior, which is the precondition for a failure prognosis and can give an important hint for problem solving (e.g., gaining knowledge about the corrosion behavior). This paper focuses on a new failure model with the goal of describing a failure behavior caused by a change of material property. The new model is based on a convolution of the first-phase distribution with the second-phase distribution. Furthermore, the separation of the two phase distributions from the observed values and the estimation of their parameters via a deconvolution are shown. The new failure model contains the following main steps: (1) Stepwise shifting of the simulated or observed two-phase failures with the goal of minimizing the residuals of the regression line, and analysis of the regression coefficient to evaluate the parameter values of the distribution model of the second failure phase. (2) Equidistant interpolation and low-pass filtering of the simulated (or observed) values. (3) Generation of values for the first failure phase by deconvolution of the interpolated and filtered values with the values of the estimated second distribution model. (4) Estimation of the parameter values of the first-phase failure distribution model by linear regression. (5) Estimation of the two-phase failure model by convolution of the estimated distribution models of the two failure phases. Finally, the application and the effectiveness of the new failure model are demonstrated within an automotive engineering case study.
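
The two-phase idea can be illustrated by a short Monte Carlo sketch: two hypothetical Weibull-distributed phase durations are summed, so that the observed time to failure follows the convolution of the two phase distributions. The shape and characteristic-life values are placeholders, and the paper's estimation steps (shifting, filtering, deconvolution) are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical two-phase failure mechanism: phase 1 is the time until the
# material property changes (e.g. onset of corrosion), phase 2 is the time
# from that change until the observable failure. Both phases are assumed
# Weibull distributed; the observed time to failure is their sum, i.e. the
# convolution of the two phase distributions.
t_phase1 = rng.weibull(1.8, n) * 900.0     # shape b1, characteristic life T1
t_phase2 = rng.weibull(3.0, n) * 300.0     # shape b2, characteristic life T2
t_failure = t_phase1 + t_phase2

# Empirical cumulative failure probability of the convolved model; plotted in
# a Weibull probability net, this curve shows the right-curved course that a
# single two-parameter Weibull distribution cannot reproduce.
for t in np.linspace(200, 2000, 10):
    print(f"t = {t:6.0f} h   F(t) = {(t_failure <= t).mean():.3f}")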

10:30
Shouqing Huang (Beijing Institute of Spacecraft Environment Engineering, China)
Taichun Qin (Beijing Institute of Spacecraft Environment Engineering, China)
Yuan Zhou (Beijing Institute of Spacecraft Environment Engineering, China)
Yue Guo (Beijing Institute of Spacecraft Environment Engineering, China)
Xinming Su (Beijing Institute of Spacecraft Environment Engineering, China)
Prediction of CMG Combined Stress Failure Boundary Domain Based on Particle Swarm Optimization and Neural Network Method
PRESENTER: Shouqing Huang

ABSTRACT. Studying the failure boundary domain of the Control Moment Gyroscope (CMG) under thermal vacuum and dynamic conditions is of great significance for ensuring and improving the reliability of the CMG. The test data are obtained from a test rig that can simultaneously simulate the thermal vacuum environment and the angular momentum exchange between the CMG and the spacecraft. The test simulates the combined stresses of temperature, CMG gimbal rotating speed, and spacecraft rotating speed under a vacuum environment, so as to obtain the running status data of the CMG. The Particle Swarm Optimization and BP neural network (PSO-BP) model is applied to learn the running status data and then predict the running status under more stress combinations, so as to finally obtain the complete failure boundary domain of the CMG. The results show that the proposed method can significantly reduce the test cost of obtaining the CMG failure boundary domain, with both high prediction accuracy and adaptability to combined stress situations. In the prediction of 172 groups of data, the accuracy of the BP neural network algorithm is 98.8%, while that of the PSO-BP neural network algorithm is 100%. In addition, the proposed method can internalize the engineering experience behind the test data.
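
As an illustration of the optimization scheme named in the abstract, the sketch below trains a small neural network classifier with a plain global-best particle swarm over its weights, on synthetic two-stress data with a hypothetical "normal running" boundary. The real CMG test data, the network architecture, and the exact PSO-BP combination (PSO plus backpropagation) used by the authors are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the running-status data: two normalized stress
# inputs and a binary label (1 = normal running, 0 = abnormal).
X = rng.uniform(-1, 1, (300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.6).astype(float)   # hypothetical boundary

def unpack(theta, n_in=2, n_hidden=6):
    """Map a flat particle position to the weights of a 2-6-1 network."""
    i = 0
    w1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    w2 = theta[i:i + n_hidden]; i += n_hidden
    return w1, b1, w2, theta[i]

def predict(theta, X):
    w1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def loss(theta):
    p = np.clip(predict(theta, X), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))   # cross-entropy

# Plain global-best PSO over the network weights.
dim = 2 * 6 + 6 + 6 + 1
n_particles, inertia, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.normal(0, 0.5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

acc = np.mean((predict(gbest, X) > 0.5) == (y > 0.5))
print(f"training accuracy of the PSO-trained network: {acc:.3f}")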

09:30-10:50 Session 15F: H-workload: Human mental workload in safety critical applications
Chairs:
Ivan Gligorijevic (MbrainTrain, Serbia)
Sam Cromie (Trinity College Dublin, Ireland)
Location: LG-21
09:30
Valeria Villani (Department of Sciences and Methods for Engineering (DISMI), University of Modena and Reggio Emilia, Italy)
Marta Gabbi (Department of Sciences and Methods for Engineering (DISMI), University of Modena and Reggio Emilia, Italy)
Lorenzo Sabattini (Department of Sciences and Methods for Engineering (DISMI), University of Modena and Reggio Emilia, Italy)
Detecting mental and physical fatigue from heart rate variability
PRESENTER: Valeria Villani

ABSTRACT. In this study we consider the problem of monitoring the operator's condition in human-robot collaboration scenarios. Specifically, we aim to detect any mental or physical fatigue they might be experiencing in the workplace. To achieve this goal, we focus on physiological monitoring by means of a wearable device that measures cardiac activity. The ultimate aim is to let the robot be aware of the operator's status and adapt its behavior accordingly.
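
A minimal sketch of the kind of cardiac features such monitoring typically relies on: time-domain heart rate variability metrics (SDNN, RMSSD) computed from RR intervals. The RR series and the rested/fatigued contrast are synthetic, and the paper's actual feature set and fatigue detection rule may differ.

import numpy as np

def hrv_features(rr_ms):
    """Time-domain HRV features from a series of RR intervals (milliseconds)."""
    rr = np.asarray(rr_ms, float)
    return {
        "mean_hr_bpm": 60_000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),                       # overall variability
        "rmssd_ms": np.sqrt(np.mean(np.diff(rr) ** 2)),  # short-term variability
    }

# Hypothetical RR series from a wearable cardiac sensor (values in ms).
rng = np.random.default_rng(2)
rr_rested = 800 + rng.normal(0, 45, 300)     # lower heart rate, higher variability
rr_fatigued = 750 + rng.normal(0, 20, 300)   # higher heart rate, lower variability
print("rested:  ", hrv_features(rr_rested))
print("fatigued:", hrv_features(rr_fatigued))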

09:50
Bilal Alam Khan (Trinity College Dublin, Ireland)
M. Chiara Leva (TU Dublin, Ireland)
Sam Cromie (Trinity College Dublin, Ireland)
Design of Simulated Takeover Request Task
PRESENTER: M. Chiara Leva

ABSTRACT. As the population of older adults is increasing worldwide, it is imperative to accommodate their mobility, and hence independence, needs in the design of our transport systems. Semi-autonomous vehicles theoretically provide a potential solution to the mobility needs of older adults by supporting them in their driving. However, the technology is not fully mature yet and still poses issues such as the takeover request paradigm. This particular scenario may very well cause more problems than it solves. The major issue is that we do not know how a person above 60 years of age may respond to a takeover request, and whether it could be a very error-prone situation for any driver, especially those in older age groups. In this study, we are developing a task that mimics the fundamentals of the takeover request scenario and is based on executive functions of the brain. It could thus help in predicting performance in a semi-autonomous vehicle, with the idea of verifying which key processes lead to error and/or good performance and whether there are specific preconditions to be checked for takeover requests to be more likely to succeed.

10:10
Bojana Bjegojevic (Technological University Dublin, Ireland)
Maria Chiara Leva (Technological University Dublin, Ireland)
Nora Balfe (Irish Rail; Trinity College Dublin, Ireland)
Sam Cromie (Trinity College Dublin, Ireland)
Physiological Indicators for Real-Time Detection of Operator’s Attention

ABSTRACT. Attention is a safety-critical operator ability that needs to be sustained over the course of specific tasks. However, many factors such as cognitive underload or overload, stress, fatigue, HMI quality, etc. can cause attention to drift away from the task. Having real-time indicators of the operator's attention could increase the safety of any human-operated system. The recent industrial deployment of driver-monitoring systems has demonstrated the possible use of certain physiological and behavioural metrics as indicators of attention. However, it is unclear how sensitive and accurate these metrics are in detecting attention-related changes. This paper aims to provide a brief review of potential real-time proxy indicators of attention and to present an experiment design for assessing their suitability and sensitivity using performance metrics as a benchmark. Several variables identified in the literature are presented, each associated with a particular aspect of attention. They are grouped into electroencephalography-, eye-tracking-, and electrocardiography-based variables. The experiment devised to test these variables involves a computer-based task designed to induce varying degrees of task load and to evoke different attentional requirements. It allows the recording of different individual performance metrics. The relationship between performance and physiological indicators will be tested and compared across different attentional requirement and task load conditions. Real-time indices of attention have important safety implications, such as providing immediate feedback to the operator or predicting attentional lapses.

10:30
Maren Eitrheim (NTNU, Institute for Energy Technology (IFE), Norway)
Markus Log (NTNU, Norway)
Trude Tørset (NTNU, Norway)
Tomas Levin (Norwegian Public Road Administration, Norway)
Trond Nordfjærn (NTNU, Norway)
Driver workload in truck platooning: Insights from an on-road pilot study on rural roads
PRESENTER: Maren Eitrheim

ABSTRACT. Truck platooning is expected to enable safer, greener, and more efficient road freight transport. Most platooning studies have been performed in ideal conditions on highways. The current pilot study investigated driver workload in partially automated three-truck platoons on rural roads. The trucks were operated by professional drivers, along a 380 km route in Northern Norway. The route traversed high-quality road sections, as well as several sections with challenging features, such as sharp turns, steep inclines, and narrow tunnels. Two different self-report measures of workload were used. Single-item ratings for 10-minute driving periods appeared to be sensitive to variable road and driving conditions. The NASA Task Load Index, which assessed 1-hour driving periods, showed global scores comparable to previous driving studies and indicated that participants got accustomed to the test situation and platooning over time. The current pilot study showed promising results in terms of identifying sensitive and non-intrusive techniques to assess driver workload in real-world conditions. Such studies are needed to discern the impacts of platooning systems, truck positions, and specific road features on platoon driver workload. Future studies should also establish the predictive validity of self-report measures as a basis for changing regulations and prioritizing infrastructure projects to support safe truck platooning operations.

09:30-10:50 Session 15G: Maintenance Modeling and Applications I: Degradation models
Chair:
Massimiliano Giorgio (Università di Napoli Federico II, Italy)
Location: LG-22
09:30
Nicola Esposito (Université d'Angers, France)
Bruno Castanier (Université d'Angers, France)
Massimiliano Giorgio (Università di Napoli Federico II, Italy)
An adaptive hybrid maintenance policy for a gamma deteriorating unit in the presence of random effect.
PRESENTER: Nicola Esposito

ABSTRACT. In this paper, we propose an adaptive hybrid age/condition-based maintenance policy for units whose degradation path can be modelled via the gamma process with random effect proposed in [1]. As in [2], the maintenance policy consists in measuring the degradation level of the unit at a first (age-based) inspection time, and in using a condition-based rule to decide whether to immediately replace the unit or to postpone its replacement to a future time. Yet, with respect to [2], in this paper we assume that this latter replacement time may be planned based on the outcome of the inspection. The optimal maintenance policy is defined by minimizing the long-run average cost rate. After each replacement the unit is considered as good as new. The lifetime of the unit is defined by using a failure threshold model. It is assumed that failures are not self-announcing and that failed units can continue to operate, albeit with reduced performance and/or additional costs. Maintenance costs are computed accounting for the preventive replacement cost, corrective replacement cost, inspection cost, logistic cost, and downtime cost (which depends on the time spent in a failed state).

References
1. Lawless J. and Crowder M., 2004. Covariates and random effects in a gamma process model with application to degradation and failure. Lifetime Data Analysis, 10(3), pp. 213-227.
2. Esposito N., Mele A., Castanier B., and Giorgio M., 2021. A hybrid maintenance policy for a deteriorating unit in the presence of random effect and measurement error. Proceedings of the 31st European Safety and Reliability Conference (ESREL 2021), Research Publishing Services, Singapore.
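
For readers who want to experiment with the policy structure described in the abstract, the following Monte Carlo sketch evaluates a simplified version of it (fixed postponement time, no logistic cost, and illustrative process and cost parameters chosen by us, not taken from the paper) by renewal-reward simulation of a gamma process with a unit-specific random scale.

```python
# Simplified sketch of the hybrid age/condition-based policy: inspect at t1, replace
# at once if the observed degradation exceeds a limit m, otherwise postpone the
# replacement to t2. The long-run cost rate is estimated by renewal-reward.
import numpy as np

rng = np.random.default_rng(1)
a, k, lam0 = 1.0, 4.0, 4.0          # shape rate; unit rate ~ Gamma(k, scale=lam0/k) (random effect)
L = 10.0                            # failure threshold
t1, t2, m = 4.0, 8.0, 6.0           # inspection time, postponed replacement time, condition limit
c_i, c_p, c_c, c_d = 50.0, 500.0, 2000.0, 100.0   # inspection, preventive, corrective, downtime/unit time
dt = 0.1

def simulate_cycle():
    scale = 1.0 / rng.gamma(k, lam0 / k)    # unit-specific random effect on the scale parameter
    n1, n2 = round(t1 / dt), round(t2 / dt)
    x, t_fail, cost = 0.0, None, 0.0
    for step in range(1, n2 + 1):
        x += rng.gamma(a * dt) * scale      # stationary gamma increment over dt
        t = step * dt
        if t_fail is None and x >= L:
            t_fail = t                      # failure is hidden (not self-announcing)
        if step == n1:
            cost += c_i
            if x >= m:                      # condition-based rule at the inspection
                end = t1
                break
    else:
        end = t2                            # replacement postponed to the second time
    failed = t_fail is not None and t_fail <= end
    cost += c_c if failed else c_p
    if failed:
        cost += c_d * (end - t_fail)        # downtime cost while operating in a failed state
    return cost, end

cycles = [simulate_cycle() for _ in range(20000)]
print("long-run average cost rate ~", sum(c for c, _ in cycles) / sum(e for _, e in cycles))
```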

09:50
Nicola Esposito (Université d'Angers - Università di Napoli Federico II, Italy)
Bruno Castanier (Université d'Angers, France)
Massimiliano Giorgio (Università di Napoli Federico II, Italy)
A prescriptive maintenance policy for a gamma deteriorating unit.
PRESENTER: Nicola Esposito

ABSTRACT. Most recent research in the field of maintenance focuses on using increasing amounts of information about the state of the system and its environment to predict future events and make prescriptions about maintenance and operation. These prescriptions take the form of recommendations that not only describe what, how, and when to conduct the maintenance but also advise on how to adjust the system operating conditions for the desired outcome (e.g., see [1]). Following this general idea, this paper suggests a new maintenance policy for a degrading unit that generalizes the one proposed in [2] by including the possibility of influencing the remaining useful life of the unit by changing its usage rate. The suggested policy assumes that an inspection is performed at a prefixed time and that, based on the result of the inspection, it is decided whether to immediately replace the unit or to postpone its replacement to a second predetermined time and possibly adjust its working rate, if this latter option is deemed convenient. After each replacement the unit is considered as good as new. The degradation process of the unit is described by a gamma process. The unit is assumed to fail when its degradation level passes an assigned threshold. It is supposed that failures are not self-announcing and that failed units can continue to operate, albeit with reduced performance and/or additional costs. Maintenance costs are computed considering the cost of preventive replacements, corrective replacements, inspections, logistic costs, downtime costs (which account for the time spent in a failed state), and costs that account for the change of the unit working rate. The latter costs also include the possible penalty incurred by failure to comply with contract clauses. The optimal maintenance policy is defined by minimizing the long-run average cost rate.

References
1. Castanier B. and Lemoine D., 2021. How to use prescriptive maintenance to construct robust master production schedules. Proceedings of the 31st European Safety and Reliability Conference (ESREL 2021), Research Publishing Services, Singapore.
2. Esposito N., Mele A., Castanier B., and Giorgio M., 2021. A hybrid maintenance policy for a deteriorating unit in the presence of random effect and measurement error. Proceedings of the 31st European Safety and Reliability Conference (ESREL 2021), Research Publishing Services, Singapore.
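
A toy numerical illustration of the usage-rate lever discussed above is sketched below; the gamma-process parameters, cost figures, and the assumption that the shape rate scales linearly with the usage rate are ours, not the authors'.

```python
# After observing degradation x1 at the inspection, the shape rate of the gamma process
# is assumed to scale with the chosen usage rate u; the expected cost trades the risk of
# hidden failure before the postponed replacement against a derating penalty.
from scipy.stats import gamma

a, scale, L = 1.0, 1.0, 10.0            # nominal shape rate, scale, failure threshold
t1, t2, x1 = 4.0, 8.0, 6.0              # inspection time, postponed replacement, observed level
c_c, c_p, c_u = 2000.0, 500.0, 300.0    # corrective cost, preventive cost, derating penalty per unit reduction

def expected_cost(u):
    # probability that the remaining margin L - x1 is consumed before t2 at usage rate u
    p_fail = gamma.sf(L - x1, a * u * (t2 - t1), scale=scale)
    return p_fail * c_c + (1 - p_fail) * c_p + c_u * (1.0 - u)

for u in (1.0, 0.8, 0.6):
    print(f"usage rate {u:.1f}: expected cost {expected_cost(u):.1f}")
```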

10:10
Margaux Leroy (Univ. Grenoble Alpes, France)
Laurent Doyen (Univ. Grenoble Alpes, France)
Christophe Bérenguer (Univ. Grenoble Alpes, France)
Olivier Gaudoin (Univ. Grenoble Alpes, France)
Parameter estimation for a degradation-based imperfect maintenance model with different information levels
PRESENTER: Margaux Leroy

ABSTRACT. In this article, technological or industrial equipment subject to degradation is considered. These units undergo maintenance actions, which reduce their degradation level. The paper considers a degradation model with an imperfect maintenance effect. The underlying degradation process is a Wiener process with drift. The maintenance effects are described with an Arithmetic Reduction of Degradation (ARD1) model. The system is regularly inspected and the degradation levels are measured.

Four different observation schemes are considered, differing in whether degradation levels are observed just before or just after maintenance times in addition to between maintenance actions. In each scheme, observations of the degradation level between successive maintenance actions are made. In the first observation scheme, degradation levels are observed both just before and just after each maintenance action. In the second scheme, they are observed just before, but not just after, each maintenance action. Conversely, in the third scheme, they are observed just after, but not just before, the maintenance actions. Finally, in the fourth observation scheme, the degradation levels are observed neither just before nor just after the maintenance actions.

The paper studies the estimation of the model parameters under these different observation schemes. The maximum likelihood estimators are derived for each scheme. Several situations are studied in order to assess the impact of different features on the estimation quality, among them the number of observations between successive maintenance actions, the number of maintenance actions, the maintenance efficiency parameter, and the location of the observations. These situations are used to assess the estimation quality and to compare the observation schemes through an extensive simulation and performance study.
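
The following sketch (our own illustration, not the authors' code) simulates a Wiener degradation path with drift and an ARD1-type maintenance effect and recovers the parameters under the richest observation scheme, where levels are observed between, just before, and just after maintenance; the particular ARD1 convention used (removal of a fraction rho of the degradation accumulated since the previous maintenance) is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, rho = 1.0, 0.5, 0.6      # true drift, diffusion, maintenance efficiency
T, n_maint, n_obs = 5.0, 8, 10      # maintenance period, number of cycles, observations per cycle
dt = T / n_obs

increments, jumps, cycle_gain = [], [], []
x = x_after_prev = 0.0
for _ in range(n_maint):
    for _ in range(n_obs):
        inc = rng.normal(mu * dt, sigma * np.sqrt(dt))   # Gaussian increment of the Wiener process
        increments.append(inc)
        x += inc
    pre = x
    x = pre - rho * (pre - x_after_prev)                 # ARD1-type jump at maintenance
    jumps.append(pre - x)
    cycle_gain.append(pre - x_after_prev)                # degradation accumulated during the cycle
    x_after_prev = x

inc = np.array(increments)
mu_hat = inc.mean() / dt                                       # drift estimate from the increments
sigma_hat = np.sqrt(((inc - mu_hat * dt) ** 2).mean() / dt)    # diffusion estimate
rho_hat = np.sum(jumps) / np.sum(cycle_gain)                   # efficiency from the observed jumps
print(f"mu ~ {mu_hat:.2f}, sigma ~ {sigma_hat:.2f}, rho ~ {rho_hat:.2f}")
```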

10:30
Matthieu Roux (CentraleSupélec, France)
Anne Barros (CentraleSupélec, France)
Yi-Ping Fang (CentraleSupélec, France)
Impact of imperfect monitoring on the optimal condition-based maintenance policy of a single-item system
PRESENTER: Matthieu Roux

ABSTRACT. In maintenance planning, condition-based maintenance (CBM) policies leverage the observation of the current condition of a degrading system to optimize the planning of future maintenance interventions. Researchers often make the strong assumption that remote sensors (potentially coupled with analytic technologies) perfectly capture the health condition of an industrial asset. For many reasons, however (e.g., feasibility, cost), such an assumption is inexact, and the optimization model should take into account the inaccuracy of the monitoring system. This work focuses on a single-item system, continuously but imperfectly monitored, to illustrate how partially observable Markov decision processes (POMDPs) enable this inaccuracy to be considered within the maintenance optimization model. In this framework, the decision-maker minimizes the total discounted cost over an infinite time horizon by taking one maintenance decision at each time step. For the use case we consider, we model three types of interventions: (i) preventive maintenance (PM), (ii) corrective maintenance (CM), and (iii) perfect inspection (I). A resource constraint is also added to the model to limit the availability of the repair crew. To solve the POMDP, we implement a point-based value iteration (PBVI) algorithm to compute the optimal CBM policy via approximate dynamic programming. In order to guide investment decisions in monitoring technologies, we develop a framework to quantify and compare the value of information provided by different monitoring systems with known average performances. Finally, we analyze how the monitoring quality impacts the structure of the optimal CBM policy.
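
The core mechanism that lets a POMDP absorb monitoring inaccuracy is the Bayes filter over the hidden degradation state; the following minimal sketch illustrates it with made-up transition and observation matrices (the PBVI solver and the cost structure of the paper are not reproduced).

```python
import numpy as np

P = np.array([[0.90, 0.10, 0.00],      # transition matrix over states {good, degraded, failed}
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
O = np.array([[0.80, 0.15, 0.05],      # P(observation | true state): rows = true states,
              [0.20, 0.60, 0.20],      # columns = (imperfectly) observed condition classes
              [0.05, 0.25, 0.70]])

def belief_update(b, obs):
    """Predict with the transition matrix, then correct with the observation likelihood."""
    b_pred = b @ P
    b_post = b_pred * O[:, obs]
    return b_post / b_post.sum()

b = np.array([1.0, 0.0, 0.0])          # start in the good state with certainty
for obs in [0, 1, 1, 2]:               # sequence of noisy condition readings
    b = belief_update(b, obs)
    print(np.round(b, 3))              # the maintenance decision would be taken on this belief
```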

09:30-10:50 Session 15H: Maritime and Offshore Technology: component inspection
Chair:
Mario Brito (University of Southampton, UK)
Location: CQ-105
09:30
João Pedro B. Cuba (University of Sao Paulo - USP, Brazil)
Leonardo O. Barros (Research and Development Center of Petrobras – CENPES – Petrobras, Brazil)
Rene T. C. Orlowski (Research and Development Center of Petrobras – CENPES – Petrobras, Brazil)
Marcelo R. Martins (University of Sao Paulo - USP, Brazil)
Adriana M. Schleder (University of Sao Paulo - USP, Brazil)
Methodology to Define the Probability of Detection of Non-destructive Techniques in an Offshore Environment

ABSTRACT. Equipment and component failures are usually related to a combination of conditions, such as inappropriate design, improper use, or the presence of discontinuities in materials such as cracks, pores, and corrosion. In order to avoid failures predominantly caused by discontinuities, non-destructive techniques (NDT) are used to obtain information on the level of deterioration of a component through the detection of discontinuities, and they aim to assist decision making about the need for repair or replacement. The most common method for quantifying, and therefore evaluating, the reliability of an NDT is the use of probability of detection (PoD) curves. These PoD curves are built mainly from the cumulative detection probability function for different discontinuity sizes, although many other parameters and factors influence this measure. Originally the development of PoDs was directed at the detection of cracks; afterwards, PoDs were developed with other focuses, such as the detection of corrosion, impact damage, and delamination. Information about these applications is very rare in the literature, and the development of these PoDs is a complex task. Therefore, some authors propose to quantify the detection capacity of a technique, method, or inspection in a qualitative way. This paper proposes a methodology to define the detection probabilities of an NDT when applied in different scenarios. Considering that an NDT has the ability to detect a discontinuity or a failure mode, it is possible to correlate the detection of a failure mode with a specific technique and, through a scenario analysis, define its PoD. It is worth mentioning that the methodology was designed to be applied to subsea systems; however, it is believed that in future works its application can be expanded to other sectors and scenarios of interest.
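
As background, the conventional quantitative counterpart of the qualitative PoDs discussed above is the hit/miss PoD curve, typically a logistic model in log defect size; the sketch below fits such a curve to synthetic data (sizes, parameters, and the a90 read-out are illustrative only, not the paper's methodology).

```python
# Hit/miss PoD curve: logistic regression of detection outcome on log defect size,
# with a90 = size detected with 90% probability. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
size = rng.uniform(0.5, 10.0, 300)                           # defect size, e.g. mm
p_true = 1 / (1 + np.exp(-(np.log(size) - np.log(3.0)) / 0.3))
hit = rng.random(300) < p_true                               # synthetic hit/miss outcomes

model = LogisticRegression().fit(np.log(size).reshape(-1, 1), hit)
b0, b1 = model.intercept_[0], model.coef_[0, 0]
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)                  # size with PoD = 90%
print(f"estimated a90 ~ {a90:.2f} mm")
```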

09:50
Geir Hamre (DNV, Norway)
Yanzhi Chen (DNV, China)
Computer vision for remote inspection
PRESENTER: Geir Hamre

ABSTRACT. Classification societies ensure that ships comply with safety standards via regular surveys. This work includes having human surveyors enter vessel cargo and ballast tanks as part of inspecting the integrity of the vessel hull. As these tanks can be tens of meters tall, lack oxygen, and contain toxic gases, tank inspections are potentially hazardous work. Recent years have witnessed an increasing interest in using remote inspection technologies. By reducing the need to send human surveyors into the ship tanks, remote inspection offers the potential for lower risk to humans, as well as for automating the inspection work.

Remote inspections generate large amounts of data, and looking for defects in such volumes of inspection data can be a tedious task for a human. In this paper, the use of computer vision algorithms for detecting potential defects in recorded data is presented. Two individual algorithms have been developed: i) detecting corroded areas in a tank and evaluating the coating condition, and ii) detecting cracks. On the selected hull image test set, the algorithm for evaluating coating condition performs on par with human surveyors, while the crack detection algorithm performs on a level comparable to human surveyors on the selected crack dataset when considering the recall. Although impressive computer vision results have been demonstrated in various domains, human surveyors will still be required to make the final decisions.
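
For reference, the recall (and precision) figures mentioned above are computed as in the following toy example with placeholder labels; it is not DNV's evaluation code.

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 0, 1, 0, 0, 1, 0]   # surveyor ground truth per image patch (1 = crack)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # algorithm detections
print("precision:", precision_score(y_true, y_pred))   # fraction of detections that are real cracks
print("recall:   ", recall_score(y_true, y_pred))      # fraction of real cracks that were detected
```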

10:10
Ana Cláudia Negreiros (Universidade Federal de Pernambuco, Brazil)
Isis Lins (Universidade Federal de Pernambuco, Brazil)
Caio Maior (Universidade Federal de Pernambuco, Brazil)
Márcio Moura (Universidade Federal de Pernambuco, Brazil)
A Web Application for Oil Spill Detection on Images using Feature Extraction and Machine Learning

ABSTRACT. Rapid oil spill detection is essential to prevent catastrophic effects, especially in marine areas. In this context, we developed a web application on Streamlit that applies a feature extractor based on a novel q-Exponential probabilistic distribution (q-EFE) to images and classifies these images into one of two classes: “with oil spill” or “without oil spill”. Firstly, the q-EFE is applied to capture important information from the images; then a machine learning model, a Support Vector Machine (SVM), is used to classify the images. In this process, users can choose options that affect the classification results for each image. For example, they can select the grayscale conversion method (“Discrete Scale” or “Continuous Scale”) and the image size (64x64 or 256x256); they then input one or more images and the results are displayed. Users can see the images' corresponding feature maps (the output of the q-EFE) that feed the SVM. The machine learning model, in turn, provides the images' classification as well as the probabilities of belonging to one class or the other. If an oil spill is detected, measures can be taken to avoid greater negative impacts on the environment. The idea is to deliver a technological product capable of helping in fast oil spill identification using a simple device, such as a web application available on any computer. This procedure can be done online in a fast, cheap, and accurate way.
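
The classification stage described above can be sketched as follows; since the q-EFE extractor is specific to the paper, placeholder feature vectors stand in for its output, and the numbers are illustrative only.

```python
# SVM classification of pre-extracted image feature vectors into "oil spill" vs
# "no oil spill". Random placeholder features replace the q-EFE feature maps.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 64))             # placeholder flattened feature maps
y = rng.integers(0, 2, 200)                # 1 = with oil spill, 0 = without

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]      # probability of the "oil spill" class per image
print("accuracy:", clf.score(X_te, y_te))
```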

10:30
Paulo Siqueira (Federal University of Pernambuco, Brazil)
Márcio Moura (Federal University of Pernambuco, Brazil)
Heitor Duarte (Federal University of Pernambuco, Brazil)
José Marinho (Federal University of Pernambuco, Brazil)
Maria Luisa Gois (Federal University of Pernambuco, Brazil)
Beatriz Brandão (Federal University of Pernambuco, Brazil)
Quantitative Ecological Risk Assessment for Potential Oil Spills Near Fernando de Noronha Archipelago
PRESENTER: Paulo Siqueira

ABSTRACT. Despite the efforts of maritime authorities to enhance ship safety, marine accidents such as oil spills still occur and have caused significant damage to ecological environments (animals and plants). More specifically, in Brazil, the Fernando de Noronha Archipelago (FNA) has Conservation Unit status, protecting endemic species and maintaining a healthy island ecosystem. Furthermore, FNA lacks infrastructure and mitigation plans for such accidents, so the consequences of a spill can be intensified. Thus, an effective method to assess ecological risks is required. The methodology used to quantify such risks relies on a stochastic population model to consider extreme and rare events, such as oil spills. It accounts for the frequency of occurrence and the magnitude of the consequences of the accidents. Other models can be integrated into the assessment, such as the simulation of the fate and transport of the oil plume in the ocean and a Bayesian-based method to improve the frequency estimates by aggregating information from historical records and experts' opinions. Hence, it can quantify the risks as a probability of extinction or decline of a representative environmental species. We can summarize the results into risk categories to make communication with the general public more straightforward. Finally, the results could provide relevant information to support policies to prevent and cope with potential spills, mitigating their impacts.
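
A stylised sketch of the risk metric described above (probability of population decline under rare spill-induced mortality shocks) is given below; all rates, shock sizes, and thresholds are assumptions for illustration, not values from the study.

```python
# Monte Carlo estimate of the probability that a representative population declines
# below a threshold within a horizon when rare oil spills impose random mortality shocks.
import numpy as np

rng = np.random.default_rng(5)
years, runs = 50, 10000
growth, spill_rate = 1.02, 0.05            # annual growth factor; spills per year (Poisson)
threshold = 0.5                             # decline threshold (fraction of initial population)

declines = 0
for _ in range(runs):
    n = 1.0
    for _ in range(years):
        n *= growth * np.exp(rng.normal(0, 0.05))       # environmental stochasticity
        if rng.poisson(spill_rate):                     # spill year: mortality shock
            n *= 1.0 - rng.uniform(0.1, 0.6)
        if n < threshold:
            declines += 1
            break
print("P(decline below 50% within 50 y) ~", declines / runs)
```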

09:30-10:50 Session 15I: Aeronautics and Aerospace
Chair:
Riccardo Patriarca (Sapienza University of Rome, Italy)
Location: CQ-107
09:30
Juseong Lee (Faculty of Aerospace Engineering, Delft University of Technology, Netherlands)
Sunyue Geng (Faculty of Technology, Policy, and Management, Delft University of Technology, Netherlands)
Mihaela Mitici (Faculty of Aerospace Engineering, Delft University of Technology, Netherlands)
Ming Yang (Faculty of Technology, Policy, and Management, Delft University of Technology, Netherlands)
Designing reliable, data-driven maintenance for aircraft systems with applications to the aircraft landing gear brakes
PRESENTER: Juseong Lee

ABSTRACT. When designing the maintenance of multi-component aircraft systems, we consider parameters such as safety margins (used when component replacements are scheduled) and reliability thresholds (used to define data-driven Remaining-Useful-Life prognostics of components). We propose Gaussian process learning and novel adaptive sampling techniques to efficiently optimize these design parameters. We illustrate our approach for aircraft landing gear brakes. Data-driven Remaining-Useful-Life prognostics for brakes are obtained using Bayesian linear regression. Pareto optimal safety margins for scheduling brake replacements are identified, together with Pareto optimal reliability thresholds for prognostics.
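
A compact illustration of the data-driven RUL idea is sketched below: a Bayesian linear regression of brake wear on usage and an RUL read off at a chosen reliability threshold. The numbers, the known-noise conjugate formulation, and the omission of the Gaussian-process/adaptive-sampling layer are simplifications of ours, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(6)
flights = np.arange(0, 200, 10.0)
wear = 0.02 * flights + rng.normal(0, 0.1, flights.size)     # synthetic wear measurements
limit, reliability = 5.0, 0.95                                # wear limit, prognostic reliability

# Posterior over [intercept, slope] with Gaussian prior N(0, tau^2 I) and known noise sd
tau2, noise2 = 10.0, 0.1 ** 2
X = np.column_stack([np.ones_like(flights), flights])
S = np.linalg.inv(X.T @ X / noise2 + np.eye(2) / tau2)        # posterior covariance
m = S @ X.T @ wear / noise2                                   # posterior mean

samples = rng.multivariate_normal(m, S, 5000)                 # plausible wear trajectories
crossing = (limit - samples[:, 0]) / samples[:, 1]            # flights at which wear hits the limit
rul = np.quantile(crossing, 1 - reliability) - flights[-1]    # conservative RUL at 95% reliability
print(f"RUL at {reliability:.0%} reliability ~ {rul:.0f} flights")
```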

09:50
Marta Woch (Air Force Institute of Technology, Poland)
Justyna Tomaszewska (Polish Air Force University, Poland)
Mariusz Zieja (Air Force Institute of Technology, Poland)
Bandar Alotaibi (Polish Air Force University, Poland)
INFLUENCE OF CARGO ARRANGEMENT ON AIRCRAFT MASS CENTER OF GRAVITY
PRESENTER: Marta Woch

ABSTRACT. In recent years, safety concerns have led to rigorous procedures to ensure "baggage reconciliation", which guarantees that only accompanied baggage is loaded unless a specific additional validation procedure has been followed for each unaccompanied bag. Weight distribution between holds has a significant impact on the centre of gravity of the aircraft. Load distribution is specified on the Loading Instruction Form (LIF) by hold, or by hold compartment in the case of large underfloor hold areas. The main motivation behind this study is to analyse aviation accident history and investigate how accidents have happened due to unbalancing of the aircraft, in which cargo arrangement is a major influencing factor. Many past accidents occurred when the aircraft crashed while taking off, landing, or in flight due to loss of control and instability, and the main reason behind these accidents was a misplaced centre of gravity. Cargo arrangement has a vital influence in this scenario, since baggage and luggage must be stowed in the aircraft in a structured and planned manner. In this research study, we analysed the theory of the centre of gravity and the centre of mass and how the centre of gravity affects the behaviour of an object. The first research question concerned cargo arrangement and its influence on unbalancing the aircraft. To answer it, we studied how the flight dynamics depend on the position of the centre of gravity and the centre of mass of the aircraft. Based on the literature analysis, it was noted that even small changes in the position of the centre of gravity can cause instability. The centre of gravity is the point about which the object balances, and the centre of mass lies at the same point; it determines the stability and performance of the aircraft and is an essential element during flight execution. The second issue raised concerned the distribution of cargo in such a way as to prevent accidents. Using statistical methods and historical data on aircraft accidents, it was noted that the most important element is to secure the cargo so that it cannot shift during the flight; the second most important is to prevent overloading of the aircraft. Finally, the effect of unbalance and instability of the aircraft on its operational capability was analysed. A simulation was carried out in Mathematica that assumed not only different luggage arrangements but also the weight of the fuel in the tanks, and the overloading and over-balancing of the Diamond D20-C1 aircraft was analysed in order to determine its loading limits. In summary, the overloading of the aircraft depends upon its weight and balance, which in turn depend on the weight of the aircraft, the pilots, the number of passengers, and the position of all loaded luggage.
Weight and balance limits are crucial flight factors for any aircraft and flight. Finally, it is worth noting that in order to improve the performance of the Diamond D20-C1 aircraft, all the factors that can cause overloading of the aircraft must be taken into consideration.
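
The basic weight-and-balance relation underlying the study is the moment-weighted average of item positions; the short example below uses hypothetical masses, arms, and limits (not Diamond D20-C1 data).

```python
# Longitudinal centre of gravity as the moment-weighted average of item positions,
# checked against assumed forward/aft limits. All values are illustrative.
items = {                      # mass [kg], arm aft of datum [m]
    "empty aircraft": (780.0, 2.40),
    "pilot + passenger": (160.0, 2.30),
    "fuel": (90.0, 2.60),
    "baggage": (30.0, 3.10),
}
total_mass = sum(m for m, _ in items.values())
cg = sum(m * arm for m, arm in items.values()) / total_mass
fwd_limit, aft_limit = 2.35, 2.50
print(f"mass {total_mass:.0f} kg, CG {cg:.3f} m ->",
      "within limits" if fwd_limit <= cg <= aft_limit else "OUT OF LIMITS")
```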

10:10
Axel Berres (DLR, Germany)
Sacha Lübbe (DLR, Germany)
Michael Schäfer (DLR, Germany)
Viola Voth (DLR, Germany)
Comparing distributed and integrated Hazard Identification Processes
PRESENTER: Axel Berres

ABSTRACT. The operational risk of a safety-critical aviation system can be minimized if ARP 4754 by SAE (1995) is consistently applied during development, as recommended. This standard supports the identification and evaluation of hazards through, among others, Functional Hazard Analysis (FHA) and Fault Tree Analysis (FTA). As a result of these analyses, the risk of each identified hazard is assessed. In addition, appropriate measures can be identified to mitigate the operational risk. Despite this proven approach and all the care taken in development, catastrophic accidents such as the Lion Air flight 610 crash of a Boeing 737 MAX in 2018 can occur. Could these accidents have been prevented?

A standard can only specify what can be done to operate a system as safely as possible according to the current state of knowledge. Unfortunately, there is no universal answer as to how the analyses should be carried out in detail or what quality of data and information is required. However, one requirement from the standard is transparency, i.e., the analyses performed must be traceable and plausible. This means that an expert must be able to arrive at the same analysis results based on the given data and the applied process. Furthermore, questions about ambiguities should be answerable satisfactorily. In the case of certification of a system, it should also be shown that the proposed risk reduction measures are effective.

The analyses can be carried out in different ways. For the FHA, an Excel template as described in Berres (2021) can be used; this approach is well suited to heterogeneous development environments. For a homogeneous environment, a commercial tool such as Cameo Systems Modeller can be extended. The tool supports system modeling with SysML, and the modeling language can be extended by using profiles. By using an FHA profile as described in Schaefer (2021), SysML was extended for the FHA, and the possible hazards of a new aviation system were identified and evaluated.

Based on the demonstrated procedures, the advantages and disadvantages will be discussed and a better understanding of the different development environments will be shown. Additionally, it will be shown how the collaboration between the different disciplines can be improved.

10:30
Alexandre Magno Ferreira de Paula (Universidade Católica de Petrópolis - UCP, Brazil)
José Cristiano Pereira (Universidade Católica de Petrópolis - UCP, Brazil)
Giovane Quadrelli (Universidade Católica de Petrópolis - UCP, Brazil)
Probabilistic Risk Analysis in the Overhaul of Aero-engines Using a Combination of Bayesian Networks and Fuzzy Logic Aiming at Meeting Civil Aviation Agency Regulations and the Requirements AS9100 - A Case Study.

ABSTRACT. As technology advances over the years, its complexity increases proportionally. These advances bring new frontiers of performance, efficiency, and sustainability, and with them new risks are identified in the production of these technologies. In aircraft engine maintenance activities, identifying and responding to risks is fundamental, since an engine failure during a flight may cause a forced landing and, tragically, claim several lives. This reality makes it essential to monitor, identify, and prioritize risk treatment during aero-engine maintenance. What makes prioritizing risk treatment both essential and complex is the high number of risks identified, as usually happens in most repair stations. This was observed at a major aero-engine repair station located in South America. The authors conducted a literature review on probabilistic risk analysis in scientific databases, associated with a case study at this repair station. A method to combine risks identified in different ways into a single model, in order to estimate the overall risk score of the operation, is proposed. The objective is to embed the model into the Safety Management System to meet Civil Aviation Agency regulations and the AS9100 requirements. As a result, a Bayesian network modeling method integrated with fuzzy logic is proposed to combine risks generated from different sources. It assists in prioritizing decision-making in the treatment of operational risks in turbofan aero-engine maintenance, making the investment of resources in the treatment of identified risks more efficient and, consequently, supporting continuous quality improvement and the optimization of operational safety.

 

Full paper available here: https://rpsonline.com.sg/rps2prod/esrel22-epro/pdf/R01-01-082.pdf

09:30-10:50 Session 15J: Natural hazards on critical infrastructure: impacts and recovery
Chair:
Marcelo Alencar (Universidade Federal de Pernambuco, Brazil)
Location: CQ-010
09:30
Marianna Loli (University of Surrey, UK)
John Manousakis (Elxis Group, Greece)
Stergios Mitoulis (University of Surrey, UK)
Dimitrios Zekkos (University of California, Berkeley, United States)
Rapid damage assessment and monitoring of bridge recovery after a Mediterranean Hurricane
PRESENTER: Marianna Loli

ABSTRACT. Strong winds and heavy rainfall hit western and central Greece on 17 – 18 September 2020 as the Mediterranean Hurricane (Medicane) “Ianos” made its catastrophic passage through the country. Widespread flooding caused landslides and debris flows, while the erosive forces of water washed away foundation supports and earthworks along swelling rivers, impacting buildings, transport infrastructure, and powerlines. The town of Mouzaki, in Central Greece, was one of the hotspots of the event. All five bridges that exist in the area, within a radius of 3 km from the town’s centre, suffered extensive damage or complete failure, leading to disruption of transportation and month-long isolation of local communities. The paper presents select outcomes of a comprehensive reconnaissance study where aerial photography and mapping by Unmanned Aerial Vehicles (UAVs) were used to enhance conventional field investigation and archival research for damage characterization, assessment and recovery. Three-dimensional models of select bridge structures are presented and prominent failure patterns are discussed with reference to key response factors and structural characteristics based on numerical modelling of the mechanical response and CFD simulations of flow-structure interaction. The paper provides a kaleidoscopic insight into bridge response to a landmark flood event, where an abundance of perishable data was collected in a timely and systematic manner. A universal inadequacy to withstand this flood is identified, raising concerns over what appears to be a new norm of climate-exacerbated intense weather events in the Mediterranean. Although the use of UAV mapping in disaster reconnaissance has been common practice in the past years, this is a unique case study where UAVs were also employed for monitoring of bridge recovery in a multi-hazard environment. Comparison of imagery captured at different stages after the event allowed inspection of restorations with minimal human interaction amid the COVID-19 pandemic. Furthermore, it proved useful for the rapid assessment of the impact of a moderate seismic event that affected the damaged structures only a few months after the Medicane. The approach presented provides an effective solution for transport operators managing assets in an environment of increasingly severe and complex hazards.

09:50
Nicolas de Albuquerque (Universidade Federal de Pernambuco (UFPE), Brazil)
Lucas da Silva (Universidade Federal de Pernambuco (UFPE), Brazil)
Marcelo Alencar (Universidade Federal de Pernambuco (UFPE), Brazil)
Adiel de Almeida (Universidade Federal de Pernambuco (UFPE), Brazil)
Prioritizing urban shelters to combat flood disasters with a multidimensional decision model

ABSTRACT. The urban system is dynamic and complex, reflecting how postmodern societies must face new challenges preventively. The damaging consequences of natural hazards, of which floods are the most recurrent, are a major concern for public managers worldwide seeking to adapt these areas for the future. This includes the need to improve emergency planning against hydrological disasters, the most recurrent events driven by climate effects. From this perspective, this work seeks to support urban shelter location under flood events from a multidimensional and georeferenced perspective. To do so, it is worth noting the benefits of a multidimensional decision model that deals with many flood impacts, often conflicting with each other, in order to establish effective protocols to combat these extreme events. In light of the risk-based context, this paper considers different issues (economic, route feasibility, capacity, and the number of evacuees) to obtain a ranking of the main potential locations for structuring temporary emergency shelters and planning assistance actions in affected areas. From this broad range of information, scholars, monitoring agencies, industrial sectors, and others are expected to gather their efforts and expertise and then share the benefits of flood risk management practices.

10:10
William Lair (EDF R&D, France)
Grégory Michel (EDF R&D, France)
François Meyer (Enedis, France)
Hélène Decroix (Enedis, France)
Marc Chapert (Enedis, France)
Windy Smart Grid : Forecasting the impact of storms on the power system
PRESENTER: William Lair

ABSTRACT. The passage of the storms Lothar and Martin in 1999 showed the vulnerability of the overhead electricity distribution network to climatic hazards (3.4 million customers cut off following the passage of these storms). Since then, Enedis and the EDF Group's R&D have undertaken various projects to better understand storm phenomena and reduce their impact on supply continuity. Part of this work consists in developing tools to anticipate the impact of storms in order to pre-mobilize intervention teams as soon as possible and thus reduce the duration of customer outages.

Windy Smart Grid is a decision support software tool that responds to this problem. The tool uses Météo-France forecasts and calculates the expected number of incidents on the medium-voltage network. The weather forecasts come from the ARPEGE model; they are hourly forecasts interpolated on a 0.1°x0.1° grid. The user can choose the level at which the calculations are performed: department, region, or the whole of France. The tool also calculates the evolution of the number of incidents as a function of time at hourly intervals. The forecasts of the number of incidents are given with a confidence interval, which allows the uncertainty to be appreciated. The calculation engine is based on a Cox model whose parameters have been estimated from a database containing past incidents, past weather forecasts, and network description data. When the number of forecast incidents exceeds a threshold, an alert is sent by email to the users. The calculation engine has been coded in R and the graphical interface uses the RShiny solution.
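
The operational engine is an R/RShiny tool; purely to illustrate the principle of scaling a baseline incident rate by an exponential term in the weather covariates and attaching an uncertainty interval, a back-of-the-envelope sketch with made-up coefficients and grid values is given below.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
base_rate = 0.02                                  # incidents per grid cell per hour, calm weather (assumed)
beta_gust, beta_rain = 0.02, 0.02                 # illustrative covariate effects
gust = rng.uniform(40, 120, size=500)             # forecast gusts (km/h) on a 0.1 deg grid
rain = rng.uniform(0, 10, size=500)               # forecast rainfall (mm/h)

# proportional-hazards-style scaling of the baseline rate by the weather covariates
rates = base_rate * np.exp(beta_gust * np.clip(gust - 60, 0, None) + beta_rain * rain)
expected = rates.sum()
low, high = poisson.ppf([0.05, 0.95], expected)   # simple 90% interval on the incident count
print(f"expected incidents next hour ~ {expected:.0f} (90% interval {low:.0f}-{high:.0f})")
```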

This tool was tested on different storms in 2020 (Alex, Barbara, and Bella) and gave good results, justifying its deployment in an experimental phase since September 2021. During the Aurora storm (October 2021), a first alert was issued 2 days before the beginning of the weather event. The forecasts contributed significantly to the decision to pre-mobilize resources. Finally, this innovation was the subject of a patent application.

10:30
Vadim Bobrovskiy (Politecnico di Milano, Italy)
Paolo Trucco (Politecnico di Milano, Italy)
Alexey Kaplin (Cosmetecor UK Ltd., UK)
Towards a new seismic short-term prediction methodology for critical service operators and manufacturing companies against earthquake
PRESENTER: Vadim Bobrovskiy

ABSTRACT. VADIM BOBROVSKIY1 , PAOLO TRUCCO1,ALEXEY KAPLIN2 1School of Management, Politecnico di Milano, Via Lambruschini 4/B, Italy. 2Ccosmetecor UK Ltd., England.

Urban critical infrastructure systems have become more complex and interdependent, and the risk of cascading failures is becoming more serious. Researchers and policy makers have recently focused on estimating the downtime of infrastructure after an earthquake (Zhang, 2009). The downtime is the time required to restore pre-event performance after a disastrous event (Tierney, 1997). Natural disasters can also be devastating to businesses because of their vulnerability in terms of capital, labour, suppliers, and markets. Businesses often report direct physical damage to buildings, equipment, and inventory as a result of a disaster. Because businesses are linked in networks with other firms and urban lifelines, their vulnerability can also be caused by failures of suppliers, such as providers of material, equipment/machinery, and utility lifeline services (i.e., electricity, gas, water, communication) (Kammouh, 2018). Businesses may experience operational problems even if they do not suffer physical damage, and may therefore incur losses due to interruptions from suppliers (Pant, 2014). To date, the study of business and lifeline recovery has focused mostly on construction cost, repair time, mobilization of resources, and decision making (Weng, 2020). A major shortcoming of existing risk assessment approaches is that recovery to the initial (pre-disaster) performance level is established on the basis of long-term post-disaster reconstruction estimates and fails to consider recovery within a short period of time after the disaster. The reason might be that, in terms of the expected annual occurrence of rare catastrophic earthquakes, long-term planning is the best risk reduction option. However, during the short time after an event, companies rely primarily on their limited recovery budgets and the emergency preparedness measures they have taken. Earthquake warning systems can help companies mobilize resources and make informed decisions. However, engineering-based risk models issue alerts based on a magnitude threshold to predict the seismic impact on the built environment. Therefore, the current efforts of the team focus on compensating for this gap by integrating proper seismic risk information to make decisions, against a risk metric, on the mobilization of a wide range of capacities to mitigate earthquake consequences (Xiao, 2022; Cremen, 2022). Here, the seismic risk information must feed a plan of action in which a company is represented as a system of interconnected components and dependencies that support its successful daily operations. This plan will be used to show how overall seismic loss and downtime can be influenced by risk reduction strategies and actions expressed in terms of risk metrics, such as value at risk, or by sets of preparedness actions (outsourcing, risk transfer, etc.). To facilitate the quantitative comparison of seismic risk, risk metrics such as VaR (Goda, 2015) can be obtained for a longer time interval (4-6-12 months), while the re-evaluation of mitigating actions is considered for shorter periods of time (5 days and 12 days). It is worth noting that a comprehensive evaluation of the new seismic risk information in the context of earthquake risk management (i.e., integrating engineering-based risk models, the risk preferences of end-users, seismic loss, and risk metrics) is critically lacking in the literature.
Summarizing: First, we developed a 2-layer LSTM neural network model based on the analysis of electric potential time series at the Earth's surface boundary (Bobrovskiy, 2017) and seismic probability risk maps. The collection of time series covers 20 years of continuous observations. For the simulation case, this results in recognizing, with high probability, whether a large earthquake is going to happen or not, together with the corresponding probability distributions over lead time (5 days and 12 days). The simulated probability for the 5-day interval (with σ(N) as the earthquake lead time) is 83%, and for the 12-day interval (with μ(N) as the earthquake lead time) it is 35%. Over a longer period (6 months), the earthquake risk is considered to increase fourfold.
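
A shape-level sketch of a two-layer LSTM binary classifier over windows of surface electric-potential time series is shown below; the window length, channel count, and layer sizes are assumptions of ours, and the placeholder data obviously do not reproduce the authors' 20-year dataset or trained model.

```python
import numpy as np
from tensorflow.keras import layers, models

window, channels = 288, 4                       # assumed samples per window and electrode channels
model = models.Sequential([
    layers.LSTM(64, return_sequences=True, input_shape=(window, channels)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),      # P(large earthquake within the lead time)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randn(128, window, channels).astype("float32")   # placeholder windows
y = np.random.randint(0, 2, 128)                                # placeholder labels
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```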

Second, we consider the case of a complex interdependence between urban lifelines (electricity) and business operations (a manufacturing company producing car parts) for various earthquake scenarios, accounting for the uncertainties that are an integral part of any earthquake warning system. As a result, the system identifies 1) the optimal action (issue or not issue a warning) for a given event, and 2) the optimal action that can be sensitive to stakeholder risk preferences.

References:
Zhang, Y., Lindell, M. K., and Prater, C. S. (2009). Vulnerability of community businesses to environmental disasters. Disasters, 33(1), pp. 38-57.
Tierney, K. J. (1997a). Business impacts of the Northridge earthquake. Journal of Contingencies & Crisis Management, 5(2), pp. 87-97.
Kammouh, O., Cimellaro, G. P., and Mahin, S. A. (2018). Downtime estimation and analysis of lifelines after an earthquake. Engineering Structures, 173, pp. 393-403.
Pant, R., Barker, K., and Zobel, C. W. (2014). Static and dynamic metrics of economic resilience for interdependent infrastructure and industry sectors. Reliability Engineering & System Safety, 125, pp. 92-102.
Goda, K. (2015). Seismic risk management of insurance portfolio using catastrophe bonds. Computer-Aided Civil and Infrastructure Engineering, 30, pp. 570-582.
Weng, Y. Z. (2021). A Bayesian network model for seismic risk analysis. Risk Analysis, 41(10), pp. 1809-1822.
Xiao, Y., Zhao, X., Wu, Y., Chen, Z., Gong, H., Zhu, L., and Liu, Y. (2022). Seismic resilience assessment of urban interdependent lifeline networks. Reliability Engineering & System Safety, 218, Article 108164.
Cremen, G., Bozzoni, F., Pistorio, S., and Galasso, C. (2022). Developing a risk-informed decision-support system for earthquake early warning at a critical seaport. Reliability Engineering & System Safety, 218, Part A, 108035.
Bobrovskiy, V., Stoppa, F., Nicoli, L., and Losyeva, Y. (2017). Nonstationary electrical activity in the tectonosphere-atmosphere interface retrieved by multielectrode sensors: case study of three major earthquakes in central Italy with M 6. Earth Science Informatics, 10(2), pp. 269-285.

09:30-10:50 Session 15K: S.34 Digitalisation and risk assessment – a new ball game? (promoted by EU-OSHA)

Special Panel session promoted by EU-OSHA titled: Digitalisation and risk assessment – a new ball game?

Chair:
Michael Gillen (European Agency for Safety and Health at work, Ireland)
Location: CQ-006
09:30
Sascha Wischnewski (Federal Institute for Occupational Safety and Health (BAuA), Germany., Germany)
Eva Heinold (Federal Institute for Occupational Safety and Health (BAuA), Germany., Germany)
Patricia Rosen (Federal Institute for Occupational Safety and Health (BAuA), Germany., Germany)
Chances or risks? – Impacts of new technologies and digitalisation on occupational safety and health

ABSTRACT. New and emerging technologies continuously change the way we work. In this context, AI-based systems such as smart information and communication technology, as well as advanced robotics, will play an important role in the near future. We conducted an extensive literature review and interviewed experts to identify opportunities and challenges for these cognitive and physical task support or automation technologies. We aggregated the findings into physical, psychosocial, and organizational impact categories from an occupational safety and health point of view.


09:50
Coen Van Gulijk (TNO Healthy living, Netherlands)
Dynamic risk assessment: can occupational safety and health learn from industry experience?

ABSTRACT. Dynamic risk analysis is proliferating, but the OSH domain benefits only marginally. This is partly due to the complexities found in the OSH domain, as well as a general hesitation among OSH practitioners to engage with AI technologies. But there are more fundamental problems. Safety culture, for example, is notoriously difficult to measure and even harder to influence, and remains fundamentally a human effort. Yet digitalisation and AI could benefit OSH as much as any other safety domain. The authors wish to discuss progress in special session 34: digitalization and risk assessment for OSH.


10:10
Ioannis Anyfantis (European Agency for Safety and Health at Work, Spain., Spain)
Timothy Tregenza (European Agency for Safety and Health at Work, Spain., Spain)
Digitalisation and prevention

ABSTRACT. The European Agency for Safety and Health at Work is running a research programme on digitalisation and occupational safety and health that covers a wide range of topics including advanced robotics and artificial intelligence, flexible work, and online platforms to examine the impacts, both positive and negative, of the rapid changes in working life that digitalisation is producing.


11:10-12:50 Session 16A: Security Assessment
Chair:
Zdenek Vintr (University of Defence, Czechia)
Location: LG-22
11:10
Thomas Termin (Institute for Security Systems, University of Wuppertal, Germany, Germany)
Daniel Lichte (Institute for the Protection of Terrestrial Infrastructures, German Aerospace Center, Germany, Germany)
Kai-Dietrich Wolf (Institute for Security Systems, University of Wuppertal, Germany, Germany)
An Analytic Approach to Analyze a Defense-in-Depth (DiD) Effect as proposed in IT Security Assessment
PRESENTER: Thomas Termin

ABSTRACT. The latest approaches in IT security assessments interpret the Common Vulnerability Scoring System (CVSS) parameters as barriers connected in series. In contrast to the classic multiplicative approach according to CVSS for determining exploitability via numerical values associated with the CVSS parameters, an additive approach is proposed in Braband (2019). Logarithmized CVSS scores are introduced to overcome the computational limitations of ordinal values. The log-score sum across all barriers is sorted on a scale corresponding to a likelihood of exploitability (LoE) category. The CVSS world is not only decomposed and remodeled into a mathematically admissible algorithm, but it also contains an inherent defense-in-depth (DiD) effect: with each barrier added, the LoE decreases. This architectural interpretation can neither be falsified nor confirmed with previous CVSS metrics. Unlike in the IT security domain, tools exist in physical security to compute DiD in an objectively consistent manner. In our paper, we apply these considerations to a physical security setup in order to replicate Braband's systemic modification based on CVSS. In a detailed analysis, we examine the boundary conditions and measures that must be taken in quantitative physical security metrics to emulate the DiD effect from IT security.
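
The additive log-score mechanism and its defence-in-depth effect can be illustrated numerically as follows; the per-barrier probabilities and class boundaries are toy values of ours, not Braband's calibrated tables.

```python
# Each barrier contributes a logarithmic score; scores are summed and mapped to a
# likelihood-of-exploitability (LoE) class. Adding a barrier can only raise the score
# sum, i.e. keep or lower the LoE class -- the defence-in-depth effect.
import math

def log_score(p_exploit):
    """Logarithmic score of a single barrier from its assumed exploitation probability."""
    return -math.log10(p_exploit)

def loe_class(total_score):
    """Map the summed log score to a coarse LoE class (boundaries are illustrative)."""
    if total_score < 1.0:
        return "very likely"
    if total_score < 2.0:
        return "likely"
    if total_score < 3.0:
        return "unlikely"
    return "very unlikely"

barriers = [0.5, 0.3, 0.2]                 # assumed per-barrier exploitation probabilities
total = 0.0
for i, p in enumerate(barriers, start=1):
    total += log_score(p)
    print(f"after barrier {i}: score sum {total:.2f} -> LoE {loe_class(total)}")
```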

11:30
Dustin Witte (University of Wuppertal, Germany)
Daniel Lichte (German Aerospace Center, Germany)
Kai-Dietrich Wolf (University of Wuppertal, Germany)
An Approach to the Consideration of Uncertainties in Cost-Benefit Optimal Design of Physical Security Systems
PRESENTER: Dustin Witte

ABSTRACT. The importance of (physical) security is increasingly acknowledged by society and the scientific community. In light of increasing terrorist threat levels, numerous security assessments of critical infrastructures are conducted and researchers continuously propose new approaches. Moreover, consideration is given to how security measures need to be (re)designed to address the findings of the assessments, taking into account the potentially costly nature of security investments. At the same time, however, assessments suffer from the fundamental problem of inherent uncertainties regarding threats and capabilities of security measures due to little evidence of actual attacks. In this paper, we combine previous work on the concept of security margins with an approach for cost-benefit optimal allocation of available resources considering budgetary constraints to form a three-step approach. In a first step, a security system is assessed for potential vulnerabilities. If such are found, most relevant model parameters are identified on barrier level via sensitivity analysis in a second step. In a third step, security margins are determined for these parameters by optimization, taking into account uncertainties in the assessment as well as cost constraints due to total available budget. The approach is demonstrated using a notional airport structure as an example. The optimization is performed for various budgets to investigate the influence of the budget on system vulnerability and allocation of resources to security measures.

11:50
Giulia Marroni (Department of Civil and Industrial Engineering, University of Pisa, Italy)
Francesco Tamburini (Department of Political Sciences, University of Pisa, Italy)
Andrea Bartolucci (Institute of Security and Global Affairs, Faculty of Governance and Global Affairs, Leiden University, Netherlands)
Sanneke Kuipers (Institute of Security and Global Affairs, Faculty of Governance and Global Affairs, Leiden University, Netherlands)
Wout Broekema (Institute of Security and Global Affairs, Faculty of Governance and Global Affairs, Leiden University, Netherlands)
Valeria Casson Moreno (Laboratory of Industrial Safety and Environmental Sustainability - DICAM, Alma Mater Studiorum – University of Bologna, Italy)
Gabriele Landucci (Department of Civil and Industrial Engineering, University of Pisa, Italy)
Development of equipment fragility models to support the security management of process installations
PRESENTER: Giulia Marroni

ABSTRACT. In the last twenty years, security concerns have become relevant for the process industry, especially for plants that process and store significant quantities of hazardous substances. A successful intentional attack on a chemical facility might result in severe fires, explosions, and toxic dispersion scenarios. Moreover, the impact of such scenarios may escalate towards neighboring units, triggering domino effects. The most consolidated techniques for security risk assessment aimed at evaluating the effectiveness of security countermeasures or physical protection systems (PPS) only provide qualitative or semi-quantitative indications. However, as the credibility of this threat increases, the development of quantitative metrics is essential to enhance the protection against external attacks and to ensure a correct allocation of security-related investments. Vulnerability is a key variable for the security assessment of process facilities, and it represents a weakness that can be exploited by an external agent to perform an attack. Two main factors need to be taken into consideration in vulnerability evaluation: firstly, the performance of the PPS; secondly, vulnerability involves the evaluation of equipment structural integrity in response to different types of attack vectors. Previous studies available in the literature dealing with the vulnerability analysis of process facilities exposed to physical attacks often carried out simplifications in the assessment of equipment structural integrity, assuming a unitary damage probability of equipment in case of a successful attack. Hence, the improvement and implementation of equipment fragility models suitable to deal with intentional impact vectors is critical to support vulnerability estimation and to identify the most critical types of scenarios. In this work, fragility models for process equipment exposed to different impact vectors were reviewed and tailored for the specific analysis of security-related scenarios. Existing vulnerability models for overpressure impact were considered and combined in a comprehensive approach for either military explosives or improvised explosive devices. As far as fire attacks adopting arson devices or incendiary weapons are concerned, the effects of a concentrated heat load on process equipment have been studied using a lumped-parameter model based on a thermal nodes approach. The model enabled the development of specific failure correlations and, consequently, of a fragility model aimed at estimating the failure probability in case of a fire attack with incendiary weapons. A methodology for the quantitative assessment of vulnerability was developed to show the potentialities of the models developed. Vulnerability has been evaluated as the likelihood of attack success, accounting also for the quantitative performance of the PPS. A case study, based on the analysis of an industrial site storing and handling hazardous materials, was defined and analyzed to exemplify the model application. Firstly, vulnerability was evaluated using the improved fragility models; secondly, conventional literature approaches were adopted. The comparison between the two approaches highlighted the influence of equipment resistance on security vulnerability beside the contribution of the PPS in place. Based on the results obtained, the present study contributed to the identification of the more critical security-related scenarios and to defining key strategies for security management in process facilities.
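
A generic example of the kind of fragility curve the paper tailors to intentional attack vectors is the lognormal form below; the median capacity and dispersion are purely illustrative and do not come from the paper.

```python
# Lognormal fragility curve: failure probability of a vessel as a function of the peak
# overpressure, with median capacity P50 and log-standard deviation beta (assumed values).
import numpy as np
from scipy.stats import norm

P50, beta = 60.0, 0.4                       # kPa median capacity, lognormal dispersion

def fragility(overpressure_kpa):
    return norm.cdf(np.log(overpressure_kpa / P50) / beta)

for p in (20, 40, 60, 100):
    print(f"{p:4d} kPa -> P(failure) = {fragility(p):.2f}")
```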

12:10
Rhea C. Rinaldo (German Research Center for Artificial Intelligence (DFKI), Germany)
Dieter Hutter (German Research Center for Artificial Intelligence (DFKI), Germany)
Dependency Graph Modularization for a Scalable Safety and Security Analysis
PRESENTER: Rhea C. Rinaldo

ABSTRACT. Due to the steady development of automated and autonomous vehicles, a growing increase in the number and complexity of the vehicle's internal components and their safety and security requirements can be registered. Various assessment techniques for the posed safety and security requirements exist; however, some of the applied techniques become insufficient for modeling this increased complexity, or the evaluation effort increases heavily. Consequently, existing approaches need to be revised and new approaches, tailored to this use case, developed. Prior to this work, we combined an analytical approach called ERIS and a numerical approach named AT-CARS into a hybrid to reduce the overall complexity of the model through simulation while obtaining realistic results, especially for versatile and sophisticated subsystems such as AI computing nodes. Thereby, the main system is modeled graphically in ERIS as a dependency graph, and dedicated system parts are constituted as subsystems and outsourced to AT-CARS. Although encouraging results could be achieved and modularization properties could be obtained, it was discovered that an analytic evaluation of the subsystems would be more beneficial for specific system structures.

Consequently, this paper explores the system modularization further and views it with regard to the recursive analytical evaluation. We therefore first establish the formal basis of abstraction and modularization of dependency graphs, followed by an adapted evaluation process. Based on this, we discuss the impact of different component dependencies and provide criteria for a well-formed modularization. To show the efficiency and the benefit of this addition for the future evaluation of critical and complex systems, we apply the modularization scheme to an abstracted but realistic model of an autonomous vehicle.

12:30
Tingting Luan (Beijing Institute of Petrochemical Technology, China)
Hongru Li (Beijing Institute of Petrochemical Technology, China)
Yanfang Zhou (Beijing Institute of Petrochemical Technology, China)
Li Tao (Beijing Institute of Petrochemical Technology, China)
Study on the risk assessment of terrorist attacks at MICE events and countermeasures for prevention
PRESENTER: Tingting Luan

ABSTRACT. Owing to the gathering of people, the concentration of economic value, and their wide social influence, MICE events have become key targets for terrorist attacks. While analyzing typical terrorist attacks on exhibitions, this paper focuses on the main risk factors of terrorist attacks in exhibition activities from three dimensions: external terrorist threats, the security vulnerability of exhibitions, and the severity of the harmful consequences of terrorist attacks. On this basis, it constructs a risk assessment index system for terrorist attacks in exhibition activities and uses a Bayesian network to model the risk of terrorist attacks on exhibitions. The conditional probability distributions in the Bayesian network model are determined using a qualitative method based on expert fuzzy evaluation and a quantitative method based on literature research, yielding a fuzzy Bayesian terrorist attack risk assessment model for exhibitions that combines qualitative and quantitative methods. Finally, we analyze the problems existing in China's current exhibition activities in preventing the risk of terrorist attacks and propose relevant counter-terrorism countermeasures.

11:10-12:50 Session 16B: S.21 Joint ESReDA - ESRA Session on Advancements in Resilience Engineering of Critical Infrastructures
Chair:
John Andrews (University of Nottingham, UK)
Location: CQ-008
11:10
Stefan Schauer (AIT Austrian Institute of Technology GmbH, Austria)
Martin Latzenhofer (Austrian Institute of Technology, Austria)
Sandra König (Austrian Institute of Technology, Austria)
Christoph Schmittner (AIT Austrian Institute of Technology GmbH, Austria)
Sebastian Chlup (AIT Austrian Institute of Technology GmbH, Austria)
Application of a Generic Digital Twin for Risk and Resilience Assessment in Critical Infrastructures
PRESENTER: Stefan Schauer

ABSTRACT. Over the last decade, the usage of realistic digital replicas of physical systems, i.e., digital twins, has become quite common in industry. Complex simulation algorithms and increased computing power made it possible to create digital twins of essential assets within industrial companies or critical infrastructures (CIs), e.g., turbines, valves, or others. The data and information gained from such simulations can be used to evaluate the behavior of these assets under specific conditions (e.g., extreme pressure or temperature) and assess the risk of a failure when performing at the limits of normal operation. Similar to physical assets, digital twins can also be built to represent cyber assets, e.g., to mimic the behavior of a CI's SCADA network. Such cyber digital twins can be used to evaluate how the network reacts to high data load or malware attacks. Although the individual digital twins are very helpful, it is not trivial to combine two or more of them to get a holistic view of interdependencies and cascading effects among them. In this paper, we describe how existing digital twins from the physical and cyber domains can be integrated into a Generic Digital Twin (GDT). This GDT is based on information provided by the individual digital twins and brings them into a more abstract form such that their data can be connected. In detail, we specify a model that incorporates the most relevant data from the digital twins of the physical and cyber domains, with a specific focus on capturing the interrelations among them. Hence, the GDT makes it possible to describe the individual and combined behavior of the digital twins in a more generic way and thus provides a holistic overview of the entire CI and its dependencies on other infrastructures. Additionally, we will explain how the GDT can be applied to assess potential risks that could affect the CI, or to assess resilience aspects in the context of those threats. This approach particularly considers the interrelations among the digital twins and the cascading effects stemming from them, thus providing an improved overview of the wide-ranging consequences that potential threats from both the physical and the cyber domain can have.

11:30
Arto Niemi (DLR Institute for the Protection of Maritime Infrastructures, Germany)
Frank Sill Torres (DLR Institute for the Protection of Maritime Infrastructures, Germany)
Evaluation of the proposed European Commission directive on critical entities resilience and its potential to consolidate the resilience terminology
PRESENTER: Arto Niemi

ABSTRACT. The European Commission (EC) has proposed a new directive on critical entities resilience. The aim is to enhance protection and to unify the approaches taken in different member states. The stated novelty of this directive lies in the recognition that protecting the infrastructure alone is not sufficient; it is therefore necessary to reinforce the resilience of the critical infrastructure operators. This paper gives a brief overview of past legislative developments in critical infrastructure protection and attempts to evaluate the impact of the new EC proposal. We base the estimate on impact analyses of past legislation. There are two key findings. First, EC legislation leaves the implementation to the member states, which gives them a certain freedom to interpret the text of EC directives. This has led to heterogeneous adoption of the legislation across member states, and a similarly heterogeneous impact is likely to result from the current proposal. Secondly, EC directives have included mandates for cooperation between member states. These have resulted in member states developing common vocabularies in the focus areas of the directives. In the resilience engineering field, this may have a significant consolidating effect, as technological resilience is still a new concept with some ambiguity around its definition. Our paper discusses this matter and provides evidence that existing legislation has already had a consolidating effect in the resilience engineering field.

11:50
Nour Chahrour (Université Grenoble Alpes, France)
Guillaume Piton (INRAE, France)
Jean-Marc Tacnet (INRAE, France)
Christophe Bérenguer (Grenoble INP, Gipsa-lab, France)
Designing Protection Systems in Mountains for Reduced Maintenance Costs: Claret’s Retention Dam Case Study
PRESENTER: Nour Chahrour

ABSTRACT. Debris retention dams are one type of critical protection structure implemented in torrents in order to provide protection against natural phenomena. They mainly aim at moderating, through their openings, the passage of the flow to the downstream area where elements at risk are located. This is achieved by storing specific volumes of debris material in their upstream debris basins and then, eventually, releasing this volume at a lower discharge. The filling of a debris basin by debris material over time reduces the efficacy of the dam in achieving its functions. Consequently, cleaning maintenance operations of a debris basin should be performed regularly. This requires large monetary budgets, which the State cannot always afford for the management of protection structures in mountains. This paper proposes a model of the passage of debris flows through retention dams. The case of the Claret retention dam in France is considered, for which the proposed model is used to analyze the performance of the dam under its initial and new designs. A numerical analysis, over a period of 50 years, is performed using real data. The obtained results allow the managers of the Claret torrent to determine which of the designs is more favorable in terms of reducing maintenance costs and increasing protection efficacy.
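The comparison of dam designs by long-run maintenance cost can be illustrated with a crude Monte Carlo sketch of basin filling and cleaning, shown below; the event rates, volumes, capacities and costs are hypothetical placeholders, not the Claret case-study data or the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_costs(trap_efficiency, years=50, basin_capacity=20_000.0,
                   clean_cost=150_000.0, n_runs=2_000):
    """Crude sketch: debris flows arrive randomly; a share given by the dam's trap
    efficiency is retained in the basin, which is cleaned once it is full."""
    costs = np.zeros(n_runs)
    for i in range(n_runs):
        stored, cost = 0.0, 0.0
        for _ in range(years):
            n_events = rng.poisson(0.8)                                   # debris flows per year
            volumes = rng.lognormal(mean=8.0, sigma=1.0, size=n_events)   # m3 per event
            stored += trap_efficiency * volumes.sum()
            if stored >= basin_capacity:                                  # basin full -> cleaning
                cost += clean_cost
                stored = 0.0
        costs[i] = cost
    return costs.mean()

# Two hypothetical designs: the new opening lets more material pass safely downstream.
print("initial design, mean 50-year cost:", simulate_costs(trap_efficiency=0.9))
print("new design,     mean 50-year cost:", simulate_costs(trap_efficiency=0.6))
```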

12:10
Rundong Yan (Loughborough University, UK)
Sarah Dunnett (Loughborough University, UK)
John Andrews (University of Nottingham, UK)
Comparison of Resilience vs Traditional Probabilistic Safety Assessment for Nuclear Power Facilities
PRESENTER: Rundong Yan

ABSTRACT. With more than 400 nuclear reactors currently operating in the world and several decades of operational history, nuclear power generation is widely recognised to be a well-established and mature technology. However, due to the potentially catastrophic consequences of nuclear accidents, the safety of such facilities has consistently been of concern to the general public as well as the scientific community. In terms of the reliability of Nuclear Power Plants (NPPs), much work has been carried out using classical risk assessment approaches such as event tree analysis and fault tree analysis. However, these conventional methods have limitations, for example in accounting for the influence of unpredictable events such as earthquakes and tsunamis. Resilience engineering offers a promising alternative. Unlike conventional risk assessment methods, which aim to predict the failure rate or reliability of the system and eliminate the root causes of failure, resilience analysis considers the ability of the system to recover in the presence of failure. Since many extreme events, such as severe weather and earthquakes, are inevitable, resilience analysis aims at enhancing the system's ability to anticipate and absorb unexpected events and to adapt following them. In this paper, a Petri net modelling-based resilience methodology developed by the authors is compared with the traditional Probabilistic Safety Assessment (PSA) methodologies, namely Fault Tree Analysis (FTA) and Event Tree Analysis (ETA), used for nuclear engineering risk assessment. A station blackout (SBO) accident is used as a case study to facilitate the comparison. The advantages and drawbacks of the two methods are compared and conclusions are drawn as to the best of these two options for future exploitation in ensuring the safety of nuclear reactors.
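The contrast between the two viewpoints can be illustrated with a toy calculation: a fault-tree-style estimate of the SBO frequency versus a simple recovery-time question typical of a resilience analysis. All probabilities and times below are illustrative assumptions, not values from the paper or from any plant PSA.

```python
import math

# Fault-tree-style top event: SBO occurs when offsite power is lost AND both
# emergency diesel generators (EDGs) fail (illustrative numbers only).
p_loop = 2e-2        # loss of offsite power over the mission time (assumed)
p_edg_fail = 5e-2    # single EDG fails to start/run on demand (assumed)
beta_ccf = 0.05      # beta-factor common-cause coupling between the two EDGs (assumed)

# Independent-failure part (AND gate) plus the simple beta-factor common-cause contribution
p_both_edg = (1 - beta_ccf) ** 2 * p_edg_fail ** 2 + beta_ccf * p_edg_fail
p_sbo = p_loop * p_both_edg
print(f"P(SBO top event) = {p_sbo:.2e}")

# A resilience view asks instead how quickly power is recovered after the SBO, e.g. the
# probability that recovery takes longer than the battery depletion time (again illustrative).
recovery_rate = 1.0 / 8.0     # mean offsite-power recovery time of 8 h, assumed exponential
battery_hours = 4.0
p_no_recovery_in_time = math.exp(-recovery_rate * battery_hours)
print(f"P(no recovery before battery depletion | SBO) = {p_no_recovery_in_time:.2f}")
```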

12:30
Natalia Naval (Universidad de Zaragoza, Spain)
Yassine Rqiq (Universidad de Zaragoza, Spain)
Jose Maria Yusta (Universidad de Zaragoza, Spain)
Pumped-hydro potential to enhance power system resilience under critical gas supply interruptions
PRESENTER: Natalia Naval

ABSTRACT. Renewable energy is set to consolidate as the main energy source in the coming years. However, the process still requires time to achieve an energy mix with 100% renewable production. Photovoltaic and wind energy have reached a high degree of technological development, but these renewable sources still require support to overcome their intermittent and variable nature. In this regard, storage systems are essential to enable the integration of renewables into the electricity system, but their development and implementation are still in progress. Therefore, gas-fired power plants have become the main source of support for renewables, accelerating the energy transition by functioning as backup generation. The European Union is highly dependent on this resource and especially on gas from Russia, which is the country with the largest natural gas reserves worldwide. Europe receives more than 40% of its natural gas through the network of pipelines connecting to Russia. The continuing trade and political conflicts between Russia and Ukraine, which have intensified in recent years, have made the security of gas supply one of the EU's main concerns, as the risk of Russia completely disrupting supplies to the EU is growing. The EU has promoted policies to favor coordination and efficient cooperation between the different countries, sharing all available resources and infrastructures in the event of a gas supply crisis and thus reducing the harmful effects of supply interruptions in the most vulnerable countries. As a consequence of this gas supply uncertainty, Europe is paying high prices for liquefied natural gas transported by sea, mainly from the United States, for geopolitical reasons of supply diversification to avoid shortages. In addition to cooperation mechanisms between countries, it is essential to promote the use of other technologies such as pumped hydropower to ensure the balance between electricity consumption and generation. This technology can provide flexibility in the electricity system in case of critical gas supply interruptions affecting gas-fired power plants, thus guaranteeing a reliable electricity supply in combination with renewable energy. The development of energy storage systems is essential to accelerate the transition to an emission-neutral economy and the effective integration of renewable energy into the electricity system. Currently, pumped-hydro technology is the most efficient system for large-scale energy storage, but it depends on orographic factors. Therefore, pumped-hydro technology is more convenient than other types of storage (batteries, hydrogen) for maintaining stability and security in the power system, as it can deliver a large amount of energy with a very fast response time. Previous studies have proposed different methodologies for locating suitable sites for the development of pumped-hydro energy storage worldwide. This paper aims to analyze the potential of pumped hydropower to improve the resilience of the European energy system in the face of gas supply disruptions under very severe climatic and technical conditions. The proposed mathematical optimization model maximizes electricity demand coverage using cooperation mechanisms among EU and neighboring countries in the event of a critical gas disruption. Both the interruption of gas from Russia and the supply of liquefied natural gas by sea are considered.
The proposed methodology is applied to a case study with the maximum demand recorded in both the gas and electricity systems in Europe. During the winter of 2016/2017, Europe coped with an extreme cold spell, atypically reaching simultaneous peak demand for gas and electricity on January 18, 2017. As a result of the low availability of nuclear power plants and renewable generation, natural gas consumption for electricity generation reached its highest level in recent years. Therefore, in order to illustrate the proposed methodology, data from January 18, 2017, are chosen to assess the influence of existing and potential pumped hydro on the European electricity supply in the event of gas supply disruptions. From the results obtained under severe climatic conditions, first, in a situation with gas interruptions, high electricity demand and low generation availability, eleven countries in total would have problems meeting electricity demand. Secondly, without considering the existing pumped-hydro capacity, seventeen countries would have problems meeting their electricity demand (Austria, Belgium, Denmark, Finland, France, Greece, Hungary, Italy, Lithuania, Macedonia, Poland, Portugal, Slovakia, Slovenia, Spain, Switzerland, and UK). Several countries, such as Finland and Lithuania, are highly dependent on natural gas from Russia, in addition to having few domestic resources. In contrast, other countries such as Macedonia and Greece have problems meeting their electricity demand due to poor interconnections with other countries in the system. These results show that the role of existing pumped hydro is key for many countries to keep their electric power systems in operation under critical conditions. On the other hand, a few European countries have potential for new pumped-hydro plants. The pumped-hydro potential of all European countries has already been assessed by the European Commission. Taking the expected gas shortages under the gas cooperation model of the case study, the availability of new pumped-hydro plants would solve the electricity demand coverage problem in six of the eleven countries mentioned above. In short, the results obtained in this paper highlight the benefit of exploiting the pumped-hydro potential of the different countries to reduce the high external dependence on gas and the negative consequences of critical gas supply interruptions. Thus, the use of pumped-hydro plants can improve the stability and security of power systems.
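A demand-coverage optimization of this general kind can be sketched as a small linear program; the toy two-country example below (solved with scipy.optimize.linprog) uses invented capacities in GW and is not the paper's European model, which covers many countries and gas cooperation mechanisms.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-country demand-coverage problem under a gas supply cut; all numbers are illustrative.
demand = np.array([50.0, 30.0])           # peak electric demand of countries A and B (GW)
avail_gen = np.array([42.0, 24.0])        # non-gas generation still available (GW)
hydro_cap = np.array([6.0, 1.0])          # pumped-hydro discharge capacity (GW)
interconn_cap = 3.0                       # max transfer A->B and B->A (GW)

# Decision variables: x = [gen_A, gen_B, hydro_A, hydro_B, flow_AB, flow_BA, shed_A, shed_B]
# Objective: minimise total unserved demand (load shedding).
c = np.array([0, 0, 0, 0, 0, 0, 1, 1])

# Power balance per country: supply + imports - exports + shedding = demand
A_eq = np.array([
    [1, 0, 1, 0, -1,  1, 1, 0],   # country A
    [0, 1, 0, 1,  1, -1, 0, 1],   # country B
])
b_eq = demand

bounds = [(0, avail_gen[0]), (0, avail_gen[1]),
          (0, hydro_cap[0]), (0, hydro_cap[1]),
          (0, interconn_cap), (0, interconn_cap),
          (0, None), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
shed = res.x[-2:]
print(f"unserved demand: A = {shed[0]:.1f} GW, B = {shed[1]:.1f} GW")
```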

11:10-12:50 Session 16C: S.14: Digital twin: recent advancements and challenges for dealing with uncertainty and bad data II
Chair:
Matteo Broggi (University of Hannover, Germany)
Location: CQ-006
11:10
Marco de Angelis (University of Liverpool, UK)
Matthew Bonney (University of Sheffield, UK)
Mattia Dal Borgo (Siemens, Belgium)
David Wagg (University of Sheffield, UK)
Introducing Cristallo, an open digital twin operational platform for engineering testing
PRESENTER: Marco de Angelis

ABSTRACT. We introduce Cristallo, an open-source, modular, OS-independent digital twin to support the management of data and information in relation to experimental testing and simulation. The development of digital twin software has been seen mainly in commercial domains, where many proprietary solutions have been marketed. In contrast, this paper describes a prototype of a digital twin framework to give researchers a basis to further develop their digital twin projects and to promote the concept of a digital twin for engineering testing. In particular, this paper focuses on the ideas of transparency and trust. With Cristallo the user is able to gather information about the empirical and simulation data while seeing through the whole process of data acquisition and storage. This is achieved using ontologies and knowledge graphs, which are powerful tools for the management and visualisation of information. Moreover, the user is able to see through the code running the analysis and check its hypotheses. Thanks to automatic uncertainty quantification, verification and validation can take place without human interaction: the correctness of the code running the analysis can be verified and the physics-based model can be validated against the empirical data. An example of a digital twin of a three-storey structure subject to impact excitation is presented.

11:30
Ludvig Björklund (Norwegian University of Science and Technology, Norway)
Markus Glaser (Institute for High Integrity Mechatronic Systems, Germany)
Sebastian Imle (Institute of Machine Components, University of Stuttgart, Germany)
Gunleiv Skofteland (Norwegian University of Science and Technology & Equinor, Norway)
Mary Ann Lundteigen (Norwegian University of Science and Technology, Norway)
Design of a Digital Twin of Gate Valves for Partial Stroke Testing

ABSTRACT. To reduce the carbon footprint and environmental impact, the oil and gas industry is currently pushing for digitalization and electrification of subsea equipment. Towards this aim, research into subsea production and processing is ongoing at the joint industry-academia research-based innovation center SFI SUBPRO. One proposal for electrification is an all-electric control system that operates the safety valves in an X-mas tree, i.e., valves that are part of primary and secondary well barriers (Winter et al., 2020). The all-electric control system is safety-critical, cutting the flow of hydrocarbons in the event of a production or emergency shutdown. Therefore, the system needs to provide a safety integrity level equivalent to existing solutions. One benefit of this all-electric control system is increased control and opportunities to perform continuous diagnostic testing of the equipment. Diagnostic testing of safety-critical systems can increase safety by providing information about the current state of the equipment. The diagnostic tests are designed to detect symptoms that could lead to failure (Modest and Thielecke, 2012). An important characteristic of valves that can be determined with diagnostic tests is the signature curve. The signature curve is the relationship between the applied torque and the position of the valve during a complete valve stroke. From the signature curve, it is possible to determine characteristics of the valve and to trend its degradation over time in order to detect potential failure. Partial stroke testing is a diagnostic test for estimating the signature curve without requiring a complete valve stroke and without impacting production (Lundteigen and Rausand, 2007). During a project at Aalen University, an experimental test bench enabled testing of a wear-optimized concept for producing a signature curve. The project in this article aims to reproduce the signature curve generated from this partial stroke testing by creating a Digital Twin of the experimental setup. A Digital Twin is a virtual model attempting to replicate the corresponding physical asset utilizing all available information (Glaessgen and Stargel, 2012). The data from the experimental setup are treated as the gold standard and used to validate the accuracy of the Digital Twin. The Digital Twin is built from first principles and implemented in Simulink. The Digital Twin is required to be sufficiently fast to keep simulation times low and to serve as a useful tool for diagnostic testing. Furthermore, the Digital Twin is designed to allow validation of the partial stroke testing software and to enable failure injection, such as simulating physical degradation. A test bench is set up to provide the same input data to the Digital Twin of the physical asset. The results from the test bench indicate that the Digital Twin tracks the experimental data closely, adequately mirroring the results and behavior of the physical asset. The results from degradation tests of the Digital Twin also allow prediction of how the signature curve will develop with equipment degradation. The Digital Twin is validated as a viable alternative for the design of partial stroke testing software. The Digital Twin is not intended to replace partial stroke testing, since degradation can only be induced in a simulation based on a-priori knowledge.
In addition, the Digital Twin can be used to evaluate the behavior of the diagnostic tests. By injecting known faults into the Digital Twin and coupling it with the software, the diagnostic tests can be evaluated virtually. Furthermore, the Digital Twin can be used to evaluate the boundaries of the valve parameters. Through this, the Digital Twin can be used both during the development of diagnostic testing and possibly for the verification and validation of the diagnostic software.

Glaessgen, Edward, and David Stargel. "The digital twin paradigm for future NASA and US Air Force vehicles." In 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, 20th AIAA/ASME/AHS Adaptive Structures Conference, 14th AIAA, p. 1818, 2012.
Lundteigen, Mary Ann, and Marvin Rausand. "Partial stroke testing of process shutdown valves: How to determine the test coverage." Journal of Loss Prevention in the Process Industries 21, no. 6 (2008): 579-588.
Modest, Christian, and Frank Thielecke. "A design methodology of optimized diagnosis functions for high lift actuation systems." In Annual Conference of the PHM Society, vol. 4, no. 1, 2012.
Winter, Tobias, Markus Glaser, Bernd Bertsche, Sebastian Imle, and Julian Popp. "Analysis of an All-Electric Safety Subsea Actuation System Architecture." In 2020 Annual Reliability and Maintainability Symposium (RAMS), pp. 1-7. IEEE, 2020.
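A quasi-static toy model of the gate-valve signature curve described in the abstract above is sketched below; the torque components and the friction-based degradation factor are illustrative assumptions, not the authors' Simulink digital twin or the Aalen test-bench data.

```python
import numpy as np

def signature_curve(position, friction_factor=1.0, seat_torque=40.0,
                    running_torque=10.0, packing_torque=5.0):
    """Quasi-static sketch of a gate-valve signature curve (torque vs. stem position,
    0 = fully closed, 1 = fully open); shapes and values (N m) are illustrative only."""
    # High break-out torque near the seat, roughly constant running torque elsewhere,
    # plus a packing-friction term; the whole curve scales with an assumed wear factor.
    seat = seat_torque * np.exp(-position / 0.05)
    running = running_torque * np.ones_like(position)
    return friction_factor * (seat + running + packing_torque)

stroke = np.linspace(0.0, 1.0, 200)
healthy  = signature_curve(stroke, friction_factor=1.0)
degraded = signature_curve(stroke, friction_factor=1.4)   # e.g. wear raises friction by 40 %

# A partial stroke test only exercises, say, the first 15 % of travel; the same indicator
# (mean torque over the partial stroke) can still be trended against degradation.
mask = stroke <= 0.15
print(f"partial-stroke mean torque, healthy : {healthy[mask].mean():6.1f} N m")
print(f"partial-stroke mean torque, degraded: {degraded[mask].mean():6.1f} N m")
```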

11:50
Tanmoy Chatterjee (Swansea University, UK)
Michael I. Friswell (Swansea University, UK)
Sondipon Adhikari (University of Glasgow, UK)
Hamed H. Khodaparast (Swansea University, UK)
Gradient enhanced physics-informed neural networks for digital twins of structural vibrations

ABSTRACT. Physics-informed neural networks (PINNs) have received considerable attention for the solution and data-driven discovery of physical systems governed by differential equations [1]. Despite their immense success, it has been observed that first-generation PINNs suffer from a drawback arising from the regularization of the loss function. The motivation of this work is to address this inherent drawback and improve the approximation potential of PINNs for solving forward and inverse structural vibration problems. To do so, a novel second-generation extended PINN approach called gradient enhanced physics-informed neural networks (GE-PINNs) is proposed. GE-PINNs is observed to mitigate the regularization issue effectively by modifying the loss term and hence to adequately capture the spatial and temporal response behaviour. The advantage of GE-PINNs is that the adopted strategy does not require any additional sample points to be generated and incurs the same computational cost as conventional PINNs, in contrast to [2]. Two representative forward and inverse structural vibration problems involving ordinary and partial differential equations are solved to assess the performance of GE-PINNs. The results are validated against analytical solutions.

Selected References [1] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, L. Yang, Physics-informed machine learning, Nature Reviews Physics 3 (6) (2021) 422–440. [2] S. Wang, Y. Teng, P. Perdikaris, Understanding and mitigating gradient flow pathologies in physics-informed neural networks, SIAM Journal on Scientific Computing 43 (5) (2021) 3055–3081.
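As a rough illustration of a gradient-enhanced PINN loss, the PyTorch sketch below trains a small network on the free-vibration ODE u'' + ω²u = 0 and adds a penalty on the derivative of the PDE residual; this is one common reading of the "gradient enhanced" idea and not necessarily the authors' GE-PINN formulation, and the network size, weights and collocation points are arbitrary choices.

```python
import torch

omega = 2.0
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def residual(t):
    # PDE residual of u'' + omega^2 u = 0 evaluated at collocation times t
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    return d2u + omega**2 * u

t_col = torch.linspace(0.0, 3.0, 100).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1, requires_grad=True)          # initial-condition point
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    r = residual(t_col)
    # gradient-enhanced term: the time derivative of the residual is also driven to zero,
    # using the same collocation points (no extra samples required)
    dr = torch.autograd.grad(r, t_col, torch.ones_like(r), create_graph=True)[0]
    u0 = net(t0)
    du0 = torch.autograd.grad(u0, t0, torch.ones_like(u0), create_graph=True)[0]
    loss = ((r**2).mean() + 0.1 * (dr**2).mean()
            + (u0 - 1.0).pow(2).mean() + du0.pow(2).mean())   # u(0)=1, u'(0)=0
    loss.backward()
    opt.step()

print("final loss:", float(loss))
```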

12:10
Gerben Dirksen (Framatome GmbH, Germany)
Artem Shevchenko (Framatome GmbH, Germany)
Heiko Kollasko (Framatome GmbH, Germany)
Christine Bell (Framatome GmbH, Germany)
Dusko Kancev (Kernkraftwerk Gösgen-Däniken, Switzerland)
Creating digital twin reliability models using RiskSpectrum® Model Builder
PRESENTER: Gerben Dirksen

ABSTRACT. In classical PSA, fault tree and event tree modelling is performed in a dedicated PSA tool, such as RiskSpectrum® PSA. Although such models are well understood by PSA specialists, they have a low accessibility for third parties. This makes it difficult to perform independent review of the model, which is often performed based mainly on the results (e.g. Does the PSA model calculate the correct failure combinations leading to an undesired event?). This low accessibility also makes it harder to justify the results of the PSA to third parties such as plant management or regulatory bodies.

Using the RiskSpectrum® ModelBuilder tool, a software product developed by Lloyd’s Register RiskSpectrum AB based on the EDF tool KB3, Framatome has developed a methodology to create reliability models of complex plant systems which have the form of a digital twin of the system in the plant.

The methodology consists of the following parts: • Identification of the relevant tasks to be modelled for the system. • Breakdown of the system tasks to the component level using a visual representation in the P&ID. • Creation of the system failure mode and effect analysis (FMEA) for the PSA directly from the P&ID visualization. • Automatic import of the PSA-relevant data from the FMEA into RiskSpectrum® ModelBuilder, including component dependencies such as power supply, signals and cooling. • Visual flow diagrams for the system. • Automatic generation of fault trees and other risk models to be used in PSA tools such as RiskSpectrum PSA.

The documentation therefore consists of the following parts: • PSA system descriptions • Color-coded P&IDs representing the system tasks on the component level • Knowledge base of component types, defining the PSA-relevant failure modes for each component type dependent on initial state and final state for a given system task. • FMEA database. • Visual flow diagrams inside RiskSpectrum® ModelBuilder. • Exported fault trees in RiskSpectrum® PSA.

In the chosen data structures, it is possible to identify PSA-relevant basic events and compare the generated events between ModelBuilder and the FMEA.

Therefore, using this new methodology, a well-documented and traceable link is created between the P&ID and the reliability model, allowing easy quality assurance and increasing acceptance of the PSA by third parties.

12:30
Alexandre Blanchet (Lab-STICC, France)
Nathalie Julien (Lab-STICC, France)
Mohammed Hamzaoui (Lab-STICC, France)
Typology as a deployment tool for digital twins: application to maintenance in industry

ABSTRACT. The digital twin (DT) is a dynamic virtual representation of an object (product, process or service) that supports the latter throughout its lifecycle by following all its variations. In this paper, we first present an overview of the literature on DTs for maintenance applications in manufacturing. Based on this state of the art, we note that few architectures of the digital twin have been proposed. In [1], an approach was made to define the twin, which states that the digital twin is encompassed by the hybrid twin, which is itself encompassed by the cognitive twin. Aheleroff in [2] has also proposed a digital twin architecture using the RAMI 4.0 model by breaking it down into 4 levels of integration: first as a “model”; then as a “digital shadow”; subsequently as a “digital twin”; and finally, at its last level, as a “digital twin predictive” (DTP). But in most of the papers, the DT architecture is not addressed or is poorly defined. This could be explained by the fact that, although the digital twin is a very promising technology, the lack of standardized deployment methods and tools is a critical problem to tackle. Moreover, the diversity of environments and applications in which digital twins are deployed makes their organization confusing [3]. Only the recently validated ISO 23247 standard offers a beginning of architecture and organization [4]. To address this scientific and technical barrier, we define a deployment methodology based on the 5C Cyber-Physical System model proposed in [5], combined with generic architectures developed from [6] and extended to the whole DT environment. The first step of this approach is based on a complete typology described here. This typology is composed of 11 criteria defining the DT and its interactions. • OMA: Observable Manufacturing Asset as defined in [4]. • Type: Grieves in [7] defines 3 different types of digital twins: DT Prototype; DT Instance; DT Aggregate. • Level: Several levels are possible depending on the point of view, as a DT can be a system of systems, system, equipment or component. • Maturity: Numerical maturity of the representation, which can be of 3 different types: control DT; cognitive DT; collaborative DT, as detailed in [8]. • Topology: It qualifies the connection between the observable manufacturing asset and the digital twin. From [9], we have defined 4 topologies: Connected; Disconnected; Embedded; Combined. • Synchronization: From a synchronous point of view, synchronization can be done in real time, near real time or periodically. From an asynchronous point of view, it can be event-based, conditional or on-demand. • Decision loop: We have extended the work in [10] by proposing 3 different types. The first type is the open loop, where the human receives the information and makes the decision. The second type is the closed loop, where the human just receives information and the DT is autonomous regarding decision making. The third type is the mixed loop, where the human receives the information but can also make a decision just like the DT; in this case, the assignment between human and system decisions must be clearly formalized. • User: There are several possible users, which can be the human, the device, the application or another DT, as defined in [4]. • Usages: The digital twin can be used to analyze, to simulate, to optimize, to predict, to compare, to collaborate and to conceptualize; details can be found in [8].
• Applications: According to [4], the digital twin can be used for real-time control, off-line analytics, predictive maintenance, health checks, engineering design, etc. • SMS strategy: The SMS strategy can be characterized as flexible, lean, sustainable, quality, intelligent or agile, as explained in [11]. By applying this methodology to the usual maintenance applications for manufacturing obtained from the state of the art discussed previously, we propose some generic architectures with their associated typology. These examples illustrate the value of our formalized approach for various maintenance issues in order to guide the design of DTs in industry.

References

1. Abburu et al. COGNITWIN - Hybrid and Cognitive Digital Twins for the Process Industry, IEEE International Conference on Engineering, 2020.
2. Aheleroff et al. Digital Twin as a Service (DTaaS) in Industry 4.0, Advanced Engineering Informatics, Science Direct, 2021.
3. W. Davidson. How to tell the difference between a model and a digital twin, Springer, 2020.
4. Standard ISO/DIS 23247, Automation systems and integration - Digital Twin framework for manufacturing, December 2020.
5. J. Lee, B. Bagheri, Hung-An Kao. A Cyber-Physical Systems architecture for Industry 4.0-based manufacturing systems, ScienceDirect, 2014.
6. F. Tao, M. Zhang, et al. Digital Twin Driven Smart Manufacturing, Academic Press, Elsevier, 2019.
7. Grieves M., Vickers J. Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems. In: Kahlen FJ., Flumerfelt S., Alves A. (eds) Transdisciplinary Perspectives on Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-319-38756-7_4, 2017.
8. N. Julien, E. Martin. A usage driven approach to characterize and implement industrial digital twins, The 31st European Conference on Safety and Reliability (ESREL), 2021.
9. Schroeder, G., Steinmetz, C., et al. Digital Twin connectivity topologies, Proceedings of the 17th IFAC Symposium on Information Control Problems in Manufacturing, 2021.
10. Traoré, M. K. Unifying Digital Twin Framework: Simulation-Based Proof-of-Concept, Proceedings of the 17th IFAC Symposium on Information Control Problems in Manufacturing, 2021.
11. Lu, Y., K.C. Morris, et al. Current Standards Landscape for Smart Manufacturing Systems. NIST Report, NISTIR 8107, 2016.
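A structural sketch of how the eleven typology criteria from the abstract above could be captured as a record to drive deployment decisions; the field names, allowed values and the example instance are illustrative assumptions rather than a normative encoding of the typology.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DigitalTwinTypology:
    """One record per digital twin; fields mirror the eleven criteria listed above."""
    oma: str                 # observable manufacturing asset
    dt_type: str             # "prototype" | "instance" | "aggregate"
    level: str               # "system of systems" | "system" | "equipment" | "component"
    maturity: str            # "control" | "cognitive" | "collaborative"
    topology: str            # "connected" | "disconnected" | "embedded" | "combined"
    synchronization: str     # e.g. "near real time", "event-based", "on-demand"
    decision_loop: str       # "open" | "closed" | "mixed"
    users: List[str]         # human, device, application, other DT
    usages: List[str]        # analyze, simulate, optimize, predict, ...
    applications: List[str]  # predictive maintenance, health check, ...
    sms_strategy: List[str]  # flexible, lean, sustainable, quality, intelligent, agile

# Illustrative instance for a predictive-maintenance twin of a machining centre.
example = DigitalTwinTypology(
    oma="machining centre MC-01",
    dt_type="instance",
    level="equipment",
    maturity="cognitive",
    topology="connected",
    synchronization="near real time",
    decision_loop="mixed",
    users=["human", "application"],
    usages=["analyze", "predict"],
    applications=["predictive maintenance", "health check"],
    sms_strategy=["lean", "intelligent"],
)
print(example.maturity, "/", example.decision_loop)
```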

11:10-12:50 Session 16D: S.23: Fault-Tolerant and Attack-Resilient Cyber-Physical Systems II
Chairs:
Roozbeh Razavi-Far (University of Windsor, Canada)
Francesco Di Maio (Politecnico di Milano, Italy)
Location: CQ-106
11:10
Sabarathinam Chockalingam (Institute for Energy Technology, Norway)
Clara Maathuis (Open University of the Netherlands, Netherlands)
Assessing Cascading Effects of Cyber-attacks in Interconnected and Interdependent Critical Infrastructures

ABSTRACT. Critical Infrastructures (CIs) represent the backbone of modern societies. It is essential to ensure the smooth and reliable functioning of the vital services and processes that societies rely on. CIs are either already based on, or will soon embed, (modern) digital technologies. The level of interconnectedness and interdependency of such systems continues to increase. These developments make them directly exposed to different types of cyber-attacks, such as ransomware and Distributed Denial of Service (DDoS). Cyber-attacks on such infrastructures would not only impact the overall functioning of the infrastructures, the well-being of people and a nation's economy, but would also cause cascading effects. There is a lack of frameworks for assessing cascading effects, in terms of the operational level of components, that propagate in both independent and interdependent CIs. To tackle this gap, this research aims to answer the following research question: “How can we assess the cascading effects in terms of operational level corresponding to different components in both independent and interdependent Critical Infrastructures taking into account component dependencies, vulnerabilities, and existing measures?”, by building a Bayesian Network-based framework following a Design Science Research methodological approach.
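A toy illustration of propagating operational-level probabilities along a dependency chain, in the spirit of the Bayesian-network framework described above; the three-level discretisation, the dependency chain and all probability tables are invented for illustration and are not the authors' model.

```python
import numpy as np

# Levels: 0 = down, 1 = degraded, 2 = nominal; chain: power grid -> water pumping -> hospital supply.
levels = ["down", "degraded", "nominal"]

# P(power level) after a hypothetical DDoS attack, already reflecting existing measures.
p_power = np.array([0.15, 0.25, 0.60])

# P(child level | parent level): rows = parent level, columns = child level.
cpt_water_given_power = np.array([
    [0.80, 0.15, 0.05],
    [0.20, 0.50, 0.30],
    [0.02, 0.08, 0.90],
])
cpt_hospital_given_water = np.array([
    [0.70, 0.25, 0.05],
    [0.10, 0.60, 0.30],
    [0.01, 0.09, 0.90],
])

p_water = p_power @ cpt_water_given_power          # marginalise over power levels
p_hospital = p_water @ cpt_hospital_given_water    # then over water levels

for name, p in [("power", p_power), ("water", p_water), ("hospital", p_hospital)]:
    print(f"{name:9s} P(down, degraded, nominal) = {np.round(p, 3)}")
```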

11:30
John Eidar Simensen (Institute for Energy Technology, Norway)
Per Arne Jørgensen (Institute for Energy Technology, Norway)
Aleksander Toppe (Institute for Energy Technology, Norway)
Experience from performing controlled technical cyber experiments on Critical Infrastructure as hybrid events

ABSTRACT. Digitization and the technological shift introduce higher connectivity and broader availability. This is also the case for critical infrastructures. Replacing and upgrading analogue systems with digital ones, and moving from single-unit control to multi-unit control and remote operation, introduce new vulnerabilities and new attack surfaces. The cyber-attacks on the Ukrainian power grid critical infrastructure in 2015 resulted in temporary power blackouts and demonstrated the potential for cyber-attacks to cause serious harm to human life. The attacks further revealed that the hackers could have caused even more severe impacts than what was achieved. The Ukraine attack is one argument for the recent focus on the security and cyber-security of critical infrastructures such as power grids, communication and transportation infrastructure, to name a few. The project Cyber-security Platform for Assessment and Training for Critical Infrastructures – Legacy to Digital Twin (CybWin) addresses the cyber-security of critical infrastructures. CybWin has as a core deliverable a cyber-security platform with physical, replicated and simulated components of real-world critical infrastructures, empowered with tools for RAMS (reliability, availability, maintainability, safety) assessment, vulnerability assessment, attack simulation, incident prediction and response. Since 2019, a Cyber Security Centre facility has been established at the Institute for Energy Technology (IFE), through close cooperation with the CybWin project, aiming at performing holistic cyber research and delivering the capabilities needed to support controlled experiments. This paper presents recent (2022) experiences from the CybWin project on controlled technical cyber experiments on critical infrastructure in the IFE cyber laboratory, namely a digital substation controlling power distribution networks. The experiments included an attack team performing a total of 7 cyber-attacks, a defence team in charge of event detection and logging, a laboratory support team ensuring the capabilities of the experiment environment as well as mimicking operator input, and the experiment lead in charge of overall experiment control. Furthermore, the experiments were performed as hybrid events with remote team participation. The paper focuses on the overall experiment setup and organisation. An approach is suggested for how to set up and coordinate experiments with remote participants on critical infrastructure safely and securely. It is argued that potential dynamic variability in the presented experiments must be mitigated through well-defined operational procedures, a clear overall experiment approach, a stable laboratory environment, comprehensive data capture capabilities, and by involving the right competence. In the next planned experiments, additional stakeholders are involved to achieve even higher scenario realism. Additionally, laboratory and experiment capabilities are being expanded to support, in the future, the evaluation of, e.g., human factors.

11:50
Erfan Koza (University of Wuppertal, Clavis Institute for Information Security at University of Niederrhein, Germany)
Observe-Orient-Decide-Act Loop as a Decision Support Model for Continuous and Dynamic Vulnerability Management and Incident Response Management of Critical Infrastructures

ABSTRACT. Our perceptions of the world have a direct impact on our thoughts. Conversely, the world we perceive is filtered through our thoughts. In this context, perception and decision-making are two coherent processes that influence each other. In an interactive, dynamic, volatile, and complex world, security is not a perpetual state. The quality and quantity of information readily available to us are essential for appropriate decision-making. In addition to the quality of a decision, speed and timing also play a crucial role. These three factors have characteristically ensured victory or defeat in countless military endeavours for several thousand years. This paper thus takes advantage of the interdisciplinary nature of information security and attempts to apply military-strategic knowledge to information security in Critical Infrastructures (CRITIS). To this end, John Boyd’s Observe-Orient-Decide-Act loop (OODA loop) is to be assessed. The resulting artifact is a decision support model that is used in the context of the evaluation and assessment of Common Vulnerabilities and Exposures (CVE). Our model effectively contributes to determining and increasing true positives while eliminating false positives and reducing false negatives. In this way, cybersecurity engineers of CRITIS can make qualitative and fact-based decisions in interactive, dynamic, and complex situations to optimize the use of their limited financial and human resources in terms of efficient resource allocation and increasing the resilience of their operational technology systems.

12:10
Konstantinos Ntafloukas (University College of Dublin, Ireland)
Daniel McCrum (University College of Dublin - School of Civil Engineering, Ireland)
Liliana Pasquale (University College of Dublin - School of Computer Science, Ireland)
A risk assessment approach for IoT enabled transportation infrastructure subjected to cyber-physical attacks

ABSTRACT. Critical transportation infrastructure integrated with an Internet of Things (IoT) based wireless sensor network operates as a cyber-physical system. However, such IoT devices suffer from inherent cyber vulnerabilities (e.g., lack of authentication) that cyber-attackers can exploit to damage the physical space (e.g., loss of service). As more and more transportation infrastructures are becoming IoT enabled, understanding the risks from cyber-physical attacks is more important than ever. The cyber and physical domains have typically been treated as isolated environments, resulting in IoT-enabled transportation infrastructure not being adequately risk-assessed against cyber-physical attacks by the stakeholders who act as assessors (i.e., operators, civil and security engineers). In this paper, a new risk assessment approach is proposed to assist stakeholders with risk assessment. The approach incorporates the cyber-physical characteristics of vulnerability, attacker (e.g., motives), and physical impact. A case study of an IoT-enabled bridge, subjected to a cyber-physical attack scenario, is used to demonstrate the application and usefulness of the approach. Countermeasures, such as proactive measures, are briefly discussed and considered in Monte Carlo simulations, resulting in a risk reduction of 42.2%. The results are of interest for stakeholders who attempt to incorporate security features in risk assessment procedures.
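A Monte Carlo comparison of risk with and without a countermeasure, in the spirit of the approach above, can be sketched as follows; the distributions for attacker motivation, vulnerability exploitation and physical impact, and the effect assumed for the countermeasure, are placeholders and will not reproduce the paper's 42.2% figure.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Risk sample = P(attack attempt) x P(exploit | attempt) x normalised physical impact
attack_attempt = rng.uniform(0.2, 0.6, n)            # attacker motivation / attempt likelihood
exploit_given_attempt = rng.beta(2, 5, n)             # exploiting the IoT vulnerability
impact = rng.triangular(0.1, 0.4, 1.0, n)             # normalised physical impact (loss of service)

risk_baseline = attack_attempt * exploit_given_attempt * impact

# Proactive countermeasure modelled as an additive barrier on the exploit probability (assumed)
exploit_protected = np.clip(exploit_given_attempt - 0.25, 0.0, None)
risk_protected = attack_attempt * exploit_protected * impact

reduction = 1.0 - risk_protected.mean() / risk_baseline.mean()
print(f"mean risk (baseline) : {risk_baseline.mean():.4f}")
print(f"mean risk (protected): {risk_protected.mean():.4f}")
print(f"risk reduction       : {reduction:.1%}")
```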

12:30
Francesco Simone (Sapienza University of Rome, Italy)
Riccardo Patriarca (Sapienza Università di Roma, Italy)
A simulation-driven cyber resilience assessment for water treatment plants
PRESENTER: Francesco Simone

ABSTRACT. Digitalization is increasingly characterizing modern industrial assets: cyber-physical components allow for greater efficiency, coordination, and quality, but also open up new disruption scenarios with potentially disastrous impacts. In this context, a systematic risk management process should be in place to encompass the system's physical and informational properties. Consequently, cyber resilience is meant to evaluate not only the possibility of failures related to the physical part of industrial items, but also anomalies of - and attacks against - their cyber counterparts. This research investigates the cyber resilience of a water treatment system via a simulation model. A digital twin of a water desalination plant has been developed in MATLAB/Simulink along with a custom Simulink block to reproduce the actions of a cyber-attack. Dedicated resilience metrics are computed to assess the system's cyber resilience. The metrics are defined by merging two approaches: (i) a deterministic approach that considers the outputs of the physical process model; (ii) a probabilistic approach that considers fluctuations in the duration and impact of cyber-attacks, and the variability of the system's response and recovery capacities. The results provide evidence of the need for cyber-physical inspired modelling within modern industrial plants to identify criticalities and design corrective actions.
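One common way to turn a simulated performance trace into a resilience index is the ratio of actual to nominal performance over the scenario horizon; the short Python sketch below shows this for a synthetic trace and a simple probabilistic variant with random attack duration and impact. The performance curves and distributions are invented and are not outputs of the authors' Simulink model.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)                      # hours (uniform grid)
nominal = np.ones_like(t)                             # nominal (target) plant output

# Deterministic trace: attack at t = 2 h, partial recovery until t = 6 h (illustrative)
actual = np.where((t > 2.0) & (t < 6.0),
                  1.0 - 0.6 * np.exp(-(t - 2.0)) - 0.2, 1.0)
resilience = actual.mean() / nominal.mean()           # area ratio on a uniform grid
print(f"deterministic resilience index: {resilience:.3f}")

# Probabilistic variant: repeat with random attack duration and performance drop
rng = np.random.default_rng(0)
indices = []
for _ in range(1000):
    t_hit, dur, drop = 2.0, rng.uniform(1.0, 5.0), rng.uniform(0.2, 0.8)
    perf = np.where((t > t_hit) & (t < t_hit + dur), 1.0 - drop, 1.0)
    indices.append(perf.mean() / nominal.mean())
print(f"mean resilience = {np.mean(indices):.3f}, "
      f"5th percentile = {np.percentile(indices, 5):.3f}")
```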

11:10-12:50 Session 16E: Reliability analysis and small data sets
Chair:
Scott Ferson (University of Liverpool, UK)
Location: LG-20
11:10
Jaleena Sunny (University of Liverpool, UK)
Marco de Angelis (University of Liverpool, UK)
Ben Edwards (University of Liverpool, UK)
Calibration of stochastic ground-motion simulation using SMSIM
PRESENTER: Jaleena Sunny

ABSTRACT. An optimization-based calibration technique using the area metric is applied to determine the input parameters of a stochastic earthquake-waveform simulation tool, ‘SMSIM - Fortran Programs for Simulating Ground Motions from Earthquakes’. SMSIM simulates acceleration time-histories based on a band-limited frequency-modulated stochastic noise signal, with a suite of input terms describing the effect of the earthquake source, path and site phenomena on the frequency-amplitude content of the signal. We introduce a recalibration algorithm that modifies a prior estimate of a region's seismological parameters, which has typically been developed using data from a wide range of regions, resulting in overestimates of the target region's ground-motion variability and, in some cases, introducing model biases. The method simultaneously attains the range and distribution (uncertainty) of the SMSIM input seismological parameters for a specific target region. The simulation results are analysed by taking advantage of the available information and the properties of recorded signals in the region of Italy, as recorded in the European Strong Motion (ESM) dataset, as well as independent seismological models developed using strong-motion data in wider European contexts. The algorithm is applied to the seismological parameters previously determined for Italy, with calibration attaining optimum values of each input parameter to develop a locally adjusted, minimum-bias stochastic model. The parameters considered for this specific study are the coefficients corresponding to geometrical spreading, path attenuation and site diminution. We were able to reduce the area metric (misfit) value by 40-45% for the simulations using the updated parameters, compared to the initial values. This framework for the calibration and updating of models can help achieve robust and transparent regionally adjusted stochastic models.
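The area metric used as the calibration objective is the area between two empirical CDFs; a minimal NumPy sketch is given below, with synthetic lognormal samples standing in for recorded and simulated ground-motion amplitudes (the data and the improvement shown are illustrative only).

```python
import numpy as np

def area_metric(observed, simulated):
    """Area between the empirical CDFs of two samples, integrated on their pooled support."""
    grid = np.sort(np.concatenate([observed, simulated]))
    cdf_obs = np.searchsorted(np.sort(observed), grid, side="right") / len(observed)
    cdf_sim = np.searchsorted(np.sort(simulated), grid, side="right") / len(simulated)
    # both CDFs are piecewise constant between pooled sample values
    return float(np.sum(np.abs(cdf_obs - cdf_sim)[:-1] * np.diff(grid)))

rng = np.random.default_rng(0)
recorded      = rng.lognormal(mean=0.0,  sigma=0.5, size=200)   # stand-in for recorded amplitudes
prior_model   = rng.lognormal(mean=0.3,  sigma=0.7, size=200)   # simulations, prior parameters
updated_model = rng.lognormal(mean=0.05, sigma=0.5, size=200)   # simulations, recalibrated parameters

print("area metric, prior  :", round(area_metric(recorded, prior_model), 4))
print("area metric, updated:", round(area_metric(recorded, updated_model), 4))
```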

11:30
Mohamed Rabhi (Univ Angers, LARIS, SFR MATHSTIC, France)
Anis Ben Abdessalem (Univ Angers, LARIS, SFR MATHSTIC, France)
Laurent Saintis (Univ Angers, LARIS, SFR MATHSTIC, France)
Bruno Castanier (Univ Angers, LARIS, SFR MATHSTIC, France)
Using the ABC-NS method for reliability model estimation and selection in the case of right-censored data
PRESENTER: Mohamed Rabhi

ABSTRACT. One of the most common methods for parameter estimation is maximum likelihood estimation (MLE). However, when the sample size is small, the likelihood function may be intractable, lack a closed form, or fail to converge. Parameter estimation is approached here with a new variant of the Approximate Bayesian Computation method based on an ellipsoidal Nested Sampling technique, called ABC-NS. ABC-NS is a promising and flexible estimation alternative that exploits similarities between simulated and empirical data. The advantages of ABC-NS are a better exploration of the parameter space thanks to the use of different similarity metrics, a framework that is easy to implement, and a good approximation of the posterior distributions. Nevertheless, as far as we know, ABC methods, including ABC-NS, do not deal with censored data.

The objective of this paper is to propose an ABC-NS method for censored data and to compare its goodness-of-fit with classical estimation methods. First, ABC-NS is adapted to right-censored data by basing the empirical function and the metric calculations on the Kaplan-Meier estimator and its associated metrics. Then, an intensive numerical performance analysis based on Monte Carlo simulations is conducted using different distances measuring the discrepancy between the empirical and simulated failure times.
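A small sketch of the key ingredient, a Kaplan-Meier survival estimate for right-censored data and one possible ABC-style distance between the observed and a simulated curve, is given below; the Weibull data, censoring time and the particular distance are assumptions for illustration, not the paper's choices.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; events = 1 for observed failure, 0 for right-censored."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    at_risk, s, surv = len(t), 1.0, []
    for di in d:
        if di:                       # observed failure -> survival drops
            s *= (at_risk - 1) / at_risk
        surv.append(s)
        at_risk -= 1
    return t, np.array(surv)

def survival_on_grid(times, events, grid):
    """Right-continuous step interpolation of the Kaplan-Meier curve on a common grid."""
    t, s = kaplan_meier(times, events)
    idx = np.searchsorted(t, grid, side="right")
    return np.concatenate(([1.0], s))[idx]

rng = np.random.default_rng(3)
# Toy observed data: Weibull failure times, administratively censored at t = 2.5
true_fail = rng.weibull(1.8, size=12) * 2.0
obs_e = (true_fail <= 2.5).astype(int)
obs_t = np.minimum(true_fail, 2.5)
grid = np.linspace(0.0, 3.0, 60)

def abc_distance(shape, scale):
    """Mean absolute gap between observed and simulated Kaplan-Meier curves (one possible metric)."""
    sim_fail = rng.weibull(shape, size=12) * scale
    sim_e = (sim_fail <= 2.5).astype(int)
    sim_t = np.minimum(sim_fail, 2.5)
    return float(np.mean(np.abs(survival_on_grid(obs_t, obs_e, grid)
                                - survival_on_grid(sim_t, sim_e, grid))))

print("distance at (1.8, 2.0):", round(abc_distance(1.8, 2.0), 3))
print("distance at (0.8, 4.0):", round(abc_distance(0.8, 4.0), 3))
```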

11:50
Marie Chiron (ONERA, France)
Christian Genest (McGill University, Canada)
Jérôme Morio (ONERA, France)
Sylvain Dubreuil (ONERA, France)
Michel Salaün (ISAE Supaéro, France)
Rare event probability estimation through high-dimensional elliptical distribution modeling and multiple importance sampling
PRESENTER: Marie Chiron

ABSTRACT. Computing the probability that an engineering system reaches a particular failure state is a major challenge in reliability analysis. It requires the evaluation of a multiple integral over a complex set describing detrimental configurations of the system random inputs. The present work focuses on systems represented by a performance function and whose inputs can be modeled using elliptical laws [1]. As a result, the inputs are described by the product of a radius random variable and a directional random vector uniformly distributed on the unit hypersphere, independent of the radius.

Importance sampling (IS) is a rare event simulation technique based on the Monte Carlo method in which an auxiliary density is introduced in the computation of the probability integral to reduce the variance of the probability estimator. To this end, the cross entropy (CE) method constructs a near optimal auxiliary density with an adaptive sampling approach by minimizing the Kullback–Leibler divergence between the theoretically optimal auxiliary density and a suitably chosen parametric family of distributions. However, most parametric densities in high dimension (several hundred inputs) are not flexible enough to account for the shape of the failure domain, causing the IS weights to collapse [2].

As an alternative, it is proposed here to estimate an IS auxiliary density for each failure region of high-dimensional input space, assuming the gradient of the performance function to be known. The auxiliary IS density is then taken to be a mixture of all those densities and the failure probability is then estimated by adaptive multiple importance sampling [3].

In high-dimensional space, the important ring [4] describes the area between two hyperspheres which contains most of the probability mass. The proposed search for an efficient auxiliary IS distribution for the failure probability estimation is divided into two phases, optimization and then parametric estimation, in order to sample in the failure region of the important ring. For each failure zone, an optimization is first performed to find the failing point closest to the origin, which belongs to the important ring. Using the stochastic decomposition of elliptical distributions, the proposed IS auxiliary family density is then modeled as the product of a specified density for the radial component and a von Mises–Fisher (vMF) density for the directional component. The distribution of the radial component is chosen to be the original law of the radius variable of the elliptical inputs, conditioned to be greater than the Euclidean norm of the optimized failing point. The direction of the optimized failing point is then taken as the mean direction of the vMF density. Finally, the concentration parameter of the vMF is optimized with the CE method and multiple importance sampling.

This approach is shown to enhance flexibility and accuracy in high-dimensional IS as the radius and direction of the elliptical random vector of system inputs can be optimized jointly for each failure zone while remaining stochastically independent [5].

Several numerical examples with multiple failure regions and where the inputs are modeled with the multivariate normal distribution and the multivariate Student distribution, two commonly used elliptical laws, are investigated to demonstrate the efficiency of the proposed algorithm.

[1] C. Genest, and J. Nešlehová, Copulas and copula models. Encyclopedia of Environmetrics, Second Edition, Vol. 2, 2012.

[2] L.S. Katafygiotis, and K.M. Zuev, Geometric insight into the challenges of solving high-dimensional reliability problems. Probabilistic Engineering Mechanics, Vol. 23, no (2–3), pp. 208–218, 2008.

[3] A. B. Owen, Importance sampling, Monte Carlo theory, methods and examples, pp. 3–40 (2013).

[4] Z. Wang and J. Song, Cross-entropy-based adaptive importance sampling using von Mises–Fisher mixture for high dimensional reliability analysis. Structural Safety, Vol. 59, pp. 42–52, 2016.

[5] I. Papaioannou, S. Geyer, and D. Straub, Improved cross entropy-based importance sampling with a flexible mixture model. Reliability Engineering & System Safety, Vol. 191, 106564, 2019.
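The radial-directional proposal described in the abstract above can be sketched for standard normal inputs (radius follows a chi distribution, direction uniform on the sphere) as below, assuming SciPy ≥ 1.11 which provides scipy.stats.vonmises_fisher; the dimension, failing point, concentration parameter and limit state are placeholders, and a single failure region is used rather than the paper's multiple-importance-sampling mixture.

```python
import numpy as np
from scipy import stats, special

d = 50                        # input dimension (standard normal inputs assumed)
r_star = 7.0                  # norm of the optimized failing point (hypothetical)
mu = np.zeros(d); mu[0] = 1.0 # its direction, used as the vMF mean direction
kappa = 40.0                  # concentration parameter (would be tuned by cross entropy)
n = 10_000
rng = np.random.default_rng(0)

# Radial proposal: chi(d) distribution conditioned on R > r_star (inverse-CDF sampling)
chi = stats.chi(df=d)
F_rstar = chi.cdf(r_star)
r = chi.ppf(F_rstar + rng.uniform(size=n) * (1.0 - F_rstar))

# Directional proposal: von Mises-Fisher around mu
vmf = stats.vonmises_fisher(mu, kappa)
a = vmf.rvs(n, random_state=rng)
x = r[:, None] * a            # proposal samples in the original input space

# Importance weights: constant radial factor (truncation) times uniform-on-sphere / vMF density
unif_pdf = special.gamma(d / 2) / (2 * np.pi ** (d / 2))
w = (1.0 - F_rstar) * unif_pdf / vmf.pdf(a)

def g(x):
    # Hypothetical linear performance function: failure when g(x) <= 0
    return r_star - x[:, 0]

p_fail = np.mean(w * (g(x) <= 0.0))
print(f"estimated failure probability: {p_fail:.3e}")
```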

12:10
Martin Dazer (Institute of Machine Components - University of Stuttgart, Germany)
Alexander Grundler (Institute of Machine Components - University of Stuttgart, Germany)
Achim Benz (Institute of Machine Components - University of Stuttgart, Germany)
Marco Arndt (Institute of Machine Components - University of Stuttgart, Germany)
Philipp Mell (Institute of Machine Components - University of Stuttgart, Germany)
Pitfalls of Zero Failure Testing for Reliability Demonstration
PRESENTER: Martin Dazer

ABSTRACT. Zero failure tests are widely used to demonstrate the reliability requirements of products. The test is popular in application because it offers some advantages. Specimens are tested to the service life requirement and then classified in a binary fashion, as they can only assume two states after testing - intact or failed. This allows test planning to be based on the binomial distribution and to be handled easily. The sample size to be tested results directly from the reliability and confidence requirements. The test is sometimes so strongly integrated into validation processes that its disadvantages are forgotten and pitfalls arise in application, which are addressed in this paper. The main focus is on the result of the test, which is only a minimum reliability statement. This may be sufficient in the context of a reliability demonstration or validation, but the actual reliability of the product remains unknown. Without this knowledge, the risk of product oversizing is constantly present, especially since the zero failure test can only be successful if the products are heavily oversized. Thus, if an attempt is made to reduce oversizing, the probability of observing failures in the test increases, which is wrongly attributed to the product being too poor. For this purpose, a method for evaluating the test itself was developed at the Institute of Machine Components, with which the probability of test success of reliability tests, and thus their suitability for the respective boundary conditions, can be evaluated.
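The binomial (success-run) test planning mentioned above reduces to a one-line formula; the short sketch below shows the standard relation n = ln(1 - C) / ln(R), with R = 0.9 at 90% confidence as a purely illustrative requirement.

```python
import math

def zero_failure_sample_size(reliability, confidence):
    """Success-run (zero-failure) test: smallest n with 1 - R**n >= C, i.e. n >= ln(1-C)/ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

def demonstrated_reliability(n, confidence):
    """Reliability demonstrated at the given confidence by n survivors and zero failures."""
    return (1.0 - confidence) ** (1.0 / n)

n = zero_failure_sample_size(0.9, 0.9)
print(f"sample size for R=0.9 at C=90%: n = {n}")                     # 22 specimens
print(f"reliability demonstrated by n={n}: {demonstrated_reliability(n, 0.9):.3f}")
```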

12:30
Marco Bonato (Valeo Thermal Systems, France)
Lambert Pierrat (LJ-Consulting, French Polynesia)
Statistical estimation of minimum resistance threshold from small sample size reliability tests. Case Study on Automotive Heat Exchangers.
PRESENTER: Marco Bonato

ABSTRACT. The reliability of automotive components plays an important role in the overall durability and safety of the whole vehicle. With the advent of the fully electric powertrain, more and more carmakers are offering extended warranty periods. The warranty commitments have been moving from typically 2-3 years / 100 000 km to 5 or even 8 years and up to 150 000 km. This warranty extension is of particular interest for components such as heat exchangers. On the one hand, their reliability is very important, because they keep the thermal management of the car at an optimal level (both for cabin and battery cooling), therefore playing a fundamental role in the preservation of the battery and in providing higher autonomy. On the other hand, the durability of such mechanical components is challenged by wear-out phenomena such as fatigue failure and corrosion. The possibility of predicting lifetime durability has to rely on the results obtained from qualification tests (design validation DV, and product validation PV). Design validation of heat exchangers is obtained by means of accelerated life tests. Such bench tests are conceived so that the total stress damage accumulated by the average user (life profile of 15 years and 300 000 km) is reproduced in a much shorter bench or rig test. The acceleration factor is based on the physics of failure associated with each in-service or environmental stress load. The challenge of the design validation test lies in the fact that the sample size considered (hence used for reliability predictions) is generally small (4 to 8 units). Phenomena such as material fatigue and corrosion show a typical kinetic behavior where the failure requires a minimum "time of latency" before it occurs. For material fatigue, this initiation time accounts for the number of cycles necessary to "accumulate" the fatigue damage via the three phases of fatigue rupture: crack initiation at the microscopic level due to surface irregularities and stress concentration; crack propagation, due to stress concentration at the bottom of the crack, which step by step leads to failure; and the final (catastrophic) failure. In the case of corrosion failure as well, we can assume the existence of a minimal threshold resistance period, because the kinetics of corrosion show a "latent time" before the oxidation-reduction effects start and progress to the final failure. The objective of this work is to define a deterministic "minimum threshold resistance" value, which represents the period during which the population of components will show zero wear-out failures. The "minimum threshold resistance" is conceptually similar to the "location parameter" of the three-parameter Weibull distribution. Nevertheless, the location parameter is a statistical value, and its determination carries a high level of uncertainty in the case of the small sample sizes adopted for accelerated validation tests. The presentation gives various numerical examples that highlight the benefit of such an approach, and the advantages of calculating a deterministic threshold obtained as a mean value with an associated variability. Finally, we illustrate the importance of an appropriate definition of the resistance distribution for reliability predictions (the probability of failure for a given time in service or mileage) obtained by the stress-strength interference approach.
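The uncertainty of the three-parameter Weibull location parameter under a small sample can be illustrated with a quick SciPy fit and bootstrap; the sample is generated from assumed parameters (threshold of 400 cycles) rather than taken from any DV/PV test, and the bootstrap is only a rough indicator of the spread discussed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Small accelerated-test sample (n = 6) drawn from an assumed three-parameter Weibull
# whose true "minimum threshold resistance" (location) is 400 cycles.
true_shape, true_loc, true_scale = 2.5, 400.0, 800.0
sample = stats.weibull_min.rvs(true_shape, loc=true_loc, scale=true_scale,
                               size=6, random_state=rng)

# Maximum-likelihood fit of the three-parameter Weibull: the fitted `loc` plays the
# role of the failure-free (threshold) period.
shape, loc, scale = stats.weibull_min.fit(sample)
print(f"fitted shape = {shape:.2f}, threshold (loc) = {loc:.0f}, scale = {scale:.0f}")

# With only 6 specimens the threshold estimate is highly uncertain; a bootstrap shows the spread.
boot = [stats.weibull_min.fit(rng.choice(sample, size=sample.size, replace=True))[1]
        for _ in range(200)]
print(f"bootstrap threshold, 5th-95th percentile: "
      f"{np.percentile(boot, 5):.0f} to {np.percentile(boot, 95):.0f} cycles")
```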

11:10-12:50 Session 16F: Joint event: Irish Human Factors and Ergonomics Society special session
Chair:
Leonard O'Sullivan (University of Limerick, Ireland)
Location: LG-21
11:10
Leonard O'Sullivan (University of Limerick, Ireland)
The Ergonomics of Exoskeletons and Technology Acceptance

ABSTRACT. Exoskeletons have the power to boost human capabilities by providing increased strength and endurance while protecting us from injuries. They can transform the way we work, as well as play a vital role in medical treatment and improve the lives of an ageing population. These wearable robotic technologies are already available on the commercial market and an increasing amount of research is being devoted to their possibilities. However, at the heart of the devices' future success is the experience of the humans who use them. Without excellent usability, even the most advanced technology will fail to have an impact because people won't want to engage with it. That challenge becomes even greater when designing a device that has to match movements we usually make automatically. A technology solution on its own, without good human factors design, is of very limited benefit. Good design is crucial. Exoskeletons need to become technologies that are elegantly designed, simple and non-intrusive, so that they can simply form part of everyday life. The usability has to be right, otherwise they will just be put to one side and the technology will not be adopted.

11:30
Joan Cahill (Trinity College Dublin, Ireland)
Paul Cullen (Trinity College Dublin, Ireland)
Keith Gaynor (University College Dublin, Ireland)
The Impact of the COVID 19 Pandemic on the Health and Wellbeing of Aviation Workers Employed by Irish Registered Airlines.
PRESENTER: Joan Cahill

ABSTRACT. This study reports on the findings of an anonymous online survey (n=1,010) undertaken between October and December 2021 addressing the impact of the COVID-19 pandemic on the health and wellbeing of aviation workers, including those employed by Irish registered airlines. The survey incorporated several standardised instruments measuring levels of common mental health issues. Survey analysis indicates that a significant number of aviation workers are suffering from the symptoms of depression and anxiety. The prevalence of psychological anguish for aviation workers is higher than what is reported in the general population. Logistic regression was used to assess the probability of certain health outcomes for two groups – namely, those working for Irish registered airlines and all others. The outcomes included reaching the threshold for clinical levels of depression and anxiety, suicidal ideation, and a life satisfaction/happiness rating at or above the OECD average. Statistical analysis indicates that the probability of having major depression and anxiety is higher for those working for Irish registered airlines than for all others. Employees of Irish registered airlines are less likely to have life satisfaction and happiness levels at or above the OECD average, as compared with all others. However, statistical analysis indicates that working for an Irish registered airline neither increases nor decreases the probability of suicidal ideation. Given that wellbeing is a factor in safe performance, aviation organisations need to develop new approaches to integrating wellbeing and safety culture, and associated safety management processes.

11:50
Ellen Liston (St James’s Hospital, Dublin, Ireland, Ireland)
Enda O'Connor (St James’s Hospital, Dublin, Ireland, Ireland)
Marie E. Ward (St James’s Hospital, Dublin, Ireland)
An Investigation of ICU MDT Safety Culture in a Large Irish Teaching Hospital
PRESENTER: Marie E. Ward

ABSTRACT. Patient safety is a key priority in healthcare. Benchmarking work in safety culture (SC) has indicated that there is substantial room for improvement internationally to ensure safe, effective, and more resilient health systems. Mixed-method measurement of SC is recommended to account for the diverse social, cultural, and subcultural contexts within different healthcare settings. This paper provides novel research in the Irish ICU setting, triangulating data from three sources. Data were collected using the Hospital Survey on Patient Safety Culture (HSOPSC), adverse event (AE) reporting, and retrospective chart review using the global trigger tool (GTT) for ICU. Highly positive results were found for the composites of Teamwork, Supervisor/Manager/Clinical leader support for patient safety, Organisational learning/Continuous improvement, and Handoffs and information exchange. Areas for improvement were identified by lower positive results in staff perceptions of Communication openness, Reporting patient safety events, Communication about error, and Hospital management support for patient safety. The low reporting was corroborated by the GTT and AE data. An overall positive safety culture was found for the study population. Areas for improvement, specifically regarding psychological safety, were identified across the data.

12:10
Nick McDonald (Trinity College Dublin, Ireland)
Marie Ward (St. James's Hospital, Dublin, Ireland, Ireland)
Building accountability and trust into healthcare risk management
PRESENTER: Nick McDonald

ABSTRACT. Accountability in healthcare requires developing a transparent link between activity and outcome, for both normal operations and improvement. The Access Risk Knowledge platform addresses this problem through data analysis, Socio-Technical Systems Analysis, managing the risk in change, and a strategic synthesis of knowledge from many projects. This paper addresses the initial challenges of implementing a data-rich technology platform to achieve a trustworthy and accountable system of risk governance. The platform was deployed in a large teaching hospital in two stages: Stage 1 established the importance and feasibility of a more comprehensive risk-based approach to environmental hygiene assessment. In Stage 2, a data map of 110 metrics currently in use to measure and monitor the risk of healthcare-associated infection was presented at two stakeholder workshops, which reinforced the value of understanding the complexity of the data and led to the formulation of questions to interrogate the data. While Stage 1 built trust locally, Stage 2 engaged stakeholders to identify organizational needs. This will lead to the implementation of specific projects that will combine to form a trustworthy, accountable risk management system.

12:30
Hector Diego Estrada Lugo (Technological University Dublin, Ireland)
Arianna Giuliani (Technological University Dublin, Ireland)
Andres Alonso Perez (Technological University Dublin, Ireland)
Maria Chiara Leva (Technological University Dublin, Ireland)
Gernot Stuebl (Profactor GmbH, Austria)
Thomas Poenitz (Profactor GmbH, Austria)
Mehmet Tuncel (Istanbul Technical University, Turkey)
Nazim Kemal Ure (Istanbul Technical University, Turkey)
Video analysis for ergonomics assessment in the manufacturing industry: initial feedback on a case study

ABSTRACT. The manufacturing industry is benefiting from new technologies developed in the field of artificial intelligence. However, as part of the European AI strategy, the role of workers in the industry must be protected by including human-centred ethical values. The TEAMING.AI project is developing a revolutionary human-AI teaming software platform comprised of interconnected utilities. This work reflects the preliminary results of some of the methodologies being developed within the project. An ergonomics assessment of manual activities performed by operators in a manufacturing workplace is carried out. The data for the assessment come from video recordings obtained with cameras installed at strategic points on the shop floor. In this work, the assessment is done by manually selecting images from the videos and scoring them based on the Rapid Upper-Limb Assessment (RULA) and Rapid Entire Body Assessment (REBA) methods. Once a score is computed, an analysis of the activity is provided. The preliminary results show that distortion in the recorded images can affect the assessments. A method to enhance the video analysis in two major directions is proposed. The first direction focuses on automatic operator detection; the second on generating 3D information for ergonomic assessment from undistorted images. Some details related to the use case are omitted to preserve the anonymity of the operators in the company.
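
For readers unfamiliar with the scoring step mentioned above, the sketch below shows how one sub-score of the Rapid Upper-Limb Assessment (the upper-arm posture score) can be computed from a joint angle estimated in a video frame. The angle bands follow the published RULA worksheet, but the adjustment terms are a reduced subset and the function is only an illustration, not the scoring pipeline used in the project.

```python
def rula_upper_arm_score(flexion_deg: float, shoulder_raised: bool = False,
                         arm_supported: bool = False) -> int:
    """Simplified RULA step-1 upper-arm score from a flexion angle in degrees.

    Negative angles denote extension. Only two of the RULA adjustments are
    included here, purely for illustration.
    """
    if -20 <= flexion_deg <= 20:
        score = 1                       # 20 deg extension to 20 deg flexion
    elif flexion_deg < -20 or flexion_deg <= 45:
        score = 2                       # >20 deg extension or 20-45 deg flexion
    elif flexion_deg <= 90:
        score = 3                       # 45-90 deg flexion
    else:
        score = 4                       # >90 deg flexion
    if shoulder_raised:
        score += 1
    if arm_supported:
        score -= 1
    return max(score, 1)

# Example frame: 60 degrees of flexion estimated from the video pose
print(rula_upper_arm_score(60.0))  # -> 3
```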

11:10-12:50 Session 16G: Maintenance Modeling and Applications II: Reliability and warranty
Chair:
Antoine Grall (Troyes University of Technology, France)
Location: CQ-009
11:10
Larissa Perlitz (Ruhr West University of Applied Sciences, Germany)
Uwe Kay Rakowsky (Ruhr West University of Applied Sciences, Germany)
Simulation of a multi-system scenario comparing non-derating versus reliability-adaptive systems
PRESENTER: Larissa Perlitz

ABSTRACT. The modelling of reliability-adaptive systems (RAS) has been widely discussed in many contributions and in an ESREL session. In a nutshell: RAS conduct a self-derating of their performance to extend the remaining useful life (RUL). The RUL measure is well known from prognostics and health management approaches. The RUL is permanently assessed during operation as a function of performance and external influences, e.g. electrical or mechanical load, temperature, or vibration.

The proposed contribution compares the efficiency of systems operated either conventionally in a non-derating mode or in a reliability-adaptive mode, as roughly outlined above. The evaluation is conducted by means of a MATLAB simulation.

Description of the scenario – The scenario is located on the Moon based on the concept of the Moon Village presented by the European Space Agency. The scenario consists of six systems maintained by a single maintenance unit (MU). The systems are robots roving on the Moon operating on six sites around the MU. The task of the robots is drilling the surface and bringing back soil samples to the MU. After their return to the MU, the robots are subject to maintenance. Unfortunately, the MU can maintain only one robot at a time. If more robots are returning from their sites, they have to wait in a queue.

Reliability model – For the sake of simplicity, the reliability model of a robot is reduced to one single component: the drill. Firstly, a conventional reliability estimation of the drills is conducted during the development of the robots. This general (non-drill-individual) estimation is based on NPRD data and yields the well-known basic failure rate with some extra factors. Secondly, six drill-individual RUL prognoses are conducted during operation. Depending on the a priori unknown soil density, the drills will individually wear out sooner or later – the RULs of the six robots will be shorter or longer.

Reliability-adaptive operation – The following procedure roughly describes the scenario operation. 1) At the beginning of the time interval all robots are drilling at full performance (level 1 or 100 %). 2) Soon after, the robots are assigned a performance in descending order of their ID to reduce the possibility of simultaneous failures. 3) The RUL prognosis starts after the first robot failure. 4) When predicting the RUL, all previous failures as well as the current performance of the respective robot are taken into account. The prognosis of the second-worst RUL may yield an overlapping occupation of the MU. If so, the performance of the robot is reduced in 0.01 increments of the performance level until the RUL prognosis coincides with a clear MU. 5) The robots with succeeding RULs "behave" accordingly, degrading their performance according to their individual RUL and the capability of the MU.

After every robot failure, the robots are self-rearranging their performance levels according to their current individual RUL prognoses. The objective of the reliability-adaptive operation is that the probably next failing robot does not fail until the restoration of the preceding robot is completed. Putting it positively: The next faulty robot is arriving at the MU exactly when the preceding robot is leaving.

Simulation – As stated above, the same lunar scenario is modelled twice: a) with conventionally operating robots permanently drilling at 100 % performance, and b) with reliability-adaptive operating robots. Drilling on the site is defined as uptime. Roving on the Moon, waiting in a queue, and restoration in the MU are summarized as downtime. In this first approach the lunar soil is assumed to have an identical density at every site and in every depth; however, the times to failure for all robots are drawn randomly. Different distances between the sites and the MU, among other aspects, yield displacements of the up- and downtime patterns.

After a while, the MU becomes increasingly busy. In scenario a), robots more and more often waste time in the queue. In scenario b), the up- and downtime patterns become considerably more complicated; however, more robots are drilling at the same time.

Results and conclusions – The scenario-wide workload performed by all robots is the evaluation measure in this approach. The workload is the product of the performance level multiplied by the duration of the related uptime interval. Three results are obtained: 1) Reliability-adaptive systems operate more efficiently in the given lunar context than the conventional systems, as they continuously achieve a higher total uptime. 2) The longer the simulation time, the higher the efficiency of reliability-adaptive operation compared to conventional operation. (We start at 1000 h and end at five years of simulation time; the drill duration at full performance is around 30 hours.) 3) It has also been shown that reliability-adaptive systems have a significant influence on reducing delay times when arriving at the sites.
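
A minimal sketch of the core trade-off described above (hypothetical numbers, not the authors' MATLAB model): if wear is assumed to accumulate in proportion to the performance level, derating stretches the RUL while leaving the per-cycle workload (performance level times uptime) unchanged, so the efficiency gain in the scenario comes from avoiding queueing at the MU. The last function mimics the 0.01-step derating rule from step 4.

```python
# Assumed, illustrative figures only
damage_budget = 30.0      # hours of drilling at full performance until failure
repair_time = 10.0        # hours the maintenance unit (MU) needs per robot

def uptime_until_failure(performance: float) -> float:
    """RUL in hours if wear accumulates proportionally to the performance level."""
    return damage_budget / performance

def workload(performance: float) -> float:
    """Scenario evaluation measure: performance level times uptime."""
    return performance * uptime_until_failure(performance)

for level in (1.0, 0.8, 0.6):
    print(f"performance {level:.2f}: RUL {uptime_until_failure(level):5.1f} h, "
          f"workload {workload(level):5.1f}")

def derate_until_mu_free(rul_full: float, mu_free_at: float, step: float = 0.01) -> float:
    """Reduce the performance level in 0.01 steps until the predicted failure
    no longer arrives before the MU is free again."""
    level = 1.0
    while level > step and rul_full / level < mu_free_at:
        level -= step
    return round(level, 2)

print("chosen performance level:", derate_until_mu_free(rul_full=30.0, mu_free_at=40.0))
```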

11:30
Abderrahim Krini (Robert Bosch GmbH, Germany)
Josef Boercsoek (University of Kassel, Germany)
New approach to predict warranty costs using a new bivariate reliability prediction model
PRESENTER: Abderrahim Krini

ABSTRACT. An important marketing strategy of automotive manufacturers is to promote high-quality products by extending warranty periods, which inevitably generates additional costs. In general, different loads on a control unit cause a failure; therefore, a failure can be traced back to the loads experienced. In reliability theory, the operating time t is commonly used as a reference value for describing the load acting on a system, since the ECUs are only under current and voltage for the duration t and are therefore only loaded during this time. However, obtaining a suitable database is difficult. The failure data of ECUs collected by the supplier during the warranty period in the event of complaints can be regarded as complete and therefore represent a suitable database for determining reliability parameters. The warranty data are time-censored and reflect the field behaviour of the ECUs during the warranty period. As a rule, when collecting the data, the duration in the field (production and complaint date) of the ECUs is recorded in addition to the cause of failure. With a new multivariate prediction model that uses these data to forecast the expected ECU failures, and knowledge of the costs of a warranty case, the expected additional costs of a warranty extension can be calculated.
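
The following sketch illustrates one simple way to use time-censored warranty data for such a forecast; it is a univariate simplification with synthetic data, not the bivariate model proposed in the paper. A Weibull lifetime model is fitted by maximum likelihood with right censoring at the warranty limit, and the expected number of failures (and cost, with an assumed cost per claim) is then extrapolated to longer warranty periods.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

# Synthetic, time-censored warranty data (months in service); not real ECU data.
true_beta, true_eta = 1.8, 120.0
t = weibull_min(c=true_beta, scale=true_eta).rvs(5000, random_state=rng)
censor_at = 36.0                     # 3-year warranty window
observed = np.minimum(t, censor_at)  # time in field
failed = t <= censor_at              # complaint recorded within warranty

def neg_log_lik(params):
    """Weibull log-likelihood with right-censored observations."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf
    d = weibull_min(c=beta, scale=eta)
    return -(np.sum(d.logpdf(observed[failed])) + np.sum(d.logsf(observed[~failed])))

fit = minimize(neg_log_lik, x0=[1.0, 100.0], method="Nelder-Mead")
beta_hat, eta_hat = fit.x
d = weibull_min(c=beta_hat, scale=eta_hat)

cost_per_claim = 250.0  # assumed cost of one warranty case
for months in (36, 60, 96):
    # Rough extrapolation: every sold unit is treated as exposed for the full period
    expected_claims = d.cdf(months) * len(observed)
    print(f"{months:>3} months: expected failures {expected_claims:7.1f}, "
          f"expected cost {expected_claims * cost_per_claim:10.0f}")
```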

11:50
Minjae Park (Hongik University, South Korea)
Determination of Optimal Warranty Period under Two-dimensional Warranty Policy with Periodic Preventive Maintenance and Its Application

ABSTRACT. This paper considers a two-dimensional warranty policy for a repairable product with an increasing failure rate, during which a fixed number of periodic preventive maintenance actions are provided by the manufacturer or the seller. Preventive maintenance is a common planned action to delay the wear-out of the product while it is still in the operating state. The product is warranted by taking into account both age and usage, and the warranty expires when the product reaches a specified age or a specified usage, whichever comes first. In this paper, we develop a cost model to evaluate the expected total warranty cost from the manufacturer's perspective under a certain cost structure and determine the optimal warranty period under the two-dimensional warranty policy under study. As a practical application of the proposed optimal warranty policy, we present the optimal maintenance strategy under a certain type of lemon law, by which the manufacturer is required to either refund or replace the purchased product if the product's failures cannot be remedied within the repair thresholds set by the law. In addition, we discuss the optimal premium prices for the warranty policy from the manufacturer's perspective.
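
As a simplified, hedged illustration of this kind of two-dimensional cost model (invented parameters, no preventive-maintenance effect, minimal repairs modelled as a power-law non-homogeneous Poisson process): for a customer with usage rate r the warranty ends at min(W, U/r), and the expected warranty cost is obtained by averaging over an assumed usage-rate distribution.

```python
import numpy as np
from scipy import integrate, stats

# Hypothetical parameters, for illustration only
W, U = 5.0, 150.0          # warranty limits: 5 years or 150 (thousand km)
beta, theta = 2.2, 4.0     # power-law (Weibull) failure intensity in calendar time
c_min_repair = 80.0        # assumed cost of a minimal repair
usage_rate = stats.lognorm(s=0.5, scale=25.0)   # thousand km per year across customers

def expected_failures_given_rate(r):
    """Warranty ends at min(W, U/r); expected failures of an NHPP with power-law intensity."""
    t_end = min(W, U / r)
    return (t_end / theta) ** beta

# Average over the usage-rate population (truncated integration range, kink at r = U/W)
e_n, _ = integrate.quad(lambda r: expected_failures_given_rate(r) * usage_rate.pdf(r),
                        0.01, usage_rate.ppf(0.999), points=[U / W])
print(f"expected failures per unit sold: {e_n:.3f}")
print(f"expected warranty cost per unit: {e_n * c_min_repair:.2f}")
```

Optimising the warranty period would then amount to repeating this evaluation over a grid of (W, U) pairs and trading the cost against the revenue effect, which is the part the paper formalises.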

12:10
Marjorie Bellinello (UTFPR – Federal University of Technology of the Paraná, Brazil)
Marcelo Rodrigues (UTFPR – Federal University of Technology of the Paraná, Brazil)
Emerson Rigoni (UTFPR – Federal University of Technology of the Paraná, Brazil)
Carlos Henrique Mariano (UTFPR – Federal University of Technology of the Paraná, Brazil)
Raphael Augusto de Souza Benedito (UTFPR – Federal University of Technology of the Paraná, Brazil)
Paulo Sérgio Walenia (UTFPR – Federal University of Technology of the Paraná, Brazil)
Yago Lafourcade Baracy (UFSC - Federal University of Santa Catarina, Brazil)
Gilberto Francisco Martha Souza (Polytechnic School - University of São Paulo, Brazil)
Gisele Maria de Oliveira Salles (COPEL - Energy Company of Parana State, Brazil)
Reliability Centered Maintenance - Quantitative (RCM-Q) Applied to Hydropower Plants: Analysis from the Zero-Base Transition to the Quantitative Process

ABSTRACT. Hydroelectricity is the basis of the Brazilian energy matrix. Therefore, the need to maintain the availability and operational reliability of hydroelectric plants is clear, so as not to compromise the continuity and conformity (quality) of the electrical energy supply to the end consumer. Availability, along with the reliability of hydroelectric plants, can be maintained by employing appropriate maintenance policies that reduce the likelihood of failure or even eliminate its root causes, preventing failures from occurring.

The purpose of this paper is to present the process of transitioning from a maintenance policy based on zero-base reliability to a maintenance policy based on quantitative reliability (RCM-Q). The RCM-Q is applied, as a case study, to a Francis-type hydrogenerator, pointing out the difficulties in carrying out this transition.

The RCM-Q considers the impact of failures in electrical and mechanical systems on the operation of the hydrogenerator. Each asset system is divided into subsystems, and each operational subsystem has its failure modes analysed using Failure Mode, Effects and Criticality Analysis (FMECA) to select the significant functions and to prioritize the risks related to failure modes. The failure mode classification considers indicators of impacts on the safety, environmental, operational, and economic aspects of the process.

Through life data analysis, based on historical maintenance and failure data, it is possible to determine the cumulative distribution function (CDF) of failure and the cumulative distribution function of repair for each subsystem. These functions are used as input parameters of the discrete event simulation model, which, in turn, uses the representation of reliability block diagrams.

The simulation process, in addition to providing a general analysis of the reliability, maintainability and availability of the system, helps maintenance managers to identify the optimal time interval for performing maintenance tasks (replacing asset components or preventive/predictive inspection of operational systems). Thus, it is possible to assertively determine and prioritize maintenance actions, ensuring operational availability and reliability of the energy generation process.
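
A minimal sketch of this simulation step (illustrative parameters only, a single subsystem, no reliability block diagram structure): times to failure and to repair are sampled from fitted distributions and the long-run availability is estimated from the resulting alternating up/down process.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters only (not from the COPEL case): Weibull time-to-failure
# and lognormal time-to-repair for one hydrogenerator subsystem.
beta, eta = 2.0, 8000.0        # failure: shape, scale in hours
mu, sigma = np.log(48.0), 0.6  # repair: log-mean, log-std in hours

def simulate_availability(horizon_h=87_600, n_runs=2000):
    """Alternating renewal (up/down) process; returns mean availability over the horizon."""
    avail = []
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < horizon_h:
            ttf = eta * rng.weibull(beta)        # sampled time to failure
            ttr = rng.lognormal(mu, sigma)       # sampled time to repair
            up += min(ttf, horizon_h - t)        # uptime capped at the horizon
            t += ttf + ttr
        avail.append(up / horizon_h)
    return float(np.mean(avail))

print(f"simulated availability over 10 years: {simulate_availability():.4f}")
```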

This paper presents part of the results from a research project (PD-06491-0341/2014, "Methodology for asset management applied to hydro generators based on reliability and maintainability mathematical models") developed by the Federal University of Technology – Paraná (UTFPR) and the University of São Paulo (USP) in partnership with COPEL - Energy Company of Paraná State S.A. (generation and transmission sectors). The research project falls within the scope of the research and technological development program applied to the electric sector, which is regulated by the Brazilian Electricity Regulatory Agency (ANEEL).

12:30
Mengchu Song (Technical University of Denmark, Denmark)
Morten Lind (Technical University of Denmark, Denmark)
Functional Modeling and Reasoning for Reliability Centered Maintenance
PRESENTER: Mengchu Song

ABSTRACT. Reliability centered maintenance (RCM) is a systematic analysis method for developing and optimizing the preventive maintenance program for assets in complex systems. RCM turns the maintenance principle around, from conventionally protecting equipment to preserving function. Therefore, function and functional failure analysis is an essential process and can provide the basis for effective maintenance decision making. Nevertheless, such analysis is usually labor-intensive, requiring a large amount of manual work for any maintenance project. Although there is commercial software that has significantly eased the time and paperwork burden of information recording, it cannot replace the human reasoning role and thus can hardly automate the RCM analysis. In order to improve the efficiency of RCM, this paper proposes a functional modeling and reasoning approach for RCM based on multilevel flow modeling (MFM). Developing an MFM model relies on the decomposition of operational or safety goals and the establishment of mass and energy flows, which can provide the knowledge base for defining functional failures and conducting the component failure analysis, while the built-in intelligent causal reasoning solution of MFM can indicate the failure consequences at the system or plant level. Both the modeling and reasoning capabilities of MFM are integrated into a logic decision tree to define the classification of each component, which is the most critical result from the RCM analysis. This work is expected to support the development of an intelligent RCM decision support system to optimize the existing maintenance plan and prevent the unwanted consequences of failure.

11:10-12:50 Session 16H: Maritime and Offshore Technology: risk analysis I
Chair:
Ingrid B Utne (Department of Marine Technology, NTNU, Norway)
Location: CQ-105
11:10
Ivana Jovanovic (Faculty of mechanical engineering and naval architecture, Croatia)
Nikola Vladimir (Faculty of mechanical engineering and naval architecture, Croatia)
Maja Perčić (Faculty of mechanical engineering and naval architecture, Croatia)
Marija Koričan (Faculty of mechanical engineering and naval architecture, Croatia)
Effect of potential autonomous short-sea shipping in the Adriatic Sea on the maritime transportation safety
PRESENTER: Ivana Jovanovic

ABSTRACT. Rapid technological development, wireless communication and monitoring, growing environmental awareness, alternative fuels, and stricter regulations are continuously putting pressure on maritime transportation and shipbuilding. The maritime sector is exploring ways to reduce costs and emissions while at the same time increasing safety and energy efficiency. Autonomous shipping is an emerging topic, where technical, economic, safety, and environmental aspects are still not mature enough to significantly increase the share of autonomous vessels in the global fleet. A reduced crew onboard brings savings and a lower risk of human-induced errors that can lead to human casualties and environmental disasters. However, autonomous vessels monitored from shore require a high-quality and reliable communication system. The technologies needed for autonomous navigation already exist, and it is necessary to find the optimal way to combine their safety, reliability, feasibility, and cost-effectiveness. Short-sea shipping is an ideal candidate for developing and testing new technologies due to shorter routes, low energy demand, and frequent port calls. The Croatian part of the Adriatic coastline, with numerous islands and developed tourism activity, is extremely sensitive to congestion and marine pollution. This paper presents an analysis of Croatian maritime accident reports to assess whether each accident would have happened if the ship had been autonomous and, once the accident had happened, whether its consequences would have been different.

11:30
Simona Miraglia (Technical University of Denmark, Denmark)
Nicolas Preben Kraunsøe Frandsen (Borsen, Denmark)
Christian Mathias Faber (Ramboll Denmark, Denmark)
Toke Koldborg Jensen (Ramboll Denmark, Denmark)
Søren Randrup Thomsen (Ramboll Denmark, Denmark)
Simulation based model for the evaluation of the design impact force from ship collision on bridge piers
PRESENTER: Simona Miraglia

ABSTRACT. The value of the dynamic design impact force from sea-going vessels against bridge piers, as provided by the Eurocode design code, is bounded by several fixed-value variables - in particular speed, tonnage and impact area on the pier - which might cause either an overestimation or an underestimation of the impact design load depending on the real ship traffic intensity, the number and size of ships crossing the channel within specific ship categories, the probability of impact avoidance due to human intervention, the dimensions of the pier, etc. Although the use of probabilistic modeling for defining design loads from different sources of hazards (e.g. road traffic) is quite advanced and widely applied, probabilistic models to evaluate the design load from collision due to ship accidents are rarely used, owing to the unavailability of reliable open-source data to simulate the occurrence of ship accidents, which makes it difficult to perform a fully adequate uncertainty quantification. Indeed, while AIS data (Automatic Identification System) are often freely available, records of accidents are seldom accessible, in particular of the kind that would allow accidents to be correlated with environmental conditions, mechanical failure and human error. Moreover, a mechanistic model for impact energy calculation is not codified, and the choice of the model to calculate the impact energy is rather left to the judgement of the engineering design team. An evaluation of the impact energy, to be used in combination with the Eurocode formulation, based on local ship crossing data and a mechanistic model of the impact dynamics, would allow at least a more realistic evaluation of the impact energy range and consequently a more realistic evaluation of the design load. Mathematical models using both probabilistic ship traffic modeling and statistical accident scenario modeling have been used to determine annual ship collision frequencies for obstacles such as bridge piers, pylons and girders. Our study aims at using these ship collision models as basic input to: a) calculate the initial impact energy using the mechanistic model of Terndrup-Pedersen, which derives from the dynamic equilibrium equation of a ship colliding with a fixed object given ship geometry, tonnage, impact angle, sailing speed and ship-to-pier friction angle; b) perform a sensitivity analysis on friction and impact angle for the simulated scenarios; c) account for and quantify the large uncertainties in the resulting collision (severity reduction factors) from last-minute speed reductions and last-minute course changes, based on a simulation of specific ship approaches (boot-strapping single ships from the collision population) including simulation of speed and course change events and the corresponding speed and course; d) calculate the resulting probabilistic distribution of the dynamic impact force from ship collision and, based on the principle of reliability-based design, identify the design load. The simulation model presented here also represents a framework for the calculation of the dynamic design impact force according to reliability-based design.
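
The core of points a) and c) reduces to sampling an impact energy over the simulated traffic. The fragment below is only a schematic stand-in: it uses the plain 0.5·m·v² kinetic energy with an assumed added-mass factor and a random last-minute speed-reduction factor, not the Terndrup-Pedersen formulation, and all distributions are invented rather than calibrated to any real channel. It shows how a design value could then be read off as a high quantile of the simulated distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative distributions only (not calibrated to any real channel)
displacement_t = rng.lognormal(mean=np.log(20_000), sigma=0.5, size=n)  # tonnes
speed_ms = rng.normal(loc=5.0, scale=1.0, size=n).clip(min=0.5)         # m/s at impact
added_mass = 1.05                          # hydrodynamic added-mass factor (assumption)
reduction = rng.uniform(0.3, 1.0, size=n)  # last-minute speed-reduction factor (assumption)

mass_kg = displacement_t * 1000.0
energy_mj = 0.5 * added_mass * mass_kg * (speed_ms * reduction) ** 2 / 1e6

# Design value as a high quantile of the simulated impact-energy distribution
print(f"mean impact energy:     {energy_mj.mean():8.1f} MJ")
print(f"98% quantile (design):  {np.quantile(energy_mj, 0.98):8.1f} MJ")
```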

11:50
Thomas Porathe (Norwegian University of Science and Technology, Norway)
Collision Avoidance for autonomous ships: making assumptions about other ships' intended routes

ABSTRACT. Research on Maritime Autonomous Surface Ships (MASS) has gained considerable momentum since the IMO opened for the integration of "new and advancing technologies" in its regulatory framework in 2017. In Norway, a first all-electric autonomous short-sea container vessel, Yara Birkeland, has been delivered and delayed, but will start test sailing later this year, albeit with a crew onboard. Within a new 8-year research project, SFI AutoShip, design researchers are studying human factors integration for operators in remote operation centres, while within the cybernetics and computer science domains studies are made on automatic collision avoidance. In several projects good progress has been made in solving simple situations according to the collision regulations (COLREGS). A major hurdle remains in translating qualitative terms like "early and substantial" or "the ordinary practice of seamen" into enumerations useful for computer algorithms. But at the same time as it is important to make the automatic manoeuvres of MASS readily understandable for humans on conventional ships, it is also important for the automatic algorithms to understand the manoeuvres and intentions of human navigators on conventional ships. This paper focuses on this important aspect. To avoid collisions, you must understand the route intention of conflicting traffic. One thing is to meet a ship with a steady course and speed that can simply be extrapolated into the future; the COLREGS have the very specific requirement for ships considered the "stand-on vessel" that they shall keep their course and speed. But in an archipelago scenario, ships do not keep a steady course and speed; instead they follow the fairway or a route planned in a navigation system. Consideration also needs to be given to wind, current and under-keel clearance. However, there are other ways for the "autonomous navigator algorithm" on a MASS to make inferences about other ships' intentions, if the simplistic paradigm of extrapolating course and speed is dropped. One is to use the "destination" tag in the AIS message, perhaps by assuming coherence with the "reference routes" which in Norway are published by the Norwegian Coastal Administration (NCA). Another is to make use of traffic density plots, which on a statistical level show the paths of historic traffic in an area, and to assume that an oncoming ship will do the same as the majority of previous ships have done. Such traffic density plots are published by, for example, MarineTraffic and, in Norway, by the NCA. These maps are, however, aggregated to a general level, and it is not possible to query the data for an individual ship. There is, however, a need to do this in order to make them useful for collision avoidance. If the algorithm could see which way an individual ship has transited an area in the past, the chances of making the right inferences about its present intentions could increase. The same is true for querying the traffic data for days with similar weather conditions, or for ships with similar draughts. A very interesting possibility for the future is using the e-Navigation concept of "route exchange" that was developed in a number of EU projects during the last decade. Route exchange allows vessels to transmit a number of waypoints ahead of their present position to ships within radio range. These ships will then be able to see directly on their radar or chart screens which way the other ships plan to go. Route exchange holds interesting possibilities for the future.
The aim of this paper is to discuss and suggest solutions to these aspects of autonomous collision avoidance.

12:10
Ruochen Yang (Norwegian University of Science and Technology, Norway)
Jens Einar Bremnes (Norwegian University of Science and Technology, Norway)
Ingrid Bouwer Utne (Norwegian University of Science and Technology, Norway)
A system-theoretic approach to hazard identification of autonomous operation with multiple autonomous marine systems

ABSTRACT. Autonomous operations with multiple autonomous marine systems (AMS) are becoming increasingly popular for a variety of applications. Some traditional challenges associated with single-AMS operations may be relieved by the presence of a second AMS. However, the operation of multiple AMS may bring new challenges, possibly caused by unsafe interaction between the participating AMS. Hence, this needs to be further analyzed to improve their safe and reliable operation. Most previous risk-related work on AMS focuses on the operation of a single AMS and ignores the unsafe interaction between different participating AMS. The current study focuses on operations with multiple AMS, aiming at identifying the potential hazards during the operation. System-theoretic process analysis (STPA) is applied to capture the interaction between each AMS and the interaction between AMS and human operators. An integrated USV-AUVs operation is used as a case study. The analysis results are expected to support future planning of operations with multiple AMS and increase the awareness of the operators. In addition, it is expected that the analysis results and conclusions can also be used to develop an online risk model which can capture the rapid change of operating conditions in operations with multiple AMS and thereby enhance the intelligence of the AMS, its situation awareness, and decision-making during operation.

12:30
Ladislav Myšák (Czech Technical University in Prague, Technicka 4, 166 00 Praha 6, Czechia)
Dana Prochazkova (Czech Technical university in Prague, Technicka 4, 160 00 Praha 6,, Czechia)
Vaclav Dostal (Czech Technical university in Prague, Technicka 4, 160 00 Praha 6,, Czechia)
Jan Procházka (VUT Brno, Czechia)
REDUCTION OF EMISSIONS OF MARINE SHIPMENT
PRESENTER: Ladislav Myšák

ABSTRACT. Global statistics monitoring freight maritime transport volume show a continuous long-term increase in climate pollution. World statistics also show that this transport is one of the biggest air polluters; greenhouse gas (GHG) emissions - including carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), expressed as CO2 equivalent - of total shipping (international, domestic and fisheries) increased from 977 million tons in 2012 to 1,076 million tons in 2018 (an increase of 9.6%). In 2012, CO2 emissions amounted to 962 million tons, while in 2018 this had increased by 9.3% to 1,056 million tons. This means that the share of shipping emissions in global anthropogenic emissions increased from 2.76% in 2012 to 2.89% in 2018. Under the new voyage-based allocation of international shipping, CO2 emissions also increased over the same period, from 701 million tons in 2012 to 740 million tons in 2018 (an increase of 5.6%), but at a lower growth rate than total shipping emissions, representing an approximately constant share of global CO2 emissions over this period (around 2%). Using the vessel-based allocation of international shipping taken from the fourth IMO greenhouse gas study, CO2 emissions increased over the period from 848 million tons in 2012 to 919 million tons in 2018 (an increase of 8.4%). Emissions are projected to increase from around 90% of 2008 emissions to 90-130% of 2008 emissions by 2050 for a number of plausible long-term economic and energy scenarios. Emissions could be higher (lower) than assumed if the rate of economic growth is higher (lower) than assumed here, or if the reductions in greenhouse gas emissions of terrestrial sectors are smaller (larger) than would be necessary to limit the increase in global temperature to well below 2°C. At the same time, climate change requires emissions from maritime freight transport to be reduced. To do this, it is necessary to fundamentally and radically change the fuel of ships, because current and future maritime transport is the most cost-effective mode in the field of transport and logistics. Today, a large number of projects around the world are already addressing this problem. Based on current knowledge and experience, investment in hydrogen propulsion appears to be key. When burning hydrogen, in addition to a significant energy gain (96-120 MJ/kg of hydrogen), only environmentally safe water is produced, i.e. the technology is practically waste-free. The unpleasant property of hydrogen is that it is flammable and that the risk of explosion is great. Therefore, it is necessary to find a solution that is acceptable both from the point of view of safety and from the point of view of cost. Currently, in the field of development and research of alternative propulsion for the world's largest vessels, the most closely monitored variant is based on Green-NH3 for slow-running two-stroke marine engines within a modified fuel cycle. Based on current knowledge and experience, the long-term research into and operation of small modular reactors (SMRs) in shipping around the world, and the long-term research into hydrogen as an energy fuel, we design a cargo ship powered by hydrogen that will be produced on board in a technical facility (hydrogen generator) powered by an SMR. It will use demineralized water obtained by applying reverse osmosis technology to seawater.
For implementation in practice, we are working on a project to interconnect these technologies while considering all risks in favour of safety (risk-based design). Work [1] presented risk sources for SMRs. In this article, we address the sources of risks associated with the extraction of demineralized water from seawater by reverse osmosis and with the production and combustion of hydrogen on board the ship; and, for the sake of practice, we show that the continuously produced emissions of such a ship would be several orders of magnitude lower than those of currently used ship propulsion. Keywords: freight maritime transport; emissions; SMR; hydrogen production; hydrogen generator; risks; safety; risk-based design; safety management.

[1] PROCHAZKOVA, D., PROCHAZKA, J., DOSTAL, V. Risks of Power Plants with Small Modular Reactors. doi:10.3850/978-981-18-2016-8_125-cd

11:10-12:50 Session 16I: Aeronautics and Aerospace
Chair:
Mario Brito (University of Southampton, UK)
Location: CQ-107
11:10
Riccardo Patriarca (Sapienza University of Rome, Italy)
Joerg Leonhardt (DFS, Germany)
Antonio Licu (EUROCONTROL, Belgium)
Introducing the Structured Exploration of Complex Adaptations to learn from operations in an Air Navigation Service Provider

ABSTRACT. Understanding the nuances of everyday work requires in-depth exploration of a system's properties. The respective organizational knowledge should be the result of collaborative sharing spanning tacit and explicit knowledge dimensions to exploit the system's resilient potentials. In this context, this manuscript presents a novel tool called SECA (Structured Exploration of Complex Adaptations) to help detect weak signals in normal operations for complex socio-technical systems. Besides the description of the situation at hand, SECA encompasses four areas of investigation: response in action, experience, pressures, and goal conflicts. The data obtained from SECA interviews are then coded and analysed systematically to generate and aggregate contents from different respondents across multiple work processes. This semantic analysis, following grounded theory, is meant to support analysts in identifying concerns affecting the system's safety or productivity. The paper introduces exemplary results obtained from the application of SECA in a large European Air Navigation Service Provider to improve risk management in the air traffic management system.

11:30
Stanislav Bukhman (University of Southampton, UK)
Mario Brito (University of Southampton, UK)
Ming-Chien Sung (University of Southampton, UK)
APPLICATION OF MACHINE LEARNING ALGORITHMS IN RISK ASSESSMENT OF AIR OPERATIONS IN CONFLICT ZONES

ABSTRACT. Civil aviation continues to be one of the key services uniting people all over the world, and on average flights are used by almost two billion passengers per year (IATA, 2021). It is an attractive target for international terrorism and, in addition, is vulnerable in the case of operations into areas affected by armed conflicts or above such territories, due to intentional shooting or misidentification by air defence forces; managing security risks is therefore an important part of the aviation industry. Recent examples of civil aircraft shot down over conflict zones include the downing of Malaysia Airlines flight MH17 over Eastern Ukraine in 2014 and the downing of Ukraine International Airlines flight PS752 over Iran in 2020 (ASN, 2022).

Industry regulations related to the risk assessment of civil aviation operations over conflict zones suggest processing a significant array of data to carry out the risk assessment of a passenger flight. This includes Aeronautical Information Publications (AIP), Notices to Airmen (NOTAM), Aeronautical Information Circulars (AIC), state advisories and private industry solutions (ICAO, 2018). As a risk assessment tool, industry regulations suggest the use of qualitative risk matrix methodologies ('low', 'medium', 'high'), which are subjective, dependent on the expertise of the decision-makers, and can be misinterpreted by users (Renooij and Witteman, 1999).

In this paper we apply machine learning algorithms to predict the risk of a terrorist attack in the aviation industry. Indeed, machine learning is already used to optimize other aspects of aviation security, such as advancing the detection of threat objects during passenger baggage screening (Gota et al., 2020), supporting the profiling of passengers prior to approval of their travel to a certain country (Zheng et al., 2016), or advancing threat detection capabilities in the cybersecurity of critical systems (Perrone et al., 2021).

This study aims to propose a predictive model for the security risk assessment of air operations in and over conflict zones. The proposed methodology can complement existing methodologies of aviation security risk assessment, avoiding the possible bias of the experts involved in the risk assessment process and eliminating the subjectivity of the matrix approach widely used in the industry.
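
As a purely illustrative sketch of what such a predictive model might look like (all features, data and effect sizes below are invented, not derived from the study or from any real conflict-zone dataset), a standard classifier can be trained on route-level indicators and evaluated on held-out routes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)

# Fully synthetic illustration of features a route-risk model might use (assumptions):
# conflict intensity index, presence of long-range air-defence systems,
# count of active advisories/NOTAMs, and cruise altitude over the affected area.
n = 5000
X = np.column_stack([
    rng.uniform(0, 1, n),        # conflict intensity
    rng.integers(0, 2, n),       # long-range air defence present
    rng.poisson(2, n),           # active advisories / NOTAMs
    rng.uniform(6, 12, n),       # cruise altitude, km
])
logit = -5 + 3 * X[:, 0] + 2 * X[:, 1] + 0.4 * X[:, 2] - 0.1 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # synthetic incident label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("ROC AUC on held-out routes:", round(auc, 3))
```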

11:50
Alessandro Aimasso (Politecnico di Torino - Department of Mechanical and Aerospace Engineering, Italy)
Matteo Davide Lorenzo Dalla Vedova (Politecnico di Torino - Department of Mechanical and Aerospace Engineering, Italy)
Paolo Maggiore (Politecnico di Torino - Department of Mechanical and Aerospace Engineering, Italy)
Gaetano Quattrocchi (Politecnico di Torino - Department of Mechanical and Aerospace Engineering, Italy)
FBG-based optical sensor networks for thermal measurements in aerospace applications

ABSTRACT. The use of optical fiber has revolutionized various technological sectors in recent decades, above all communication, but also multiple applications in the medical, lighting, industrial and infrastructural fields. More recently, optical fiber has also begun to play a crucial role in aerospace research, with the first studies towards a future transition from fly-by-wire to fly-by-light flight controls. In general, standardizing the usage of optical fiber might provide considerable advantages over the traditional electrical systems already in use. A significant feature of optical fiber is its ability to be used not only as a transmission medium but also as a basis for fiber-embedded sensors; one of the most prominent types is based on fiber Bragg gratings (FBGs). Sensors based on Bragg gratings are the most immediate instrumentation for the local detection of many physical parameters, particularly a system's temperature and mechanical deformation, two physical quantities that are particularly relevant in the aerospace field. In particular, an accurate temperature evaluation is also essential to perform the thermal compensation of the measurements acquired by traditional or optical sensors used for strain gauge measurements. In this work, the authors analyze the performance of thermal sensors based on FBGs to verify their stability, accuracy, and sensitivity to operating conditions. In particular, they pay special attention to possible disturbances due to thermomechanical interactions and surrounding constraints (humidity, atmosphere, gluing systems, surface finishes of the specimens, architecture of the test chamber). This work is based on two distinct experimental campaigns. Firstly, a dedicated test bench was used to check the FBGs' accuracy and evaluate their sensitivity to disturbances related to limited but not negligible temperature variations; this was made possible by combining the optical sensors with other traditional (i.e. electronic) probes. At a later stage, the research group was equipped with a climatic chamber. As a result, it was possible to analyze the FBGs' behavior in the presence of both high temperature ranges (> 200 °C) and repeated, scheduled thermal stresses. For this purpose, the authors compared the results obtained using typical temperature sensors to derive the relationship between the observed temperature and the Bragg wavelength variation (i.e. the proportionality coefficient Kt). Finally, to evaluate the effect of the boundary conditions acting on the FBG sensors (in particular, the impact of the thermal expansion of the support structure), the previously described process was repeated by varying the fixing conditions of the fiber and the materials used, analyzing and comparing the data thus obtained. The results, supported by suitable statistical analyses, give optimistic indications concerning the feasibility of adopting FBG-based sensor networks for thermal analysis in the aerospace sector.
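
The calibration step described above amounts to estimating the proportionality coefficient Kt in the (approximately) linear relation between Bragg wavelength shift and temperature, Δλ_B ≈ Kt·ΔT. The sketch below uses synthetic data with a typical order-of-magnitude coefficient (an assumption, not a result from the experiments) to show the least-squares fit and the back-conversion from a measured wavelength to a temperature.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic calibration data: Bragg wavelength shift vs reference-probe temperature.
# A coefficient on the order of 10 pm/K near 1550 nm is assumed for illustration.
lambda_b0 = 1550.0               # nm, nominal Bragg wavelength
kt_true = 0.010                  # nm/K (assumption)
temp_c = np.linspace(20, 220, 60)
shift_nm = kt_true * (temp_c - temp_c[0]) + rng.normal(0, 0.002, temp_c.size)

# Least-squares estimate of the proportionality coefficient Kt
kt_hat, offset = np.polyfit(temp_c - temp_c[0], shift_nm, 1)
print(f"estimated Kt = {kt_hat * 1000:.2f} pm/K")

# Convert a measured wavelength back to temperature with the fitted coefficient
measured = lambda_b0 + 1.5       # nm, hypothetical reading
temperature = temp_c[0] + (measured - lambda_b0 - offset) / kt_hat
print(f"inferred temperature: {temperature:.1f} degC")
```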

12:10
Selcuk Yilmaz (University of Stavanger, Norway)
Ove Njå (University of Stavanger, Norway)
Jon Tømmerås Selvik (University of Stavanger, Norway)
Safety management of Unmanned Air Vehicles – Beyond Visual Line of Sight (BVLOS) flights. How does Systems Safety Thinking (STAMP) add to current practices (SORA)?
PRESENTER: Selcuk Yilmaz

ABSTRACT. The increasing number of Unmanned Air Vehicles (UAVs) in the airspace has brought safety-related challenges onto the agenda of aviation authorities. The European Union Aviation Safety Agency (EASA) has declared that Beyond Visual Line of Sight (BVLOS) flights must be subjected to a risk assessment process. The Specific Operations Risk Assessment (SORA) method is a multi-stage risk assessment tool developed to meet EASA requirements specifically for BVLOS flights. After EASA declared SORA an acceptable means of compliance, an increasing number of operators in EASA member states have applied SORA. However, the performance of SORA as a risk management tool has yet to be fully documented. In this paper we critically assess the SORA method as a risk management tool for BVLOS flights. A case study encompasses a SORA preparation, an incident and the associated investigation, and a post-incident System-Theoretic Process Analysis (STPA). We use the System-Theoretic Accident Model and Processes (STAMP) to identify critical issues in the Global Navigation Satellite Systems (GNSS) and identify constraints to prevent the recurrence of GNSS failures. Furthermore, we compare the STPA outputs with the SORA results. It was found that STPA identified several accident causative factors and constraints not captured by SORA, including social and organizational structures, new types of training requirements, design concept and requirement flaws, ergonomic system integration failure, and dysfunctional interactions between non-failed components. We conclude that complementing SORA with STPA can add operational safety objectives and increase the robustness of SORA.

12:30
Gianpiero Buzzo (CIRA S.c.P.A., Italy)
Lidia Travascio (CIRA S.c.P.A., Italy)
Angela Vozella (CIRA S.c.P.A., Italy)
Application of Monte Carlo simulation to the reliability estimation of an experimental Data Communication System
PRESENTER: Gianpiero Buzzo

ABSTRACT. Over the past few years, CIRA (Italian Aerospace Research Center) has been investigating the applicability of alternative reliability evaluation methodologies for complex systems. The research activities have focused on the development of algorithms for reliability assessment by means of Monte Carlo simulation. The system under investigation is the experimental setup of the FLARE platform, an experimental flying test-bed used by CIRA for testing activities. This facility is based on a TECNAM P92 – Echo S equipped with an experimental set-up used for different experimental purposes, mainly remote piloting and automatic GNC (Guidance, Navigation and Control) algorithms. Three components belong to this set-up: a Ground Control Station, a dedicated data link and an on-board avionic set-up. Since the CIRA set-up comprises hardware, software, homemade and Commercial-Off-The-Shelf equipment, an incremental process has been applied for the reliability assessment of the overall set-up, starting with the estimation of the Data Communication System (COMSYS). This system is distributed between the ground control station and the FLARE flight segment. A first investigation was made on the COMSYS on-board segment and a reliability evaluation was performed; the results from both RBD and Monte Carlo simulation were then compared [1]. This paper reports, as a first result, the comparison between the results of the reliability evaluation of the COMSYS ground segment obtained by applying both the RBD and Monte Carlo methodologies. For the Monte Carlo simulation, a refinement of the algorithms developed in [1] has been implemented to make them applicable to the ground segment. Finally, this result, together with the result from the on-board segment, has been used to derive a global reliability value for the COMSYS. Moreover, the obtained reliability values have been used to identify the most critical components and to verify compliance with the safety objectives allocated to COMSYS by the functional hazard assessment implemented to obtain the permit to fly.
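
To make the comparison concrete, the following sketch contrasts an analytic RBD evaluation with a crude Monte Carlo estimate for a small, hypothetical block diagram loosely inspired by a communication system; the structure, failure rates and mission time are assumptions, not the CIRA COMSYS data.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical block diagram (illustration only): two redundant radio links in
# parallel, in series with an antenna unit. Constant failure rates assumed.
lam_radio, lam_antenna = 2e-4, 5e-5   # failures per hour
t = 1000.0                            # mission time in hours

# Analytic RBD result: R = [1 - (1 - R_radio)^2] * R_antenna
r_radio = np.exp(-lam_radio * t)
r_ant = np.exp(-lam_antenna * t)
r_rbd = (1 - (1 - r_radio) ** 2) * r_ant

# Monte Carlo: sample exponential lifetimes and evaluate the same structure function
n = 500_000
t_radio = rng.exponential(1 / lam_radio, size=(n, 2))
t_ant = rng.exponential(1 / lam_antenna, size=n)
system_up = (t_radio.max(axis=1) > t) & (t_ant > t)
r_mc = system_up.mean()
se = system_up.std(ddof=1) / np.sqrt(n)

print(f"RBD analytic reliability: {r_rbd:.5f}")
print(f"Monte Carlo estimate:     {r_mc:.5f} +/- {se:.5f}")
```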

11:10-12:50 Session 16J: S.17: Living near natural hazards in the age of climate change
Chair:
Knut Øien (SINTEF, Norway)
Location: CQ-007
11:10
Stian Antonsen (NTNU Social Research, Norway)
Torgeir Haavik (NTNU Social Research, Norway)
Bjørn Ivar Kruke (University of Stavanger, Norway)
Stig Andreas Johannessen (University of Svalbard, Norway)
Jacob Taarup-Esbensen (University College Copenhagen, Denmark)
Living near natural hazards in the age of climate change – the relationship between expert and local knowledge in risk governance
PRESENTER: Stian Antonsen

ABSTRACT. Existing research on risk governance and crisis management focuses heavily on the roles, responsibilities and actions of formal institutions and organizations. However, risk governance and emergency preparedness consist of more than the planned efforts of authorities (Comfort et al., 2013). Risks and crises are also experienced, made sense of and coped with in the affected communities. This has led several authors to argue for a general need to move from a whole-of-government approach to a whole-of-society approach in the governance of risk (e.g. Lindberg & Sundelius, 2013).

This is particularly pressing in local communities that are directly exposed to natural hazard risk. Living in the vicinity of such hazards often involves developing a form of individual and collective resilience, as was seen in the community's response to the urban avalanche in Longyearbyen in Svalbard in 2015. Within minutes after the avalanche, large parts of the Longyearbyen community were alerted via neighbours and social media and turned up to help in the search and rescue operation (DSB, 2016). The contributions of local communities in dealing with crises have been described previously in the research literature (e.g. Meyer, 2013). Less is known about the way this form of resilience can be tapped into in mitigating risks. This is the point of departure for this paper. Our research question is the following: How can sensor data and aggregated expert knowledge be complemented with the local (often tacit) knowledge of the population in a way that improves sense-making and decision-making to control natural hazards?

Several publications underline the need for integration across the two domains of knowledge in risk governance (e.g. Gardner, 2014; Papathoma-Köhle & Dominey-Howes). However, there are few empirical studies showing what this integration may look like in practice. We aim to target this research gap by means of an in-depth study of the integration between expert and local knowledge in the governance of avalanche risk in Longyearbyen, the administrative centre of the Norwegian archipelago Svalbard, located at 78 degrees North. Climate change occurs at a faster pace in the Arctic than elsewhere, involving a need for increased sensitivity to changes in the risk picture. Local observers and citizens making observations of snow conditions can make important contributions to avalanche risk governance, both as part of the formal monitoring system and through informal communication channels. In any case, the monitoring contributions of such actors are sources of updated information of high value in risk governance tailored to specific localities and contexts (see also Gardner, 2014).

We present a brief scoping review of literature that is specifically targeting the combination of expert and local knowledge regarding snow avalanche risk. The bulk of this literature is found under the headings of Disaster Risk Reduction (DRR) and Nature-Based Tourism (NBT). We then use this literature as a backdrop for analysing the results from a qualitative study of decision-makers and local observers from Longyearbyen. We discuss these results with an aim to draw out lessons from well-functioning integration of expert and local knowledge, as well as barriers for such well-functioning integration.

References
Comfort, L. K., Boin, A., & Demchak, C. C. (Eds.). (2013). Designing resilience: Preparing for extreme events. Pittsburgh: University of Pittsburgh Press.

DSB (2016). Skredulykken i Longyearbyen 19.desember 2015 (The avalanche in Longyearbyen 19 December 2015). DSB, Tønsberg.

Lindberg, H., & Sundelius, B. (2013). Whole-of-society disaster resilience: The Swedish way. In D. Kamien (Ed.). The McGraw-Hill Homeland Security Handbook, 1295-1319. McGraw-Hill, New York

Gardner, J. S. (2015). Risk Complexity and Governance in Mountain Environments. In U. Fra.Paleo (Ed.), Risk Governance: The Articulation of Hazard, Politics and Ecology (pp. 349-371). Dordrecht: Springer Netherlands.

Meyer, M. (2013). Social capital and collective efficacy for disaster resilience. PhD thesis, University of Colorado

Papathoma-Köhle, M., & Dominey-Howes, D. (2018). Risk Governance of Limited-Notice or No-Notice Natural Hazards. In: Oxford University Press.

11:30
Stig Andreas Johannessen (UNIS: University Centre in Svalbard, Svalbard and Jan Mayen)
Potential time related impacts of turn-over on knowledge continuity in Longyearbyen, Svalbard

ABSTRACT. At the edge of civilization, Longyearbyen, Svalbard stands out as an example of how the effects of anthropogenic climate change manifest as climate-related risks. The community faces increased air temperature, increased annual precipitation, more frequent and intense events with heavy rainfall, increased river flow, destabilization of near-surface permafrost, changes in glacier area and mass, increased frequency of many types of floods, and increased frequency of many types of avalanches and landslides, while also contending with the gradual uncertainty and ambiguity which surrounds the manifestation, communication, and handling of these gradually unfolding events. In this paper we investigate how the high rate of turn-over in Longyearbyen impacts risk perception in decision-makers and experts, where we see risk perception as continuous and gradually unfolding knowledge, while uncovering how the externalization of knowledge may be an adaptation measure to this problem.

11:50
Jacob Taarup-Esbensen (University College Copenhagen, Denmark)
Tsunami in the Uummannaq fjord – Communities adjusting to the hazards of climate change

ABSTRACT. The tsunami came with no warning, and people were unaware of the impending danger until it impacted their lives. On the 17th of June 2017, a 9-10 meter high wave hit two settlements in the Uummannaq fjord system, resulting in four deaths and nine injured. Further investigations showed that a cliffside some 30 kilometres away had slid into the water, causing a tsunami event that subsequently hit the settlements of Illorsuit and Nuugaatsiaq. Following the Uummannaq fjord event, the Greenlandic government, with support from geologists in Norway and Denmark, carried out a survey that revealed a significant danger from the area that would have a much greater impact. A total of seven settlements and one town potentially faced a life-changing event from a tsunami of up to 74 meters in height (GEUS, 2021). The estimation was that the first settlement would be reached in just seven minutes and the last in around 38 minutes. The report could not say when such a landslide would occur. Based on the findings, the Greenlandic emergency response and the police issued a report on steps taken to save lives and property for the 2,200 people living in the fjord (Naalakkersuisoqarfik, 2018). The report recommended the abandonment of the two nearest settlements and reevaluating the feasibility of keeping two others provided that the municipality ensures an adequate protection level. A warning system was discussed using active monitoring by the residents and alerts issued through the existing telephone system. For historical reasons, the proposal to leave settlements is highly controversial in Greenland (Hendriksen, 2013). Over the years, many settlements have been abandoned either by force or as people moved to the larger towns (Hendriksen, 2013). Today, around 70 villages remain in Greenland, and there is little appetite among its people to reduce the number much more. Due to this local pressure, the municipality, together with the local emergency response and police, started to work towards plans that would save as many settlements as possible while at the same time assuring a relatively high level of safety. Besides the Greenlandic emergency response, the plans required local participation and help from larger nations with more rescue resources (Synnestvedt, 2021). However, there continue to remain doubts about the effectiveness of such plans given the harsh Arctic environment, long distances and lack of resources. Using reports, maps and pictures from Uummannaq fjord, the paper explores how the Greenlandic government is preparing for a landslide event—seeking to answer the research question: How can communities in the Uummannaq fjord system ensure sustained livability through an organisational resilience approach? The paper investigates three interdependent themes of organisational resilience, the ability to protect lives, critical infrastructure and employment. Each is essential to continued liveability in the five settlements and town of Uummannaq. The capacity to save lives has a high priority in preparing for a possible tsunami event. However, it has proven challenging to organise local exercises (Kristensen, 2021). Much of the work done by the municipality of Avannata and the Government of Greenland focused on risk identification and doing risk analysis (Grønlands Politi, 2021; Kristensen, 2022). A primary concern was that it was unknown when another event could occur, leaving decision-makers with an unknown frequency but tasked with preparing for an incident with a catastrophic consequence. 
It was not until 2021 that the emergency preparedness organisation conducted an exercise in Uummannaq, the central town in the area (Kristensen, 2021). Until then, the focus was on monitoring the mountainside, emergency sirens in the communities, making a plan for lookouts, and training locals in tsunami identification. After the exercise, the emergency preparedness chief stated that "If we were to experience a tsunami, then we must know how the emergency preparedness is to act. That is why we must carry out more disaster drills in the future, and the citizens will have the opportunity to practice with us." (Kristensen, 2021) While saving lives has a high priority, it has proven challenging to muster an adequate response given the resources available in the municipality. The municipality has prioritised local low-tech solutions, such as human lookouts and awareness training, while more advanced measures like an effective early warning system are yet to be designed. Maps of the area show that, in most cases, both critical infrastructure and places of employment will be destroyed, damaged or impacted through secondary effects. Continued livability in these communities relies on access to critical infrastructure and on the possibility of making a living through a place of employment. In Greenland, critical infrastructure is defined as access to energy supply, telecommunication, freshwater and a heliport. In Uummannaq, the local healthcare center, schools, buildings that can hold many people, and the local police and fire station are also included (Grønlands Politi, 2021). Damage to these essential services is a challenge for emergency preparedness efforts, as most are positioned below the flood line. The same situation is evident for places of employment, as fish factories, fishing boats and small shops will be affected. In case of an event, this would mean that even if the emergency response plans worked and lives were saved, the communities would cease to exist, as they would no longer constitute places of livability. Climate change in the Arctic entails dramatic impacts on communities, including threats to their long-term livability. Saving lives during an event is essential, but a response must also include the durability of the settlements and towns to be resilient to these changes. The paper highlights the need for a resilience approach to climate change impacts, including critical infrastructure and employment. Community resilience looks at, but also beyond, the events themselves, towards the recovery and what comes after. This holistic approach ensures that communities acquire and retain the ability to respond to, monitor, learn from and anticipate changes in context before and after the event. Hence, the concept works to enhance long-term livability in settlements and towns.

12:10
Siiri Wickström (University Center in Svalbard, Norway)
Marius Jonassen (University Center in Svalbard, Norway)
Holt Hancock (University Centre in Svalbard, Norway)
Stig Andreas Johannesen (University Center in Svalbard, Norway)
Eirik Albrechtsen (Norwegian University of Science and Technology, Norway)
Meteorological drivers of snow avalanche hazards in Longyearbyen’s current and future climate
PRESENTER: Siiri Wickström

ABSTRACT. Snow avalanche hazards routinely affect infrastructure in Longyearbyen, Svalbard. Risks from these hazards have prompted authorities to establish avalanche forecasting programs and construct permanent avalanche defenses in an effort to mitigate avalanche risk. Forecasted meteorological and climatic conditions serve as a first-order data input for predictions of future avalanche conditions in this High Arctic setting – both at shorter (daily) time scales relevant for daily avalanche hazard bulletins and longer (decadal) time scales pertinent to the life cycle of permanent avalanche defenses. Here, we investigate how modeled weather and climate data influence assessments of avalanche risk in Longyearbyen. This work aims to show how appropriate utilization of meteorological and climatological data can strengthen knowledge of a well-defined local natural hazard in a location undergoing dramatic climatic changes. We also identify how uncertainties related to numerical weather and climate prediction products influence the avalanche forecast – and thereby the risk governance process – using avalanches in Longyearbyen as a case study. With the Svalbard archipelago experiencing some of the globe’s most rapid and severe climatic changes, we expect our results to contribute towards developing management tools for natural hazards in other regions where climatic changes will intensify in the coming decades.

12:30
Knut Øien (SINTEF, Norway)
Eirik Albrechtsen (NTNU, Norway)
Holt Hancock (UNIS, Norway)
Martin Indreiten (UNIS, Norway)
Evaluation of a Local Avalanche Forecasting System in Svalbard
PRESENTER: Knut Øien

ABSTRACT. Longyearbyen, the world's northernmost settlement and the administrative center of Svalbard, Norway, experienced a fatal avalanche disaster in 2015. Another avalanche hit the settlement in 2017, this time without fatalities. These accidents led to the introduction of a local avalanche forecasting system assessing the avalanche risk daily during the winter season. In this paper, we evaluate the current local avalanche forecasting system, from data collection to decision-making, with the aim to improve the communication of risk and uncertainty, as well as improving the foundation for decision-making (e.g., deciding on the need for evacuation). The overall methodology consists of action research in close collaboration with local stakeholders, a risk governance framework and a comparative analysis. The latter includes a comparison with the regional avalanche warning system for Svalbard provided by the Norwegian authorities, a European study on local avalanche warning, and a guideline for local avalanche forecasts in Switzerland. The results show potential improvements regarding communication of risk and uncertainty during data collection, risk assessment, and decision-making. This includes recommended changes to the risk matrices currently employed in the local forecasting system and explicitly addressing uncertainty during all phases of the risk governance process. Svalbard is experiencing climate change and increased temperature at an extreme rate compared to most of the world. The effects of these climatic changes, especially the long-term effects for short-term protection measures such as avalanche forecasts, are discussed and recommendations provided regarding integrated avalanche protection measures consisting of both temporary and permanent solutions.

11:10-12:50 Session 16K: S.30: Synergies between Machine Learning, Reliability Engineering and Predictive Maintenance I
Chairs:
Biswajit Basu (Trinity College Dublin, Ireland)
Andrea Staino (Alstom, France)
Location: CQ-010
11:10
Shuo Zhang (Technological University Dublin, Ireland)
Emma Robinson (Technological University Dublin, Ireland)
Malabika Basu (Technological University Dublin, Ireland)
Hybrid Approach integrated with Gaussian Process Regression for Condition Monitoring Strategies at the Rotor side of a DFIG
PRESENTER: Shuo Zhang

ABSTRACT. With the applications of Machine Learning (ML) in condition monitoring (CM) of wind turbines (WTs), regression-based approaches are mainly applied to fit the power curve for the evaluation of WT performance. Although a fitted power curve is prevalent and straightforward for anomaly detection, it is difficult to identify the fault types at the rotor side of a WT, particularly because the operation can depend on multiple parameters. The present paper proposes an approach towards condition monitoring and fault diagnosis of a doubly-fed induction generator (DFIG) by merely processing rotor currents through various signal processing techniques, whereby miscellaneous electrical disturbances can be recognized and localized. A non-parametric regression approach, Gaussian process regression (GPR), is employed to fit the healthy performance curve (PC) of rotor current standard deviation (SD) versus wind speed. Thereafter, a further hybrid approach with GPR is investigated to visualize healthy operation, detect anomalies, and conduct fault recognition at the rotor side.
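As an illustration of the regression step described in this abstract, the following minimal sketch (not the authors' implementation) fits a healthy performance curve of rotor-current standard deviation versus wind speed with Gaussian process regression in scikit-learn and flags new points outside a 3-sigma predictive band; the synthetic data and variable names are assumptions made purely for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic "healthy" training data: wind speed (m/s) vs rotor-current SD (assumed units).
    rng = np.random.default_rng(0)
    wind_speed = rng.uniform(3, 15, size=(200, 1))
    rotor_sd = 0.4 * wind_speed.ravel() + 0.05 * wind_speed.ravel()**1.5 \
               + rng.normal(0, 0.2, 200)

    # Fit the healthy performance curve with a GP (RBF kernel plus a noise term).
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.04),
                                   normalize_y=True)
    gpr.fit(wind_speed, rotor_sd)

    # Anomaly detection: flag new observations outside a 3-sigma predictive band.
    new_ws = np.array([[8.0], [12.5]])
    new_sd = np.array([3.6, 9.0])          # the second value is deliberately abnormal
    mean, std = gpr.predict(new_ws, return_std=True)
    is_anomalous = np.abs(new_sd - mean) > 3 * std
    print(list(zip(mean.round(2), std.round(2), is_anomalous)))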

11:30
Ali Camdal (Trinity College Dublin, Ireland)
Biswajit Basu (Trinity College Dublin, Ireland)
Andrea Staino (Alstom, France)
An Assessment of the Application of Real-time YOLOV5 Deep Learning Algorithm in Unmanned Surface Vessels for Environmental Maintenance: Some Preliminary Results
PRESENTER: Ali Camdal

ABSTRACT. Pollution of aquatic environments is one of the biggest problems faced by mankind throughout history. The quality of oceans, seas and other stagnant waters that cover 70% of the world affects not only human life but also other living creatures in nature. Since these environments are large and difficult to access, observing, maintaining, and cleaning them are challenges that must be overcome in a reliable and safe manner. In this study, a deep learning algorithm has been trained and implemented for real-time application in an unmanned surface vessel. It was designed to detect and track objects on the surface of the water, thus enabling subsequent maintenance and cleaning operations. The device hardware developed and integrated with the real-time deep learning intelligence has been tested in both controlled and field environments. The real-time deep learning model has been retrained and validated using the public marine litter data set. As a result of the training, the model is able to detect objects on the water surface with a mean average precision of 85%, a recall of 94%, and a precision of 78%. Moreover, the processing time is less than 100 milliseconds per frame. The implementation of the real-time YOLOv5 deep learning model will facilitate the operation of tracking objects on the sea surface and thus will reduce maintenance costs, shorten the time required for operation, and increase the efficiency of the detection process.
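A hedged sketch of the real-time inference step is given below; it loads the public pretrained YOLOv5 model from the Ultralytics hub rather than the authors' retrained marine-litter weights, and the frame path is a hypothetical placeholder.

    import torch

    # Load a pretrained YOLOv5 model from the public Ultralytics hub
    # ('yolov5s' is a stand-in; the study retrained its own weights on marine litter).
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
    model.conf = 0.25          # confidence threshold for reported detections

    # Run inference on a frame captured by the vessel's camera (illustrative path).
    results = model('frame_0001.jpg')
    detections = results.pandas().xyxy[0]   # bounding boxes, confidences, class labels
    print(detections[['name', 'confidence', 'xmin', 'ymin', 'xmax', 'ymax']])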

11:50
Biswajit Basu (Trinity College Dublin, Ireland)
Andrea Staino (Alstom, France)
Quantum Machine Learning for Predictive Maintenance in Railways
PRESENTER: Andrea Staino

ABSTRACT. In recent years, predictive maintenance has gained tremendous interest from the industry, with a steadily increasing number of research and development programs as well as of emerging applications in various industrial sectors. Currently, the mobility field is devoting particular attention to the optimization of maintenance processes, with the primary aim of (i) improving the reliability of the fleet (by minimizing the number of failures occurring during the commercial service), (ii) reducing the maintenance costs (by eliminating unnecessary periodic servicing, checks and inspections) and (iii) maximizing the availability of the assets (by reducing the downtime). The key enablers for the shift towards the predictive maintenance paradigm are Artificial Intelligence (AI) and Machine Learning (ML); thanks to the recent advancements in technology and in data science, unprecedented efforts have been directed to the application of advanced analytics to improve the reliability and the operational efficiency of transportation products and services. Under the predictive maintenance framework, maintenance decisions are taken according to carefully designed health indicators, that are typically generated by an AI-based predictive model.

While several experimental and field demonstrator studies have been conducted, systematic application of predictive maintenance methods at fleet level in real projects it is still very limited and carried out on a case-by-case basis. In fact, most of the results reported in the literature so far focus on the optimization of the maintenance process at individual asset level. This “local-optimum”-driven solution, however, is not sufficient to attain the targets expressed above in terms of improved service reliability and of overall reduction of maintenance costs. In order to meet these goals, the predictive maintenance policy should consider the global behaviour of the system, i.e. the combination of health indicators relative to all the units within the fleets, in conjunction with the different constraints and uncertainties relative for instance to the logistics aspects of the warehouse operations, to the planning of resources, to the performance objectives. However, the computational burden due to the complexity of the “global” optimization problem is a major hurdle for large scale deployment and execution of predictive maintenance activities.

In this context, Quantum Machine Learning (QML) is emerging as a novel computing paradigm that could find patterns in classical data by mapping the data to quantum mechanical states, and then manipulating those states using basic quantum linear algebra algorithms. These can be applied directly to the quantum states and made to reveal their underlying features and patterns. The resulting quantum modes of analysis are frequently much more efficient (both in terms of representation and computation) and more revealing than the classical analysis of data. For example, for a data system represented by an N×N density matrix, quantum principal component analysis can be used to find its eigenvalues and to reveal the corresponding eigenvectors in time O((log₂ N)²), compared with the O(N²) measurements needed for a classical device to perform tomography on the density matrix, and the O(N²) operations needed to perform classical PCA. Such analysis of quantum data could be performed on the relatively small quantum computers that are either already available or likely to be available in the near future, and thus offers great potential for applications in predictive maintenance with respect to classification problems.
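For contrast with the quantum routine mentioned above, the classical baseline (eigendecomposition of an N×N density matrix, which scales polynomially in N) can be sketched as follows; the random density matrix is purely illustrative and is not data from the study.

    import numpy as np

    # Build a random N x N density matrix (Hermitian, positive semi-definite, trace 1).
    N = 256
    rng = np.random.default_rng(1)
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = A @ A.conj().T
    rho /= np.trace(rho).real

    # Classical principal component analysis of the density matrix:
    # eigenvalues act as component weights, eigenvectors as the components.
    eigvals, eigvecs = np.linalg.eigh(rho)
    top = np.argsort(eigvals)[::-1][:5]
    print("dominant eigenvalues:", np.round(eigvals[top], 4))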

In this contribution, the application of QML algorithms for predictive maintenance in railway systems is discussed. The main principles of quantum computing are outlined. For the purpose of illustration, the maintenance of fresh air filters in the air conditioning subsystem of a train is considered. The results are presented and compared with the findings from the traditional approach. Further, suggestions for future research addressing the issue of QML applied to predictive maintenance are provided.

References. Staino et al., A Monte-Carlo approach for prognostics of clogging process in HVAC filters using a hybrid strategy: a real case study in railway systems, 2018 IEEE International Conference on Prognostics and Health Management (ICPHM).

12:10
Amit Patwardhan (Luleå University of Technology, Sweden)
Adithya Thaduri (Luleå University of Technology, Sweden)
Ramin Karim (Luleå University of Technology, Sweden)
Miguel Castano (Luleå University of Technology, Sweden)
An architecture for predictive maintenance using 3D imaging: A case study on railway overhead catenary
PRESENTER: Amit Patwardhan

ABSTRACT. The Railway Overhead Catenary (ROC) system is critical for railways’ overall performance. ROC is a linear asset that is spread over a large geographical area. Insufficient performance of ROC has a significant impact on overall railway operations, which leads to decreased availability and affects the performance of the railway system. Prognostic and Health Management (PHM) of ROC is necessary to improve the dependability of the railway. PHM of ROC can be enhanced by implementing a data-driven approach. A data-driven approach to PHM is highly dependent on the availability and accessibility of data, data acquisition, processing and decision support. Data for PHM of ROC can be acquired through various methods, such as manual inspections. Manual inspection of ROC is a time-consuming and costly method to assess the health of the ROC. Another approach for assessing the health of ROC is condition monitoring using 3D scanning of ROC utilising LiDAR technology. Today, 3D scanning systems such as LiDAR scanners present new avenues for data acquisition for such physical assets. Large amounts of data can be collected from aerial, on-ground, and subterranean environments. Handling and processing this large amount of data requires addressing multiple challenges, such as data collection, processing algorithms, information extraction, information representation, and decision support tools. Current approaches concentrate more on data processing but lack the maturity to support the end-to-end process. Hence, this paper investigates the requirements and proposes an architecture for a data-to-decision approach to PHM of ROC based on the utilisation of LiDAR technology.

12:30
Gauthier Stéphane (ALSTOM, France)
Concrete applications of Machine Learning in railways

ABSTRACT. Historically, reliability has been based on a subset of a population, which is studied (calculation of life expectancy, probability of failure, etc.) and whose behaviour is generalised to all individuals. Recent progress in Machine Learning and Artificial Intelligence has provided a complement to the profession of reliability expert. This represents a disruptive approach to studies.

Indeed, the reliability expert will no longer think about a subset of the population but will follow each component in a personalised way: That's what Big Data is all about. Each component can have its own personalised maintenance and its own life expectancy calculation, considering all covariates that may influence its lifetime.
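As a hedged illustration of such personalised, covariate-aware lifetime estimation (not ALSTOM's actual tooling), a Cox proportional-hazards model could be fitted to per-component records as sketched below; the covariates, failure data and the lifelines library choice are assumptions made for illustration.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical per-component records: operating time, failure indicator, and
    # covariates that may influence the lifetime (average temperature, vibration).
    df = pd.DataFrame({
        "duration":  [1200, 800, 1500, 600, 2000, 950, 1750, 400],
        "failed":    [1,    1,   0,    1,   0,    1,   0,    1],
        "avg_temp":  [55,   70,  48,   75,  45,   68,  50,   80],
        "vibration": [0.8,  1.4, 0.6,  1.6, 0.5,  1.2, 0.7,  1.9],
    })

    # Cox proportional-hazards fit: each component's hazard is modulated by its own
    # covariates, giving a "personalised" life-expectancy estimate.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="failed")

    # Predicted median lifetime for each component, given its covariates.
    print(cph.predict_median(df[["avg_temp", "vibration"]]))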

We will detail four examples of applications, relating to the following aspects of the reliability profession: Reliability by Design; Conditional maintenance; Failure Analysis via clustering; and Continuous improvement for future projects. These four examples are based on feedback from the ALSTOM Reliability team.

For each project, we will detail: the context of the problem, the method used, and the result obtained.

The aim here is not to detail all the projects carried out by ALSTOM, some of which are related to digital twins, accelerated simulations, or pattern recognition. This presentation aims to show how your projects can benefit from Machine Learning & Artificial Intelligence, and to put an end to certain preconceived ideas, such as the need to add sensors, or the feeling that Data Science is a kind of complex magic.

14:00-15:20 Session 17A: S.32 In memory of Ioannis A. Papazoglou: new methods and applications on quantified risk assessment for process and energy systems
Chair:
Olga Aneziris (Institute for Nuclear and Radiological Sciences, Energy, Technology and Safety (INRASTES), NCSR Demokritos, Greece)
Location: LG-22
14:00
Olga Aneziris (NCSR "DEMOKRITOS", Greece)
Quantitative risk analysis of alternative marine fuels for bunkering operations

ABSTRACT. This paper investigates and compares safety during the bunkering of a ship with three of the most "ready-to-use" alternative fuels, namely LNG, ammonia and hydrogen. The marine industry has been forced to move towards more environmentally friendly fuels to adapt to the requirements for compliance with international maritime legislation for reducing hazardous gas emissions. The most promising alternative marine fuels are liquefied natural gas (LNG), ammonia and hydrogen. A quantitative risk assessment is conducted (Papazoglou et al., 1992) dedicated to the bunkering of the alternative-fueled ship from a fixed tank installed at port facilities. The master logic diagram (MLD) technique is used to identify the initiating events which create a disturbance in the installation and have the potential to lead to alternative fuel release during the tank-to-ship bunkering operation. Corrosion in tanks, pipelines and other parts, and excess external heat owing to a nearby external fire are just some of the identified initiating events (Papazoglou and Aneziris, 2003). In addition, event trees are developed to describe the accident sequences starting from the occurrence of an initiating event, which may be followed by the failure of safety systems and will finally lead to a plant damage state and an accidental release of toxic or flammable fuel. The frequency of the major accident scenarios is calculated by exploiting available failure rate data and the Fault Tree-Event Tree method. Consequence assessment will be performed in the case of a toxic release of ammonia during storage and/or bunkering, and for fires or explosions of LNG and hydrogen releases. Finally, risk is evaluated by combining the frequencies of the various accident scenarios with the corresponding consequences, resulting in iso-risk contours. The computer code “SOCRATES” (Papazoglou et al., 1996) will be used for consequence assessment and risk integration, which may take into consideration various uncertainties of the bunkering operations. The risk of these three alternative fuels will be compared in a case study of tank-to-ship bunkering and all factors influencing risk will be identified.
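As a purely numerical illustration of the fault tree-event tree quantification step described above, the following sketch combines an assumed initiating-event frequency with assumed safety-system failure probabilities to obtain accident-sequence frequencies; none of the values are taken from the study.

    # Event-tree quantification for one initiating event during bunkering.
    # All numbers are illustrative placeholders, not data from the study.
    ie_frequency = 1.0e-3          # initiating events per bunkering-year (assumed)

    # Safety-system failure probabilities on demand (assumed, e.g. from fault trees).
    p_fail_esd = 5.0e-2            # emergency shutdown fails
    p_fail_ignition_control = 0.1  # ignition prevention fails

    # Accident sequences and their frequencies (success = 1 - failure probability).
    sequences = {
        "isolated, no release":        ie_frequency * (1 - p_fail_esd),
        "release, no ignition":        ie_frequency * p_fail_esd * (1 - p_fail_ignition_control),
        "release and ignition (fire)": ie_frequency * p_fail_esd * p_fail_ignition_control,
    }
    for name, freq in sequences.items():
        print(f"{name}: {freq:.2e} per year")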

Keywords: alternative marine fuels, ammonia, hydrogen, LNG, safety, ports References

1. Papazoglou I. A., Nivolianitou Z., Aneziris O., Christou M. (1992), Probabilistic safety analysis in chemical installations, J. Loss Prevention in Process Industries, Vol. 5, No 3, 1992, 181-191. 2. Papazoglou I. A., and Aneziris O.N. (2003), Master Logic Diagram: Method for hazard and initiating event identification in process plants, Journal of Hazardous Materials, A97, 11-30. 3. Papazoglou, I.A., Aneziris, O., Bonanos, G., & Christou, M. (1996), SOCRATES: a computerized toolkit for quantification of the risk from accidental releases of toxic and/or flammable substances, in Gheorghe, A.V. (Editor), Integrated Regional Health and Environmental Risk Assessment and Safety Management, published in Int. J. Environment and Pollution, Vol. 6, Nos 4-6, 500-533.

14:20
Marko Gerbec (Jozef Stefan Institute, Slovenia)
Peter Vidmar (Faculty of Maritime Studies and Transport, University of Ljubljana, Slovenia)
Gianmaria Pio (Dipartimento di Ingegneria Civile, Chimica, Ambientale e dei Materiali, Università degli studi di Bologna, Italy)
Ernesto Salzano (Dipartimento di Ingegneria Civile, Chimica, Ambientale e dei Materiali, Università degli studi di Bologna, Italy)
Liquefied Natural Gas dispersion modelling for the case of port of Koper, Slovenia
PRESENTER: Marko Gerbec

ABSTRACT. Recent efforts towards green shipping also force green ports to implement safety systems for the distribution and supply of Liquefied Natural Gas (LNG) as an alternative maritime fuel for ships. This brings important challenges to the assurance of the safety of ship bunkering operations because LNG is a cryogenic liquid. One of the central points in safety assurance is the estimation of adequate safety distances in the case of large-scale accidental LNG spills. In that respect, the risk assessment of LNG bunkering operations has recently gained interest in the literature. This paper will report on the comparative application of three different dispersion modelling tools (the Unified Dispersion Model – UDM by DNV-GL® PHAST, and two CFD tools: FDS – the Fire Dynamics Simulator from NIST – and Ansys Fluent®) applied to an accidental case of a large LNG release that can occur during a ship-to-ship bunkering operation in a port area. The specific case of the port of Koper was selected because the literature on the topic suggests that micro-location specifics play an important role in the dispersion of the evaporated LNG vapors into the ambient air, and because the port area is very close to the city of Koper. The results obtained suggest that all three models provide comparable results for the ideal flat-terrain case; however, when considering the real terrain elevation details, alternative micro-locations and wind directions, the CFD models need to be used. In that respect, we found that local turbulence effects play a dominant role in the downwind dispersion. The results for two micro-locations and two wind directions on the flammable cloud evolution will be presented. Overall, the results obtained provide valuable information for understanding LNG spillage and flammable cloud dispersion in a port area or similar facilities.

References O. Aneziris, I. Koromila, Z. Nivolianitou, Safety Science, 124, 104595 (2020). S. Park, B. Jeong, Y. Young, J.K. Paik, Ships and Offshore Structures, 13:sup1, 312-321 (2018). M. Gerbec, P. Vidmar, G. Pio, E. Salzano, Safety Science, 144, 105467 (2021).

14:40
David H. Slater (Cardiff University, UK)
Ben Ale (TU-Delft, Netherlands)
Organizations: Drifting or Dysfunctional?
PRESENTER: Ben Ale

ABSTRACT. The very ideas that underpin our traditional understanding of organization and workplace are being critically explored by an increasingly large number of people. Equally, the discussion is also embracing new ways of thinking about organization, through a range of perspectives. It is as if a new school of organizational thought and practice is appearing and is crying out for closer investigation. This interest in finding a coherent explanation as to how organizational features influence occasional lapses in their expected behaviors stems from looking carefully at a series of high-profile incidents. These include early incidents such as Flixborough (HSE, 1975), and examples are still occurring almost 50 years later. The concern is that despite many thoughtful analyses of a series of well-known historical examples, there are apparently no easy solutions as to how they can be remedied. So, despite the reassurances from formal inquiries and investigations into the causes that lessons have been learned and they will never happen again, similar incidents seem to occur at regular intervals, raising the question: “How many blowouts does it take to learn the lessons?” (Verweijen & Lauche, 2018). Speculation as to why organizations often persist in repeating the mistakes of the past has been the subject of much research and analysis in a number of studies, in various disciplines. One of the earliest of the studies to recognize that organizations might be as fallible as individual employees was the work of Turner (1997). He showed that the organization responsible for the Aberfan disaster (Solly, 2019) was seemingly unaware of the implications of possibly unstable mine spoil tips. But it seems too superficial to simply hold the entire organization to “blame”, and verdicts such as corporate manslaughter (Ale, Kluin & Koopmans, 2018), although theoretically a legal option, are rarely handed down in practice. More recently, however, attempts to address the fallibility of organizations have tended to focus on their “culture” and style of leadership (Ale et al., 2012). Are these then to blame for repeat incidents from lessons assumed learned? Given human frailties, are these very large, dispersed organizations then doomed to fail? Hopkins (1999), using Rasmussen’s model of a system subject to a hierarchy of external pressures, argues that a better system should not allow the deviations to get out of control. It is perhaps mainly the research of people like La Porte and Consolini (1991) that pointed out that, in fact, some situations which Perrow (1999) classes as unmanageable are handled routinely by so-called High Reliability Organizations. A number of other authors have addressed this problem and held that it could be solved by decentralization of authority structures (Haavik et al., 2019). Studying how these management systems worked, not just the pressures they worked under, led them to highlight their ability to spontaneously reconfigure the organization structure to adapt to changes and challenges encountered during their operations. The examples often quoted, though, are mainly from military-type organizational systems, such as aircraft carriers (Perrow, 1999) and commando units (Moorkamp et al., 2014). This spontaneous adaptation was only possible in organizations that recognized that, although a conventional hierarchical structure was suitable for “normal” operations, it could be less than adequate in responding to unexpected variabilities (Burns & Stalker, 1961).
Organizational drift is natural, but not unavoidable. It can be recognized and reversed but needs the organization to be constantly aware of the propensity. This is the “Chronic Unease” of the Highly Reliable Organizations approach. As in organisms, every organization is unique, every set of personnel is different. The “sharp end” are not just robots, artificial or otherwise, programmed to respond unintelligently. Treating them as such misses the chance to ensure and assure that the correct signals are relayed and that responses are seen to be made. The middle management have perhaps the most demanding roles, as the keepers of the “corporate memory” and culture, being aware of what really is happening above and below. The Executives need to ensure that they are really aware of how the organization gets things done. Leadership is lonely but critical. It is essential that this level calibrates the culture rather than conceding it to middle management.

REFERENCES Ale BJM, Sillem S, Lin PH, Hudson P (2012) Modelling human and organizational behaviour in a high-risk operation; ; PSAM 11, Esrel 2012, Helsinki 25-29 juni 2012 Ale, BJM, Kluin MHA, Koopmans IM (2018), Safety in the Dutch chemical industry 40 years after Seveso Journal of Loss Prevention in the Process Industries Volume 49 Pages 61-67 Burns T, Stalker GM. (1961). The Management of Innovation. London: Social Science Paperbacks. DOI:10.1093/acprof:oso/9780198288787.001.0001 Haavik TK, Antonson S, Tosness R, Hale, A (2019). HRO and RE: A pragmatic perspective. Safety Science, 117, pp 479 - 489. Hopkins, A. (1999). The Limits of Normal Accident Theory. Safety Science 32, 93 - 102. HSE (1975) Health and Safety Executive, 'The Flixborough Disaster : Report of the Court of Inquiry', HMSO, ISBN 0113610750 IChem E. (2021, December). Lessons Learned Database. Retrieved from I Chem E Loss Prevention and Process Safety Group: https://www.icheme.org/membership/communities/special-interest-groups/safety-and-loss-prevention/resources/lessons-learned-database/ La Porte TR, Consolini PM (1991). Working in Practice But Not in Theory: Journal of Public Administration Research and Theory: J-PART Vol. 1, No. 1, pp. 19-48 Moorkamp M, Kramer EH, van Gulijk C, Ale BJM (2014) Safety management theory and the expeditionary organization: A critical theoretical reflection, Safety Science 02/2014 69:71-81 NTNU. (2022) Accident databases. : https://www.ntnu.edu/ross/info/acc-data (as per 09/02/2022) Perrow C. (1999). Normal Accidents: Living with High Risk Technologies. Princeton University Press: ISBN 9780691004129 Solly, M. (2019). The True Story of the Aberfan Disaster. Retrieved from The Smithsonian Magazine: https://www.smithsonianmag.com/history/true-story-aberfan-disaster-featured-crown-180973565/ Turner, B. A. (1997). Man made Disasters 2nd Edition. Blackwells. Verweijen B, Lauche K (2018). How many blowouts does it take to learn the lessons?. Safety Science , 111, (2019), pp. 111-118, https://doi.org/10.1016/j.ssci.2018.06.011.

15:00
Arefe Asadi (University of technology of Troyes, France)
Mitra Fouladira (Aix Marseille University/University of Technology of Troyes, France)
Diego Tomassi (Biofortis, France)
Degradation Model Selection Using Depth Functions
PRESENTER: Arefe Asadi

ABSTRACT. For lifetime prediction or maintenance planning of complex systems, degradation modeling is essential. The reason is that, for highly reliable systems whose failure times are difficult to observe, degradation measurements often provide more information than failure times for improving system reliability. According to Lehmann, the stochastic-process-based approach shows great flexibility in describing the failure mechanisms caused by degradation.

To select a model for an observed degradation path amongst candidate degradation models, the concept of statistical depth can be considered. A depth function reflects the centrality of an observation with respect to a statistical population.

The models that show high values of the depth function are compared based on different statistical criteria, and the best model is selected to predict the failure time.
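A minimal sketch of the selection idea, with a simple centrality score standing in for a formal depth function, might look as follows; the candidate models (a Wiener process with drift and a gamma process), their parameters and the "observed" path are all assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 51)
    dt = np.diff(t)

    def wiener_paths(n, mu=1.0, sigma=0.5):
        # Degradation paths from a Wiener process with drift.
        inc = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=(n, dt.size))
        return np.hstack([np.zeros((n, 1)), np.cumsum(inc, axis=1)])

    def gamma_paths(n, shape_rate=2.0, scale=0.5):
        # Degradation paths from a gamma process (independent gamma increments).
        inc = rng.gamma(shape_rate * dt, scale, size=(n, dt.size))
        return np.hstack([np.zeros((n, 1)), np.cumsum(inc, axis=1)])

    def centrality(path, ensemble):
        # Simple depth-like score: inverse of the average standardised distance
        # of the observed path from paths simulated under a candidate model.
        mu, sd = ensemble.mean(axis=0), ensemble.std(axis=0) + 1e-9
        return 1.0 / (1.0 + np.mean(np.abs(path - mu) / sd))

    observed = wiener_paths(1)[0]   # stand-in for a measured degradation path
    candidates = {"Wiener": wiener_paths(500), "Gamma": gamma_paths(500)}
    scores = {m: centrality(observed, paths) for m, paths in candidates.items()}
    print(scores, "-> selected:", max(scores, key=scores.get))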

14:00-15:20 Session 17B: Chemical and Process Industry
Chair:
Patricia Ennis (TU Dublin, Ireland)
Location: CQ-008
14:00
Ákos Orosz (University of Pannonia, Hungary)
Ferenc Friedler (Szechenyi Istvan University, Hungary)
Reliability of Processing Systems: Structural Approach
PRESENTER: Ákos Orosz

ABSTRACT. A processing system is a network of operating units designed to produce a single product or multiple products. Process network synthesis (PNS) is the procedure of determining the best network of operating units for the production of the desired products. The reliability of a processing system is one of its most important properties; it depends on the network structure and on the reliabilities of the operating units of the system. The current work describes a PNS method considering reliability, where the processing system with minimal cost is to be determined while satisfying the reliability constraint. The network of a processing system can be redundancy-free; however, that results in low reliability. If higher reliability is desired, the system must contain redundancies, which can be local or global. During the synthesis of a process considering reliability, all three cases have to be taken into account to ensure finding the global optimum. Traditional methods in process synthesis generate either redundancy-free systems or systems with local redundancies. Taking into account global redundancy requires special, enumeration-based algorithms. The necessary definitions and algorithms for these methods are included in the P-graph framework [1], which is a combinatorial, axiom-based framework for PNS. Hence, the current work describes a process network synthesis method considering reliability that is based on the P-graph framework and its combinatorial algorithms. References [1] Friedler, F., Orosz, Á., & Pimentel, J. (2022). P-graphs for Process Systems Engineering. Springer International Publishing. https://doi.org/10.1007/978-3-030-92216-0
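The following small sketch is not the P-graph algorithm itself, but it illustrates numerically why a redundancy-free structure yields low reliability and how local or global redundancy raises it; the unit reliability value is an assumption.

    from math import prod

    def series(rels):
        # All units in the chain must work.
        return prod(rels)

    def parallel(rels):
        # At least one of the redundant units must work.
        return 1.0 - prod(1.0 - r for r in rels)

    r_unit = 0.95   # assumed reliability of each operating unit

    # Redundancy-free chain of three operating units.
    no_redundancy = series([r_unit] * 3)

    # Local redundancy: duplicate only the middle operating unit.
    local = series([r_unit, parallel([r_unit, r_unit]), r_unit])

    # Global redundancy: duplicate the whole processing chain.
    global_red = parallel([series([r_unit] * 3)] * 2)

    print(f"no redundancy: {no_redundancy:.4f}, local: {local:.4f}, global: {global_red:.4f}")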

14:20
Romualdo Marrazzo (ISPRA - Istituto Superiore per la Protezione e la Ricerca Ambientale, Italy)
Fabrizio Vazzana (ISPRA - Istituto Superiore per la Protezione e la Ricerca Ambientale, Italy)
Seveso inspections on process industry during the pandemic: reorganization measures and management continuity

ABSTRACT. The article presents the enforcement and monitoring activities on industrial sites carried out during the COVID period, with particular reference to the inspections on the SMS (Safety Management System) of process industry establishments subject to the obligations of Legislative Decree 105/2015 (the transposition decree of Directive 2012/18/EU, the so-called Seveso III). Starting from the problem of conducting inspections during the pandemic, it explains the alternative method introduced by the Italian Competent Authorities to ensure the continuity of the activities, in compliance with the standard procedure. It consists of performing some phases remotely, identifying what can be done through documentary examination and what must be done on site. Information on the status of the establishments under pandemic conditions is given, with a focus on process industry case studies such as a crude oil extraction/processing center and an oil refinery (considered strategic activities by the Italian legislation enacted during the pandemic). The management continuity of operational activities was ensured, with no interruptions to processes and no changes to significant SMS procedures (i.e. confirmation of the implementation of the measures provided for in the emergency plan). In addition, company measures for the prevention and containment of the spread of the virus are listed, in terms of work reorganization measures for operational staff and non-operating personnel, as well as site access procedures and measures to counter and contain the virus in application of the COVID-19 protocol and “Contingency Plan”, agreed with the workers’ unions. The paper ends with the lessons learned from the inspection activities, with attention to non-compliances issued concerning the respect of training frequencies, the contents of training activities carried out in "remote" mode, the consultation with worker representatives and the compliance with the timing/frequency of maintenance activities. The strengths and benefits of the new inspection method are finally made explicit, showing how the continuation of the control activity can be ensured.

14:40
Xuan Liu (Beijing Institute of Technology, China)
Huixing Meng (Beijing Institute of Technology, China)
Weizhen Yao (Chinese Academy of Sciences& Beijing CASIntruments Semiconductor Technology Co., Ltd., China)
Xianglin Liu (Chinese Academy of Sciences& Beijing CASIntruments Semiconductor Technology Co., Ltd., China)
Chao Zhang (Sinopec Engineering Incorporation, China)
Process risk prioritization of metalorganic chemical vapor deposition device
PRESENTER: Xuan Liu

ABSTRACT. The metal-organic chemical vapor deposition (MOCVD) device is indispensable in the semiconductor manufacturing industry. Process risk prioritization is beneficial for securing the safe operation of MOCVD. To identify the weaknesses of the device, the hazard and operability (HAZOP) study can be used to assess the process risk of MOCVD. In this paper, to identify the crucial components and associated hazards, we introduce a hybrid method integrating HAZOP and the interval analytical hierarchy process (IAHP) to prioritize the hazards for decision-makers. Firstly, we employ HAZOP to identify the failure causes and consequences of deviations. Then we conduct IAHP to prioritize the risks of MOCVD and to distinguish priorities among hazards with the same risk priority number (RPN). The combination valve section before the reactor of MOCVD is analyzed as the case study. The obtained HAZOP results can support the lifecycle risk management of MOCVD.
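As a simplified stand-in for the IAHP step (standard crisp AHP rather than interval-valued judgements), the priority weights for a set of hazards sharing the same RPN could be derived from a pairwise-comparison matrix as sketched below; the comparison values are hypothetical.

    import numpy as np

    # Hypothetical pairwise-comparison matrix for three hazards with the same RPN
    # (Saaty scale; A[i, j] = relative importance of hazard i over hazard j).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # The principal eigenvector gives the priority weights (crisp AHP; the paper's
    # IAHP would propagate interval-valued judgements instead).
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency-ratio check (random index for n = 3 is 0.58).
    ci = (eigvals.real[k] - 3) / (3 - 1)
    print("priorities:", np.round(w, 3), " CR:", round(ci / 0.58, 3))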

15:00
Giovanni Romano (ROMANO SAFETY MANAGEMENT STP, Italy)
Paolo Tombini (ROMANO SAFETY MANAGEMENT STP, Italy)
Fabio Ferrario (POLITECNICO DI MILANO, Italy)
Anna Mormile (ROMANO SAFETY MANAGEMENT STP, Italy)
Valentina Busini (POLITECNICO DI MILANO, Italy)
INTERACTION BETWEEN A HIGH-PRESSURE JET AND CYLINDRICAL OBSTACLES IN TANDEM THROUGH CFD
PRESENTER: Giovanni Romano

ABSTRACT. In industrial plants, the most common accident scenarios include jets as more than 50% of industrial accidents are caused by mechanical failure (Crowl and Louvar, 2012[1]): when a break appears in the wall of a tank containing high-pressure flammable gas, it escapes as unignited jet. Its extent depends on the physical characteristics of the fuel, the storage temperature and pressure, the atmospheric turbulence and the presence of obstacles near the release. Furthermore, jets are capable of reaching considerable distances, so they are among the events most capable of triggering domino effects, or cascades of accidental events correlated with each other by cause-effect relationships through which the consequences of the primary accident are amplified by subsequent accidents, spreading in space and time to other equipment not directly involved in the root event (Casal, 2017[2]). An obstacle involved in a flammable gas release is able to affect the behavior of the jet and the extent of the damage areas, as stated both in consolidated literature and in recent works (Bénard et al., 2016[3], Colombini and Busini, 2019[4]). The presence of obstacles that can change the shape and maximum axial extension of the free jet is of particular interest when it comes to industrial and process safety, since, in the case of an incidental event, consequences are directly proportional to the axial extension of the flammable mixture. However, the safety of a high-pressure jet is linked to its behavior and the effects that could arise, which are aspects not much investigated in literature: the modeling of such consequences is not easy, since simpler models, like the gaussian or integral model, are not capable of considering the presence of obstacles with satisfying outcomes. Consequentially, the usage of Computational Fluid Dynamic models (CFD) is needed, although it will require a noticeable amount of time and energy to utilize them properly. It is necessary to find the ideal approach to reach rigorous and precise results that can meet industrial needs, with the objective to define the limits of how much the incidental scenario can be simplified without losing its significance. This work focuses on the analysis of the interactions between a high-pressure flammable gaseous jet and several generic obstacles of varying shapes, dimensions, and numbers. Specifically, the studied obstacles are horizontal and vertical cylinders positioned in a series arrangement. The effects of a jet impinging against multiple obstacles have been studied through computational fluid dynamics (CFD) as the shape, size, orientation and relative distance between obstacles vary. Moreover, the characteristics of the methane release were also varied in terms of upstream pressure and rupture hole size in order to obtain an engineering correlation that allows to predict the area threatened by the pressure jet that impacts against multiple obstacles. The situations in which the presence of one or more obstacles does not affect the behavior of the jet were also considered, in order to verify in which condition it is possible to simplify the industrial scenario without losing the significance of the results.

References

1.D.A.Crowl and J.F.Louvar in Chemical Process Safety: Fundamentals with Applications, Pearson, (2012). 2.J.Casal in Evaluation of the effects and consequences of major accidents in industrial plants, Elsevier, (2017). 3.P.Bénard, A.Hourri, A.Angers, and A.Tchouvelev in Adjacent Surface Effect on the Flammable Cloud of Hydrogen and Methane Jets: Numerical Investigation and Engineering Correlations, International Journal of Hydrogen Energy, 41, 18654-662 (2016). 4.C.Colombini and V.Busini in Obstacle Influence on High-Pressure jets based on Computational Fluid Dynamics Simulations, Chemical Engineering Transactions, 77, 811-816 (2019a).

14:00-15:20 Session 17C: S.14: Digital twin: recent advancements and challenges for dealing with uncertainty and bad data III
Chair:
Edoardo Patelli (University of Strathclyde, UK)
Location: CQ-007
14:00
Marco de Angelis (University of Liverpool, UK)
Ander Gray (University of Liverpool, UK)
Bounding precise failure probability with the SIVIA algorithm
PRESENTER: Marco de Angelis

ABSTRACT. The accuracy of Monte Carlo simulation methods depends on the computational effort invested in reducing the estimator variance. Typically, reducing such variance requires invoking Monte Carlo with as many samples as one can afford. When the system is complex and the failure event is rare, it can be challenging to establish the correctness of the failure probability estimate, even when deploying the most advanced Monte Carlo methods. To combat this verification problem, we present an adaptation of the SIVIA algorithm (Set Inversion Via Interval Analysis) that computes rigorous bounds on the failure probability of rare events. With this method, the nonlinearity of the system and the magnitude of the failure event no longer constitute a limitation. This method can therefore be used for verification, when it is of interest to know the rigorous bounds of the very small target failure probability of complex systems, for example in benchmark problems. The method is rigorous, i.e. inclusive and outside-in, so the more computational effort is invested, the tighter the bounds. Because full separation is exercised between the engineering problem and the probability problem, the input uncertainty model can be changed without a re-evaluation of the physical function, which opens avenues towards computing rigorous imprecise failure probabilities. For example, the reliability could be formulated without making dependency or distributional statements.
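A minimal sketch of the set-inversion idea is given below for a limit-state function that is monotone in each input, so that bounds on each box follow from its corners; the limit state, the independent standard-normal input model, the tolerance and the search box are all assumptions, and the sketch is rigorous only up to the truncation of the input domain to that box and floating-point rounding (it is not the authors' implementation).

    import numpy as np
    from scipy.stats import norm

    def g(x1, x2):
        # Illustrative limit state, monotone increasing in both inputs; failure if g < 0.
        return x1 + x2 + 3.0

    def box_prob(box):
        # Probability mass of a box under independent standard-normal inputs.
        return np.prod([norm.cdf(hi) - norm.cdf(lo) for lo, hi in box])

    def bound_pf(box, tol=1e-5):
        lower = undetermined = 0.0
        stack = [box]
        while stack:
            b = stack.pop()
            g_min = g(*(lo for lo, hi in b))      # monotone: minimum at the lower corner
            g_max = g(*(hi for lo, hi in b))      # maximum at the upper corner
            if g_max < 0:                         # whole box fails
                lower += box_prob(b)
            elif g_min >= 0:                      # whole box is safe
                continue
            elif box_prob(b) < tol:               # small undetermined box: count in the upper bound
                undetermined += box_prob(b)
            else:                                 # bisect along the widest dimension
                i = max(range(len(b)), key=lambda k: b[k][1] - b[k][0])
                lo, hi = b[i]
                mid = 0.5 * (lo + hi)
                left, right = list(b), list(b)
                left[i], right[i] = (lo, mid), (mid, hi)
                stack += [left, right]
        return lower, lower + undetermined        # [lower, upper] bounds on Pf

    print(bound_pf([(-8.0, 8.0), (-8.0, 8.0)]))   # bounds on P[g(X) < 0] within the box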

14:20
Gaël Hequet (University of Lorraine - CRAN, France)
Nicolae Brinzei (University of Lorraine - CRAN, France)
Jean-François Petin (University of Lorraine - CRAN, France)
Modelling failures in complex systems with Profile-Based Stochastic Hybrid Automata
PRESENTER: Gaël Hequet

ABSTRACT. Electrical energy is one of the most closely watched subjects due to the growing need for it and its current and future problems. In order to cope with this, France has a strong nuclear fleet of 56 nuclear power plant reactors in operation in December 2019 [1]. To manage this fleet, efficient tools are needed both to maintain and improve the know-how of the management of these reactors and to guarantee their safety and reliability. To address this issue, the research project “Digital Reactor”, led by EDF, is currently under development. The objective of the "Digital Reactor" project [2] is to build a "digital twin" of a nuclear reactor, covering all life cycle phases, in order to simplify the design process and to forecast safety margins in operation. Such a digital twin of a nuclear reactor must provide engineers and operators with integrated software for displaying complex physical phenomena, allowing them to achieve predictive simulation in any operating situation (normal or incidental) by integrating dysfunctional modes due, for example, to degradation or ageing of equipment. Thus, we will be in the presence of a hybrid simulation of both 3D physical continuous phenomena and discrete events representing the occurrence of failures or the crossing of acceptable levels of degradation. In this context, we have previously developed [3] the Stochastic Hybrid Automata (SHA) model. In the current research work, we propose an extension of this model, the Profile-Based Stochastic Hybrid Automaton (PBSHA) [4]. The addition of profiles makes it possible to consider different constraints applied to the system according to its operating mode or its environment, as well as its degradation state, which can also evolve differently according to the applied constraints. These different profiles will thus have an impact on component failure rates. To show the advantages of using profiles in system modelling by an SHA approach, we consider a simplified secondary circuit of a nuclear reactor [5], for which PBSHA models are developed and implemented in the PyCATSHOO software [6] in order to take into account the variation of failure rates. Finally, the impact on the availability of the different components and of the overall system is assessed according to a scenario in which the system evolves through several transition phases, allowing it to pass through various profiles. These variations in system usage modes degrade the system differently and can have a significant impact on its availability. The use of PBSHA requires a good knowledge of the systems under study, their behaviour according to the various possible profiles and their level of degradation, but this can help to refine the proposed models. By better understanding the impact of the system usage modes on its degradation, it will be possible to be more precise and relevant in the control strategies applied in the future.

References [1]ASN. (2020). ASN report on the state of nuclear safety and radiation protection in France in 2020 (Rapport de l'ASN sur l'état de la sûreté nucléaire et de la radioprotection en France en 2020). Online : https://www.asn.fr/annual_report/2020fr/ . [2] EDF. (2020, 01 01). R&D: Digital to transform nuclear power | EDF France (R&D: Le numérique pour transformer le nucléaire | EDF France). Online : https://www.edf.fr/groupe-edf/inventer-l-avenir-de-l-energie/r-d-un-savoir-faire-mondial/pepites-r-d/reacteur-numerique/ambition-du-projet [3] Perez Castaneda G. A., Aubry J.-F., Brinzei N., “Stochastic hybrid automata model for dynamic reliability assessment”, Proceedings of the Institution of Mechanical Engineers Part O Journal of Risk and Reliability, 225, 1 (2011) 28-41. [4] Hequet, G., Brînzei, N., & Pétin, J.-F. (2021). Usage profile in physical systems modelized with stochastic hybrid automata. 2021 International Conference on Information and Digital Technologies (IDT), 220-229. doi:10.1109/IDT52577.2021.9497617 [5] Babykina G., Brinzei N., Aubry J.F., Deleuze G., “Modeling and simulation of a controlled steam generator in the context of dynamic reliability using a Stochastic Hybrid Automaton”, Reliability Engineering and System Safety, 152, (2016) 115-136. [6] Chraibi, H. (n.d.). PyCATSHOO. Getting started with PyCATSHOO. Website : pycatshoo.org: http://pycatshoo.org/Getting%20started%20with%20PyCATSHOO-PYCV1228.pdf
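Independently of PyCATSHOO, the effect of profile-dependent failure rates on availability can be illustrated with the following crude Monte Carlo sketch (at most one failure per profile window is modelled, for brevity, and all rates, durations and the repair time are assumed values).

    import numpy as np

    rng = np.random.default_rng(3)

    # Operating profiles: (name, duration in hours, exponential failure rate per hour).
    profiles = [("nominal", 2000.0, 1e-4), ("high-load", 500.0, 5e-4), ("standby", 1000.0, 2e-5)]
    mttr = 24.0   # mean repair duration in hours (assumed)

    def simulate_mission(n_runs=20000):
        downtime = np.zeros(n_runs)
        for name, duration, lam in profiles:            # profiles applied in sequence
            time_to_failure = rng.exponential(1.0 / lam, n_runs)
            failed = time_to_failure < duration
            repair = rng.exponential(mttr, n_runs)
            # Down for the repair time, truncated at the end of the profile window.
            downtime += np.where(failed, np.minimum(repair, duration - time_to_failure), 0.0)
        total = sum(d for _, d, _ in profiles)
        return 1.0 - downtime.mean() / total            # mean availability over the mission

    print(f"estimated mission availability: {simulate_mission():.4f}")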

14:40
Mimi Zhang (Trinity College Dublin, Ireland)
Dermot Brabazon (Dublin City University, Ireland)
Andrew Parnell (Maynooth University, Ireland)
Bayesian Optimisation for Intelligent and Sustainable Experimental Design
PRESENTER: Mimi Zhang

ABSTRACT. Engineering designs are usually performed under strict budget constraints. Each datum obtained, whether from a simulation or a physical experiment, needs to be maximally informative of the goals we are trying to accomplish. It is thus crucial to decide where and how to collect the necessary data to learn most about the subject of study. Data-driven experimental design appears in many different contexts in chemistry and physics where the design is an iterative process and the outcomes of previous experiments are exploited to make an informed selection of the next design to evaluate. Mathematically, it is often formulated as an optimization problem of a black-box function (that is, the input-output relation is complex and not analytically available). Bayesian optimization (BO) is a well-established technique for black-box optimization and is primarily used in situations where (1) the objective function is complex and does not have a closed form, (2) no gradient information is available, and (3) function evaluations are expensive. BO is efficient for multi-objective problems, where the multiple criteria could include sustainability, time, budget, safety, stability, etc. This work aims to bring attention to the benefits of applying BO in designing experiments and to provide a BO manual, covering both methodology and software, for the convenience of anyone who wants to apply or learn BO.
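A minimal one-dimensional BO loop with a Gaussian-process surrogate and an expected-improvement acquisition function is sketched below; the objective is a toy placeholder for an expensive experiment, and the budget, search range and kernel choice are assumptions.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):
        # Toy stand-in for an expensive experiment (to be minimised).
        return np.sin(3 * x) + 0.3 * (x - 1.5) ** 2

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 4, size=(4, 1))            # initial design points
    y = objective(X).ravel()
    grid = np.linspace(0, 4, 400).reshape(-1, 1)  # candidate inputs

    for _ in range(15):                           # sequential, budget-limited campaign
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        best = y.min()
        # Expected-improvement acquisition (for minimisation).
        imp = best - mu
        z = imp / np.maximum(sd, 1e-12)
        ei = imp * norm.cdf(z) + sd * norm.pdf(z)
        x_next = grid[np.argmax(ei)]              # most promising next experiment
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next)[0])

    print("best input:", X[np.argmin(y)].round(3), "best value:", y.min().round(3))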

15:00
Francesca Marsili (Chair of Engineering Materials and Building Preservation, Helmut-Schmidt University, Germany)
Filippo Landi (Department of Civil and Industrial Engineering, University of Pisa, Italy)
Alexander Mendler (TUM School of Engineering and Design, Technical University of Munich, Germany)
Sylvia Keßler (Chair of Engineering Materials and Building Preservation, Helmut-Schmidt University, Germany)
A Bayesian approach to determine the minimum detectable damage

ABSTRACT. This paper proposes an approach to the evaluation of the minimum detectable damage which takes advantage of Bayes' Theorem and of Bayesian hypothesis testing. Assuming that some model outputs depending on random parameters are observed, a special application of the Kalman Filter to stationary inverse problems, also called the Linear Bayesian Filter, is applied, which makes it possible to obtain an analytic formulation of the posterior distribution. A method called HDI+ROPE is used, which is based on a decision rule considering a range of plausible values indicated by the highest density interval of the posterior distribution, and its relation to a region of practical equivalence around the null value. The analytic formula for the minimum detectable damage derives from the limit condition for which it is possible to establish with certainty the presence of damage. In order to validate the formula, an application is developed for a simple linear abstract problem and for a single-degree-of-freedom system, in which the results obtained analytically are compared with those obtained by simulation. This approach could represent a significant step forward in the design of non-destructive tests for existing infrastructures, since it relates structural reliability to the reliability of the measurement system and, in the particular case of Structural Health Monitoring, also allows static and dynamic measurements to be considered.
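A scalar illustration of the linear Bayesian (Kalman-type) update and the HDI+ROPE decision rule described above might look as follows; the prior, observation model, noise level and ROPE half-width are all assumed values, not those of the paper.

    import numpy as np
    from scipy.stats import norm

    # Prior on the damage parameter theta (e.g. a relative stiffness reduction).
    mu0, sigma0 = 0.0, 0.10

    # Linear observation model: y = H * theta + noise (H and noise level assumed).
    H, sigma_e = np.array([1.0, 0.8, 1.2]), 0.05
    y = np.array([0.06, 0.05, 0.08])               # illustrative measurements

    # Conjugate (Kalman-type) update for a stationary linear-Gaussian inverse problem.
    prec_post = 1 / sigma0**2 + (H @ H) / sigma_e**2
    sigma_post = np.sqrt(1 / prec_post)
    mu_post = (mu0 / sigma0**2 + (H @ y) / sigma_e**2) / prec_post

    # 95% highest-density interval (symmetric for a Gaussian posterior).
    hdi = norm.interval(0.95, loc=mu_post, scale=sigma_post)

    # Region of practical equivalence around "no damage" (half-width assumed).
    rope = (-0.02, 0.02)
    damage_detected = hdi[0] > rope[1] or hdi[1] < rope[0]   # HDI entirely outside the ROPE
    print(f"posterior mean {mu_post:.3f}, sd {sigma_post:.3f}, 95% HDI {hdi}, detected: {damage_detected}")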

14:00-15:20 Session 17D: S.28: Reliability and Maintenance for Internet of Things and 5G+ Networks
Chair:
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Location: CQ-006
14:00
Eetu Heikkilä (VTT Technical Research Centre of Finland Ltd., Finland)
Timo Malm (VTT Technical Research Centre of Finland Ltd., Finland)
Daniel Pakkala (VTT Technical Research Centre of Finland Ltd., Finland)
Jere Backman (VTT Technical Research Centre of Finland Ltd., Finland)
Pekka Pääkkönen (VTT Technical Research Centre of Finland Ltd., Finland)
Implications of 5G connectivity on mining automation safety
PRESENTER: Eetu Heikkilä

ABSTRACT. Automation is increasingly being implemented in the mining sector to improve efficiency and safety of operations. When using automated machines, human operators do not need to routinely work in the most hazardous areas of the mine. Instead, the operations can be monitored and controlled from a safe remote location. However, various tasks, such as maintenance operations, require humans to enter the areas where the automated machines operate. To enable flexible operations in such situations, the mining systems need to enable co-existence of humans and automated machines in the same area. This increases the capability of carrying out various tasks in the mine without stopping the operation of automated machines, but it also increases the risk of collision between humans and the machines. Thus, safety of such systems needs to be carefully ensured.

To enable increasingly intelligent and autonomous solutions, highly reliable communication technologies are needed to provide connectivity between machinery, backend systems, and operators supervising the machines and mining activities. However, building sufficient connectivity in an underground mining environment is challenging. Currently, 5G networks are being developed to improve connectivity in the mining sector. 5G is the fifth generation of cellular network technology which is applied with the objective to deliver high data rates with ultra-low latency, while enabling high reliability of the communication. In industry, the introduction of 5G has been linked with, for example, new applications in industrial internet-of-things (IoT) and machine-to-machine communications. It is also widely studied in the automotive industry as an enabler for autonomous driving.

This paper focuses on the following two aspects related to 5G and autonomous machinery. First, based on a literature study, we introduce the key impacts of 5G connectivity on mining operations, considering the changes that are expected to be enabled by 5G technology. This includes considerations of increased use of automation, human-machine interactions, as well as overall optimization of the mining operations. This section results in a structured view of key opportunities and challenges introduced by 5G in the mining sector.

Second, we focus on safety requirements of new 5G-enabled autonomous systems. This includes an overview of relevant standards and guidelines that support the development and provide relevant requirements for the systems. As a result, we present a preliminary framework of safety requirements that need to be addressed in development of 5G-enabled autonomous mining machinery.

14:20
Rui Li (Orange Innovation, France)
Bertrand Decocq (Orange Innovation, France)
Anne Barros (CentraleSupélec - Université Paris Saclay, France)
Yiping Fang (CentraleSupélec - Université Paris Saclay, France)
Zhiguo Zeng (CentraleSupélec - Université Paris Saclay, France)
A petri net-based model to study the impact of traffic changes on 5g network resilience
PRESENTER: Rui Li

ABSTRACT. With the evolution of telecommunication technology, 5G and beyond networks are expected to provide high traffic volumes and high-speed connections for end-users and a broader range of applications for industry. To this end, technologies such as Software Defined Networking (SDN) and Network Function Virtualization (NFV) are generalized for 5G and beyond networks. Thus, the system becomes more flexible and meets application requirements under different circumstances. However, this flexibility results in increasing network management complexity. Resilience is always a critical performance indicator for telecommunication networks, no matter how complex the structure becomes. As defined by the International Telecommunication Union, network resilience is the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation, based on prepared facilities. Before implementing the whole system, operators must be aware of network resilience to provide continuous services meeting the Service Level Agreements (agreements between a service provider and a customer on service performance parameters, e.g., end-to-end latency and packet loss). To assess resilience, it is necessary to understand the 5G structure and build a model based on it. The 5G network is a complex system from both a multi-layer and an end-to-end perspective. It is composed of interdependent physical and/or virtual elements. These elements may work together to attain a higher resilience level. Nonetheless, a disturbance propagating between these elements would also lower the resilience level. Therefore, we need to properly model such a complex system and map the service requests onto the telecommunication architecture to evaluate 5G network resilience. This paper addresses this problem by proposing a model based on colored Petri Nets, a well-known tool for system modeling. This model describes how three network layers (network management/orchestration, 5G, datacenter infrastructure) function and work together. We also study a specific functionality of network virtualization, auto-scaling, which appears to be a critical enabler helping 5G remain resilient to traffic variations. The results, obtained by discrete-event simulation of the model, show the practicality of Petri Nets in modeling 5G virtualized networks and in evaluating some resilience metrics. This Petri Net-based model may provide a new perspective for operators to estimate telecommunication network performance. The model will also be helpful for configuring the network to fulfill the resilience requirements of vertical industries.
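The colored Petri Net model itself is not reproduced here, but the role of auto-scaling in absorbing traffic variations can be illustrated with a toy discrete-time simulation such as the one below, where the fraction of demand served acts as a crude resilience indicator; all parameters are assumptions.

    import numpy as np

    # Hourly traffic demand with a surge (arbitrary units; illustrative only).
    demand = np.concatenate([np.full(8, 100), np.full(4, 260), np.full(12, 100)])

    cap_per_instance = 60            # capacity served by one virtualized instance (assumed)
    instances, max_instances = 2, 6
    scale_up_delay, pending = 2, []  # hours before a newly requested instance becomes active

    served = []
    for d in demand:
        pending = [t - 1 for t in pending]
        instances += sum(1 for t in pending if t == 0)   # activate completed scale-ups
        pending = [t for t in pending if t > 0]
        capacity = instances * cap_per_instance
        served.append(min(d, capacity))
        # Auto-scaling rule: request one more instance when utilisation exceeds 80 %.
        if d > 0.8 * capacity and instances + len(pending) < max_instances:
            pending.append(scale_up_delay)

    resilience = sum(served) / demand.sum()   # fraction of demand served over the window
    print(f"instances at end: {instances}, served-demand ratio: {resilience:.3f}")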

14:40
Ali Maatouk (Huawei Technologies, France)
Fadhel Ayed (Huawei Technologies, France)
Wenjie Li (Huawei Technologies, France)
Harvey Bao (Huawei Technologies, France)
Dandan Miao (Huawei Technologies, France)
Ke Lin (Huawei Technologies, France)
Xin Chen (Huawei Technologies, France)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
A Mathematical Framework for the Evaluation of System Expected Utility Not Satisfied Under Periodic Demand
PRESENTER: Ali Maatouk

ABSTRACT. In this paper, we consider a general system whose reliability can be characterized with respect to a periodic time-dependent utility function related to the system performance over time. When an anomaly occurs in the system operation, a loss of utility is incurred that depends on the instant of the anomaly's occurrence and its duration. Under exponentially distributed anomaly inter-arrival times and general distributions of the maintenance duration, we analyze the long-term average utility loss and show that the expected utility loss can be written in a simple form. This allows us to evaluate the expected utility loss of the system in a relatively simple way, which is quite useful for dimensioning the system at the design stage. To validate our results, we consider as a use case a cellular network consisting of 660 base stations. Using data provided by the network operator, we validate the periodic nature of users' traffic and the exponential distribution of the anomaly inter-arrival times, thus allowing us to leverage our results and provide reliability scores for the aforementioned network.
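To give a flavour of why such a quantity can take a simple form (a sketch under simplifying assumptions, not the authors' derivation): suppose anomalies arrive as a Poisson process with rate \lambda, the utility u(t) has period P, repairs last D (i.i.d., independent of the arrivals), the full utility is lost during a repair, and anomalies are sparse enough not to overlap. An anomaly starting at time s then costs \ell(s,D)=\int_{s}^{s+D} u(t)\,dt, and because Poisson arrival instants are asymptotically uniform over the period, the renewal-reward theorem gives a long-run average loss rate

\bar{L} \;=\; \lambda\,\mathbb{E}_{S,D}\!\left[\int_{S}^{S+D} u(t)\,dt\right] \;=\; \lambda\,\mathbb{E}[D]\cdot\frac{1}{P}\int_{0}^{P} u(t)\,dt, \qquad S \sim \mathrm{Uniform}(0,P).

The instant of occurrence matters for each individual anomaly through \ell(s,D), while under these simplifications the long-run average collapses to a product of the anomaly rate, the mean repair duration and the mean utility over a period.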

15:00
Khaled Sayad (CentraleSupelec, Université Paris Saclay, Orange Innovation Networks, France)
Yiping Fang (CentraleSupelec, France)
Anne Barros (CentraleSupelec, France)
Zhiguo Zeng (Centralesupelec.fr, France)
Benoît Lemoine (Orange Innovation Networks, France)
Interdependency-Aware Resource Allocation for High Availability of 5G-enabled Critical Infrastructures Services
PRESENTER: Khaled Sayad

ABSTRACT. The introduction of the fifth generation of mobile technologies (5G) in critical infrastructure (CI) operations will allow the delivery of more sophisticated critical services. Smart grids, intelligent transportation systems (ITS), and the industrial internet of things (IIoT) are examples of 5G-enabled critical infrastructures that are highly dependent on the information and communication technology (ICT) infrastructure. Achieving 5G performance in terms of latency and connectivity requires massive softwarization of critical services. That is, critical services are hosted as software applications in virtualized edge (small) data centers (DCs), which raises concerns about the high exposure to cyber-risks, frequent maintenance interventions, and failure propagation caused by CI interdependencies. In order to ensure service continuity during DC maintenance operations, services are migrated to an available DC where the service level objectives (SLOs), in terms of latency and high availability, are fulfilled. However, this requires the construction of a dense edge DC network to support such redundancy and high service availability, leading to high and long-term capital expenditure (CapEx). In this work, we propose a framework that enables CI operators (CIOs) to effectively share their DC infrastructure to achieve CI network-level resilience. Critical services hosted in a DC subject to a disruptive event, for example a maintenance operation, are migrated to a nearby DC operated by another CI operator. By doing so, we guarantee high service availability and mitigate failure propagation if the hosting DC depends on the services impacted by the DC maintenance. Moreover, DC sharing will help CI operators optimize their CapEx by avoiding the installation of new edge DCs. We formulate a mixed-integer nonlinear program (MINLP) to model the migration process. An epidemic model is also formulated to capture failure propagation, based on which an interdependency-aware overbooking strategy is designed to increase resource usage and decrease the request blocking rate. The model will be tested on real network topologies under different settings.

14:00-15:20 Session 17E: S.09: Novel strategies for the safety assessment of dynamic and dependent systems I
Chair:
John Andrews (University of Nottingham, UK)
Location: LG-20
14:00
Silvia Tolo (University of Nottingham, UK)
John Andrews (University of Nottingham, UK)
Fault Tree analysis including component dependencies
PRESENTER: Silvia Tolo

ABSTRACT. The inability of commonly used risk assessment methodologies to model component dependency is often recognised as a barrier to more realistic modelling and the accurate representation of system complexity. This is a significant limitation of current, widely adopted techniques such as Fault and Event Trees, which rely on the underlying assumption of stochastic independence among system components. This assumption rarely finds justification in engineering practice, with most real-world systems relying on the interaction, and hence mutual influence, of components' operational conditions. This results in elaborate, dynamic networks of dependencies which are further influenced by shared environmental conditions and common cause failures. The inability to represent such underlying relationships translates into a source of analysis inaccuracy which may lead to misjudgement and wrong decisions. The problem is far from new to the scientific community, and several alternative tools have either been developed with the explicit purpose of overcoming the current modelling limitations or borrowed from areas of interest not strictly associated with risk and safety analysis. In spite of these efforts, the application of such techniques in engineering practice is still limited. This is mostly due to the computational burden associated with dynamic analysis and simulation techniques, such as in the case of Petri Nets and Markov Models, or to inherent limitations in capturing the full range of dependency types, as in the case of Dynamic Fault Trees. The research presented aims at providing mathematical solutions able to tackle the two main challenges associated with the modelling of dependency within the context of reliability and safety system analysis: on the one hand, the achievement of adequate flexibility for realistically representing a wide variety of system settings and interrelationships; on the other, the computational feasibility of the methodology, while retaining the familiarity of commonly used approaches (i.e. FT/ET), so as to match the needs and requirements of real-world industrial applications. The proposed methodology relies on the use of Binary Decision Diagrams as well as on the manipulation of joint probabilities according to Bayes' theorem. The technique is demonstrated using a simple numerical application for verification, while its applicability and computational feasibility are discussed in detail.
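A minimal numerical illustration of why the independence assumption matters (not the authors' BDD-based algorithm; the probabilities are invented): for a toy top event TOP = (A AND B) OR C, where failures of A and B are positively dependent and C is independent, the exact result obtained from the joint probability table differs markedly from the one obtained under independence.

# Invented numbers, not the authors' case study: TOP = (A and B) or C.
# Joint probability table for (A failed, B failed); marginals are P(A)=P(B)=0.10,
# but P(A and B)=0.04 instead of the 0.01 implied by independence.
p_joint_ab = {
    (True, True): 0.04,
    (True, False): 0.06,
    (False, True): 0.06,
    (False, False): 0.84,
}
p_c = 0.05

def top(a, b, c):
    return (a and b) or c

# Exact top-event probability, conditioning on the joint state of (A, B).
p_top = 0.0
for (a, b), p_ab in p_joint_ab.items():
    for c, p_c_state in ((True, p_c), (False, 1 - p_c)):
        if top(a, b, c):
            p_top += p_ab * p_c_state

# The same tree evaluated under the (incorrect) independence assumption.
p_ab_indep = 0.10 * 0.10
p_top_indep = p_ab_indep + p_c - p_ab_indep * p_c

print(f"with dependency: {p_top:.4f}")              # 0.0880
print(f"independence assumed: {p_top_indep:.4f}")   # 0.0595

With these numbers, conditioning on the joint state of (A, B) gives P(TOP) = 0.088, whereas assuming independence gives 0.0595, roughly a 30% underestimate.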

14:20
Silvia Tolo (University of Nottingham, UK)
John Andrews (University of Nottingham, UK)
AN INTEGRATED MODELLING FRAMEWORK FOR COMPLEX SYSTEMS SAFETY ANALYSIS: Case-studies
PRESENTER: Silvia Tolo

ABSTRACT. The ever-increasing complexity of engineering systems has fuelled the need for novel and efficient computational tools able to enhance the accuracy of current modelling strategies for industrial systems. Indeed, traditional Fault and Event Tree techniques still monopolize the reliability analysis of complex systems despite their limitations, such as the inability to capture underlying dependencies between components or to include degradation processes and complex maintenance strategies into the analysis. However, the lack of alternative solutions able to tackle large scale modelling efficiently has contributed to the continued use of such methodologies, together with their robustness and familiarity well rooted in engineering practice. The current study investigates the application of a novel modelling framework for safety system performance which retains the capabilities of both Fault and Event Tree methods, but also overcomes their limitations through the circumscribed use of more exhaustive modelling techniques, such as Petri Nets and Markov Models. In order to describe the methodology developed and demonstrate its validity, five case-studies focusing on a simplified industrial plant cooling system are analysed. These cover a range of component dependency types and system settings which cannot be fully represented through the use of conventional fault and event trees. In more detail:
• Case Study A relies on conventional assumptions such as full independence with no component degradation.
• Case Study B investigates the inclusion of component degradation in the analysis while maintaining the assumption of independence.
• Case Study C focuses on dependencies resulting from shared basic events between two or more subsystems.
• Case Study D includes dependencies triggered by secondary procedures or processes, which may be not strictly connected with the hardware function (e.g. maintenance, load, surrounding conditions etc.).
• Case Study E considers the overlapping of the dependency types investigated in Case C and D.
The results obtained are compared to those achieved with existing techniques, in order to verify the accuracy of the implemented algorithms as well as to explore the effects of different assumptions on the analysis.

14:40
Michel Batteux (IRT SystemX, France)
Tatiana Prosvirnova (CERT Onera, France)
Antoine Rauzy (NTNU, Norway)
A Guided Tour of AltaRica Wizard, the AltaRica 3.0 Integrated Modeling Environment
PRESENTER: Michel Batteux

ABSTRACT. AltaRica 3.0 is the third version of the AltaRica modeling language. AltaRica 3.0 relies on the S2ML+X paradigm, i.e. on the idea that any modeling language used for safety and reliability studies (and beyond) is actually the combination of an underlying mathematical framework and a set of constructs to structure the models. In the case of AltaRica 3.0, the mathematical framework is the notion of guarded transition systems (the X), combined with a versatile set of object- and prototype-oriented constructs (S2ML). The result is a very powerful language that makes it possible to take dynamic phenomena into account (e.g. system reconfigurations) and comes with a set of efficient assessment tools. The objective of this paper is to give a guided tour of AltaRica Wizard, the integrated modeling environment supporting AltaRica 3.0. This includes the editor to author models, the interactive simulator to execute sequences of events step by step, the compiler to Boolean equations (fault trees), the generator of critical sequences and the stochastic simulator. Each of these assessment tools embeds advanced processing algorithms whose correctness is mathematically established.
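For readers unfamiliar with the underlying formalism, the toy Python sketch below mimics a guarded transition system, the mathematical framework behind AltaRica 3.0; it deliberately does not use AltaRica syntax, and all names and behaviour are invented for illustration. State variables, events guarded by predicates over the state, and a loop firing one enabled event per step are exactly the ingredients an interactive simulator walks through.

import random

# Plain-Python sketch of a guarded transition system (not AltaRica syntax).
# A pump fails and gets repaired; each event has a guard deciding when it is
# enabled and an action updating the state.
state = {"pump": "WORKING"}

events = {
    "failure": (lambda s: s["pump"] == "WORKING",    # guard
                lambda s: s.update(pump="FAILED")),  # action
    "repair":  (lambda s: s["pump"] == "FAILED",
                lambda s: s.update(pump="WORKING")),
}

def enabled(s):
    return [name for name, (guard, _) in events.items() if guard(s)]

random.seed(1)
for step in range(5):
    name = random.choice(enabled(state))   # an interactive simulator would let the user choose
    events[name][1](state)
    print(f"step {step}: fired {name} -> {state}")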

15:00
Chao He (University of Duisburg-Essen, Germany)
Abderahman Bejaoui (University of Duisburg-Essen, Germany)
Dirk Soeffker (University of Duisburg-Essen, Germany)
Situated and personalized monitoring of human operators during complex situations
PRESENTER: Dirk Soeffker

ABSTRACT. 1. Background Human behavior monitoring classically refers to the detection of human movements or the simple recognition of activities in limited, known spaces. The monitoring of human activities in the context of concrete operating tasks often focuses on the detection of operating errors, unauthorized actions, or implicitly on the violation of protection goals. This contribution uses a qualitative description approach (Situation-Operator-Modeling, SOM) with which the logic of human interaction in a given formalized context can be represented as action sequences, together with the situational, i.e. contextual, application of individual actions. Using the example of human driving behavior in highway driving situations, the application of the method is presented in detail. The monitoring of concrete example drivers in real time is demonstrated. The examples show that direct warning or assistance would be helpful.

2. Goal of the work Situation-Operator-Modeling (SOM) can be applied to model human-machine interaction. Within the SOM approach, processes in the real world are considered as sequences of scenes and actions, which are modeled as situations and operators, respectively. For the driving process, a SOM action space can be generated to describe the different possible action sequences and options available to the human driver. Meanwhile, the human performance reliability score (HPRS) proposed in previous work is calculated from driving data collected in a driving simulator using the modified fuzzy-based CREAM (Cognitive Reliability and Error Analysis Method) approach. Therefore, a situated and personalized HPRS can be assigned to each action sequence in the SOM action space. In this way, an event-discretized behavior model for situated and personalized monitoring of human driver performance, annotated with human reliability scores, can be generated.

3. Applied methods In this contribution, the SOM approach is applied to generate the action space for driving behaviors. The modified fuzzy-based CREAM approach is adopted to calculate the HPRS for each action.

14:00-15:20 Session 17F: H-workload: Human mental workload in safety critical applications
Chair:
Location: LG-21
14:00
Carlo Caiazzo (University of Kragujevac, Faculty of Engineering, Italy)
Marija Savković (University of Kragujevac, Faculty of Engineering, Serbia)
Marko Djapan (University of Kragujevac, Faculty of Engineering, Serbia)
Arso Vukićević (University of Kragujevac, Faculty of Engineering, Serbia)
Milan Radenković (Academy of Professional Studies Šumadija, Department in Kragujevac, Serbia)
Miloš Jovičić (University of Kragujevac, Faculty of Engineering, Serbia)
Framework of modular industrial workstations in a collaborative environment
PRESENTER: Carlo Caiazzo

ABSTRACT. Modern organisations aim to improve key economic parameters (productivity, effectiveness) in order to be competitive in the global market. Furthermore, contemporary organizations strive to improve the health and safety of workers. One possible solution to achieve that goal is to modernise production processes through the integration of lean principles and innovative Industry 4.0 technologies. Despite the growing trend of automation and the application of these advanced technologies, in many industrial tasks (such as monotonous and repetitive assembly operations) it is not possible to implement full digitalization, and workers perform them without a sense of enthusiasm or satisfaction. The focus of this research paper is on the transformation of a traditional assembly workstation into a modular human-robot workstation where the operator and a collaborative robot share activities to improve workplace safety and worker performance. The proposed modular assembly workstation (designed in accordance with the anthropometric and physiological characteristics of the operator), with integrated innovative elements (collaborative robot, poka-yoke system), is the basis for conducting advanced research in the field of neuroergonomics using an innovative EEG system under different scenarios – from a manual repetitive assembly task to a collaborative assembly task – to prove that it will improve physical, cognitive and organizational ergonomics and at the same time increase productivity and effectiveness.

14:20
Loïck Simon (Université Bretagne Sud : Lab-Sticc, France)
Clément Guerin (Lab-STICC, France)
Philippe Rauffet (Université Bretagne Sud, France)
Jean-Philippe Diguet (IRL CNRS CROSSING, Australia)
Integrating Transparency to Ecological Interface Design
PRESENTER: Loïck Simon

ABSTRACT. Ecological interfaces are used in many fields to facilitate the supervision of a dynamic and complex environment. These ecological interfaces can be designed based on analyses from the Cognitive Work Analysis (CWA) approach. These analyses can be complemented by the exploitation of different conceptual frameworks to facilitate the design of the interface. To create an interface dedicated to human-machine cooperation, we propose to use the conceptual framework of transparency in addition to the CWA analyses. This communication shows which constraints the transparency models highlight. Integrating these models into ecological interface design requires a better understanding of the intrinsic differences between the predominant models, which involves an analysis of the strengths and weaknesses of each model in interface design. In conclusion, the adaptive use of transparency in ecological interfaces seems to be a research perspective with great potential.

14:40
Vanessa Bertholdo Vargas (Instituto Tecnológico de Aeronáutica, Brazil)
Mayara Gomes Bovo (Instituto Tecnológico de Aeronáutica, Brazil)
Mário Crema Junior (Instituto Tecnológico de Aeronáutica, Brazil)
Moacyr Machado Cardoso Junior (Instituto Tecnológico de Aeronáutica, Brazil)
Jefferson de Oliveira Gomes (Instituto Tecnológico de Aeronáutica, Brazil)
WORKLOAD ANALYSIS OF HEALTH WORKERS DURING COVID-19 VACCINATION AND ORGANIZATION OF QUEUES AT UBS IN CITY OF FRANCA (SP-BR)

ABSTRACT. One understanding of workload is that it combines psychic and cognitive factors, with the psychic load related to the affective dimension of work and added to the cognitive demands of tasks such as reasoning. Health professionals face dangerous, unhealthy environments conducive to health risks, in addition to the pressures and demands of their own work and, even more, the entire context of the COVID-19 pandemic we are currently experiencing; this ends up favoring the development of mental illnesses such as anxiety and depression. In Brazil, we have our Unified Health System - "Sistema Único de Saúde" (SUS), consisting of health actions and services under public management, through joint and articulated work between the Ministry of Health and the state and municipal health secretariats. Furthermore, it is universal and free. The Basic Health Units - "Unidades Básicas de Saúde" (UBS) are the preferred gateway to the Unified Health System (SUS) and aim to address most of the population's health problems without the need for referral to hospitals and emergency services. Based on that, this work aimed to analyze aspects of the workload of employees within the UBSs by applying the NASA-TLX (National Aeronautics and Space Administration Task Load Index) method. The NASA-TLX is a multidimensional procedure that provides a global quantitative assessment of workload, based on a weighted average of six workload dimensions: mental demand, physical demand, temporal demand, performance (level of achievement), effort, and frustration. The methodological procedures included an exploratory bibliographic survey of books, scientific articles and academic works, as well as descriptive field research applying the method and obtaining data for a qualitative analysis. As field research, a case study was carried out and the NASA-TLX measurement method was applied in a UBS in the city of Franca - SP (BR). Franca is a Brazilian municipality in the interior of the state of São Paulo; it is the 77th most populous city in Brazil and the 9th most populous city in the interior of the state of São Paulo. It has an area of 605.679 km² and its estimated population in August 2021 was 358,539 inhabitants. In this study, the workload of employees at this UBS was compared between COVID-19 vaccination activities, in which employees administer COVID-19 vaccines to the population, and customer service/queue organization for the COVID-19 vaccination campaign, in which employees explain, talk and answer the population's questions about how the lines for vaccine administration are organized. From the application and analysis of the results, it was concluded that the workload is high in both activities and that the physical demand in the vaccination activity was greater than in the service provided to the public. Furthermore, the effort in the service activity was greater than in the vaccination activity. In addition, the level of achievement scale, or self-performance, showed the lowest intensity for both activities, which can be explained by the fact that these are practical and routine activities that do not demand much self-performance.
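For reference, the weighted NASA-TLX score described above can be computed as follows; the ratings and weights below are invented for illustration, not data from this study. Each dimension receives a 0-100 rating, the weights come from 15 pairwise comparisons between the six dimensions, and the overall workload is the weight-averaged rating.

# Invented ratings and weights, purely to show the calculation (not study data).
# Ratings are on a 0-100 scale; weights come from the 15 pairwise comparisons
# between the six dimensions, so the weight counts sum to 15.
ratings = {"mental": 70, "physical": 85, "temporal": 60,
           "performance": 30, "effort": 75, "frustration": 50}
weights = {"mental": 3, "physical": 4, "temporal": 2,
           "performance": 1, "effort": 4, "frustration": 1}

assert sum(weights.values()) == 15
overall = sum(ratings[d] * weights[d] for d in ratings) / 15
print(f"overall weighted workload: {overall:.1f} / 100")   # 70.0 with these numbers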

15:00
Virginia Silva Gomes (Aeronautics Institute of Technology (ITA), Brazil)
Raphael Gomes Cortes (Aeronautics Institute of Tecnology, Brazil)
Ruan Carlo Ferreira (Aeronautics Institute of Technology (ITA), Brazil)
Nadyélle Deboleto Oliveira Gomes (Brazilian Air Force, Brazil)
Mauro Pascale de Camargo Leite (Brazilian Air Force, Brazil)
Emilia Villani (Aeronautics Institute of Technology (ITA), Brazil)
Moacyr Machado Cardoso Junior (Aeronautics Institute of Technology (ITA), Brazil)
Assessment of mental workload in aeromedical transport in Brazil during the COVID- 19 pandemic

ABSTRACT. On December 31st, 2019, the World Health Organization (WHO) was warned about several cases of pneumonia in the city of Wuhan, in the People's Republic of China. One week later, on January 7th, 2020, the Chinese authorities confirmed that a new type of coronavirus had been identified, called SARS-CoV-2. On March 11th, 2020, the disease caused by it was characterized by the WHO as a pandemic. With the advance of contagious cases in Brazil, the consequent overcrowding of hospitals in some regions, and the continental distances of the country, the demand for air transport of patients with the disease increased significantly. The transport of critical patients in those situations may require Air Intensive Care Units (ICUs), and in Brazil it is being carried out by private air operators and by government organizations such as the Brazilian Air Force. The cabin environment of an aircraft is generally more confined than an ambulance, which increases the potential for contamination and the difficulty of patient care during transport. The health team involved in the transport is composed of doctors, nurses, physiotherapists and nursing technicians. Each professional has a pre-defined and important role in patient care. The transport of patients with infectious diseases may increase the mental and physical overload of the entire crew involved in the process. This can be explained by the care provided to critically ill patients, the risk of contamination of the transport team, and the use of complete personal protective equipment (PPE) such as disposable hooded coveralls, N95 respirator masks and face shields. The PPE was put on before receiving the patients on the origin runway and only removed after their transfer to the ambulance on the destination runway and disinfection of the aircraft cabin. While wearing the protective equipment, the professional cannot drink, eat or attend to physiological needs. This PPE, despite being essential, can be uncomfortable as it hinders mobility and the exchange of heat between the skin and the environment. To evaluate mental overload under these conditions, the NASA-TLX questionnaire was used, which analyzes and calculates a weighted average of six subscales: mental demand, physical demand, temporal demand, frustration, effort and performance. A sample of nine (9) Brazilian Air Force health team members participating in aeromedical transports of patients with COVID-19 was asked to complete the NASA-TLX questionnaire after the flight, as well as a questionnaire on transport conditions, flight duration, number of patients transported, clinical status of patients, complications and deaths. Transport time was considered as the period from the patient's reception in the ambulance on the origin runway to the moment of transfer of the patient to the ambulance on the destination runway. Questionnaires were completed using electronic forms. Each flight condition was evaluated separately, as were the results relating the flight characteristics to the reported workload. The health professionals interviewed were: 2 physicians, 2 nurses, 1 physiotherapist and 4 nursing technicians. The average duration of the 9 flights evaluated was 4 hours. The results show that the effort factor was greater on longer flights, which can be explained by the prolonged use of the PPE throughout the missions.
The number of transported patients ranged from 1 to 5, and the clinical status also varied between flights, from patients without signs of severity, to severe and stable, to unstable. Transport with unstable patients showed the highest final weighted rating, in addition to the greatest mental demand and reported frustration. The assessment of mental and physical overload in situations of pandemic and health emergencies is important to understand the limits of the crew and can help to reduce the risks of the operation and provide safer transport conditions. Indeed, this paper aims to understand the mental workload risks related to emergency aeromedical transport and thereby to foster safety and operational risk mitigation for this type of mission, such as resizing the health team, adjusting work shifts and improving PPE technology.

14:00-15:20 Session 17G: Maintenance Modeling and Applications III: Systems and Networks
Chair:
Claudia Fecarotti (Eindhoven University of Technology, Netherlands)
Location: CQ-009
14:00
Lucas Equeter (University of Mons, Belgium)
Phuc Do (Lorraine University, CRAN, UMR CNRS 7039, France)
Pierre Dehombreux (University of Mons, Belgium)
Benoît Iung (Lorraine University, CRAN, UMR CNRS 7039, France)
Opportunistic maintenance for multi-component system with structural dependence under resource constraints
PRESENTER: Lucas Equeter

ABSTRACT. The optimization of maintenance policies with respect to system availability or maintenance costs is an open research subject. Recent contributions go beyond simplistic models and include more realistic phenomena such as economic or structural dependence [1], imperfect maintenance [2], and resource constraints, including worker shortages [3]. While dependence between components is not a new concept, it has been the subject of increasing attention, and the combination of the different dependences (structural, economic, stochastic) is seldom investigated in the maintenance field [4]. Further, literature remains scarce on the impact of dependences combined with resource constraints on maintenance policy optimization, and the rare occurrences of combined approaches do not include opportunistic maintenance policies (see for example [5]). In this work, we present an opportunistic maintenance approach for a multi-component system with economic and structural dependence as well as resource constraints. The structural dependence between components is modeled as the impact of assembly/disassembly operations on the lifetime of connected components. Then, a virtual age-based structural dependence model is proposed. Opportunistic maintenance is defined as preventive maintenance actions undertaken at any system stoppage if the reliability of the maintainable component is below a given threshold and the necessary resources are available. A discrete event simulation-based (Monte Carlo simulation) optimization approach is developed, in which corrective and preventive maintenance actions are simulated as well as reliability-based opportunistic maintenance. This type of opportunistic maintenance policy uses the estimated reliability of equipment to decide whether to undertake additional maintenance actions. Further, the discrete event simulation includes the use and scarcity of resources, such that opportunistic maintenance cannot be undertaken if the required resources are already allocated or unavailable. In this framework, structural dependence is simulated through a virtual aging of structurally dependent equipment, virtually increasing the equipment age by a fraction of its remaining life. Economic dependence is also simulated as a side effect of opportunistic maintenance: simultaneous actions have a lesser impact on availability due to sharing the production stoppage costs. A sensitivity analysis shows a strong influence of the aging ratio on the average maintenance cost, which is expected, but also that this influence can be efficiently mitigated by appropriately optimized opportunistic maintenance. The simulation is optimized through particle swarm optimization, with respect to system availability and average maintenance cost, for all possible resource configurations. This process provides the optimal opportunistic action threshold, expressed as an equipment reliability, for optimizing the long-term behavior of the system with respect to the selected indicator. The obtained results show that, given the system parameters, resource constraints have a deleterious effect on the opportunistic maintenance capabilities, limiting the number of actions that can be undertaken at a given time. In consequence, resource constraints attenuate the ability of the opportunistic maintenance policy to mitigate the influence of stochastic degradation.
Regardless, an optimized opportunistic maintenance policy is shown to always be preferable to a preventive/corrective policy for the developed model.

[1] D.-H. Dinh, P. Do, and B. Iung, “Multi-level opportunistic predictive maintenance for multi-component systems with economic dependence and assembly/disassembly impacts,” Reliab. Eng. Syst. Saf., vol. 217, no. 108055, Jan. 2022, doi: 10.1016/j.ress.2021.108055.
[2] C. Letot, P. Dehombreux, G. Fleurquin, and A. Lesage, “An adaptive degradation-based maintenance model taking into account both imperfect adjustments and AGAN replacements,” Qual. Reliab. Eng. Int., vol. 33, no. 8, pp. 2043–2058, Dec. 2017, doi: 10.1002/qre.2166.
[3] S. Bouzidi-Hassini, F. Benbouzid-Si Tayeb, F. Marmier, and M. Rabahi, “Considering human resource constraints for real joint production and maintenance schedules,” Comput. Ind. Eng., vol. 90, pp. 197–211, Dec. 2015, doi: 10.1016/j.cie.2015.08.013.
[4] M. C. A. Olde Keizer, S. D. P. Flapper, and R. H. Teunter, “Condition-based maintenance policies for systems with multiple dependent components: A review,” Eur. J. Oper. Res., vol. 261, no. 2, pp. 405–420, Sep. 2017, doi: 10.1016/j.ejor.2017.02.044.
[5] M. Chen, C. Xu, and D. Zhou, “Maintaining Systems With Dependent Failure Modes and Resource Constraints,” IEEE Trans. Reliab., vol. 61, no. 2, pp. 440–451, Jun. 2012, doi: 10.1109/TR.2012.2192590.
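The reliability-threshold decision rule described in the abstract above can be illustrated with a very small Monte Carlo sketch (invented parameters, two components, one extra crew; a simplification of the model, not the authors' implementation): at every system stoppage the failed component is renewed correctively, and any other component whose reliability has fallen below a threshold is renewed preventively if a crew is free.

import math
import random

BETA, ETA = 2.5, 1000.0                  # Weibull shape / scale (hours), invented
THRESHOLD = 0.8                          # opportunistic reliability threshold
CREWS = 1                                # crews free for opportunistic actions
C_CM, C_PM, C_STOP = 10.0, 3.0, 5.0      # corrective, preventive, stoppage costs

def reliability(age):
    return math.exp(-((age / ETA) ** BETA))

def simulate(horizon=100_000.0, seed=0):
    rng = random.Random(seed)
    ages = [0.0, 0.0]
    lives = [rng.weibullvariate(ETA, BETA) for _ in ages]
    t, cost = 0.0, 0.0
    while t < horizon:
        # Next system stoppage = earliest component failure.
        i = min(range(len(ages)), key=lambda k: lives[k] - ages[k])
        dt = lives[i] - ages[i]
        t += dt
        ages = [a + dt for a in ages]
        cost += C_CM + C_STOP
        ages[i], lives[i] = 0.0, rng.weibullvariate(ETA, BETA)
        # Opportunistic preventive renewals, limited by the available crews.
        free = CREWS
        for j in range(len(ages)):
            if j != i and free > 0 and reliability(ages[j]) < THRESHOLD:
                ages[j], lives[j] = 0.0, rng.weibullvariate(ETA, BETA)
                cost += C_PM
                free -= 1
    return cost / t

print("long-run cost rate:", round(simulate(), 4))

Setting CREWS = 0 disables the opportunistic actions, which is a crude way of seeing the deleterious effect of resource constraints discussed in the abstract.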

14:20
Ipek Kivanc (Eindhoven University of Technology, Netherlands)
Claudia Fecarotti (Eindhoven University of Technology, Netherlands)
Néomie Raassens (Eindhoven University of Technology, Netherlands)
Geert-Jan van Houtum (Eindhoven University of Technology, Netherlands)
Multi-objective maintenance optimization for multi-component systems over a finite life span
PRESENTER: Ipek Kivanc

ABSTRACT. After-sales service is a challenge for original equipment manufacturers who provide technical services related to their products, since the importance of key performance indicators such as maintenance costs and system downtime differs from customer to customer. Seeking the trade-off between total maintenance costs and system downtime, we propose a bi-objective optimization model to plan preventive maintenance for systems with a large number of heterogeneous components under a mixture of maintenance policies. Unlike most existing methods, which assume an infinite planning horizon, we obtain the optimal policies over a finite horizon, which is the practical case for most systems. We use a two-stage bottom-up approach to optimize the maintenance plan at the component and system levels, respectively. We formulate the single-unit replacement problem at the component level as a Markov decision process (MDP) and use an iterative procedure to optimize the interval of scheduled visits at the system level. We present a numerical example to illustrate the proposed approach and obtain the efficient Pareto solutions. We also investigate how the maintenance concept can be customized for individual customers.
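A finite-horizon component-level MDP of this kind can be solved by backward induction; the sketch below uses an invented four-state degradation model and invented costs purely to illustrate the structure, and is not the authors' formulation.

# Minimal finite-horizon MDP sketch for a single-component replacement decision.
# State = degradation level 0..3 (3 = failed); actions: "keep" or "replace".
N_PERIODS = 24
STATES = range(4)
P_DEGRADE = 0.3                          # chance of degrading one level per period if kept
C_REPLACE, C_FAILURE, C_DOWNTIME = 5.0, 20.0, 2.0

def transition(state, action):
    """Return a list of (probability, next_state, immediate_cost)."""
    if action == "replace":
        return [(1.0, 0, C_REPLACE)]
    if state == 3:                        # failed: forced corrective replacement with downtime
        return [(1.0, 0, C_FAILURE + C_DOWNTIME)]
    return [(1 - P_DEGRADE, state, 0.0), (P_DEGRADE, state + 1, 0.0)]

# Backward induction over the finite life span.
value = {s: 0.0 for s in STATES}
policy = {}
for t in reversed(range(N_PERIODS)):
    new_value = {}
    for s in STATES:
        best = None
        for a in ("keep", "replace"):
            q = sum(p * (cst + value[s2]) for p, s2, cst in transition(s, a))
            if best is None or q < best[0]:
                best = (q, a)
        new_value[s], policy[(t, s)] = best
    value = new_value

print("optimal action at t=0 per state:", {s: policy[(0, s)] for s in STATES})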

14:40
Abu Md Ariful Islam (Norwegian University of Science and Technology (NTNU), Norway)
Jørn Vatn (Norwegian University of Science and Technology (NTNU), Norway)
A survey of multi-component maintenance optimization subject to condition-based maintenance

ABSTRACT. In recent times, interest in condition-based maintenance (CBM) has been on an upward trend in academia and industry. There have been many works in the research areas of CBM and the scheduling of multi-component systems separately; however, the number of works combining these two areas is still rather limited. Although research in CBM is more prevalent for a single component, it is often more practical to consider multiple components. Unfortunately, it is not straightforward to extend an individual CBM model to more components, due to several dependencies that play critical roles in influencing optimality. In this paper, we present a review of papers from 2017 to 2022 that involve both CBM and maintenance grouping. We classify the papers based on the dependencies and degradation models used to develop Remaining Useful Life (RUL) prognostics. We further investigate the types of uncertainty that are addressed and how they are dealt with.

15:00
Lavínia Araújo (Universidade Federal de Pernambuco, Brazil)
Isis Lins (Universidade Federal de Pernambuco, Brazil)
Caio Souto Maior (Universidade Federal de Pernambuco, Brazil)
Márcio Moura (Universidade Federal de Pernambuco, Brazil)
Diego Figueroa (Universidade Federal de Pernambuco, Chile)
Enrique Droguett (UCLA, United States)
A Quantum Optimization Modeling for Redundancy Allocation Problems
PRESENTER: Isis Lins

ABSTRACT. Reliability engineering studies are often conducted to minimize the probability of failure in complex systems; one example is the redundancy allocation problem (RAP). The focus is to assign a number of parallel components to achieve the best possible overall system reliability within budget constraints. The problem is non-linear in nature and is an NP-hard combinatorial optimization (CO) problem. Small instances can be solved exactly; otherwise, meta-heuristics (e.g., genetic algorithms) have been developed and applied to deal efficiently with the problem. Meanwhile, quantum computing has gained ground for solving CO problems. Often, they must be stated as quadratic unconstrained binary optimization (QUBO) models, considering the limitations and advantages of quantum devices, both the quantum machines themselves and their simulators on digital computers. Our paper models the RAP as a binary linear problem that is translated into a QUBO model and solved using a quantum approach. We perform computational experiments on small instances. We advocate that quantum optimization should be on the reliability engineering research agenda to follow the recent advancements in quantum computing.
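To make the QUBO idea concrete, the sketch below encodes a deliberately tiny, invented RAP instance (one subsystem, a one-hot choice among three redundancy levels, a budget enforced by penalty terms) and solves it by brute force, which is what a quantum annealer or a QAOA circuit would sample at larger scales. This illustrates the modeling style only, not the authors' formulation.

import math
from itertools import product

# Invented one-subsystem instance: component reliability r, unit cost c, budget B.
# One-hot binary variables x_k = 1 if exactly k parallel units are installed.
r, c, B = 0.9, 2, 5
options = [1, 2, 3]                        # candidate redundancy levels
A = 10.0                                   # penalty weight, assumed large enough

Q = {}                                     # QUBO as a dict {(i, j): coefficient}
def add(i, j, v):
    key = (min(i, j), max(i, j))
    Q[key] = Q.get(key, 0.0) + v

for i, k in enumerate(options):
    gain = -math.log(1 - (1 - r) ** k)     # maximise reliability == minimise -gain
    add(i, i, -gain)
    if c * k > B:
        add(i, i, A)                       # penalise options that exceed the budget
    add(i, i, -A)                          # one-hot penalty A*(sum x - 1)^2: diagonal part
    for j in range(i + 1, len(options)):
        add(i, j, 2 * A)                   # one-hot penalty: pairwise part

def energy(x):
    return sum(v * x[i] * x[j] for (i, j), v in Q.items())

# Brute force over all bit strings (an annealer would sample low-energy states instead).
best = min(product([0, 1], repeat=len(options)), key=energy)
print("selected redundancy level:", [k for k, x in zip(options, best) if x])

With these numbers the lowest-energy assignment selects two parallel units, the most reliable option that stays within the budget.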

14:00-15:20 Session 17H: Maritime and Offshore Technology: risk analysis II
Chair:
Ingrid B Utne (Department of Marine Technology, NTNU, Norway)
Location: CQ-105
14:00
Jakub Montewka (Gdansk University of Technology, Poland)
Przemysław Krata (Gdańsk University of Technology, Poland)
Tomasz Hinz (Gdańsk University of Technology, Poland)
Mateusz Gil (Gdynia Maritime University, Poland)
Krzysztof Wróbel (Gdynia Maritime University, Poland)
Probabilistic model for estimating the expected maximum roll angle for a vessel in the turn
PRESENTER: Jakub Montewka

ABSTRACT. The safety of ships in service is governed by international rules and regulations. One of these regulations (the Intact Stability Code) specifies the maximum allowable roll angle for passenger ships when performing a turn. It is estimated statically, assuming favourable hydro-meteorological conditions such as calm water, and the roll angle depends on the ship's stability. However, the practical estimation of the expected maximum angle before the expected turn remains a challenge, especially when it comes to adequately accounting for ship behaviour in waves and the resulting roll. This, in turn, can lead to underestimation of the ship's safety level, especially under operating conditions that differ from the design conditions. Therefore, this paper presents a data-driven probabilistic metamodel that estimates the expected maximum roll angle for a selected ship type based on information about the surrounding waves. The metamodel is developed in the form of a probabilistic causal model known as a Bayesian Belief Network (BBN), which enables inference in the presence of uncertainty. Moreover, machine learning algorithms are used to assist in the process of data-driven modelling. As input data for the metamodel, we use the results of an external numerical model, called LaiDyn, that simulates ship motions in six degrees of freedom. LaiDyn simulations are performed for a selected ship type and a wide range of operating conditions, including wave parameters, the relative angle of wave attack on the ship, ship speed, and rudder angle. Each simulation provides a 2D trajectory of a ship performing a turn, along with the roll angle trajectory for a given set of conditions. The metamodel presented here provides the maximum expected roll angle for a given set of input data (ship speed, rudder angle, wave parameters) in a probabilistic manner, taking into account the nonlinearity of the analysed phenomena and the associated uncertainties. Such a representation of the analysed situation reflects the operating conditions much more accurately and is, therefore, more suitable for the decision-making process that the model is intended to support. The developed metamodel can be used as a decision support tool on board a vessel to inform the crew whether, in the course of the expected manoeuvring strategy (applied rudder angle under the actual environmental conditions), the developed roll angle is within the acceptable range or whether another manoeuvring strategy must be sought that meets the maximum roll angle requirements.

14:20
Fernanda M. de Moura (University of Sao Paulo - USP, Brazil)
Leonardo O. Barros (Research and Development Center of Petrobras – CENPES – Petrobras, Brazil)
Rene T. C. Orlowski (Research and Development Center of Petrobras – CENPES – Petrobras, Brazil)
Marcelo R. Martins (University of Sao Paulo - USP, Brazil)
Adriana M. Schleder (Sao Paulo State University – UNESP, Brazil)
Reliability Analysis of a Deepwater Christmas Tree System focusing on non-production stoppage

ABSTRACT. The operations carried out on offshore oil platforms are surrounded by complex circumstances that involve the safety of people, the preservation of the environment and the economic success of national and multinational organizations. Therefore, it is essential to optimize process costs and minimize the risks involved. Since the Christmas Tree system (XT) is one of the most important in the production of this industry, it was chosen for the application of a reliability analysis. Firstly, the system configuration was defined for a better understanding of the system. Subsequently, the causes that can lead the system to a production stop or a leakage, as well as the associated failures, were identified. Then, a survey of failure probabilities was performed, and it was possible to define the reliability of the system, equipment and components, considering production stoppage and leakage as the undesired events. For this, two fault trees were created: the first considered the top event "production stop" and the second considered "leakage". The first event was chosen due to the great economic impact it can cause for the company; the second, due to the great environmental impact it can generate. The results demonstrate which components should receive greater attention or priority in order to avoid such events. Currently, in the literature, there are not many studies presenting the XT configuration together with a reliability analysis specifically focused on non-production stoppages. The relevance of this work is precisely the demonstration of an underwater Christmas tree system, its failure modes and the reliability analysis carried out, since such material is not trivial to find. It is important to mention that this study is part of a larger project that has already been carried out by LabRisco-USP and aims to optimize risk-based inspections on the equipment.

14:40
Sheng Xu (Norwegian University of Science and Technology, Norway)
Ekaterina Kim (Norwegian University of Science and Technology, Norway)
Stein Haugen (Safetec Nordic AS, Norway)
Impact of the ice navigation experience on the determination of CPT for BN model focusing on Arctic navigation
PRESENTER: Sheng Xu

ABSTRACT. With increased ship traffic in the Arctic, the study of shipping risks in ice-infested waters has gained increased attention. Bayesian Networks (BNs) appear to be a particularly popular tool among risk management frameworks. The Conditional Probability Table (CPT), which is used to quantify relationships between variables, is a critical component of the development of BNs. With an emphasis on Arctic navigation, the CPT is determined primarily through expert elicitation and is influenced by ice navigation experience. In this study, we elaborate on the designed questionnaire and the Røed method for the determination of the CPT and analyze the input data provided by five experts with varying backgrounds. The analysis was conducted using the decision bias in variable weight assignment and the outcome distribution index R of the sub-model. The preliminary findings indicate that a lack of experience with ice navigation may contribute to a higher decision bias in factors such as ice conditions, ice channel, hydrometeorology, and ship maneuverability status. A smaller variable weight could result in a negligible change in the probability distribution of the CPT. The results of this study demonstrate the importance of considering ice navigation experience when conducting expert elicitation for Arctic navigation, as well as the limitations of the Røed method for assessing the CPT of a BN model.

15:00
Trond Kongsvik (Norwegian University of Science and Technology, Norway)
Hanne Finnestrand (Norwegian University of Science and Technology, Norway)
Changes in framework conditions in the Norwegian petroleum industry: What are their relations to safety?
PRESENTER: Trond Kongsvik

ABSTRACT. Framework conditions are important for safety, as they affect the possibilities for keeping risk under control. In this literature study, which is a follow-up of a similar study from 2011 (Rosness et al., 2011), 119 articles were reviewed to identify emerging framework conditions and their relation to safety in the petroleum industry. Changes in external framework conditions have led to company-internal cost reductions and efficiency measures. In turn, this has contributed to more organizational complexity and has put the tripartite cooperation between the state, the employer organizations, and the trade unions under pressure. Different mechanisms that could lead to higher major accident and work environment risk are discussed.

14:00-15:20 Session 17I: Learning from Accidents
Chair:
Victor Hrymak (Technological University Dublin, Ireland)
Location: CQ-107
14:00
Scarlett Tannous (Paris Dauphine University - PSL, France)
Myriam Merad (Paris Dauphine University - PSL, France)
Jan Hayes (RMIT University, Australia)
Major accidents and risk prevention policies in the chemical and petrochemical industry in France: Paving the way towards an assessment framework
PRESENTER: Scarlett Tannous

ABSTRACT. Major events are characterized by their low probability (rare occurrence) and large impacts (significant damage). There are several debates on whether a major technological accident in a high-risk industry, like the chemical or petrochemical Seveso Upper Tier establishments, can be considered a Black Swan or not. Whether it is considered an “unknown unknown” or an “unknown known” event, the common attribute of these events is the inability to predict and/or the propensity to deny their occurrence. The errors of predicting and forecasting extreme rare events were raised by Nassim Nicholas Taleb in his book The Black Swan. Drawing on this philosophy, this research investigates the evolution of major industrial risk prevention policy changes in France and some of the relevant indicators of success. The research focuses on the chemical and petrochemical sector as it plays an important role in the French economy despite the country tending towards deindustrialization. Several policy evolutions have taken place following accidents such as the AZF major accident in 2001 and the Lubrizol-Normandie Logistique fire in 2019. To gain a better understanding of the mechanisms that generate major accidents at a macro level, the goal of this study is to investigate how the chemical and petrochemical industry and its major risk prevention policy/regulations have evolved throughout the years. It aims to determine some of the influential and/or correlated aspects affecting the effectiveness of major risk prevention policy and its evolution. The methodology relies on a longitudinal study based on data analyses placed in parallel with the evolution of the prevention policies in France. The quantitative primary data are collected from different national and international databases such as the ARIA database (Analysis, Research, and Information on Accidents); eMARS (the electronic Major Accident Reporting System); Insee (National Institute of Statistics and Economic Studies); Eurostat; and OECD.Stat. They include publicly available data. For consistency purposes, the 2013-2020 timeframe is considered at this stage of the work, as data availability differs for each of the collected indicators. Additionally, based on these data, the indicators illustrate the precursors, also called intermediate outcomes, along with the common indicators of ultimate outcomes (e.g., number of major accidents). They are essential measures of regulatory performance. The results reveal the variation of major and occupational accidents with respect to, among several indicators, the number of inspections and the payment credits related to the prevention of technological risks and pollution throughout the years. The results also demonstrate some influential socio-economic trends related to the chemical and petrochemical industry. Additionally, several compatibility issues related to the indicators' definitions/scopes among different databases were encountered. Future work aims to couple this analysis with field interviews with the stakeholders involved in the risk regulatory system to obtain a better understanding of the values, practices, and preferences to be considered when assessing risk prevention policies and their application.

14:20
Sina Øyri (SHARE - Centre for Resilience in Healthcare, University of Stavanger, Norway)
Siv Hilde Berg (SHARE - Centre for Resilience in Healthcare, University of Stavanger, Norway)
Improving quality and safety by independent, external investigation? The Norwegian Healthcare Investigation Board framework
PRESENTER: Sina Øyri

ABSTRACT. The aim of this paper is to provide insights into the role of analytical models applied in safety investigation of serious adverse events in the Norwegian healthcare system. This study reports findings from the context of the Norwegian Healthcare Investigation Board’s framework, with reflections provided about independent, external investigation in healthcare and its links to quality and safety. Results retrieved from documentary evidence showed that the government expected the Investigation Board to conduct independent, thorough, and systematic examination to reveal causality and system failures, with the potential of preventing recurrence of adverse events. Investigation reports outlined a variety in the methodological and analytical foundations applied during the investigations, by the Investigation Board. Results also indicated potential drawbacks with independent, external investigation. These are indicated by issues of case selection, time and resource consumption and competence building within the investigatory board. The framework presented in this paper could potentially contribute to increase the recognition of the complexity and nuances in the accident or serious adverse event under scrutiny, without hampering conditions for individual and organizational learning. Overall, this paper could contribute to methodological development of independent, external systemic accident investigation methods in healthcare.

14:40
Edvard Aamodt (Rise Fire Research, Norway)
Anne Steen-Hansen (RISE Fire Research, Norway)
Ole Anders Holmvaag (Rise Fire Research, Norway)
LEARNING FROM FIRE INCIDENTS – Analysis of a devastating fire in a building with municipal housing in Norway
PRESENTER: Edvard Aamodt

ABSTRACT. This article presents an analysis of a fire in a municipal apartment building used as housing for people with challenges connected to drug addiction. The fire took place in Norway on 8 August 2021. The incident happened during the night, and the fire spread quickly and intensely via the external wooden balconies. The combination of risk factors connected both to the fire development and to the characteristics of the occupants raised the potential for fire fatalities. This analysis seeks to understand why the fire spread at such speed, and how everyone in the building survived without injuries. The analysis identified both technical and human factors that may help to answer these questions. The findings suggest that there were deficiencies in the technical fire safety design that, if remedied, could have reduced the fire damage. Factors promoting the fire spread and fire intensity include the choice of wood material used in the construction of the balconies, the absence of a sprinkler system on the balconies, and a large fire load on the balconies caused by the occupants' tendency to accumulate possessions there. Factors contributing to the outcome of no injuries and no fatalities included the occupants being awake during these late hours and the strong social network between them. Such a network should be seen as a positive factor regarding robustness against fire and could be encouraged.

15:00
Ole Andreas Engen (University of Stavanger, Norway)
Marie Røyksund (University of Stavanger, Norway)
Marja Katariina Ylönen (University of Stavanger, Norway)
Lisbet Fjæran (University of Stavanger, Norway)
Jacob Kringen (University of Stavanger/The Norwegian Directorate for Civil Protection, Norway)
New risk concept and the public governance of the Norwegian petroleum industry. What enables or inhibits the practical enforcement and refinement of the concept?

ABSTRACT. When the PSA-N rephrased the risk concept in 2015 as the consequences of an activity with associated uncertainties, it was expected that such a change would impact the regulatory strategies, most notably as materialized in audits and inspections. The regulatory objective behind changing the definition was, by highlighting uncertainties rather than probabilities, to improve the understanding of and competence in risk management processes. Røyksund and Engen (2020) identified factors that influenced the implementation of the risk concept in the PSA-N immediately after 2015. However, the study emphasized that consequences for regulatory practices take time, and the substantial effects will therefore appear at a later stage. The authors also underlined the need for further clarification of the practical use of the uncertainty-based risk perspective and suggested investigating whether interdisciplinary inspections provide a particularly fertile context for how uncertainty-based perspectives are followed up.

The petroleum industry in the North Sea has been at the forefront of developing management-based and purpose-based regulations. The risk concept is embedded in the regulatory framework and in the intersection between standards and procedures, which constitutes a key mechanism for “self-regulation” or internal control. A basic premise of the regulatory strategy is that the companies act as responsible actors: that they identify, assess and describe the potential risks, implement relevant safety barriers, and adapt their management systems. This model, typically referred to as “enforced self-regulation”, combines legally binding norms and “self-regulation” based on industrial standards and best practices in the industry (Engen and Lindøe 2019).

A consistent application of enforced self-regulation therefore requires a comprehensive and systematic review of how the provisions are understood and how the standards and concepts should be used to meet the requirements. There is an inherent tension between following comprehensive guidelines and best practices, and the desire for the industry to continually innovate and implement new expertise and scientific knowledge that may improve safety. In this paper, we seek to further explore the governmental enforcement mechanisms and how the PSA-N has adjusted its practice in line with the new risk concept over a longer period. Through examining inspection reports and inquiry reports published by the PSA-N in the period 2017-2021, we follow up Røyksund and Engen (2020) on how the new risk concept has been articulated in the interaction between the regulator and the regulated, and which factors enable or inhibit the practical enforcement and refinement of the concept. The qualitative data analysis software package NVivo facilitates the analysis.

References
Engen, O.A., and Lindøe, P.H. (2019) “Coping with Globalisation: Robust Regulation and Safety in High-Risk Industries”, in Le Coze, J.C. (ed.), Safety Science Research: Evolution, Challenges and New Directions. CRC Press.

Røyksund, M., and Engen, O.A. (2020) “Making sense of a new risk concept in the Norwegian petroleum regulations”, Safety Science, Volume 124.

14:00-15:20 Session 17J: S.25: Climate Change and Extreme Weather Events Impacts on Critical Infrastructures Risk and Resilience
Chair:
Francesco Di Maio (Politecnico di Milano, Italy)
Location: CQ-106
14:00
A.H.S Garmabaki (Luleå University of Technology, Sweden)
Johan Odelius (Luleå University of Technology, Sweden)
Gustav Strandberg (Swedish Meteorological and Hydrological Institute, Sweden)
Adithya Thaduri (Luleå University of Technology, Sweden)
Stephen Mayowa Famurewa (Luleå University of Technology, Sweden)
Uday Kumar (Luleå University of Technology, Sweden)
Javad Barabady (UIT, Norway)
Climate change impacts assessment on railway maintenance
PRESENTER: A.H.S Garmabaki

ABSTRACT. The majority of infrastructures were conceptualized, designed and built without any in-depth analysis of climate change impacts. Climate change has a negative impact on the railway system and the related costs. Increased temperatures, precipitation, sea levels, and frequency of extreme adverse weather events such as floods, heatwaves, and heavy snowfall create specific risks for railway infrastructure assets, operations and maintenance. Research has shown that adverse weather conditions are responsible for 5 to 10% of total failures and 60% of delays on the railway infrastructure in northern Europe. In Sweden, weather-related failures in switches and crossings (S&C) cause about 50% of train delays, and winter maintenance of S&C costs on average 300 MSEK annually. The paper explores a pathway towards climate resilience in transport networks and the urban built environment to reduce future disturbances due to extreme climate conditions through an effective maintenance program. The objective is achieved by utilizing the RAMS (reliability, availability, maintainability and safety) methodology and integrating infrastructure degradation modelling with meteorological and satellite information. A qualitative research methodology is employed in the study, using a questionnaire as a tool for gathering information from experts from several municipalities in Sweden, Swedish transport infrastructure managers, maintenance organizations, and train operators. Finally, recommendations on adaptation options are provided to ensure effective and efficient railway transport operation and maintenance.

14:20
Rouzbeh Shirvani (Department of Energy, Politecnico di Milano, Italy, Italy)
Tarannom Parhizkar (The B. John Garrick Institute for The Risk Sciences, University of California, Los Angeles (UCLA), United States)
Resilience-based Electric Sector Optimization in Response to Extreme Weather Conditions with Distributed Generation Systems

ABSTRACT. Extreme weather events stemming from climate change can cause significant damage and disruption to power systems. Failure to mitigate and adapt to climate change and its cascading effects can lead to short- and long-term issues. The profound costs of outages in power systems, combined with the impacts on individual safety and security from the loss of critical services, make it urgent to guarantee resilience in electric power systems.

This article proposes a framework to optimize the electricity sector design to be more resilient to climate change and extreme weather events by using distributed generators. The proposed framework considers components’ dynamic behaviour and interdependencies under an uncertain environment. Climate data and socio-economic factors are used to generate demand and supply pattern scenarios. The generated scenarios are simulated and utilized in a stochastic optimization model to find the optimal resilience-based system design.
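
To make the scenario-based optimization step more concrete, the following minimal Python sketch (all scenario probabilities, capacities and cost figures are invented for illustration and are not taken from the paper) selects a distributed-generation capacity that minimizes expected investment plus unserved-demand cost over a small set of extreme-weather scenarios.

```python
# Minimal sketch of scenario-based design selection (hypothetical data, not the
# authors' stochastic optimization model).
scenarios = [  # (probability, peak demand in MW, grid capacity still available in MW)
    (0.7, 80.0, 100.0),   # normal weather
    (0.2, 95.0, 60.0),    # storm damages part of the grid
    (0.1, 90.0, 20.0),    # severe event, most of the grid is down
]
dg_options_mw = [0.0, 20.0, 40.0, 60.0]   # candidate distributed-generation capacities
invest_cost_per_mw = 1.0e6                # hypothetical cost figures
unserved_cost_per_mw = 5.0e6

def expected_cost(dg_mw):
    cost = invest_cost_per_mw * dg_mw
    for prob, demand, grid in scenarios:
        unserved = max(0.0, demand - grid - dg_mw)   # demand that cannot be met
        cost += prob * unserved_cost_per_mw * unserved
    return cost

best = min(dg_options_mw, key=expected_cost)
print(f"best DG capacity: {best} MW, expected cost: {expected_cost(best):.2e}")
```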

14:40
Evelyn Mühlhofer (Weather and Climate Risks Group, ETH Zürich, Switzerland)
Elco Koks (Institute for Environmental Studies, VU Amsterdam, Netherlands)
Giovanni Sansavini (Institute of Energy and Process Engineering, ETH Zurich, Switzerland)
David N. Bresch (Weather and Climate Risks Group, ETH Zürich, Switzerland)
Chahan M. Kropf (Weather and Climate Risks Group, ETH Zürich, Switzerland)
A disaster risk modeller’s approach to infrastructure failure cascades: Mapping impacts globally.

ABSTRACT. Critical infrastructures (CIs) such as power lines, roads, telecommunication and healthcare systems across the globe are more exposed than ever to the risks of extreme weather events in a changing climate. Damages to CIs often lead to failure cascades with catastrophic impacts in terms of people being cut off from basic service access. Yet, there is a gap between traditional CI failure models, operating often at local scales, with detailed proprietary, non-transferrable data, and the large scales and global occurrences of natural disasters, calling for the integration of perspectives from several fields to approach the complexities of such interconnected systems [1]. We demonstrate a way to bridge those incompatibilities by linking a globally consistent and spatially explicit natural hazard risk modelling platform (CLIMADA [2]) to a CI failure cascade model. The latter makes use of complex network theory, which has previously been demonstrated as a useful approach to capture many interconnected CIs across large regions [3] and is built to work with publicly available infrastructure data and dependency heuristics between CIs to represent a system-of-systems at national scales. The interoperable modelling chain [4] computes natural hazard-induced infrastructure component damages, produces consequent CI failure cascades and translates those technical failures into population clusters experiencing disruptions to basic service access.

In this contribution, we showcase how a comparable and systemic risk view on cascading CI failures from natural hazards can be obtained across different regions. To this end, we run the developed modelling chain for probabilistic tropical cyclone hazard sets in 2-3 tropical cyclone exposed countries and evaluate their impacts on a technical and social dimension.

References [1] Zio, Enrico. 2016. “Challenges in the Vulnerability and Risk Analysis of Critical Infrastructures.” Reliability Engineering & System Safety 152 (August): 137–50. https://doi.org/10.1016/j.ress.2016.02.009. [2] Bresch, David N., and Gabriela Aznar-Siguan. 2021. “CLIMADA v1.4.1: Towards a Globally Consistent Adaptation Options Appraisal Tool.” Geoscientific Model Development 14 (1): 351–63. https://doi.org/10.5194/gmd-14-351-2021. [3] Thacker, Scott, Raghav Pant, and Jim W. Hall. 2017. “System-of-Systems Formulation and Disruption Analysis for Multi-Scale Critical National Infrastructures.” Reliability Engineering & System Safety, Special Section: Applications of Probabilistic Graphical Models in Dependability, Diagnosis and Prognosis, 167 (November): 30–41. [4] Mühlhofer, Evelyn, Elco E. Koks, Chahan M. Kropf, Giovanni Sansavini and David N. Bresch. (in review). "A Generalized Natural Hazard Risk Modelling Framework for Infrastructure Failure Cascades". https://doi.org/10.31223/X54M17
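
To make the cascade idea above concrete, the sketch below shows one simple way a dependency-driven failure cascade on a system-of-systems graph can be computed with networkx. It is a minimal illustration with a hypothetical dependency graph, not the CLIMADA interface or the authors' modelling chain, and the failure rule (a node fails when all of its suppliers have failed) is an assumption made for the example.

```python
# Minimal dependency-cascade sketch (hypothetical network, not the authors' model).
import networkx as nx

def cascade(graph: nx.DiGraph, initially_damaged: set) -> set:
    """Propagate failures: a node fails if it is damaged directly or if every
    supplier it depends on (all predecessors) has failed."""
    failed = set(initially_damaged)
    changed = True
    while changed:
        changed = False
        for node in graph.nodes:
            if node in failed:
                continue
            preds = list(graph.predecessors(node))
            if preds and all(p in failed for p in preds):
                failed.add(node)
                changed = True
    return failed

# Hypothetical example: power feeds a substation, which feeds a hospital and a mast.
g = nx.DiGraph()
g.add_edges_from([("power_plant", "substation"),
                  ("substation", "hospital"),
                  ("substation", "telecom_mast")])
print(cascade(g, {"power_plant"}))
# -> the damage to the power plant cascades to all dependent services
```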

15:00
Oscar Urbina Leal (University of Minho, ISISE, Department of Civil Engineering, Portugal)
Pilar Baquedano Julià (Institute for Sustainability and Innovation in Structural Engineering (ISISE), Portugal)
Tiago Miguel Ferreira (Department of Geography and Environmental Management, University of the West of England, UK)
Alexander Fekete (TH Köln – University of Applied Sciences, Germany)
Jose C Matos (University of Minho, ISISE, Department of Civil Engineering, Portugal)
Elisabete Teixeira (University of Minho, ISISE, Department of Civil Engineering, Portugal)
Integrated Risk and Resilience Assessment for Critical Civil Infrastructures – A Case Study Proposal for Fire Risk in Northern Portugal

ABSTRACT. Urban areas are frequently affected by fires due to flammable materials and building density, which ease the propagation of fire, resulting in high fire risk and high impacts. In addition, climate change exacerbates the conditions that trigger fires worldwide, due to the increase in mean temperature caused mainly by greenhouse gas emissions and prolonged droughts. Critical Civil Infrastructure assessment involves identifying and evaluating the risks that may negatively influence its operations, diagnosing and predicting multiple weak points, and proposing measures for strengthening its resilience. Furthermore, risk assessment is often utilized to formulate mitigation strategies to reduce the impacts of a hazard. On the other hand, a resilience assessment provides restoration strategies for the infrastructure to reach an expected performance level after the extreme event. An integrated approach of risk and resilience assessment is a frequent research topic. Both present an inherent synergy that permits the comprehensive evaluation of the Critical Civil Infrastructure from risk identification to the recovery and rehabilitation stage, i.e., from the pre-event until the post-event situation. This study aims to present a framework to assess risk and resilience of a Critical Civil Infrastructure in the Northern region of Portugal against the fire hazard. An overview of the framework and the characteristics of the case study will be presented.

14:00-15:20 Session 17K: S.30: Synergies between Machine Learning, Reliability Engineering and Predictive Maintenance II
Chair:
Biswajit Basu (Trinity College Dublin, Ireland)
Location: CQ-010
14:00
Asma Ladj (IRT Railenium, France)
PHM development in railways: key enablers and challenges

ABSTRACT. In recent years, railways have played a vital role in freight and passenger transport. As safety-critical systems, it is crucial to accurately detect and predict, as early as possible, any faults that may affect their operation. In this context, prognostics and health management (PHM) offers promising opportunities for the design and implementation of effective predictive maintenance strategies. To allow successful development of PHM in the railway industry, it is important to study its key enabling technologies. Hence, we provide in this paper an overview of the fundamental pillars for PHM deployment. This includes condition monitoring technologies, namely sensing and networking techniques. In addition, powerful methods for robust and accurate data processing based on artificial intelligence are presented. Moreover, we highlight barriers to railway PHM development, which should be addressed by both the research community and the industry.

14:20
Blazhe Gjorgiev (ETH Zurich, Switzerland)
Laya Das (ETH Zurich, Switzerland)
Seline Merkel (ETH Zurich, Switzerland)
Martina Rohrer (Swissgrid AG, Switzerland)
Etienne Auger (Swissgrid AG, Switzerland)
Giovanni Sansavini (ETH Zurich, Switzerland)
Coupling physics-based models and machine learning for predictive maintenance of power line insulators
PRESENTER: Blazhe Gjorgiev

ABSTRACT. Power grids are challenged by an aging infrastructure, as well as the energy transition, which is adding even more stress to the transmission assets. Among the most critical assets in the power transmission infrastructure are the overhead transmission lines. They are often subject to fluctuations in the environment such as daily and seasonal variations in humidity and temperature, as well as rain, wind, and lightning. These conditions can deteriorate the electrical properties of lines as well as supporting structures on towers, i.e., insulators, thereby weakening the insulation of the lines. This allows leakage of current from the lines, which, if left unchecked, leads to short circuits and, ultimately, to the loss of power supply. Therefore, the timely identification of unhealthy insulators via online monitoring systems is crucial to avoid unplanned outages of power lines. Online monitoring systems contribute to condition-based and predictive maintenance and can improve traditional maintenance schemes. Thereby unnecessary asset replacements can be avoided, and unhealthy assets can be detected before failure; this in turn reduces costs and improves efficiency and reliability. The availability of data corresponding to different operating conditions plays an important role in reliably detecting and localizing faults. However, obtaining an adequate amount of data for different types, severities, and locations of faults to reliably identify distinguishing patterns may not be feasible in most systems. Under these circumstances, a physics-based model of the system often allows one to generate data under different operating conditions. In this work, we develop a physics-based model of an existing power line. The model is calibrated to produce leakage currents that are well aligned with the measured ones. The model is then used to produce large sets of leakage current data while simulating different fault states at different locations along the power line. The generated data are used to train multiple state-of-the-art machine learning methods, including long short-term memory networks and convolutional neural networks. Furthermore, we exploit different feature extraction techniques and automate the machine learning pipeline using custom layers. Using test data sets, the obtained models show accuracies higher than 99%.
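
The pattern described above (simulate labelled leakage-current data, then train a deep classifier) can be illustrated with the minimal PyTorch sketch below. The synthetic sine-plus-noise signals stand in for the physics-based simulator, and the small 1D convolutional network is an illustrative stand-in, not the authors' calibrated model or architecture.

```python
# Minimal sketch: classify synthetic "leakage current" windows as healthy/faulty.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, length = 512, 128
t = torch.linspace(0, 1, length)
base = 0.1 * torch.sin(2 * torch.pi * 5 * t)
healthy = base + 0.02 * torch.randn(n // 2, length)          # low-noise baseline current
faulty = base + 0.08 * torch.randn(n // 2, length) + 0.05    # noisier, offset current
x = torch.cat([healthy, faulty]).unsqueeze(1)                # shape (n, 1, length)
y = torch.cat([torch.zeros(n // 2), torch.ones(n // 2)]).long()

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
accuracy = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.3f}")
```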

14:40
Raphaël Langhendries (Paris 1 Panthéon Sorbonne & Safran Aircraft Engines, France)
Jérôme Lacaille (Safran Aircraft Engines, France)
Turbofan exhaust gas temperature forecasting and performance monitoring with a neural network model

ABSTRACT. Exhaust Gas Temperature (EGT) denotes the temperature of the exhaust gas when it leaves the turbine. EGT is an important parameter for measuring the energy efficiency of a turbofan engine. Indeed, the heat energy produced by an aircraft engine corresponds to a loss of power. Therefore, forecasting the exhaust gas temperature is a key task to monitor engine performance and schedule maintenance operations. In this paper, we propose a new method for forecasting EGT throughout engine life. The EGT is regarded as a time series; for each flight we aim to predict the EGT during the cruise. Our model is a neural network that relies on recurrent networks and attention mechanisms to compute a state vector that represents the wear of the engine. Moreover, we show that this state vector can be used to monitor the engine's energy efficiency over time.

15:00
Laya Das (ETH Zurich, Switzerland)
Mohammad Hossein Saadat (ETH Zurich, Switzerland)
Blazhe Gjorgiev (ETH Zurich, Switzerland)
Etienne Auger (Swissgrid AG, Switzerland)
Giovanni Sansavini (ETH Zurich, Switzerland)
Object detection-based monitoring of power transmission insulators
PRESENTER: Laya Das

ABSTRACT. Power transmission insulators are used to support transmission lines while preventing leakage of current to the ground. The health of insulators is an important factor in reliable power delivery through the electric grid. Deep learning-based prediction of insulator health has been receiving increasing attention as a health monitoring technique in academia as well as in industry. However, most models developed in the literature lack the amount of data that is required for reliable training of deep neural networks, which can contain millions of parameters. As a result, such models are prone to overfitting to the training dataset and might not generalise well. This work addresses this challenge and uses a relatively large dataset to pre-train deep neural network models. The pre-trained models are used to make predictions on an entirely new dataset, i.e., a set of tower images taken in a different country from the training and test datasets. The generalised performance of the models suggests that pre-training results in the models learning a fairly robust representation of insulators. Transfer learning of the pre-trained models with a small fraction of the target dataset further improves the performance of the models.
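
A standard way to realise the pre-train/fine-tune idea described above is the torchvision transfer-learning pattern sketched below: load a detector pre-trained on a large dataset, freeze its backbone, and replace the prediction head for the new label set. This is a generic illustration (the class count is hypothetical and the weights="DEFAULT" argument needs torchvision 0.13 or later), not the authors' pipeline or dataset.

```python
# Minimal transfer-learning sketch for insulator detection (illustrative only).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # background + "insulator" (hypothetical label set)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze the pre-trained backbone so only the detection head is fine-tuned
# on the (small) target dataset of tower images.
for param in model.backbone.parameters():
    param.requires_grad = False

# Replace the box predictor so the head matches the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Only parameters that still require gradients are passed to the optimiser.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.005, momentum=0.9)
```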

15:40-17:00 Session 18A: S.32 In memory of Ioannis A. Papazoglou: new methods and applications on quantified risk assessment for process and energy systems
Chair:
Myrto Konstandinidou (NCSR DEMOKRITOS, Greece)
Location: LG-22
15:40
Myrto Konstantinidou (NCSR DEMOKRITOS Systems Reliability and Industrial Safety Laboratory, Greece)
Olga Aneziris (NCSR DEMOKRITOS Systems Reliability and Industrial Safety Laboratory, Greece)
Effie Marcoulaki (NCSR DEMOKRITOS Systems Reliability and Industrial Safety Laboratory, Greece)
Zoe Nivolianitou (NCSR DEMOKRITOS Systems Reliability and Industrial Safety Laboratory, Greece)
The contribution of Ioannis A. Papazoglou to the development of QRA in technological systems
PRESENTER: Olga Aneziris

ABSTRACT. Dr. Ioannis A. Papazoglou (1949-2021) was active in the international scene of risk assessment and management for 40 years, through various positions in European and international organizations, associations and committees. He is considered one of the pioneers of Quantitative Risk Assessment in both the nuclear and chemical industries. The highlights of his work in the Systems Reliability and Industrial Safety Laboratory (SRISL), the laboratory that he founded, are presented by his colleagues.

16:00
Myrto Konstandinidou (NCSR DEMOKRITOS, Greece)
Konstantinos Kirytopoulos (National Technical University of Athens, Greece)
George Chatzistelios (National Technical University of Athens, Greece)
Emmanouil Dermitzakis (National Technical University of Athens, Greece)
Increasing the quality of road infrastructure through systemic approach in safety management

ABSTRACT. The value of road infrastructure is greatly appreciated in contemporary cultures and lives. Such infrastructure binds communities together by facilitating communication and commerce. Its acceptance and use are now widespread across the world. Road tunnels are critical road infrastructure elements because they increase transportation flow inside metropolitan areas, allow for crossing high terrain, and reduce the environmental impact, travel time, and transportation costs (Kirytopoulos et al., 2020). According to Santos et al. (2017), one of the most significant parameters for evaluating the quality of roadway operators is safety. Road safety improvement is a top goal in the realm of transportation management, and transportation safety is critical for any country's authorities since it significantly influences both economic growth and quality of life. Makarova et al. (2021) stated that management choices aiming at reducing the risk of road accidents should be based on a systematic approach to determining the causes and severity of accidents. The basis of risk assessment principles points exactly in this direction, i.e. reducing the probability and the magnitude of potential road accidents, especially those related to fire (Ntzeremes & Kirytopoulos, 2019). Theories on accident causation abound in the literature, and the evolution of accident causation models over time shows a shift from the representation of sequences of events to the dynamic analysis of the whole system (i.e. systemic approaches). By using system theory, the analyzed system is treated as a whole, not as the sum of its parts, and safety is regarded as an emergent property that arises from the relationships among the parts of the system, i.e. how they interact and fit together. The purpose of this paper is to propose a framework for enhancing the quality of highway services through an increased and systemic approach to safety issues. The present study focuses on a very specific and critical element of the road infrastructure, that is, tunnels.

16:20
Olga Aneziris (NCSR "DEMOKRITOS", Greece)
Risk assessment for truck to ship liquefied natural gas bunkering

ABSTRACT. Over the last few years, there has been an increased demand for liquefied natural gas (LNG) as marine fuel, owing to the requirements for reducing hazardous gas emissions from ships. In order to enhance the use of LNG in the maritime industry, specially designed port facilities have been established worldwide providing some or all of the key bunkering methods, namely tank to ship, truck to ship, and ship to ship (Aneziris et al., 2020). In this paper, risk assessment for truck to ship LNG bunkering is carried out by exploiting the results of the projects "Risk management system for design and operation of installations for LNG refuelling" (TRiTON) financed by the Greek government, and the "SUstainability PERformance of LNG-based maritime mobility – Plus" (SUPER-LNG PLUS) financed by Interreg-Adrion. Truck to ship bunkering constitutes a simple method, when storage tanks cannot be installed in the port areas or when LNG demand is low. In brief, risk assessment is conducted by following the basic steps: a) hazard identification and likelihood assessment, b) consequences evaluation, and c) risk integration. First, the Master Logic Diagram (MLD) technique is used to identify the initial events that create a disturbance in the installation and have the potential to lead to an LNG release during a truck to ship bunkering operation. Corrosion in tanks, pipelines and other parts, and excess external heat owing to a nearby external fire are merely some of the identified initial events. Moreover, safety functions and systems for preventing LNG release, such as emergency shut-down (ESD) and pressure safety valves (PSV), are identified. Event trees are developed to describe the accident sequences, from the initial event occurrence until the LNG release, and define the final damage states. By exploiting available failure rate data, the frequency of each damage state is estimated. In parallel, the consequences of LNG release are estimated on the basis of the dose an individual receives from heat radiation or overpressure. Finally, iso-risk contours are calculated by combining the frequencies of the various accidents with the corresponding consequences. A case study for a Greek port is, herein, presented.
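
The final risk-integration step (combining damage-state frequencies with location-dependent consequences to obtain iso-risk contours) can be sketched in a few lines of numpy. All frequencies, lethality ranges and the distance-decay lethality model below are invented for illustration and are not the paper's figures.

```python
# Minimal iso-risk sketch: individual risk per grid point = sum over damage states
# of (frequency per year) x (probability of death at that location).
import numpy as np

x = np.linspace(-200, 200, 201)          # metres east of the bunkering point
y = np.linspace(-200, 200, 201)          # metres north
xx, yy = np.meshgrid(x, y)
r = np.hypot(xx, yy)                     # distance from the release point

# Hypothetical damage states: (frequency per year, lethality range in metres).
damage_states = [(1e-4, 30.0),   # small leak, jet fire
                 (1e-5, 80.0),   # large leak, flash fire
                 (1e-6, 150.0)]  # catastrophic rupture

risk = np.zeros_like(r)
for freq, lethal_range in damage_states:
    p_death = np.exp(-(r / lethal_range) ** 2)   # crude distance-decay lethality model
    risk += freq * p_death

# Iso-risk contours could then be drawn, e.g. at 1e-6 and 1e-5 per year:
# import matplotlib.pyplot as plt; plt.contour(xx, yy, risk, levels=[1e-6, 1e-5])
print(f"maximum individual risk: {risk.max():.2e} per year")
```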

16:40
Linda Bellamy (White Queen Safety Strategies, Netherlands)
Joy Oh (RETIRED Ministry of Social Affairs and Employment, Netherlands)
Factors for success in developing new methods in collaborative projects seen through the story of Dr Ioannis A. Papazoglou
PRESENTER: Linda Bellamy

ABSTRACT. In the 1980s, a project was started between the UK and Greece concerning the development of a link between the technical (in the form of QRA) and human (in the form of a process safety management audit) components of risk. This background initiated further integration projects, with a marrying up of a number of organisations in the UK, Greece, the Netherlands, Norway and Denmark and involving regulators and inspectorates, universities, consultancies and major hazard companies. There was much diversity in the stakeholders of this group - different functions, disciplines, languages, goals and resources. Because safety is an applied science, a multidisciplinary team was needed for the integration problem in risk assessment. Amongst the needs for members this included involvement of: - The client e.g. for an EU funded project, the client (representative) from the EU. - Scientists and mathematicians for the modelling - Discipline experts - Data specialists - People for turning the models into practical tools - Government policy makers for having the policy vision of the end result - Inspectorates who enforce the law - End users from the areas of application To put together such a team, money is not enough. The bait is that the problem has to be cutting edge to attract people drawn in by the complexity of the problem and committed to finding a solution, knowing that the process will be an interesting one along a path of learning. It helped in this case that the program manager had a reputation for managing interesting and challenging projects. Some people may come to the project with initial doubts and uncertainties but can be convinced by the reputation of the project manager and a room full of excited experts who are all prepared to work together to achieve the defined goal. Central to the scientific developments in the integration projects was risk assessment modelling and tool development carried out by Dr Ioannis Papazoglou and his team at the research institute Demokritos in Greece. Dr Ioannis Papazoglou died in October 2021, and in his memory this paper will cover his work on the projects with which the authors were involved, explaining his central role and how we all linked in to his conversion of a scientific model to a mathematical one and then to a practical tool. That mathematical model became the driver for all the other components of the solution. New thinking and the struggle to understand one another and resolve conflicts is never really discussed as an issue in scientific papers, while it is essential to progress. One key factor to success is that the people in the team have to have a “click” with one another. They like working together. Another is the reliability in delivering a product – knowing you can depend on one another, including the project manager who will be helping with team building and steering on course. In the context described, the following will be discussed concerning how the team handled: - The idea of a holistic risk quantification methodology - Developing the model-driven components and collecting the data to fill the model - Links between the risk quantification model and the other components such as safety management. 
Specific leading edge research will be briefly highlighted: - The EU I-Risk project for integrating safety and environment, technical and management modelling - The Dutch WORM and ORCA project for occupational quantified risk assessment so that internal and external safety can operate on the same playing field - Bowtie-builder and Storybuilder research tools - Further development of research tools for major hazards - The Dutch OHIA model, an integration of safety and health In these projects, some of the problems we had to resolve were: - How would we define the centre of the bow-tie in occupational QRA? - How to develop a dynamic QRA model that addressed changes in the quality of risk management over time. - What information is needed for calculating exposure to occupational hazards, the missing data in the risk calculation?

The conclusion is that success comes about, not only from having expertise in scientific and practical areas, but also from the creative context which fosters the best performance. Dr Ioannis Papazoglou showed us what he could do: his central role in the building of the scientific model, the conversion to a mathematical model, defining the data requirements and building the engine to do the calculations. He made the model operational and applicable in real life.

15:40-17:00 Session 18B: Supply Chains Management
Chair:
Amr Mahfouz (TU Dublin, Ireland)
Location: CQ-008
15:40
Jake Langton (PHM Technology, Australia)
Evan Apostolou (PHM Technology, Australia)
Paddy Conroy (PHM Technology, Australia)
Managing Asset and Supply Chain Risk Using A Digital Risk Twin
PRESENTER: Paddy Conroy

ABSTRACT. The effect of COVID-19 on supply chains, labour resources and sparing allocations has exposed fragility in many industry business approaches and highlighted the need to understand the risks associated with just-in-time supply models.

Effectively managing operational risk is not achieved through superior forecasting but by understanding the vulnerabilities and resilience of a system (Madni and Jackson, 2009, Zitzmann, 2014). Vulnerabilities may exist due to asset failure, performance loss, labour shortages, supply chain disruptions or external events. This paper proposes a model-based solution to analysing operational vulnerabilities by distinguishing between two distinct but related problems. The first problem relates to the ability of a fleet of physical assets to provide function. The second relates to the capacity to support asset function through the supply of raw materials, spare parts, and labour from external sources.

Decades of risk analysis and safety certification of complex engineering assets have led to the development of procedures designed to understand and minimise risk. This paper will demonstrate the ability to identify and mitigate supply chain risk using a Digital Risk Twin (DRT) and provide a description of key steps to model a supply chain and asset vulnerabilities using Fuzzy Cognitive Maps (FCM) (Feyzioglu et al., 2007), failure diagrams and criticality assessments (Rudov-Clark and Stecki, 2009). It will be demonstrated that the DRT can be used to generate a set of model-based (risk/reliability) analysis outputs that will ensure the determination and understanding of vulnerabilities and risks in a supply chain. These analysis outputs include model-based Failure Mode and Effects Analysis (FMEA), Fault Trees, and Common Mode Analysis.

The DRT enables risk to be modelled from the physical component to the supply chain level, mapping the flow of material and parts needed to sustain a company's operation (production and supply chain level). This uniform approach, based on engineering principles, enables a common language of risk analysis to be developed within an organisation and linked to a centralised, traceable risk model.

References: Feyzioglu, O., Buyukozkan, G. and Ersoy, M. S. (2007). Supply chain risk analysis with fuzzy cognitive maps. IEEE International Conference, 2007. Madni, A. and Jackson, S. (2009). Towards a Conceptual Framework for Resilience Engineering. IEEE Systems Journal, 3, 181-191. Rudov-Clark, S. and Stecki, J. (2009). The language of FMEA: on the effective use and reuse of FMEA data. Australian International Aerospace Congress. Zitzmann, I. (2014). How to Cope with Uncertainty in Supply Chains? - Conceptual Framework for Agility, Robustness, Resilience, Continuity and Anti-Fragility in Supply Chains. Hamburg International Conference of Logistics (HICL), 2014.
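
The Fuzzy Cognitive Map ingredient mentioned above can be illustrated with the minimal sketch below: concept activations are repeatedly propagated over a weighted influence matrix and squashed through a sigmoid until the map settles. The concepts and weights are invented for illustration and are not the authors' DRT model.

```python
# Minimal Fuzzy Cognitive Map iteration sketch (illustrative weights only).
import numpy as np

concepts = ["supplier disruption", "spare part shortage", "asset downtime", "lost production"]
W = np.array([  # W[i, j]: influence of concept i on concept j (hypothetical values)
    [0.0, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.0, 0.0],
])

def simulate(initial, steps=20, lam=2.0):
    a = np.asarray(initial, dtype=float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))   # sigmoid squashing with self-memory
    return a

state = simulate([1.0, 0.0, 0.0, 0.0])   # activate "supplier disruption"
for name, value in zip(concepts, state):
    print(f"{name:22s} {value:.2f}")
```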

16:00
Sina Rebekka Moen (PricewaterhouseCoopers / University of Stavanger, Norway)
Sigrid Haug Selnes (University of Oslo, Norway)
Janne Merete Hagen (NVE, Norway)
Ove Njå (University of Stavanger, Norway)
Digital Security in the Norwegian Power Systems’ Supply Chains

ABSTRACT. This paper presents a case study that addresses supply chain models and digital vulnerabilities in the Norwegian electric power system. The study is based on assessments of two different hypothetical models. The two models show the difference between a centralized and decentralized supply chain with associated vulnerabilities and attack techniques. The study combines a literature review and follow up interviews with key personnel in the power supply sector. The findings point to the importance of having an overview of digital vulnerabilities in the supply chains. Vulnerabilities can be found in every part of the chain, which makes it important to view the supply chain as a whole and not just as individual components or services. There are varying degrees of concern among relevant stakeholders related to the complexity of the supply chains and the uncertainty that accompanies the lack of overview. To achieve digital security in supply chains, it is important to have a conscious relationship to digital security in the procurement processes.

16:20
Lei Zhang (Nanjing University of Science and Technology, China)
Jian Zhou (Nanjing University of Science and Technology, China)
Yizhong Ma (Nanjing University of Science and Technology, China)
Lijuan Shen (ETH Zürich, Future Resilient Systems, Singapore-ETH Centre, Singapore)
Resilience of and recovery strategies for cyber-physical supply chain networks
PRESENTER: Jian Zhou

ABSTRACT. Today's supply chains are becoming cyber-physical supply chain networks (CPSCN). Although the interdependence between different enterprises/nodes improves transportation efficiency, it also aggravates the vulnerability of the network, which requires a resilient CPSCN to mitigate the loss of network performance caused by different disruptions. Most research on the resilience of supply chain networks focuses on the defense of network nodes, while little focuses on resilience-based recovery optimization. In this paper, two different sequential recovery schemes (i.e., result-based recovery and resource-based recovery) are proposed based on the limitation of resources. Then, four heuristic algorithms are proposed to determine the order of optimal recovery. Using the Barabási-Albert (BA)-BA synthetic network as a case study, the resilience of the network under the two sequential recovery schemes is analyzed. Finally, simulation results demonstrate the effectiveness of the proposed sequential recovery schemes for the resilience of the network.
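
The idea of comparing recovery orders by their effect on network resilience can be sketched as follows. The example uses a single Barabási-Albert layer and a largest-connected-component performance measure purely for illustration; it is not the authors' coupled BA-BA model or their recovery heuristics.

```python
# Minimal recovery-order comparison sketch on a synthetic BA network.
import random
import networkx as nx

random.seed(1)
g = nx.barabasi_albert_graph(n=100, m=2, seed=1)
disrupted = set(random.sample(list(g.nodes), 30))      # nodes knocked out by a disruption

def performance(active_nodes):
    """Network performance = relative size of the largest connected component."""
    sub = g.subgraph(active_nodes)
    return max((len(c) for c in nx.connected_components(sub)), default=0) / g.number_of_nodes()

def resilience(recovery_order):
    """Resilience = average performance over the recovery horizon (one node per step)."""
    active = set(g.nodes) - disrupted
    curve = [performance(active)]
    for node in recovery_order:
        active.add(node)
        curve.append(performance(active))
    return sum(curve) / len(curve)

random_order = list(disrupted)
degree_order = sorted(disrupted, key=g.degree, reverse=True)   # high-degree nodes first
print(f"random recovery resilience:       {resilience(random_order):.3f}")
print(f"degree-based recovery resilience: {resilience(degree_order):.3f}")
```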

15:40-17:00 Session 18C: S.13: Modeling Complexity in predicting Reliability and Resilience of Systems, and Systems of Systems
Chair:
Pierre Dersin (ALSTOM Transport, France)
Location: CQ-007
15:40
Ravdeep Kour (Luleå University of Technology, Sweden)
Amit Patwardhan (Luleå University of Technology, Sweden)
Ramin Karim (Luleå University of Technology, Sweden)
Pierre Dersin (Luleå University of Technology, Sweden)
Jaya Kumari (Luleå University of Technology, Sweden)
A cybersecurity approach for improved system resilience
PRESENTER: Ravdeep Kour

ABSTRACT. The ongoing digitalisation of industrial systems-of-systems is bringing new challenges in managing, monitoring, and predicting the overall reliability performance. The overall reliability of a cyber-physical system, such as a railway, is highly influenced by the level of resilience of its inherent digital items.

The objective of this paper is to propose a systematic approach, based on an enhanced Cyber Kill Chain (CKC) model, to improve the overall system resilience through monitoring and prediction. The proposed cybersecurity approach can be used to assess the future cyberattack penetration probabilities based on the present security controls.

With the advancement in cybersecurity defensive controls, cyberattacks have continued to evolve through the exploitation of vulnerabilities within cyber-physical systems. Assuming the possibility of a cyberattack, it is necessary to select appropriate security controls so that the attack can be predicted, prevented, or detected before any catastrophic consequences occur, in order to retain the resilience of the system. Insufficient cybersecurity in the context of cyber-physical systems, like railways, might have a fatal effect on the whole system's availability performance, including safety.

However, to improve the overall resilience of a cyber-physical system there is a need for a systematic approach to continuously monitor, predict, and manage the health of the system's digital items with respect to security.

Furthermore, the paper will provide a case-study description in the railway sector, which has been used for the verification of the proposed approach.
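
One simple way to think about penetration probabilities along a Cyber Kill Chain is sketched below: each stage has a probability that the currently deployed control stops the attack, and the cumulative penetration probability is the product of the stage "survival" probabilities. The numbers and the independence assumption between stages are illustrative only, not the authors' model.

```python
# Minimal Cyber Kill Chain penetration sketch (illustrative stop probabilities,
# stages assumed independent).
stages = [
    # (CKC stage, probability that the present control stops the attack here)
    ("reconnaissance",        0.10),
    ("weaponisation",         0.05),
    ("delivery",              0.60),
    ("exploitation",          0.50),
    ("installation",          0.40),
    ("command and control",   0.30),
    ("actions on objectives", 0.20),
]

penetration = 1.0
for stage, p_stop in stages:
    penetration *= (1.0 - p_stop)          # attack survives this stage
    print(f"after {stage:22s} cumulative penetration probability = {penetration:.3f}")
```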

16:00
Humberto Carneiro De Sousa (EDF R&D, France)
Maxime Gey (EDF R&D, France)
Jules Druet (EDF R&D, France)
Anthony Legendre (EDF R&D, France)
New methods at EDF R&D to do RAMS analysis on wind farm networks

ABSTRACT. Wind Energy deployment is increasing in many parts of the world, with a 53% year-on-year increase in 2020. Ensuring both the reliability and the availability of new generation sites is of considerable importance for production facilities. The article sets out a prototyped version of the "K6" library developed for the KB3 platform [1]. It includes renewable energy assets and makes it possible to carry out a dependability study on these new systems. The new library includes additional components, in order to incorporate studies of wind farm networks. The changes consider the capacity of some equipment, including electricity lines and power transformers, without exceeding the maximum power they can deliver. It is also now possible to quantify the energy that could not be delivered because of a failure in some part of the network, i.e. the Non-Distributed Energy. This can be calculated considering the wind turbine capacity factor, and therefore the wind speed. EDF R&D produces a quantitative analysis of the risks and gives recommendations on how to manage them. The main results are: the quantity of the Non-Distributed Energy in a period, the assessment of the likelihood of feared events, the contribution of failure sequences for a feared event, enabling testing of critical parts of the network, as well as the reliability and availability benefits when we compare a standard network with an alternative solution. To illustrate the applications of this version of the K6 library, we will present a case study of an offshore wind farm electrical system. We will set out the differences between a standard and a looped architectural model. Using the new K6 library, it is possible to take the wind speed into consideration, which allows for the dynamic behavior of some components, including the power limit of the transformer and the capacity factor of the wind turbines. After modeling and parameterizing each wind farm electrical system, its behavior will be checked using a step-by-step simulator in KB3. Then, a Monte Carlo simulation will be performed for each capacity factor point of the wind turbines using YAMS [2] in order to quantify the Non-Distributed Energy, which is a critical indicator for production facilities. Finally, the results obtained by propagating in the Markov graph for the KB3 platform will be used by the FigSeq tool [2] to quantify the reliability, the availability, and the failure sequences for a determined feared event. The study will allow us to compare the Non-Distributed Energy of both architectures and the major equipment which contributes to reduced wind farm transmission capacity. The new features of the K6 library are a valuable new resource for EDF R&D, allowing us to respond appropriately to the demands of dependability studies of the growing renewable energy sector and the new energy technologies such as offshore wind farms. References 1. J. Sanchez-Torres and T. Chaudonneret, « Reliability and Availability of an industrial wide-area network », 21e Congrès de maitrise des risques et de sûreté de fonctionnement, Reims, 2018. 2. M. Bouissou, Y. Dutuit and S. Maillard, « Chapter 7 Reliability analysis of dynamic phased mission system: comparison of two approaches », Modern Statistical And Mathematical Methods In Reliability, vol. 10, pp. 87-104, 2005.

16:20
Pierre Dersin (Eumetry and Luleå University of Technology, Sweden)
Kenza Saiah (Alstom, France)
Alban Péronne (Alstom, France)
Andrea Staino (Alstom, France)
David Moszkowicz (Alstom, Spain)
Achieving and predicting Resilience and Capacity in Urban Rail and Multimodal Transportation Systems
PRESENTER: Pierre Dersin

ABSTRACT. Background

A key performance characteristic of urban transportation systems is resilience. Beyond operational availability, this characteristic includes the ability to rebound when the system is subjected to external disruptions, which can be technical failures, passenger actions, or even cyberattacks. For instance, one indicator of resilience is the average time to return to nominal operation, i.e., operation of the network according to theoretical timetable or headway (time between two consecutive trains) after a disturbance; another is the total resulting fleet-wide delay. From an economic point of view, system capacity, i.e., throughput, the maximal number of vehicles which can flow through the system per unit of time, is the relevant yardstick. There is an obvious trade-off between resilience and throughput: the closer the system is operated to its capacity (to achieve a higher throughput), the less resilient it is, since, with shorter buffer times, small local perturbations will be more likely to propagate and to cause large delays. This leads to the concept of congested infrastructure, i.e., a network which is operated close to its theoretical capacity will be extremely sensitive to local disruptions. In assessing or predicting the resilience of an urban rail transportation system, both operation and maintenance policy are relevant: maintenance impacts the availability of the fleet and the infrastructure, while operation policy impacts the way in which the system reacts to external disturbances. If predictive maintenance is applied, it should be dynamically adjusted to the evolving health state of the assets. And, since the transportation system exists to fulfil passenger demand, it is ideal also to adapt operations and maintenance policy dynamically to the fluctuating demand.

Methods

When dealing with multimodal transportation, such as rail and road, the object of the study becomes a system of systems: a set of heterogeneous systems that are independently managed, albeit having to coordinate somewhat, as they have the same end user: the passenger. Passenger demand is an uncontrollable external input to which the system has to adapt by adjusting supply dynamically.

Alstom has resorted to Machine Learning methods, with its Mastria® solution, in order to predict demand fluctuations in real time, using various sources of data such as ticketing, counting devices, vehicles' weight, wifi, etc. Thus, passenger flow data can be updated very frequently in an origin-destination matrix.

On the other hand, resilience prediction methods used essentially consist of Monte-Carlo simulations [1] of various external disturbance events, based on their expected probabilities, from which appropriate KPIs can be derived (as expectations or other distribution parameters).

Results and Perspectives

Predicting transportation system disruptions allows operations to be adapted for optimal traffic management. The predicted data are used as inputs to the model to optimize supply, i.e., the traffic management of the network, in response to the anticipated demand. Traffic management includes the injection and withdrawal of vehicles, adaptation of vehicle speed and station dwell times, and routing. In case of incidents, alternative solutions can be proposed, especially in a multimodal context. Several traffic management policies can be compared, and their impact on resilience assessed. Alstom's resilience prediction tool makes it possible both to simulate typical disruption scenarios over a long-term horizon and to predict key performance indicators under given traffic management policy assumptions. A traffic management policy can thus be identified to optimize resilience. Future research can address aggregation, i.e., beyond the detailed description of topology and individual assets, trying to identify global system characteristics. For instance, some studies [2] have evidenced the link between connectivity and resilience: the functional dependence of the latter on the former tends to be characterized by an inverted U curve; there is an intermediate degree of connectivity that achieves maximum resilience. As an example, in multimodal transportation, if the rail and road modes are totally disconnected, a passenger demand rise on the rail network will have no impact on the bus network (no connection), but a rail capacity shortage cannot be compensated by bus capacity (i.e. no resilience beyond that of the rail network itself).

References [1] E. Zio, "The Monte-Carlo Simulation Method for System Reliability and Risk Analysis", Springer, 2013. [2] Thomas Homer-Dixon, "Complexity Science", Oxford Leadership Journal, Vol. 2, Issue 1, Jan. 2011.
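
The Monte-Carlo resilience KPIs mentioned above (e.g. mean time to return to nominal operation, total fleet-wide delay) can be estimated with a sketch like the one below. All rates, recovery-time distributions and the number of affected trains per disturbance are invented for illustration; this is not Alstom's tool.

```python
# Minimal Monte-Carlo sketch of resilience KPIs for one operating day.
import random

random.seed(42)
DAY_MIN = 18 * 60                      # operating minutes per day
DISTURBANCE_RATE = 3 / DAY_MIN         # expected disturbances per minute (~3 per day)

def simulate_day():
    t, total_delay, recovery_times = 0.0, 0.0, []
    while True:
        t += random.expovariate(DISTURBANCE_RATE)     # time to next disturbance
        if t > DAY_MIN:
            break
        recovery = random.lognormvariate(2.0, 0.6)    # minutes to return to nominal headway
        affected_trains = random.randint(1, 8)
        recovery_times.append(recovery)
        total_delay += recovery * affected_trains
    mean_recovery = sum(recovery_times) / len(recovery_times) if recovery_times else 0.0
    return mean_recovery, total_delay

runs = [simulate_day() for _ in range(10_000)]
print(f"mean time to return to nominal: {sum(r for r, _ in runs) / len(runs):.1f} min")
print(f"mean total fleet-wide delay:    {sum(d for _, d in runs) / len(runs):.0f} train-min")
```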

16:40
Gianluca Filippi (University of Strathclyde, UK)
Edoardo Patelli (University of Strathclyde, UK)
Massimiliano Vasile (University of Strathclyde, UK)
Marco Fossati (University of Strathclyde, UK)
Multi-layer Resilience Optimisation for Next Generation Drone Logistic Networks
PRESENTER: Gianluca Filippi

ABSTRACT. The paper will present a novel approach to the design optimisation of a resilient Drone Logistic Network (DLN) for the delivery of medical equipment. An integration methodology between Digital Twin (DT) models and optimisation processes is proposed, with the goal of optimising both the network topology and the delivery planning.

The DLN is a complex system composed of a high number of different classes of drones and ground infrastructures, whose interconnections give rise to the whole network behaviour.

The paper will focus on the definition of the different types of uncertainty affecting the DT and on the uncertainty quantification metrics. In particular, network resilience will be analysed, understood as the ability of the system to absorb the shock due to unexpected events and to recover from it by evolving and adapting. The paper will describe the relationship between the concepts of a system's reliability, robustness and resilience. In this direction, the different functional and relational types of sub-system and component connections will be modelled by a multi-layer graph. Each node of this graph will be populated with a local reliability and robustness model, and the global effect of uncertainty on the network's resilience will finally be analysed and used to drive the optimisation process.

The paper will describe the overall logic of the multi-layer resilience model and its quantification as an emergent behaviour of the complex system. It will then illustrate its applicability to the case of a Drone Network delivering medical items in Scotland.

15:40-17:00 Session 18D: S.29: Natural Language Processing, Knowledge Graphs and Ontologies for RAMS I
Chair:
Marcio Das Chagas Moura (Federal University of Pernambuco, Brazil)
Location: CQ-106
15:40
Dario Valcamonico (Politecnico di Milano, Italy)
Piero Baraldi (Politecnico di Milano, Italy)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Anna Crivellari (Eni NR Division, Italy)
Luca Decarli (Eni NR Division, Italy)
Laura La Rosa (Eni NR Division, Italy)
A Taxonomy for Modelling Reports of Process Safety Events in the Oil and Gas Industry

ABSTRACT. The Process Safety Management System (PSMS) of an industrial asset relies on multiple and independent barriers for preventing the occurrence of major accidents and/or mitigating their consequences on people, environment, asset and company reputation. It is, then, fundamental to assess the performance of the barriers with respect to the occurrence of Process Safety Events (PSEs), i.e. unplanned or uncontrolled events during which a Loss Of Primary Containment (LOPC) of any material, including non-toxic and non-flammable material, occurs. An essential aspect of PSMS is learning from incidents and taking corrective actions to prevent their recurrence. For this, a procedure for timely and consistent reporting and investigating of PSEs is generally implemented. After the occurrence of a PSE, a report containing free-text and multiple-choice fields is filed to describe the PSE, its causes and consequences, and to provide a quantification of its level of severity with reference to predefined Tier levels, as per API RP 754 guidelines. This work investigates the possibility of text-mining and structuring the knowledge on the performance of the PSMS from an electronic repository of PSE reports. The methodology developed falls within the framework of Natural Language Processing (NLP), combining Term Frequency Inverse Document Frequency (TFIDF) and Normalized Pointwise Mutual Information (NPMI) for the automatic extraction of keywords from the PSE reports. Then, a taxonomy is built to organize the vocabulary in a top-down structure of homogeneous categories, such that semantic and functional relations between and within them can be defined. Based on these relations, a Bayesian Network (BN) is developed for modeling the PSE consequences. The proposed methodology is applied to a repository of real reports concerning the PSEs of hydrocarbon facilities of an Oil and Gas (O&G) company.
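
The two keyword statistics named above can be illustrated with the toy sketch below: TF-IDF scores computed with scikit-learn, and NPMI for a candidate word pair estimated from document co-occurrence counts. The three-report corpus is invented, and the word-level co-occurrence counting is a deliberately crude stand-in for the authors' keyword extraction pipeline.

```python
# Minimal TF-IDF + NPMI sketch on a toy corpus of incident-report snippets.
import math
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "loss of primary containment at pump seal during start up",
    "gas release from flange gasket failure on compressor",
    "pump seal leak detected by gas detector during maintenance",
]

# 1) TF-IDF: terms frequent in one report but rare across the repository.
vec = TfidfVectorizer()
tfidf = vec.fit_transform(reports)
print(dict(zip(vec.get_feature_names_out(), tfidf.toarray()[0].round(2))))

# 2) NPMI for a candidate term pair, from document-level co-occurrence counts.
def npmi(word_a, word_b, docs):
    n = len(docs)
    pa = sum(word_a in d for d in docs) / n
    pb = sum(word_b in d for d in docs) / n
    pab = sum(word_a in d and word_b in d for d in docs) / n
    if pab == 0:
        return -1.0
    return math.log(pab / (pa * pb)) / -math.log(pab)

print("NPMI('pump', 'seal') =", round(npmi("pump", "seal", reports), 2))
```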

16:00
Conal Brown (University of Liverpool, UK)
Effect of uncertainty on Technical Language Processing of inspection data

ABSTRACT. This contribution analyses a simple problem facing Technical Language Processing (TLP) techniques when applied to real, imperfect data that contains uncertainty. Data about equipment age are analysed from a database used to manage inspections of static equipment located on industrial sites across the UK. The dataset comprises more than 72,000 equipment 'header' records. One column in this table is the equipment construction date, an important parameter for predictive maintenance analysis because many deterioration mechanisms are correlated with time in service. The construction date field is free text rather than a date data type. Although this potentially allows more nuanced data collection from users, the column contains a lot of 'bad data'; many thousand entries are null, imprecise or ambiguous. The analysis considers how algorithmic interpretation of these 'bad' dates within a TLP workflow influences the age distribution of the equipment, which is an important parameter in reliability analyses.
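
One common way to triage this kind of free-text date column is sketched below: parse what is parseable, salvage a bare year where one appears, and keep the remaining uncertainty explicit instead of silently coercing bad entries to a default date. The example strings are invented, and the format="mixed" argument needs pandas 2.0 or later (older versions can simply omit it). This is an illustration of the general problem, not the paper's workflow.

```python
# Minimal sketch of triaging free-text construction dates.
import pandas as pd

raw = pd.Series(["1987", "03/1992", "circa 1975", "not known", "", "2001-06-15", "pre-war"])

parsed = pd.to_datetime(raw, errors="coerce", format="mixed")   # NaT where parsing fails
fallback_year = raw.str.extract(r"(\d{4})")[0]                  # salvage a bare year if present
year = parsed.dt.year.fillna(pd.to_numeric(fallback_year))

summary = pd.DataFrame({"raw": raw, "year": year, "unresolved": year.isna()})
print(summary)
print(f"unresolved records: {summary['unresolved'].mean():.0%}")
```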

16:20
Ralf Mock (Step Commerce AG, Switzerland)
Computer-assisted Text Analytics on Resilience by Latent Dirichlet Allocation Models

ABSTRACT. Resilience has been a topic at ESREL conferences and associated papers since 2007. Comparable interest can also be found in other conferences, as documented e.g. in the literature database ScienceDirect. This means that there is a considerable amount of comparatively uniform texts that can be analyzed by Natural Language Processing (NLP). The proposed paper shows what conclusions can be drawn from this for the use of the term resilience. In addition, comparisons can be made between the content of ESREL papers and the broader publication framework in ScienceDirect and, if it exists, indicate a bias.

The text analysis is based on the papers with titles and abstracts on resilience from ESREL conferences and the ScienceDirect database from 2007 to 2021 (261 and 3097 papers, resp.). From the ESREL conferences, all papers from the resilience sessions were selected, as well as papers from other sessions where resilience is present in the title. The dataset from ScienceDirect includes all papers with title and abstract with the search term resilience in the search field "Title, abstract or author-specified keywords" and is further restricted to the article types Review and Research from the areas Engineering, Energy and Computer Sciences. This corresponds roughly to the criteria of an ESREL conference. The further text processing and analysis is carried out in several steps: the text is processed and cleaned up using stopwords, tokenization and lemmatization, for example. This helps to provide a simple text statistic for a first overview of, e.g., word frequencies and combinations. The actual LDA (Latent Dirichlet Allocation) text models are built with the Python packages Scikit-learn and Gensim. The latter additionally allows better estimates of the quality of the model. Since the texts are processed in different ways, this also serves to improve the interpretability of topics, which are the results of LDA text analysis. Topics are the keyword lists generated by the LDA models that best summarize the content of all the documents analyzed (or that reflect their content with a certain degree of probability).

Both LDA models are widely used for conducting text analytics. What is new in the context of the proposed paper is to apply these models in relation to resilience. The application of the term resilience is questioned in order to analyze its relations to the core issues of the ESREL conference in more detail, e.g., in relation to risk, safety and reliability. The comparison with another literature data source shows where differences exist, if any.

The paper includes (and visualizes) text statistics and the themes identified by LDA models and their interpretation. Preliminary results for ESREL papers show that resilience should be primarily understood as a system management concept, mainly for critical infrastructures. There is a stronger link to risk, a weaker one to safety and none to reliability. The ScienceDirect papers strengthen this impression. Energy and community infrastructures emerge as specified critical infrastructures. The presented results are accompanied by measures to assess the quality of the model, e.g. probabilities, perplexity, average topic coherence and coherence score. The advantages and disadvantages of the approach and the strengths and weaknesses of interpretations are compiled.
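
The scikit-learn part of such a workflow can be sketched in a few lines: a bag-of-words matrix, an LDA model, and the top keywords per topic. The four toy abstracts below are invented; the sketch only illustrates the general technique, not the paper's corpus or model settings.

```python
# Minimal LDA topic-modelling sketch with scikit-learn on a toy corpus.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "resilience of critical infrastructure against natural hazards and recovery",
    "risk assessment and safety barriers in the process industry",
    "power grid resilience metrics for extreme weather and recovery strategies",
    "human reliability analysis and safety culture in high risk industries",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(abstracts)            # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]   # five strongest keywords
    print(f"topic {k}: {', '.join(top)}")
```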

16:40
Elena Zaitseva (University of Zilina, Slovakia)
Vitaly Levashenko (University of Zilina, Slovakia)
Jan Rabcan (University of Zilina, Slovakia)
Miroslav Kvassay (University of Zilina, Slovakia)
Sergey Stankevich (Scientific Centre for Aerospace Research of the Earth (CASRE), National Academy of Sciences of Ukraine, Ukraine)
A New Data Mining Based Method for Analysis of different data types in Prognostics and Health Management
PRESENTER: Elena Zaitseva

ABSTRACT. Prognostics and Health Management (PHM) is an integrated technology that allows analyzing and forecasting failures and avoiding accidents to improve system operation, safety and reliability, and maintenance. The use of this technology involves the simultaneous processing of various knowledge, information, and data of various types that differ in nature. The different types of data in PHM need different methods for the analysis and evaluation. The combination of results obtained by various methods and their joint use in prognostic tasks requires additional investigations and developments. Therefore, the development of a universal method that allows processing different types of data and presenting them in a single format for prognostic tasks is a relevant problem. A new method for analyzing different types of data based on classification is proposed in this paper. The proposed method allows the processing of such data types as signals, expert data, numerical sequences, linguistic data, and categorical data.

15:40-17:00 Session 18E: S.09: Novel strategies for the safety assessment of dynamic and dependent systems II
Chair:
Silvia Tolo (University of Nottingham, UK)
Location: LG-20
15:40
John Andrews (University of Nottingham, UK)
Silvia Tolo (University of Nottingham, UK)
Dynamic Tree Theory: A Fault Tree Analysis Framework
PRESENTER: John Andrews

ABSTRACT. Fault Tree Analysis has its origins back in the 1960s and its development is attributed to Watson of Bell Laboratories when analysing the causes of an inadvertent launch of the Minuteman Intercontinental Ballistic Missile. The time-dependent mathematical framework, known as Kinetic Tree Theory (KTT), was added at the end of the decade by Vesely. In this framework, the analysis of the fault tree is performed in two stages. The first delivers the qualitative analysis producing minimal cut sets; the second phase, quantitative, then produces the system failure mode probability or frequency. Performing the calculations in stage two requires assumptions about the operation and design of the system which will result in the independence of all basic events. In most commercial packages there are also very limited models for the component probabilities, which assume constant failure and repair rates, and maintenance strategies limited to dealing with either non-repairable components, or repairable components whose failures are revealed and unrevealed. Since the 1970s advances have been made in the technologies employed in the systems, along with their operation and maintenance. This limits the ability of these traditional techniques to represent modern system performance. Since the 1970s advances have also been made in Fault Tree Analysis capabilities, one of the most significant being the exploitation of Binary Decision Diagrams. This gives a framework in which more complex models can be integrated to remove the assumptions of component independence, constant component failure and repair rates, and simplistic maintenance strategies. Petri Nets and Markov processes have been used to model the complexities and the results integrated into the fault tree analysis process exploiting the properties of Binary Decision Diagrams. This framework of Dynamic Tree Theory is presented.
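
The classical stage-two quantification described above, under the basic-event independence assumption the abstract mentions, can be illustrated with the minimal sketch below. The cut sets and probabilities are invented numbers, not a model from the paper.

```python
# Minimal sketch: top-event probability estimates from minimal cut sets, assuming
# independent basic events with the (invented) probabilities below.
from math import prod

basic_event_prob = {"pump_fails": 1e-3, "valve_stuck": 5e-4,
                    "sensor_fails": 2e-3, "operator_error": 1e-2}
minimal_cut_sets = [{"pump_fails", "valve_stuck"},
                    {"sensor_fails", "operator_error"}]

cut_set_probs = [prod(basic_event_prob[e] for e in cs) for cs in minimal_cut_sets]

rare_event = sum(cut_set_probs)                          # rare-event approximation
upper_bound = 1 - prod(1 - q for q in cut_set_probs)     # minimal-cut-set upper bound
print(f"rare-event approximation: {rare_event:.2e}")
print(f"min-cut-set upper bound:  {upper_bound:.2e}")
```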

16:00
Silvia Tolo (University of Nottingham, UK)
John Andrews (University of Nottingham, UK)
Ian Thatcher (Rolls-Royce, UK)
David Stamp (Rolls-Royce, UK)
Predicting the Risk of Jet Engine Failure using Petri Nets
PRESENTER: Silvia Tolo

ABSTRACT. Traditional risk analysis techniques such as Fault Trees and Event Trees fail to model complex aspects of system behaviour such as component dependencies, degradation, or common cause failures, limiting their capability of representing modern engineering systems. When modelling the failure frequency of a jet engine, the deficiencies in the traditional methodologies limit the ability to adequately represent the engineering behaviour. Such systems have dense networks of dependencies and hence a high degree of complexity. Moreover, due to their safety-critical nature as well as intensive operation conditions, maintenance is a crucial aspect of these systems' lifecycle, introducing further sources of dependency or failure. Failing to take these aspects into account reduces the confidence which can be placed in the predictions of the system's behaviour and processes, requiring pessimistic assumptions to be implemented. The current study provides an alternative solution for the safety analysis of a jet engine based on the use of Petri Nets. The model implemented covers the interval between major engine overhauls, taking into account both in-flight operation and on-wing interventions. Component degradation processes as well as dependencies and common cause failures are included in the analysis, in order to offer a realistic representation of the system behaviour. The numerical analysis of the proposed model is investigated and discussed, together with the capabilities of the adopted technique and its comparison to other available methodologies.

16:20
Adolfo Crespo Márquez (University of Seville, Spain)
A Road Map for Digital Transformation in Maintenance.

ABSTRACT. The study of the digital transformation of maintenance in the context of industry and infrastructure is highly topical and interesting. According to reports from various institutions (notably the Industrial Internet Consortium), one of the business areas where this transformation is expected to be most significant is maintenance. It is therefore important to analyze why maintenance can benefit from this transformation and how to do it: What are the new technologies and tools with the greatest potential impact on maintenance and why? How can this transformation process be mandated and realized? How will emerging asset management platforms and new intelligent maintenance Apps impact companies? Etc. As soon as you start to delve into the subject, it is easy to realize that digital transformation is both an organizational challenge and a major technical challenge, and that a strategic planning process is necessary to address it. Appropriate lines of action need to be drawn so that the organization can define and control the data model for the management of its assets, and can make appropriate use of this data to make its management processes more efficient. To increase asset performance using 4.0 technologies, it is necessary to face new technical problems and challenges: the non-ergodicity of data processes in many assets, the selection of the dimension of the number of data needed to explain their performance, the way to consider and interpret risks, the way to use such risk assessment for dynamic maintenance scheduling, etc.

16:40
Raffael Wallner (Norwegian University of Science and Technology, Norway)
Mary Ann Lundteigen (Norwegian University of Science and Technology, Norway)
Approaches to utilize Digital Twins in Safety Demonstration and Verification of Automated and Autonomously Controlled Systems
PRESENTER: Raffael Wallner

ABSTRACT. Travelling has become much safer over the last decades. Nevertheless, there are still too many fatalities due to accidents in, e.g., aviation, road and marine traffic. According to statistics, the human factor poses the most significant risk. Hence, increasing safety and reducing the risk to human life and the environment is one of the main incentives for introducing autonomous systems to several means of transport. Operational safety therefore needs to be assured when approving autonomously controlled ships, vehicles or planes for general use. Safety assurance, however, proves to be a challenging task for autonomous systems: conventional methods to prove safety are often not applicable, and it is not possible to count on human operators to deal with unknown situations during operation. This work describes some of the issues that arise when dealing with safety assurance of autonomous systems. Safety assurance and verification by demonstrating safe operation in simulations is discussed as an approach to these issues. Furthermore, possibilities to utilize digital twins in such safety demonstrations are investigated.

15:40-17:00 Session 18F: H-workload: Human mental workload in safety critical applications
Chair:
Ivan Gligorijevic (MbrainTrain, Serbia)
Location: LG-21
15:40
Shuo Yang (Politecnico di Torino, Italy)
Micaela Demichela (Politecnico di Torino, Italy)
Jie Geng (Zhejiang University of Finance and Economics, China)
Ling Wang (Zhejiang University of Finance and Economics, China)
Zhangwei Ling (Zhejiang Academy of Special Equipment Science, China)
Better Understanding of the Human Roles Changing in Process Industry Digitization Using a Dynamic Model methodology
PRESENTER: Shuo Yang

ABSTRACT. The rapid evolution of internet-connected systems and digital technologies is gradually changing human roles in the intelligent production loop of the process industry. It is essential to understand these changes in order to model the interactions and analyse the holistic risk of the process industry smart factory as a socio-technical system. Some basic theoretical work on these changes has been done: Nunes et al. (2015) proposed a taxonomy of the human roles in Human-In-The-Loop Cyber-Physical Production Systems, and Cimini et al. (2020) analysed how Industry 4.0 technologies augment relevant human capabilities. This research aims to employ a more quantitative analysis methodology. Based on statistical and forecast reports on industrial automation and Industry 4.0, a dynamic model of the ten-year digitalisation of a process industry site is developed to model how human roles change within this process. The result underlines the importance of paying more attention to the cognitive functions of human roles, especially in the design, maintenance, and emergency phases.

16:00
Kelly Steiner (ONERA, France)
Bruno Berberian (ONERA, France)
Nicolas Lantos (ONERA, France)
Sinan Dogan Haliyo (ISIR, France)
Jean-Christophe Sarrazin (ONERA, France)
QUANTIFICATION OF COGNITIVE COST IN SENSORIMOTOR ACTIVITY
PRESENTER: Kelly Steiner

ABSTRACT. 1. General framework. The key to the development of HMI technologies lies in the acquisition of knowledge and in the integration by industry of different disciplines such as biomechanics and the neurosciences. In the field of aeronautics, one of the most significant challenges in flight control design is human workload. Indeed, in the field of Human Factors and Ergonomics, it is accepted that cognitive overload results in failure to cope with highly demanding complex systems. As a result, the level of cognitive workload is a design criterion that must be minimized in the early stages of designing such flight control systems. However, in order to achieve this objective, it is necessary to have a detailed, multi-level understanding of the control mechanisms of the action, so that the flight controls allow optimal control of the machine's own movement. The term 'optimal' used here may raise many questions as to the meaning of the function that such a device can play. A particularly relevant theoretical framework for refining the consideration of effector properties at a local level is that of optimal control. An important issue in the field of motor control in the neurosciences concerns the understanding of the principles underlying the production of human movements. This problem can be posed in terms of inverse optimal control through the question: "What cost functions does the central nervous system (CNS) optimise to coordinate the different degrees of freedom required to produce a movement?" In this framework, a cost function is defined by a biomechanical criterion, selected by the CNS, to be minimised during the performance of a given movement (Albrecht, 2013). In this context, our study aims at validating criteria of movement optimality (kinematics specifically) as a measure of cognitive cost, with respect to their relationship with a subjective measure established in the field of ergonomics (NASA-TLX). For our case study, we are interested in the movement of the arm. Thus, we formulate two hypotheses: (1) a decrease of movement optimality with increasing task difficulty (i.e. presence of nonlinearity in the kinematics of the movement); (2) a sensitivity of the optimality metrics in agreement with the subjective measure of cognitive cost (i.e. the same effect of increasing task difficulty). 2. Methods. The experiment is a classic Fitts' reciprocal pointing task (Fitts 1954). Fitts' law relates the speed and accuracy of the movement by setting spatio-temporal constraints determined by the index of difficulty (DI). It makes it possible to characterize the performance of the movement and its optimality through changes of motor strategies. The participants had to position themselves on the target areas as quickly and accurately as possible by modulating only the position of the cursor on the medio-lateral axis. Eight pilots performed the task in a geocentric reference frame with a positional control mode applied to the stick. They had a familiarization phase and then an experimental phase with 8 blocks of 5 DIs of interest, randomized for each block. One trial corresponds to one DI. During each trial, 30 targets are to be reached, i.e. 15 cycles/trial. A cycle is defined as a round trip (A-B-A), and a trial is validated if the success percentage is between 75 and 95%. For the even blocks, participants completed a NASA-TLX form after each trial.
3. Materials. For this experiment, we used ONERA's SCHEME experimental platform with an EC225-type model aircraft, into which a mini Brunner® CLS-E joystick (BRUNNER Elektronik AG) is integrated. All measurements are made according to the type of movement (pronation vs supination) in the effector space, defined as the participant's arm space. The dependent variables are the kinematic variables: trajectory, movement time, phase planes and Hooke portraits (Mottet and Bootsma 1999), as well as a subjective measure of cognitive cost obtained with the NASA-TLX scale adapted into French by Cegarra and Morgado (2009). This scale is composed of six independent criteria that the subjects must evaluate, on the basis of their feelings, on non-numerical scales. 4. Results and discussion. First, we studied the relationship between task complexity and movement optimality. Fitts' law is verified for all our participants, so we observe a degradation of performance (movement time) as the DI increases. Moreover, the kinematic analysis (phase plane) reveals a modification of the motor strategy, reflected in a decrease of the linearity of the movement, with increasing DI. In a second step, we studied the complementarity of the optimality criteria with existing classical measures of cognitive cost. We note an increase in the global NASA-TLX score with increasing DI. Also, when analysing the mental and physical scores individually, we observe a linear increase for the mental score, while the physical score seems to vary little with DI. Our results seem to show a sensitivity of the optimality metrics in agreement with the subjective measure of cognitive cost, i.e. the same effect of increasing difficulty on the objective and subjective metrics. 5. Conclusions. The objective of this work was to characterize the sensitivity of optimality metrics with respect to an existing measure of cognitive cost. From a fundamental point of view, this research allows a better understanding of the relations between the optimality of the movement and the cognitive cost inherent in the execution of a task. From an applied point of view, this study constitutes a new methodological building block for user-centred design approaches of assistive systems. References: Albrecht S. (2013), Modeling and Numerical Solution of Inverse Optimal Control Problems for the Analysis of Human Motion, Technische Universität München. Cegarra J. & Morgado N. (2009), Étude des propriétés de la version francophone du NASA-TLX. In Communication présentée à la cinquième édition du colloque de psychologie ergonomique (Epique) (pp. 28-30). Fitts P.M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology 47, 381–391.
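As a small illustration of the Fitts' law analysis underlying the protocol above (a sketch only: the target distances, widths and movement times are invented, not the experimental data), the index of difficulty and the linear fit of movement time against DI can be computed as follows:

```python
import numpy as np

# Hypothetical target geometry (distance D, width W, in mm) and mean movement times (s).
D = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
W = np.array([40.0, 30.0, 20.0, 12.0, 8.0])
MT = np.array([0.45, 0.55, 0.68, 0.82, 0.97])

ID = np.log2(2.0 * D / W)        # Fitts' (1954) index of difficulty, in bits
b, a = np.polyfit(ID, MT, 1)     # linear model MT = a + b * ID
throughput = 1.0 / b             # bits per second

print("ID per condition:", np.round(ID, 2))
print(f"Fitts fit: MT = {a:.3f} + {b:.3f} * ID, throughput ~ {throughput:.1f} bit/s")
```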

16:20
Iveta Eimontaite (Cranfield University, UK)
Sarah Fletcher (Cranfield University, UK)
Angelo Rizzi (Fidia, Italy)
Alfio Minissale (Comau, Italy)
Fabio Abba (Comau, Italy)
Exoskeleton-Enhanced Manufacturing: A Study Exploring Psychological and Physical Effects on Assembly Operators’ Wellbeing
PRESENTER: Iveta Eimontaite

ABSTRACT. Industry 4.0 offers possibilities for manufacturing, whilst presenting new opportunities and challenges for the human workforce. Exoskeletons have the potential benefit of reducing fatigue and physical strain in manufacturing; however, the novelty of exoskeletons and the surrounding ethical issues raise concerns amongst stakeholders. The current case study investigated the introduction of an upper-body exoskeleton designed to support posture. The main focus was to evaluate changes in operators' workload and physical discomfort following the introduction of the exoskeleton over a 20-month period. After three months, operators reported a decrease in temporal demand and an increase in performance on the NASA-TLX instrument. Interestingly, operators' physical discomfort at the three-month time point increased from not uncomfortable to quite uncomfortable in the shoulder, arm and middle back regions, dropping back after the 20-month period. The results suggest that self-reported performance increased and temporal demand decreased; however, the increased discomfort scores indicate that a few months might not be long enough for the exoskeleton to be integrated into operators' mental body schema. The paper will discuss further implications and provide suggestions for exoskeleton introduction to manufacturing environments.

16:40
Asgeir Drøivoldsmo (Institute for Energy Technology, Norway)
Espen Nystad (Institute for Energy Technology, Norway)
Linda Sofie Lunde-Hanssen (Institute for Energy Technology, Norway)
Analytical estimation of operator workload in control rooms: How much time should be available for surveillance and control?

ABSTRACT. Automation and extended opportunities for remote control are accelerating the shift towards centralising more oil and gas producing assets into a decreasing number of Norwegian Continental Shelf (NCS) control rooms. Onshore or offshore control of subsea, unmanned and normally unmanned installations has become the preferred way of developing oil fields. Controlling multiple assets from one control room has made more efficient use of facilities and staffing both necessary and possible. However, each new well and piece of equipment that needs supervision and attention adds to the control room operator's burden of keeping an updated overview and understanding of the process state. Other industries, such as electricity production, face even bigger challenges, where control of different types of hydro power production facilities, wind farms and future hydrogen production is located in the same operation and control centres.

Realising that mental workload is an intrinsically complex and multifaceted concept, this paper uses a wide definition of workload suitable for the job design area, referring to workload as the portion of the operator’s limited capacity required to perform a particular task.

This study presents results from three different Norwegian Continental Shelf (NCS) offshore control rooms and one hydro and wind power control room, using a method for subjective and analytical estimation of workload. By mapping all work tasks that are performed during a "normal" working day, estimating the duration of each task and estimating the adjusted duration of tasks performed in parallel (such as surveillance tasks), an estimate of the operator time needed to maintain overview and control of the process was made. A total of approximately 3000 work tasks have been mapped and categorised on 10-15 different parameters, with time estimates and type of task as the most central parameters. The results from the studies were used as a judgment criterion to decide whether it was necessary to increase the control room staff. Control room staffing levels have traditionally been estimated based on studies of critical scenarios, ensuring that the crew has time to take the necessary preventive actions to keep control of the process. The main vehicle for such studies has been simulations with a combination of subjective and objective measurement methods. In this study, we have analysed combined data collected in control room projects in the period 2019-2021. The Integrated Operations Man Technology Organisation (IO MTO) method [1] was used during the project period to collect the data per installation. Common to all NCS control rooms was that they were in a situation where it had to be decided whether the staffing was sufficient to take on additional tasks from remote control of platforms or subsea templates. For the electricity production control room, the situation was similar, with an extension of its responsibility to the control of additional wind farms, hydro power production facilities and production of hydrogen from electricity.
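A rough sketch of the analytical workload estimate described above is given below; the task names, durations and shift length are invented for illustration, and only the 75% criterion is taken from the study:

```python
# Hypothetical task inventory for one 12-hour shift (minutes); parallel surveillance
# tasks carry an "adjusted" duration reflecting the time actually occupied.
SHIFT_MINUTES = 12 * 60
tasks = [
    {"name": "alarm handling",        "duration": 90,  "adjusted": 90},
    {"name": "permit-to-work checks", "duration": 60,  "adjusted": 60},
    {"name": "shift handover",        "duration": 45,  "adjusted": 45},
    {"name": "process surveillance",  "duration": 480, "adjusted": 240},  # performed in parallel
    {"name": "logging and reporting", "duration": 70,  "adjusted": 70},
]

planned = sum(t["adjusted"] for t in tasks)
utilisation = planned / SHIFT_MINUTES
print(f"Planned work occupies {utilisation:.0%} of the shift")
print("Within 75% criterion" if utilisation <= 0.75 else "Exceeds 75% criterion")
```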

Results from all four control rooms are coherent in the sense that both the three NCS control rooms and the hydro power control room share the subjective judgement that the amount of planned work should not occupy more than 75% of the time available on a shift, which is in line with observations from the literature [2]. Detailed results are presented showing the time spent on different categories of tasks and the variation in type of tasks across control rooms and industries, laying the foundation for a discussion of task classification principles and of how to estimate workload for the different task categories.

The paper discusses the findings from the control room studies in comparison to recommendations from the literature [2,3], as well as the advantages and shortcomings of the method compared to other, more established workload estimation methods. Special focus is given to the need to establish better-defined measures (numbers) and definitions of how large a share of a control room operator's working day should be filled with regular tasks. The cross-industry comparison between offshore oil and hydro power production provides some insight into the validity of the method across application domains.

References: [1] Drøivoldsmo, A., and E. Nystad. "The Man, Technology and Organisation MTO Function Allocation Method Used for Optimal O&M Staffing in a Greenfield Project." Paper presented at the SPE Intelligent Energy International Conference and Exhibition, Aberdeen, Scotland, UK, September 2016. doi: https://doi.org/10.2118/181102-MS. [2] Energy Institute, 2021, Guidance on ensuring safe staffing levels, London: Energy Institute, 1st edition, December 2021, www.energyinst.org. [3] Kirwan B., Ainsworth L.K. (Eds), 1992, A guide to task analysis, London: Taylor and Francis.

15:40-17:00 Session 18G: Maintenance Modeling and Applications IV: Machine Learning applications
Chair:
Anne Barros (CentraleSupelec, France)
Location: CQ-009
15:40
Halit Metehan Dilaver (Eindhoven University of Technology, Netherlands)
Alp Akçay (Eindhoven University of Technology, Netherlands)
Yingqian Zhang (Eindhoven University of Technology, Netherlands)
Geert-Jan van Houtum (Eindhoven University of Technology, Netherlands)
Integrated Planning of Operations and Maintenance for Multi-Unit Systems with Resource Dependency: A Supervised Learning Approach

ABSTRACT. In many sectors (e.g. aviation, chemical production, maritime, power plants, railways) multi-unit systems are built to satisfy a common operational target. These units deteriorate depending on the level of usage and need to be maintained as a consequence of the deterioration. In most such systems, in order to ensure operational reliability, the total operating capacity is higher than the operational target. This excess capacity allows the operator to influence maintenance needs by adjusting usage levels. In addition to usage-based maintenance, it is often also necessary to perform calendar-based inspections on these units to ensure they conform with the specifications required for safe operation. These maintenance and inspection activities may require the same scarce maintenance resources, such as a skilled workforce, maintenance tools, or workshop capacity. It is important to incorporate the implications of usage decisions for maintenance needs to obtain efficient schedules, resulting in an integrated planning problem for scheduling the usage, maintenance and inspection activities. In this study, we formulate this integrated planning problem for multi-unit systems with resource dependency as a Mixed Integer Linear Programming (MILP) model. Our numerical experiments show that integrated planning of operations and maintenance is highly computationally challenging and that the complexity increases drastically with the number of units. For the large problem instances that cannot be solved optimally, a branch-and-bound variable selection policy based on supervised learning is applied. We train our policy on small problem instances that we can solve optimally (i.e. with a lower number of units) and use the learned policy to guide the branch-and-bound algorithm when solving larger problem instances. With this study, we propose an integrated planning model and a practical application method for large-scale multi-unit systems and demonstrate the promising performance of a machine-learning-based strategy for solving large problem instances.
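A minimal sketch of the kind of integrated usage/maintenance MILP discussed above (a toy model only: the sets, costs, wear limit, shared-resource constraint and the use of the PuLP solver are illustrative assumptions, not the formulation in the paper):

```python
import pulp

UNITS, PERIODS = range(2), range(4)
DEMAND, CAPACITY = 60, 80          # common target and per-unit capacity per period
WEAR_LIMIT = 100                   # cumulative usage allowed before maintenance
MAINT_COST, USAGE_COST = 50.0, 0.1

m = pulp.LpProblem("integrated_planning", pulp.LpMinimize)
u = pulp.LpVariable.dicts("usage", (UNITS, PERIODS), lowBound=0, upBound=CAPACITY)
y = pulp.LpVariable.dicts("maint", (UNITS, PERIODS), cat="Binary")

# Objective: usage cost plus maintenance cost
m += pulp.lpSum(USAGE_COST * u[i][t] + MAINT_COST * y[i][t]
                for i in UNITS for t in PERIODS)

for t in PERIODS:
    m += pulp.lpSum(u[i][t] for i in UNITS) >= DEMAND   # meet the operational target
    m += pulp.lpSum(y[i][t] for i in UNITS) <= 1        # one shared maintenance crew

for i in UNITS:
    for t in PERIODS:
        m += u[i][t] <= CAPACITY * (1 - y[i][t])        # unit unavailable while maintained
        # Cumulative usage may exceed the wear limit only if a maintenance has occurred
        m += pulp.lpSum(u[i][s] for s in range(t + 1)) <= WEAR_LIMIT + \
             CAPACITY * len(PERIODS) * pulp.lpSum(y[i][s] for s in range(t + 1))

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[m.status], " total cost:", pulp.value(m.objective))
```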

16:00
Calliane You (Research lab G-SCOP, team GCSP, France)
Olivier Adrot (Research lab G-SCOP, team GCSP, France)
Jean-Marie Flaus (Research lab G-SCOP, team GCSP, France)
Jam detection in waste sorting conveyor belt based on k-Nearest Neighbors
PRESENTER: Calliane You

ABSTRACT. This paper is a joint work with Aktid, a company that builds waste sorting plants.

Context: Nowadays, one important way to be sustainable is recycling waste, as it reduces the consumption of new raw materials and resources to make new objects. Waste sorting plants are a critical component in the lifecycle of waste, as they gather waste into “bales” made of the same material, which enables industries to reuse waste easily. In France, the amount of recyclable waste increased by 35% between 2005 and 2017 [1]. This increase is difficult to handle for waste sorting plants, which are currently subject to failures that can take up to 10% of their operating time. Waste that is not sorted in time is incinerated or buried, so optimizing the operating time of waste sorting plants is crucial.

For a waste sorting plant, the major problems are slow-developing jams on conveyor belts, which must be detected as early as possible, and failures that require appropriate maintenance.

Failures that arise in waste sorting plants have various causes. First, one common assumption is that the input waste is recyclable, but this is not always the case, as the waste is pre-sorted by ordinary citizens who make mistakes. Second, a material can be recyclable per se but not recyclable by a specific waste plant. All waste flows that are mechanically treated end up in manual sorting before going to a waste baler machine, which makes compacted cubes of one type of recycled waste called bales. The bales must meet a constant quality requirement, meaning a certain purity of the main material (e.g. 98% PET plastic) is mandatory; otherwise they are returned to be sorted again. All these factors lead to a significant constraint: the flow is composed of objects of varying material, shape, density and humidity, which are prone to get stuck inside the machines and make the course of the day unpredictable.

Currently, the operators use thresholds entered manually for each machine. When a jam occurs, the conveyor belt motor strains, the electric current exceeds the threshold, and the conveyor belt stops. When a machine stops, the entire production line stops for safety. This threshold method depends on operator experience and does not prevent jams. Therefore, sorting plants need a monitoring and diagnostic system that can detect jams with as little human intervention as possible.

Problem statement: The aim is to limit laborious human work (around 12 km of walking per shift) and improve responsiveness to dysfunction phenomena, so that operators intervene only when needed to avoid production shutdowns and the current production time is optimized. To manage 200 plants with a hundred machines each, a machine learning method that adapts to each conveyor belt in the sorting plant is needed. The method must therefore be inexpensive to deploy and maintain, and run in real time on all the machines.

One big issue is that jams have several symptoms: objects passing by force, and jams that are not cleared at once and therefore generate several phases of consecutive stops. Some jams are only partially treated between two shifts and are still there when the new shift resumes. Even an experienced operator cannot easily differentiate and prevent them. We also have to minimize the pre-processing of the raw data fed to our k-NN, both to make it adaptable to any conveyor belt and to meet the real-time prediction constraint.

Proposed method: The proposed method is to use time series classification to detect jams. The classification will use past data collected from production activity.

There is a multitude of algorithms for time series classification, but none developed specifically for waste sorting plants and their constraints. To establish a baseline performance for our case, we test and compare the results of the k-nearest neighbours algorithm (k-NN), the simplest algorithm, which appears in various comparative studies [2]. The k-NN is a supervised classification algorithm: it learns to classify a new observation from labelled classes (jam, normal, fault, ...) given in a training set. In our case, we only have temporal data of electric current.

We will list all the dysfunction phenomena that can be used to differentiate a normal functioning state from an abnormal one. The data will be assessed with a sliding window, trying different window sizes. We will start by studying a k-NN with non-chronological features (mean, median, variance, ...) to observe whether there is a loss of information that would influence the efficiency of the classification, and also to see whether these features are sufficient and minimize the complexity needed to meet the real-time requirement. The training and test sets are disjoint, and the k-NN only learns from the training set. To evaluate the performance, we use a confusion matrix; our goal is to limit undetected anomalies, i.e. false negatives [3]. The results are analysed on real production data together with expertise from operators.
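As an informal sketch of the windowed k-NN classification described above (the synthetic current signals, window size, feature set, labels and the use of scikit-learn are all assumptions for illustration, not the plant data or pipeline):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

def window_features(signal):
    """Non-chronological features of one motor-current window."""
    return [signal.mean(), np.median(signal), signal.var(), signal.max()]

def make_window(jam, size=60):
    """Synthetic current window: jams drift upwards and are noisier."""
    base = 10 + rng.normal(0, 0.3, size)
    if jam:
        base += np.linspace(0, 3, size) + rng.normal(0, 0.6, size)
    return base

labels = np.array([0, 1] * 300)                 # 0 = normal, 1 = jam
X = np.array([window_features(make_window(j)) for j in labels])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
cm = confusion_matrix(y_test, clf.predict(X_test))
print("confusion matrix:\n", cm)
print("false negatives (missed jams):", cm[1, 0])
```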

Results: For the moment, the performance shows less than 6% false negatives on the test set.

Discussion: The performance satisfies the operators, but it should be improved by testing on more test sets and conveyor belts, and by trying a live test on site approved by the operators, who are the final users. This method is not the definitive answer to the problem, but it can be used as the starting point of a combination of different methods.

Conclusion: The outlook is to test other machine learning algorithms, compare their performance, and build a robust real-time algorithm suitable for deployment.

References: [1] Ademe, a French Governmental Agency for ecological transition. Déchets chiffres-clés Édition 2020. https://www.ademe.fr/sites/default/files/assets/documents/dechets_chiffres_cles_edition_2020_010692.pdf. Page 39. [2] Liu, Ruonan and Yang, Boyuan and Zio, Enrico and Chen, Xuefeng. Artificial intelligence for fault diagnosis of rotating machinery: A review. Mechanical Systems and Signal Processing, Volume 108, August 2018, Pages 33-47. [3] Sokolova, M and Lapalme, G. A systematic analysis of performance measures for classification tasks. Information Processing & Management, Volume 45, Issue 4, July 2009, Pages 427-437.

16:20
Ipek Dursun (Eindhoven University of Technology, Netherlands)
Alp Akcay (Eindhoven University of Technology, Netherlands)
Geert-Jan van Houtum (Eindhoven University of Technology, Netherlands)
Bayesian Learning in Age-based Maintenance for Multiple Single-Component Machines under Population Heterogeneity
PRESENTER: Ipek Dursun

ABSTRACT. An age-based maintenance policy is applied for multiple single-component systems with a finite lifespan. The lifespan consists of multiple periods of equal length. Each component fails randomly and can be replaced preventively at the beginning of each period. If a component fails within a period, it is replaced correctively. We assume there are two populations where components come from: weak and strong. The components for the systems that we consider always come from the same population. The type of population is unknown but there is an initial belief available. We build a partially observable Markov decision process model to find the optimal age-based policy where the objective is to minimize the total cost throughout the whole lifespan for all systems together. To resolve the uncertainty regarding population heterogeneity, we update the belief by using the joint data collected from all machines at the end of a period. We generate insights on the benefit of learning under the optimal policy. Additionally, the reduction in optimal cost per machine in the multi-machine setting is compared to the single-machine setting. Furthermore, the reasons behind the reduction in optimal cost and the effect of various input parameters on this reduction are analyzed.
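A minimal sketch of the Bayesian belief update over the weak/strong population that drives the kind of policy described above (the per-period failure probabilities, prior and observed data are invented for illustration; the full partially observable Markov decision process optimisation is not reproduced here):

```python
import math

# Hypothetical per-period failure probabilities for a component of a given age,
# under the weak and strong population assumptions.
P_FAIL = {"weak": 0.30, "strong": 0.10}
belief_weak = 0.5          # initial belief that the population is weak

def update_belief(belief_weak, n_machines, n_failures):
    """Posterior P(weak) after observing joint failure data from all machines."""
    def lik(p):  # binomial likelihood of n_failures among n_machines
        return math.comb(n_machines, n_failures) * p**n_failures * (1 - p)**(n_machines - n_failures)
    num = belief_weak * lik(P_FAIL["weak"])
    den = num + (1 - belief_weak) * lik(P_FAIL["strong"])
    return num / den

# Observations pooled over several periods: (machines at risk, failures observed)
for n, k in [(10, 3), (10, 1), (10, 4)]:
    belief_weak = update_belief(belief_weak, n, k)
    print(f"after observing {k}/{n} failures: P(weak) = {belief_weak:.3f}")
```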

16:40
Hasan Misaii (University of Tehran and University of Technology of Troyes, Iran)
Mitra Fouladirad (Aix Marseille Université, France)
Firoozeh Haghighi (University of Tehran, Iran)
Optimal Corrective Maintenance Policy Encountering Competing Risks Using Machine Learning Algorithms
PRESENTER: Hasan Misaii

ABSTRACT. In this presentation, a series system that is periodically inspected is considered. At inspection times, failed components are replaced by new ones. Therefore, from a component point of view the maintenance is perfect, while from a system point of view the maintenance is imperfect. Three different scenarios related to the components' lifetime distributions are considered: firstly, the lifetime distributions of the components are assumed to be known with known parameters; secondly, the lifetime distributions are assumed to be known with unknown parameters; and thirdly, the lifetime distributions are considered to be unknown. A cost-based maintenance optimisation is carried out, with the inspection interval as the decision parameter in all scenarios. The first scenario serves as a benchmark for the other scenarios. For the second and third scenarios, machine learning algorithms are utilised to estimate the decision parameter and to derive the long-run average total maintenance cost in the parametric and non-parametric contexts. A comparison of the optimal parameters is proposed.
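As a rough numerical sketch of the benchmark scenario described above, a Monte Carlo estimate of the long-run cost rate of a periodically inspected series system can be computed as follows (the Weibull parameters, costs and downtime model are invented for illustration and are not the paper's setting):

```python
import random

# Hypothetical 3-component series system: (Weibull scale, shape) per component, hours.
COMPONENTS = [(1000.0, 2.0), (800.0, 1.5), (1500.0, 3.0)]
C_INSP, C_REPL, C_DOWN = 100.0, 400.0, 2.0   # per inspection / per replacement / per down-hour

def cost_rate(tau, horizon=200_000.0):
    """Failed components are replaced only at inspections, every tau hours."""
    ages = [random.weibullvariate(s, k) for s, k in COMPONENTS]  # next failure times
    t, cost = 0.0, 0.0
    while t < horizon:
        t_next = t + tau
        first_fail = min(ages)
        if first_fail < t_next:                  # series system down until the inspection
            cost += C_DOWN * (t_next - first_fail)
        cost += C_INSP
        for i, (s, k) in enumerate(COMPONENTS):  # replace whatever has failed
            if ages[i] < t_next:
                cost += C_REPL
                ages[i] = t_next + random.weibullvariate(s, k)
        t = t_next
    return cost / horizon

for tau in (100, 200, 400, 800):
    print(f"inspection interval {tau:4d} h -> long-run cost rate {cost_rate(tau):.3f} per hour")
```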

15:40-17:00 Session 18H: S.26: Bayesian Networks for Oil&Gas Risk Assessment
Chair:
Luca Decarli (Eni, Italy)
Location: CQ-105
15:40
Márcio Delvô Mendes (Universidade Católica de Petrópolis, Brazil)
José Cristiano Pereira (Universidade Católica de Petrópolis, Brazil)
Saulo Alessandro Marinho de Freitas (Universidade Católica de Petrópolis, Brazil)
RISK ASSESSMENT IN FUEL, OIL, AND CHEMICALS STORAGE FACILITIES USING FMEA AND BAYESIAN BELIEF NETWORKS AIMING AT IMPROVING RELIABILITY - A CASE STUDY

ABSTRACT. Currently, most remote and isolated communities depend on the reliability of fuel, oil, and chemicals storage facilities. The size and complexity of storage facility plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. Statistics show that process accidents and losses from major accidents in the oil and gas processing and storage industry have not decreased over the years. Current risk approaches for storage tanks emphasize improving reliability in the design rather than maintaining safe operation. This paper presents a method for improving reliability by conducting risk assessment in the daily operation of chemical, fuel, and oil storage facility plants based on a combination of FMEA and BBN. The method allows sensitivity analysis and prioritization of preventive and corrective measures to minimize the probability of failure and maintain safe operation. The proposed method includes identification, classification, assessment, and response to risks. A case study was carried out covering several fuel storage facilities with different storage and operation capabilities. The operation process was mapped out with the help of process experts, and an in-depth literature review on storage facilities was conducted to identify risk factors. PFMEA was conducted on storage tank operation, focusing on the most critical process step. The risk factors obtained with PFMEA were combined using BBN, which allowed a sensitivity analysis and process safety improvement by focusing on the most critical operational aspects. As a result, the method shows that sensitivity analysis can detect the most significant risks in the process and improve the storage system's reliability. The conclusion is that effective decision-making can be based on the proposed risk assessment method. The contribution is significant since the proposed method allows process optimization and risk reduction in the storage of chemical products and permits decision-makers to assign funds to critical activities that can impact process safety and system reliability. It is believed that the present study will augment the knowledge of process, maintenance, and safety engineers/managers and help in the decision-making process.
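An informal sketch of how FMEA-derived risk factors can be combined in a small Bayesian Belief Network (the nodes, probabilities and the use of the pgmpy library are illustrative assumptions, not the case-study model):

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical risk factors from a PFMEA of a tank transfer operation
model = BayesianNetwork([("OperatorError", "Overfill"),
                         ("LevelSensorFault", "Overfill")])

cpd_op  = TabularCPD("OperatorError",    2, [[0.95], [0.05]])
cpd_sen = TabularCPD("LevelSensorFault", 2, [[0.98], [0.02]])
cpd_ovf = TabularCPD("Overfill", 2,
                     # columns: (op=0,sen=0), (op=0,sen=1), (op=1,sen=0), (op=1,sen=1)
                     [[0.999, 0.90, 0.95, 0.60],    # no overfill
                      [0.001, 0.10, 0.05, 0.40]],   # overfill
                     evidence=["OperatorError", "LevelSensorFault"],
                     evidence_card=[2, 2])
model.add_cpds(cpd_op, cpd_sen, cpd_ovf)
assert model.check_model()

infer = VariableElimination(model)
print(infer.query(["Overfill"]))                                    # baseline risk
print(infer.query(["Overfill"], evidence={"LevelSensorFault": 1}))  # sensitivity check
```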

16:00
Abdollah Kiani (University of Stavanger, Norway)
Riana Steen (University of Stavanger, Norway)
On the boundaries of probabilistic risk assessment in the risk-based inspection: An alternative approach BN-RBI
PRESENTER: Abdollah Kiani

ABSTRACT. The motivation for conducting a probabilistic risk assessment (PRA) is to provide decision-making support on the choice of arrangements and measures to deal with identified risks. By estimating risk, the decision-maker is informed. However, some recent developments in risk management as a scientific field are founded on the idea that the application of PRA is irrational and potentially misleading, particularly in cases associated with large uncertainties about likelihoods and outcomes. Using an example of risk-based inspection (RBI) for explosion-protected equipment, we demonstrate that PRA still has an important role in risk management, even when the uncertainties are large. However, addressing the wide range of specificities and other complexities in such a context goes far beyond the boundaries of PRA theory. An alternative is a probabilistic approach grounded in Bayesian Network modelling (BN-RBI). By its flexible nature, the application of BN allows information from various data sources to be used and provides a more realistic risk picture for RBI purposes. Still, there are some issues according to the study's results: (1) converting qualitative risk zones to appropriate quantitative parameters requires a precise definition of zone classification, which is lacking in existing inspection data; (2) data for modelling the consequences of ignition in the Norwegian petroleum industry are insufficient, as these events are rare; (3) there is an element of subjectivity in converting the consequence of failure to a monetary value, which is based on the analyst's knowledge and preference. Despite these challenges, we demonstrate that applying the BN-RBI approach allows a dynamic causal and consequence risk picture to be developed.

16:20
Shengnan Wu (China University of Petroleum Beijing, China)
Laibin Zhang (China University of Petroleum Beijing, China)
Yangfan Zhou (China University of Petroleum Beijing, China)
Dynamic Bayesian network-based reliability analysis of deepwater shear ram preventer incorporating process demand
PRESENTER: Shengnan Wu

ABSTRACT. As important equipment for ensuring the safety of deepwater drilling operations, ram preventers shear the drill pipe and seal the well in case of emergency. This paper proposes a new dynamic Bayesian network-based reliability analysis model for a deepwater shear ram preventer subject to degradation and process demand. The model contains a multi-state transition model used to simulate the transitions between states of a single component. A dynamic multi-state Bayesian network model is established to describe the dynamic logical relationships between components that cause the ram preventer to fail to shear the drill pipe during the automatic shearing process. Degradation is simulated through a time-dependent failure rate following a Weibull law, in order to predict system safety performance over different testing intervals. The reliability of the deepwater shear ram preventer system is estimated for low-demand systems in terms of the influence of the various events that cause its failure and of the parameters, considering the process demand. The accuracy of the proposed model is verified in a case study of an automatic shearing ram preventer system. It is demonstrated that the proposed model provides sufficiently robust results for the effects of the key factors of demand rate, testing characteristics and degradation in the low-demand mode of operation.
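A small numerical sketch of the Weibull-based, time-dependent failure rate mentioned above and of its effect on the average unavailability per test interval (the shape, scale and candidate intervals are illustrative assumptions, not the paper's parameters):

```python
import math

BETA, ETA = 2.2, 8760.0     # hypothetical Weibull shape and scale (hours)

def hazard(t):
    """Time-dependent failure rate lambda(t) = (beta/eta) * (t/eta)**(beta-1)."""
    return (BETA / ETA) * (t / ETA) ** (BETA - 1)

def unreliability(t):
    """F(t) = 1 - exp(-(t/eta)**beta)."""
    return 1.0 - math.exp(-((t / ETA) ** BETA))

for tau in (730.0, 2190.0, 4380.0, 8760.0):   # candidate proof-test intervals (h)
    # average probability of failure on demand over one interval (numerical mean of F)
    n = 1000
    pfd_avg = sum(unreliability(i * tau / n) for i in range(n)) / n
    print(f"test interval {tau:7.0f} h: lambda(tau) = {hazard(tau):.2e} /h, "
          f"average PFD = {pfd_avg:.4f}")
```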

16:40
Gulcin Sarici Turkmen (The Ohio State University, United States)
Alper Yilmaz (The Ohio State University, United States)
Tunc Aldemir (The Ohio State University, United States)
Deep Transformer Network for Prediction of the Nuclear Power Plant Accident Progression

ABSTRACT. Dynamic probabilistic risk assessment (DPRA) data sets are used to train a Transformer Network (TN) model to predict possible nuclear power plant (NPP) behaviour under accident conditions as the accident evolves. The data set consists of approximately 10,000 scenarios generated for a 4-loop pressurized water reactor with station blackout as the initiating event using RELAP5-3D/RAVEN. The temporal data obtained from the DPRA simulations are first pre-processed and then fed into the TN to predict peak cladding temperature and core outlet temperature. The TN model was retrained with 10,000 scenarios to increase the model accuracy by applying the Transfer Learning approach. The experimental results show that the TN can obtain good performance and possess benefits over other neural network methods.
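To make the prediction setup concrete, the following is a minimal Transformer encoder for regressing the two target temperatures from a temporal scenario window in PyTorch (the dimensions, layer sizes and random data are assumptions for illustration, not the authors' architecture or the RELAP5-3D/RAVEN data):

```python
import torch
import torch.nn as nn

class AccidentTransformer(nn.Module):
    """Encode a multivariate time window and predict two target temperatures."""
    def __init__(self, n_features=12, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 2)   # peak cladding T, core outlet T

    def forward(self, x):                   # x: (batch, time, features)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))     # pool over time

# Toy training loop on random data standing in for pre-processed DPRA scenarios
model = AccidentTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 50, 12)                 # 32 scenarios, 50 time steps, 12 signals
y = torch.randn(32, 2)                      # normalised target temperatures
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```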

15:40-17:00 Session 18I: S.11: Standardization in Risk Analysis and Safety
Chair:
Luca Landi (Department of Engineering, University of Perugia, Italy)
Location: CQ-107
15:40
Luca Landi (Department of Engineering, University of Perugia, Italy)
Fabio Pera (Inail, Italy)
Ernesto Del Prete (INAIL, Italy)
Massimiliano Palmieri (Department of Engineering, University of Perugia, Italy)
Ejection test requirements for parts of machine tools: part 1 standardization opportunities to improve the state of the art
PRESENTER: Luca Landi

ABSTRACT. Requirements for the design of machine safeguards are clearly stated in the Machinery Directive (2006/42/CE). In particular, all types of safeguards are designed to prevent the operator's access to the work zone during machining. The subset of safeguards called guards (also named “physical guards”) shall provide protection against the ejection of “parts” during operation, such as chips, tools and workpiece fragments. In addition, even if the initial aim of some types of guards is to protect against the effects of coolants, swarf and noise, it is possible that these components have to be considered as guards as required by the Machinery Directive (2006/42/CE), because of the overall design and considering specific tasks in the life of the machine (e.g., maintenance, setting…). In the first part of this paper, we discuss the safety requirements of these protections as prescribed in the Machinery Directive and how the requirements are fulfilled in different standards related to cutting machine tools. The state of the art of general type B and C standards is presented to highlight new opportunities for clarifying the standardization content, based on tests performed in research facilities and published in the last five years. Some of the underlying assumptions of standardized testing need to be clarified and changed according to these new findings.

16:00
Luca Landi (Department of Engineering, University of Perugia, Italy)
Fabio Pera (Inail, Italy)
Ernesto Del Prete (INAIL, Italy)
Giulia Morettini (Department of Engineering, University of Perugia, Italy)
Carlo Ratti (INAIL, Italy)
Ejection test requirements for parts of machine tools: part 2 testing energy equivalence hypothesis and weak points of vision panels
PRESENTER: Fabio Pera

ABSTRACT. Requirements for the design of machine safeguards are clearly stated in the Machinery Directive (2006/42/CE). In particular, all types of safeguards are designed to prevent the operator's access to the work zone during machining. The subset of safeguards called guards (also named “physical guards”) shall provide protection against the ejection of “parts” during operation, such as chips, tools and workpiece fragments. In this second part of the paper, we present some results obtained from new tests performed in a joint INAIL/UNIPG research project at the Monte Porzio Catone (Rome) laboratories. In particular, results of two new sets of tests performed on 4 mm polycarbonate sheets of 300 x 300 mm will be shown. The first set of tests is conducted in order to highlight and measure, through the R&I curves statistical approach, the possible “weak border effect” of vision panels of machine tools. As described in part 1 of the article, the requirement of the Machinery Directive is to provide protection against the projection of material and objects. Harmonized type C standards require testing specified panel samples at the weakest point, but prescribe a single shot at the centre of the panel sample. Results and standardization possibilities are discussed in the paper. The second set is conducted in order to verify, even if only in a single case, the so-called energy equivalence hypothesis for projectiles, often used in testing the retention capability of machine tool guards. Because not all the possible objects ejected from the safeguarded work zone of the machine can be used in the test, some type C standards state that equivalent testing is possible with standardized projectiles that provide the same energy as the impacting object. In this second set of tests, three different projectiles of equal shape but with different masses are used in order to highlight and measure, through the R&I curves statistical approach, the energy equivalence hypothesis. Results and standardization possibilities are discussed in the paper.
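As a brief worked illustration of the energy equivalence hypothesis mentioned above (the masses and reference velocity are invented numbers, not the test parameters), the impact velocity that keeps the kinetic energy E = ½ m v² constant for projectiles of different mass can be computed as follows:

```python
import math

# Hypothetical reference projectile: 100 g at 70 m/s
m_ref, v_ref = 0.100, 70.0
E = 0.5 * m_ref * v_ref**2            # kinetic energy to be matched (J)

for m in (0.050, 0.100, 0.200):       # projectiles of equal shape, different mass (kg)
    v = math.sqrt(2.0 * E / m)        # velocity giving the same impact energy
    print(f"mass {m*1000:5.0f} g -> velocity {v:5.1f} m/s for E = {E:.0f} J")
```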

16:20
Mitchel Polte (Institute for Machine Tools and Factory Management, Technische Universität Berlin, Germany)
Nils Bergström (Institute for Machine Tools and Factory Management, Technische Universität Berlin, Germany)
Heinrich Mödden (German Machine Tool Builders’ Association, Germany)
Analysis of the effect of cutting fluids on the impact resistance of polycarbonate sheets by means of a hypothesis test
PRESENTER: Nils Bergström

ABSTRACT. Vision panels in machine tools protect the operator from ejected fragments in case of an accident. Due to its excellent impact resistance, polycarbonate is used as the material for such vision panels. However, when exposed to cutting fluids, the impact resistance of polycarbonate vision panels decreases significantly. A previous study examined the effect of two different cutting fluids on polycarbonate sheets by impact tests. Both cutting fluids employed were highly alkaline but differed in composition, with one cutting fluid containing phenoxyethanol as solvent and the other dicyclohexylamine as amine. Due to the exposure to cutting fluids, a maximum decrease of 10% in impact resistance was observed. However, the results were subject to considerable scatter, such that the decrease in impact resistance could be the result of statistical scatter rather than of the exposure to cutting fluids. Owing to the limited number of test samples in impact tests, pronounced scatter is a typical phenomenon observed when studying the impact resistance of polycarbonate sheets. The effects of material alterations on the impact resistance of polycarbonate arising from contact with cutting fluids are initially difficult to distinguish from scatter, due to the inertia of chemical degradation processes. However, a statistical evaluation permits meaningful conclusions to be drawn even in the case of pronounced scatter. Despite the advantages offered by a statistical evaluation, impact test results are rarely analyzed by statistical means, leaving the influence of the different cutting fluids and their constituents uncertain. Therefore, the present study examines the influence of cutting fluids containing phenoxyethanol and dicyclohexylamine on polycarbonate sheets subjected to impact tests by means of a hypothesis test. By comparing the results of impact tests of polycarbonate sheets with and without prior exposure to cutting fluids, the influence of cutting fluids on the impact resistance is assessed. In addition, a Monte Carlo simulation is performed to determine a threshold above which a decrease in impact resistance is statistically detectable. As a result, it is shown that the previously observed decrease in impact resistance was within the range of the scatter of polycarbonate sheets without prior contact with cutting fluids. These results are confirmed by the Monte Carlo simulation. For future studies, the present investigation provides an estimate of the time that polycarbonate sheets can be exposed to cutting fluids containing phenoxyethanol and dicyclohexylamine without significant loss in impact resistance.
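A minimal sketch of the kind of two-sample hypothesis test and Monte Carlo detectability check described above (the sample sizes, means, scatter and test choice are invented assumptions; the study's actual data and statistical procedure may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical impact-resistance samples (J): unexposed vs. fluid-exposed sheets
unexposed = rng.normal(loc=300.0, scale=30.0, size=10)
exposed   = rng.normal(loc=285.0, scale=30.0, size=10)   # nominal 5 % decrease

t_stat, p_value = stats.ttest_ind(unexposed, exposed, alternative="greater")
print(f"one-sided t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Monte Carlo: how often a given true decrease is detected (p < 0.05) with n = 10 samples
def detection_rate(decrease, trials=2000, n=10):
    hits = 0
    for _ in range(trials):
        a = rng.normal(300.0, 30.0, n)
        b = rng.normal(300.0 * (1 - decrease), 30.0, n)
        if stats.ttest_ind(a, b, alternative="greater").pvalue < 0.05:
            hits += 1
    return hits / trials

for d in (0.05, 0.10, 0.15, 0.20):
    print(f"true decrease {d:.0%}: detected in {detection_rate(d):.0%} of simulated tests")
```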

16:40
Martha Chadyiwa (University of Johannesburg, South Africa)
Vuyazi Vinolia Mongwe (University of Johannesburg, South Africa)
Emmanuel Emem-Obong Agbenyeku (University of Johannesburg, South Africa)
Thokozani Mbonane (University of Johannesburg, South Africa)
Phoka Rathebe (University of Johannesburg, South Africa)
Shalin Bidassey-Manilal (University of Johannesburg, South Africa)
Enireta Makanza (University of Johannesburg, South Africa)
Bheki Magunga (University of Johannesburg, South Africa)
Claris Siyamayambo (University of Johannesburg, South Africa)
Personal exposure to respirable crystalline silica dust at selected coal fired power stations in Bethal, Mpumalanga province
PRESENTER: Martha Chadyiwa

ABSTRACT. Coal-fired power stations are industrial facilities that burn coal to produce steam for the primary purpose of electricity generation. The burning of coal releases several pollutants that are known to contribute to climate change and global warming. Workers in coal-fired power stations engage in a range of work tasks or processes which may involve handling of, or exposure to, respirable dust, including coal dust, crystalline silica dust or coal fly ash. Recent studies have shown that crystalline silica exposure remains one of the major concerns in mining, construction and general industry. A quantitative study was conducted to determine employees' level of exposure to respirable crystalline silica at a coal-fired power station in Mpumalanga Province, South Africa. A total of 34 employees participated in the study. The study revealed that male employees (n=27) were predominant in the coal handling plant compared to female employees (n=7). The study determined a mean exposure value of 0.969 mg/m3 for respirable coal dust, which was found to be below the recommended occupational exposure limit (OEL) of 2 mg/m3 set by the Department of Labour (DoL), while the mean exposure value for crystalline silica of 0.184 mg/m3 was found to exceed the recommended OEL of 0.1 mg/m3 indicated by the DoL. Results from this study confirm that occupational exposure to crystalline silica is a well-established hazard in the mining industry; therefore, the use of personal protective equipment is highly recommended.

15:40-17:00 Session 18J: S.25: Climate Change and Extreme Weather Events Impacts on Critical Infrastructures Risk and Resilience
Chair:
Masoud Naseri (University of Tromsø - The Arctic University of Norway, Norway)
Location: CQ-006
15:40
Jana Marková (Czech Technical University in Prague, Klokner Institute, Czechia)
Karel Jung (Czech Technical University in Prague, Klokner Institute, Czechia)
Miroslav Sykora (Czech Technical University in Prague, Klokner Institute, Czechia)
Climatic actions in changing climate for structural design
PRESENTER: Jana Marková

ABSTRACT. Specific features of climatic actions should be considered in structural design and in reliability assessments of existing structures. Underlying physical processes, load durations, seasonality effects and mutual correlations between climatic actions make them largely different from other variable actions (such as imposed or traffic loads) and from other natural hazards (e.g. earthquakes). Climatic actions are all related to the physics of the atmosphere. Understanding the interdependencies between climate variables can be very important, since a misrepresentation of the joint physical process may lead to an underestimation of hazards and structural risks. For example, understanding the interdependencies between wind, temperature and precipitation can improve the prediction of extreme events such as floods and droughts, but also the assessment of climate change impacts on built infrastructures. Analyses of the interactions of climatic actions revealed a weak positive correlation between atmospheric icing load and wind velocity at the investigated locations in Norway and the Czech Republic; it appears that both climatic variables can be considered asymptotically independent. A negative correlation is observed between temperature and icing load, as expected: icing load increases as temperature decreases. Extremes of climatic actions, especially joint extremes, may lead to serious damage to society, the economy and the environment. The combination of extreme environmental processes can also be particularly critical for the structural design of lightweight structures. The technical subcommittee CEN/TC 250/SC1, responsible for actions on structures in the Eurocodes, recommended periodical revisions of the models for climatic actions at intervals of 15-20 years. Although the data from climate models provide information about future trends for climatic parameters, there is considerable dispersion in the data depending on the relevant parameters and the emission scenario under consideration. The quantification of future extremes with the low uncertainty required for reasonable estimates of design values (fractiles associated with long return periods) is unavailable at present. Regular re-examination of weather parameters, considering the uncertainties in extremes of climatic actions, should be used for verification and updating of partial factors and combination factors for climatic actions.

16:00
Euan Macdonald (University of Strathclyde, UK)
Edoardo Patelli (University of Strathclyde, UK)
Enrico Tubaldi (University of Strathclyde, UK)
Extreme Storm Surge Classification for Risk Assessment of Coastal Infrastructure
PRESENTER: Euan Macdonald

ABSTRACT. According to the UK Climate Projections report, the UK has seen an increase in the frequency and magnitude of extreme water levels over previous decades, which has led to an increase in the number of coastal flooding events. This has the potential to cause disruption to coastal transport networks, damage to infrastructure and even fatalities. Local council authorities have the responsibility of developing flood risk management strategies to mitigate the impacts of coastal flood events when they occur, in alignment with the Flood Risk Management Act 2009. The SEPA Coastal Hazard Mapping Study has highlighted several vulnerable areas along the Scottish coastline but is limited in its ability to predict water level heights. This can potentially result in a significant underestimation of extreme water levels and of their impact. It is therefore necessary to develop a robust storm surge prediction tool that utilises available weather forecast information to inform the relevant authorities. This paper describes the preprocessing procedure and neural network design for the classification of extreme (>1 m) storm surge heights at Millport in the Firth of Clyde. Due to the rarity of these extreme events, building the balanced design matrix required for binary classification problems is difficult, since less than 0.1% of data points are in the “extreme” minority class. To achieve equally populated classes, the population of the minority class has to be significantly increased and the population of the majority class significantly reduced. The extreme value set is extended by adding values from a database of synthetic storms. This larger extreme value set is then oversampled using a synthetic minority oversampling technique. To improve the network's ability to distinguish between large surges (between 0.75 m and 1 m) and extreme surges (>1 m), the 0.75-1 m interval is oversampled using the same method. The remainder of this class is then populated by randomly sampling at regular intervals across the remainder of the surge heights. The result is a 6000-point dataset with two equally populated classes. Neural networks with large numbers of parameters and short training datasets are prone to overfitting, as the network is large enough to learn the data instead of the patterns within it. Hence, to increase the number of viable network architectures, a principal component analysis is carried out to reduce the number of inputs from 166 to 24 while preserving 95% of the input data variability. Different network architectures are explored on these transformed inputs.
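A minimal sketch of a preprocessing pipeline of the kind described above, i.e. PCA retaining 95% of the variance followed by synthetic minority oversampling and a small classifier (the synthetic data, class ratio, network size and the use of scikit-learn/imbalanced-learn are illustrative assumptions, not the Millport dataset or design):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)

# Synthetic stand-in for the forecast predictors: 5000 samples, 166 features driven
# by 20 latent "weather" variables, with a rare extreme-surge class (~2 % here).
latent = rng.normal(size=(5000, 20))
X = latent @ rng.normal(size=(20, 166)) + 0.1 * rng.normal(size=(5000, 166))
y = (latent[:, 0] > 2.0).astype(int)

X_std = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=0.95).fit_transform(X_std)   # keep 95 % of the variance
print("inputs reduced from", X.shape[1], "to", X_pca.shape[1], "components")

# Balance the two classes before training the classifier
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_pca, y)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                    random_state=0).fit(X_bal, y_bal)
print("training accuracy on the balanced set:", clf.score(X_bal, y_bal))
```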

16:20
Leslie Mooyaart (Delft University of Technology, Netherlands)
Alexander Bakker (Rijkswaterstaat, Netherlands)
Johan van den Bogaard (Rijkswaterstaat, Netherlands)
Bas Jonkman (Delft University of Technology, Netherlands)
Storm surge barrier performance
PRESENTER: Leslie Mooyaart

ABSTRACT. With climate change, development in coastal zones and subsidence, coastal flood risk is rising. To limit this rise in flood risk, coastal protection needs to be improved to account for these effects. This research looks into a specific type of coastal flood protection system: flood protection systems with storm surge barriers. Storm surge barriers are large hydraulic structures which aim to lower extreme water levels behind the barrier. These movable barriers close off an estuary only during storm surge conditions and therefore preserve navigability and water quality under normal conditions. In normal or even mild storm surge conditions, the hinterlying flood protection prevents floods. Many options exist to improve a storm surge barrier, including raising or strengthening the barrier, increasing the closure reliability or changing its operation. These improvement options all have a different effect on storm surge barrier performance, i.e. the ability of storm surge barriers to lower extreme inner water levels. While the effects of the performance of some individual subsystems on storm surge barrier performance have been analysed, a generic overview of how storm surge barrier equipment, structures and operations contribute to performance is lacking. Consequently, it is hard to identify the most effective measures to improve storm surge barriers. This paper provides a generic model which establishes storm surge barrier performance. The model calculates the inner water level distribution based on 1) the extreme value distribution of storm surges, 2) storm surge barrier properties such as closure reliability and structural reliability, and 3) internal hydraulic effects such as internal wind set-up and river discharge. The paper provides an overview of storm surge barrier properties and internal hydraulic effects, and illustrates how they affect storm surge barrier performance. This knowledge helps to gain quick insight into the dominant storm surge barrier properties and internal hydraulic effects with respect to the performance of the overall flood protection system, i.e. its ability to reduce the flood frequency.
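A crude Monte Carlo sketch of the kind of calculation described above, propagating a storm surge distribution through closure reliability and internal hydraulic effects to an inner-water-level exceedance frequency (the Gumbel parameters, closure failure probability and hydraulic terms are invented for illustration, not the paper's model):

```python
import random, math

def gumbel_sample(mu=2.0, beta=0.4):
    """Hypothetical extreme value (Gumbel) distribution of outer surge levels (m)."""
    return mu - beta * math.log(-math.log(random.random()))

P_FAIL_CLOSURE = 0.01     # probability the barrier fails to close on demand
CLOSURE_LEVEL = 3.0       # surge level triggering a closure (m)
WIND_SETUP, RIVER_RISE = 0.3, 0.2   # internal hydraulic effects when closed (m)

def inner_level():
    surge = gumbel_sample()
    if surge < CLOSURE_LEVEL:
        return surge                       # barrier stays open
    if random.random() < P_FAIL_CLOSURE:
        return surge                       # closure failure: surge propagates inside
    return 1.0 + WIND_SETUP + RIVER_RISE   # closed: retained basin level (assumed 1.0 m)

N, THRESHOLD = 200_000, 3.5
exceed = sum(inner_level() > THRESHOLD for _ in range(N)) / N
print(f"P(inner water level > {THRESHOLD} m per storm event) ~ {exceed:.2e}")
```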

16:40
Claudia Morsut (University of Stavanger, Norway)
Ole Andreas Engen (University of Stavanger, Norway)
Climate risk discourses and risk governance in Norway

ABSTRACT. This paper uncovers how Norwegian national discourses on climate risks are manifested. By drawing on document analysis of official public documents on climate change and climate risks and interviews with relevant key actors in the Norwegian national governmental administration, the paper reveals how discourses about climate risks can be associated with an increasing use of risk governance approaches in Norway. To reach this aim, Norwegian official national discourses will be analysed by establishing relations between risk governance and riskification to sustain the investigation on how national discourses of climate change threats have been translated into concepts such as climate risk and climate change adaptation. Empirically, the analysis provides an overview of the climate risk debates in Norway to bring out the character of this debate within the broader European climate risk discourse. The very act of governance and control of risks is based on ideals of rational planning and predictability. Risk governance brings together various local, national, private as well as public actors in arrangements and discourses to cope with environmental issues and transboundary risks, purposefully steering the behaviour towards the goals of analysing, understanding, and acting upon certain risks. However, climate risks are dynamic and unpredictable. Climate risks are cascading since they impact several systems (physical infrastructures, economies, ecosystems, societies etc.) and, at the same time, they lead to challenges in how to assess and manage climate risks across different systems - systems which reflect different levels of vulnerability and exposure. In the societal context, climate risks are thus impossible to tackle without considering the dynamic socio-economic aspects that drive exposure and vulnerability, making the endeavour of governance and control of climate risks even more demanding. In this sense, climate risks can be studied as systemic risks (Li et al. 2021), a concept that risk governance advocates have recently brought to light as a new risk dimension. The systemic risk concept - inspired by Beck’s thesis on risk society (Beck 1992) - aims to ease the management of new and constructed risks that do not respect state borders. Systemic risks encompass different risk phenomena as well as economic, social, and technological developments and policy-driven actions at the regional, national, and international level. The concept entails “endangering potentials with wide-ranging, cross-sectoral, or transnational impacts where conventional risk management and regional or even national risk regulation are insufficient” (Renn et al. 2020: 3). Systemic risks pose significant threats to societies because they can not only destroy the system of origin, but also propagate beyond its boundaries. In democratic societies, risk governance, by seeking to cope with systemic risks, requires democratic legitimization, justification from those who are affected, assurance of due processes, and reference to societal values, such as social justice and sustainability. By considering climate risks as systemic risks, we can thus argue that climate risks constitute a societal threat, which may call upon extraordinary measures that authorities have to take. Indeed, there is a strand of scholars (Diez et al.
2016; Dupont 2018; Warner and Boas 2019) who argue for the securitisation of climate change and the need to protect the referent object (in this case a society) against threats from climate change, like climate risks. In this regard, the discourses about how to deal with climate risks represent an internalizing of existential threats into action. Since the institutionalization and professionalization of the field of climate change (and climate change adaptation) the last ten years, governments have increasingly approached threats in terms of climate risk. A different, but relevant approach about climate change is riskification (Corry 2012). Riskification considers security in the same vein as securitisation, but the focus shifts from threat to risk, in the sense that the riskification considers which conditions could damage the referent object and how we can govern them (Friis and Reichborn 2016). Hence, riskification follows a risk governance logic since it seeks to frame issues towards possible and random sources of harm that must be addressed. It assigns probabilities to known dangers and brings a mindset of bringing hazards ‘under control’, often through technocratic processes. Corry (2012: 248) argues that riskification tends to lead to “programmes for permanent changes aimed at reducing vulnerability and boosting governance-capacity of the valued referent object itself”. Understanding risk governance in relation to riskification relies on a combination of realism and constructivism that integrates knowledge about tools from risk analysis with insights from social and cultural studies about risk, critical risk studies and securitization. The complex relationships between requirements for risk reduction, mitigation, adaptation and political regulatory systems open for tradeoffs and conflicting values, as well as for discourses on how to govern climate risks. Against this theoretical backdrop, the paper explores how the Norwegian government deals with climate change and climate adaptation by relying on expert risk knowledge in the form of facts, correlations and causal models. The analysis reveals how risk analysis and risk governance models institutionalize bodies of knowledge, which gain the status and currency of expressing the ‘truth’, and how risk definitions, such as systemic risks, define and dominate discourses, in addition to shaping the understanding of climate change and climate change adaptation.

Beck U. 1992. Risk Society: Towards a New Modernity. Sage Publications: London.
Corry O. 2012. Securitisation and ‘Riskification’. Millennium: Journal of International Studies 40(2): 235-258.
Dupont C. 2019. The EU’s collective securitisation of climate change. West European Politics 42(2): 369-390.
Friis K. and Reichborn-Kjennerud E. 2016. From Cyber Threats to Cyber Risks. In K. Friis and E. Reichborn-Kjennerud (eds), Conflict in Cyber Space: Theoretical, Strategic and Legal Perspectives. Routledge: Abingdon, 27-44.
Li H.-M. et al. 2021. Understanding systemic risk induced by climate change. Advances in Climate Change 12: 384-394.
Renn O. et al. 2020. Systemic Risks from Different Perspectives. Risk Analysis, DOI: 10.1111/risa.13657.
Warner J. and Boas I. 2019. Securitization of climate change: How invoking global dangers for instrumental ends can backfire. Politics and Space 37(8): 1471-1488.

15:40-17:00 Session 18K: Food Safety
Chair:
Christine O'Connor (TU Dublin, Ireland)
Location: CQ-010
15:40
Amany Aly (TU Dublin, Ireland)
Julie Dunne (TU Dublin, Ireland)
Acrylamide Awareness and Related Domestic Food Practices among the Residents of the Republic of Ireland
PRESENTER: Amany Aly

ABSTRACT. Limited knowledge about acrylamide risks, combined with a lack of good food preparation practices, is leading to daily exposure to high levels of acrylamide contamination from home-made meals across a number of different nations. The aim of this study was to explore, in an Irish context, the extent to which specific domestic food practices and food preferences can indicate how far consumers may be exposed to acrylamide risks, while also gauging the level of acrylamide knowledge among adult inhabitants of Ireland. A questionnaire with mixed open-ended and closed-ended questions was used (N=555, March to June 2019). 39% of respondents had heard of a harmful component formed during heat treatment of carbohydrate-rich food, while 14.8% recognized the term ‘acrylamide’. Awareness of the carcinogenic effects of acrylamide reached 94.5% among those who had heard about a harmful compound. Awareness that a high cooking temperature is the main contributor to acrylamide generation was 85.4%. Some domestic practices are concerning, such as the level and duration of soaking fresh potatoes before cooking: while 40% were soaking potatoes, only 17% were soaking them for 30 minutes or more. Also of concern is the high percentage of people who desire a medium-golden to brown appearance on the surface of roasted potatoes, chips and toasted bread. Overall, these results substantiate the need for educational initiatives tailored towards developing knowledge about good cooking practices and changing food habits to minimize exposure to acrylamide from home-prepared meals and food preferences.

16:00
Loriana Ricciardi (Dipartimento innovazioni tecnologiche e sicurezza degli impianti, INAIL, Italy)
Luciano Di Donato (Dipartimento innovazioni tecnologiche e sicurezza degli impianti, INAIL, Italy)
Giulia Marroni (Dipartimento di Ingegneria Civile e Industriale, Università di Pisa, Italy)
Gabriele Landucci (Dipartimento di Ingegneria Civile e Industriale, Università di Pisa, Italy)
Hazard assessment of confined space operations in the framework of process facilities: development of real time hazard-based indexes
PRESENTER: Giulia Marroni

ABSTRACT. Confined spaces represent a critical safety issue in the process industry, especially when dealing with maintenance and inspection of process equipment. Accidents that occurred during confined space operations have led to several injuries and fatalities in the past, and they exhibit an increasing trend. Standard job safety analyses do not specifically address the complex risks of working within confined spaces, often providing only a qualitative evaluation without linking it to the dynamic evolution of the system under analysis. The present work introduces a quantitative methodology able to capture specific site data to drive the identification and assessment of confined space hazards. The method introduces specific considerations on process and environmental predictions in order to quantitatively evaluate the hazard associated with a given operation. The food industry, and in particular edible oil refining and wine production, is taken as an example. In the edible oil refining process, extracted edible oils may feature a high residual solvent content, typically hexane, introducing the potential for a polluted confined space when carrying out maintenance operations on storage and process equipment. In the wine industry, accessing fermentation units to carry out sludge removal may expose operators to toxic and/or flammable atmospheres. A specific thermodynamic analysis, based on the evaluation of the top-space vapour composition given the type of residual sludge/liquid, is first carried out to derive preliminary indications on the hazards associated with the confined space, based on information derived from standard quality procedures (i.e., crude oil inlet composition monitoring, analysis of sludge composition, etc.) and from the environment (i.e., ambient temperature). In order to consider the effect of hazardous gas stratification, a three-dimensional CFD (computational fluid dynamics) simulation is then developed to predict concentration profiles and derive simplified correlations for real-time estimation of toxic and flammable concentrations inside the equipment, hence supporting the definition of specific hazard-based indexes. Based on the outcomes of the method, besides determining a hazard ranking of the considered operation, specific needs for personnel (training, certifications, individual protection systems, etc.) and for the industrial facility (sensor placement, preliminary settings, etc.) are derived. Finally, in order to apply the results obtained, a case study based on the analysis of an actual refinery plant is presented and discussed.
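As a rough illustration of how a preliminary thermodynamic screening could feed such hazard-based indexes, the sketch below estimates the equilibrium hexane fraction in the vapour space and compares it with flammability and toxicity limits. The Antoine coefficients, limit values and liquid composition are assumed illustrative figures; this is not the authors' actual methodology, which also includes CFD-based corrections for gas stratification.

```python
# Illustrative sketch (not the paper's model): first-pass hazard indexes for a
# confined space with residual n-hexane, assuming ideal vapour-liquid
# equilibrium (Raoult's law) and a well-mixed vapour space.

def hexane_vapour_pressure_mmHg(temp_c: float) -> float:
    """Antoine equation for n-hexane (illustrative coefficients, ~ -25..92 degC)."""
    A, B, C = 6.87601, 1171.17, 224.41
    return 10 ** (A - B / (C + temp_c))

def vapour_space_fraction(temp_c: float, liquid_mole_fraction: float,
                          total_pressure_mmHg: float = 760.0) -> float:
    """Equilibrium hexane mole fraction in the vapour space (Raoult's law)."""
    return liquid_mole_fraction * hexane_vapour_pressure_mmHg(temp_c) / total_pressure_mmHg

def hazard_indexes(temp_c: float, liquid_mole_fraction: float):
    """Return (flammability, toxicity) indexes; values above 1 exceed the limit."""
    y = vapour_space_fraction(temp_c, liquid_mole_fraction)
    LFL = 0.011          # lower flammable limit of hexane, ~1.1 vol% (assumed value)
    IDLH_PPM = 1100.0    # assumed toxicity reference concentration in ppm
    return y / LFL, (y * 1e6) / IDLH_PPM

if __name__ == "__main__":
    fi, ti = hazard_indexes(temp_c=25.0, liquid_mole_fraction=0.05)
    print(f"Flammability index: {fi:.2f}, Toxicity index: {ti:.1f}")
```

Even at a low residual hexane fraction in the sludge, the toxicity index in this simplified picture greatly exceeds 1, which is consistent with the paper's motivation for real-time, site-specific hazard screening before entry.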

16:20
Maria Alejandra Restrepo Mejia (Aria srl - Analisi dei Rischi Industriali e Ambientali, Italy)
Gianfranco Camuncoli (Aria srl - Analisi dei Rischi Industriali e Ambientali, Italy)
Salvina Murè (Aria srl - Analisi dei Rischi Industriali e Ambientali, Italy)
Eleonora Pilone (Aria srl - Analisi dei Rischi Industriali e Ambientali, Italy)
Shuo Yang (Polytechnic of Turin - DISAT Department, Italy)
Micaela Demichela (Polytechnic of Turin - DISAT Department, Italy)
DEVELOPMENT OF AN EVALUATION AND DECISION SUPPORT METHOD FOR FOOD SAFETY MANAGEMENT ALONG THE SUPPLY CHAIN

ABSTRACT. Food industries need to establish very high quality and safety standards in response to consumer expectations and in order to address possible critical health consequences. Nowadays, there is a growing demand to extend food safety control from the single process to the entire supply chain, with the aim of enabling prompt interventions to improve health safety. In order to meet this objective, this paper proposes an evaluation and decision support method for risk management: its primary purpose is the improvement of the safety conditions of the final product, to ensure consumers a more complete safeguard of their health and needs.

The proposed methodology covers the entire supply chain (from cradle to gate) through two phases that involve: 1) semi-quantitative risk analysis techniques and 2) efficiency indicators (KPIs) related to safety, sustainability and effectiveness of the processes. The methodology is currently undergoing validation through its application to a hazelnut-based products industry. The identification of potential hazards was developed along the entire supply chain, pointing out the critical factors that favor contamination and, in this way, defining the KPIs. This process returned the critical points at which prevention and intervention measures will be required in order to manage and control contamination risks.
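The sketch below gives a minimal, purely illustrative idea of how a semi-quantitative phase of this kind could score supply-chain stages and flag critical control points; the stages, scoring scales and threshold are assumptions for the example and do not come from the paper.

```python
# Illustrative sketch (assumed structure, not the authors' tool): an RPN-style
# semi-quantitative score per supply-chain stage, used to flag critical points.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    likelihood: int     # 1 (rare) .. 5 (frequent)
    severity: int       # 1 (negligible) .. 5 (critical)
    detectability: int  # 1 (easily detected) .. 5 (hard to detect)

def risk_score(stage: Stage) -> int:
    """Semi-quantitative score; higher means more critical."""
    return stage.likelihood * stage.severity * stage.detectability

def critical_points(stages, threshold: int = 27):
    """Stages whose score exceeds the threshold become candidate control points."""
    return [s.name for s in stages if risk_score(s) > threshold]

if __name__ == "__main__":
    chain = [
        Stage("raw hazelnut reception", 3, 4, 3),  # e.g. mycotoxin contamination
        Stage("roasting", 2, 3, 2),
        Stage("storage", 3, 3, 4),
        Stage("packaging", 2, 2, 2),
    ]
    for s in chain:
        print(f"{s.name}: score {risk_score(s)}")
    print("Critical points:", critical_points(chain))
```

In the proposed method, the KPIs and the planned sensor measurements would then monitor precisely the stages flagged in this way.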

The methodology has proved valid for identifying potential hazards and critical points and for recognizing the factors that constitute a threat along the supply chain. The next step of the process will consist of installing sensors at the critical points identified; these measurements will make further improvements of the methodology possible and guarantee greater safety for companies and consumers.

16:40
Flavio Luis Almeida (Catholic University of Petropolis, Brazil)
Jose Cristiano Pereira (Catholic University of Petropolis, Brazil)
RISK ASSESSMENT IN PURIFIED WATER PROJECTS USING ANALYTIC HIERARCHY PROCESS AND GOAL TREE SUCCESS TREE – A TOOL FOR DECISION MAKING

ABSTRACT. The study presents a risk analysis to assist in choosing the design of purified water installations based on reverse osmosis in health and biological research establishments. The process of producing and distributing purified water is sensitive in terms of contamination by microorganisms and electrical conductivity when considering the flow regime and the risk of standing water in the pipeline. Several risks are present in the design of installations for the production and distribution of purified water, and decisions on the correct choice of the purified water production and distribution system should be taken by those responsible in response to these risks. As a methodological approach, research in standards and regulations was conducted, risk analysis was performed on the treated water purification process and distribution projects, AHP was used to prioritize the risks, and GTST was used to define the actions as responses to risks. Identifying the technologies available from different suppliers is considered an important factor in the process. The results show the enormous source of uncertainty in the normal processes of production and distribution of purified water, with different regulations, which can compromise the integrity of the project, its sustainability, and the assertiveness of the delivery of the work. A risk analysis conducted as a global enterprise strategy, complementing the risk analyses of isolated disciplines, can mitigate tangible risks and identify intangible risks. It increases the enterprise's certainty, legitimacy, and adequate application of public resources. The research points out that the production and distribution of purified water for laboratories is susceptible to contamination, which can make the enterprise unfeasible. The study contributes in two ways. First, it provides elements for identifying a safe production and distribution technology. Second, it contributes to developing a risk analysis based on decision trees that can be used in future projects for other purposes.
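As an illustration of the AHP step, the sketch below derives priority weights for a set of hypothetical risks from a pairwise-comparison matrix using the geometric-mean approximation of the principal eigenvector and checks Saaty's consistency ratio; the criteria and judgements are invented for the example and are not the study's data.

```python
# Illustrative AHP sketch (hypothetical judgements, not the study's data):
# prioritising risks from a reciprocal pairwise-comparison matrix.

import numpy as np

def ahp_priorities(pairwise: np.ndarray) -> np.ndarray:
    """Priority weights via the row geometric-mean approximation."""
    geo_mean = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return geo_mean / geo_mean.sum()

def consistency_ratio(pairwise: np.ndarray) -> float:
    """Saaty's consistency ratio; values below ~0.10 are usually acceptable."""
    n = pairwise.shape[0]
    w = ahp_priorities(pairwise)
    lambda_max = float(np.mean((pairwise @ w) / w))
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # random-index values (Saaty)
    return ci / ri

if __name__ == "__main__":
    # Hypothetical risks: microbial contamination, standing water, conductivity drift
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    print("Priorities:", np.round(ahp_priorities(A), 3))
    print("Consistency ratio:", round(consistency_ratio(A), 3))
```

The resulting weights would then feed the GTST stage, where the highest-ranked risks are mapped to response actions.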

17:00-18:00 Session 19: Plenary session: Cybersecurity challenges for ML Prof Ernesto Damiani UNIMI, & Cyber Security Challenges in the age of Metaverse Puneet Kukreja Cybersecurity Practice Leader, EY Ireland

Cyber Security Challenges in the age of Metaverse Puneet Kukreja Cybersecurity Practice Leader, EY Ireland

&

Cybersecurity challenges for Machine Learning Prof Ernesto Damiani University of Milano, Italy

Chair:
Edoardo Patelli (University of Strathclyde, UK)
Location: CQ-006