ESREL 2022: 32ND EUROPEAN SAFETY AND RELIABILITY CONFERENCE (ESREL) - DUBLIN 2022
PROGRAM FOR TUESDAY, AUGUST 30TH

08:30-09:30 Session 8: Plenary Session: Advances of Computational Risk Assessment in Industry. Curtis Smith, INL; Luca Decarli, ENI

Plenary session: Advances of Computational Risk Assessment in Industry.

Curtis Smith, Director of the Idaho National Laboratory Nuclear Safety and Regulatory Research Division

&

Luca Decarli, ENI; member of the International Oil and Gas Producers Process Safety Sub-Committee

Chairs:
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Terje Aven (Universitetet i Stavanger (UiS), Norway)
Location: CQ-006
09:30-10:50 Session 9A: Risk and territorial planning
Chair:
Tina Comes (TU Delft, Netherlands)
Location: CQ-006
09:30
Anna Ståhle Bofjäll (Lund University, Sweden)
Understanding the problems and challenges of managing multiple kinds of risks in urban planning: preliminary results from an interview study

ABSTRACT. In many urban areas the most suitable land is already in use, and the available land that is left is often exposed to several kinds of hazards, for example flooding, hazardous industries, transportation routes for dangerous goods, landslides and train derailments. In Sweden, detailed development plans are used by the municipalities to regulate land use. The risk management process for detailed development plans is not limited to a single decision but consists of a series of decisions, made by different actors with different goals, knowledge, power, limitations and incentives. This diversity makes it likely that the actors will have different perspectives on the risk management process and perceive different issues as problematic. In order to investigate what problems and challenges different actors perceive regarding the risk management of detailed development plans affected by multiple kinds of risks, an interview study has been carried out. The study results indicate that it is often unclear who actually makes the decisions regarding what risk is acceptable, and what the decisions are based upon.

09:50
David Javier Castro Rodriguez (Politecnico di Torino, Italy)
Simone Beltramino (Politecnico di Torino, Italy)
Mattia Scalas (Politecnico di Torino, Italy)
Eleonora Pilone (Politecnico di Torino, Italy)
Micaela Demichela (Politecnico di Torino, Italy)
Territorial representation of the vulnerability associated with the Seveso installations in a North Italian municipality

ABSTRACT. The process industry is recognized as a source of hazards, both instantaneous and distributed over time and space. Chemical plants are no longer regarded as independent single units; on the contrary, they are component parts of a much larger system, generated by the flows streaming from one plant to another, forming a macro-system rooted in the territory. Within this complex context, major-risk installations, or "Seveso" plants, are of special relevance. Since the implementation of the European "Seveso" Directive is mandatory in Italy for municipalities hosting a Seveso plant in their territory, urban and land-use planners require instruments that include criteria for the areas around these plants. From this, the goal was to represent a territorial vulnerability indicator associated with the Seveso installations and to impose binding areas around them, identifying the areas of exclusion and observation established in the legislation. A North Italian municipality was used as a case-study scenario, as part of the research activities of the Responsible Risk Resilience Centre of Turin Polytechnic (R3C) on measuring urban resilience. The "Guidelines for the assessment of industrial risk in land use planning" of the Piedmont Region were used as the legal framework. Industrial buildings with major accident risks were identified, scores were assigned to the areas of exclusion and observation, space-dependent analyses were performed using a geographical information system (GIS), and a thematic map was generated. The results increase awareness of territorial vulnerability to major accidents and help support resilience-based decision-making in the design of technical measures. Further research is required to include this indicator in a framework for multi-risk assessment of territorial vulnerability.

10:10
Katarzyna Pietrucha-Urbanik (Rzeszow University of Technology, Rzeszow, Poland)
Barbara Tchorzewska-Cieslak (Rzeszow University of Technology, Rzeszow, Poland)
Sewer system operational feedback analysis in view of developing a risk-informed management culture

ABSTRACT. The subject of this article is the analysis of operational feedback from a sewer system, to contribute to the development of a risk-informed management engineering culture. The analysis is based on operating data obtained from the sewage system company. The risk of failure of the sewer network is assessed with the use of a risk matrix, together with the risk of losing the operability of the sewer system. The analysis was performed for the following factors: sewer pipe function, sewer pipe material and diameter, type of sewer supply network, failure, and season. The presented analysis of failures in terms of the cost of planned maintenance is an important issue which will allow for failure prediction and an initial estimation of the costs of failure removal, which in turn will allow for long-term budget planning in sewer companies.

10:30
Nazli Yonca Aydin (TU Delft, Netherlands)
Supriya Krishnan (TU Delft, Netherlands)
Hongxuan Yu (TU Delft, Netherlands)
Tina Comes (TU Delft, Netherlands)
An integrated framework for incorporating climate risk into urban land-use change modeling

ABSTRACT. Cities are complex socio-technical systems (STSs) under tremendous stress due to climate change. To incorporate resilience into urban plans and move towards evidence-based long-term decision-making, we must unravel complex land-use dynamics and the effect of climate uncertainties on cities. Currently, land-use dynamics are explored through Cellular Automata models to investigate the impacts of urban planning scenarios. What is missing to support resilience decisions, however, is a systematic analysis of long-term climate uncertainties on land-use change. This study addresses this gap by analysing the effects of flood uncertainties on land-use patterns. While urban planning decisions for climate uncertainty are conventionally based on a few scenarios, we use exploratory modeling to sample and combine uncertain climate variables into scenarios and understand their implications for land use via computational experiments. Specifically, we integrate flood probability maps into land-use maps to assess land suitability. Agglomerative clustering allows us to analyze the resulting land-use maps based on their similarity. Finally, we select representative maps from each cluster and compare them with the baseline map. We apply our integrated modeling approach to the Metropolitan Region of Amsterdam (MRA). Our results show spatially explicit alternatives for high-density residential development that are climate-resilient. The proposed framework can be applied to other cities to investigate the long-term impacts of climate uncertainties and adopt resilience-informed decision-making.
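
A minimal sketch of the map-clustering step described above, under our own assumptions (synthetic categorical maps, Hamming dissimilarity, four clusters; scikit-learn >= 1.2 for the `metric` argument); the representative map of each cluster is taken as the medoid:

```python
# Sketch: cluster an ensemble of categorical land-use maps by similarity
# and pick a representative (medoid) map per cluster. Data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
n_maps, h, w = 200, 50, 50                          # hypothetical 50x50 maps
maps = rng.integers(0, 5, size=(n_maps, h, w))      # 5 land-use classes

X = maps.reshape(n_maps, -1)
D = squareform(pdist(X, metric="hamming"))          # fraction of differing cells

labels = AgglomerativeClustering(
    n_clusters=4, metric="precomputed", linkage="average").fit_predict(D)

for k in range(4):                                  # medoid of each cluster
    idx = np.flatnonzero(labels == k)
    medoid = idx[D[np.ix_(idx, idx)].mean(axis=1).argmin()]
    print(f"cluster {k}: {len(idx)} maps, representative map #{medoid}")
```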

09:30-10:50 Session 9B: S.08: Resilience-informed decision-making to improve complex infrastructure systems I
Chair:
Bryan Adey (ETH Zurich, Switzerland)
Location: CQ-008
09:30
Deming Zhu (The Hong Kong Polytechnic University, Hong Kong)
Ruiwei Feng (The Hong Kong Polytechnic University, Hong Kong)
You Dong (The Hong Kong Polytechnic University, Hong Kong)
Optimal intensity measure for the risk assessment of bridges under successive earthquake-tsunami events
PRESENTER: Deming Zhu

ABSTRACT. An offshore earthquake could produce a submarine landslide and in turn trigger a tsunami, generating a chain of hazards for coastal bridges. An accurate performance assessment is therefore paramount for bridges subjected to successive earthquake-tsunami loadings. In recent years, the performance-based earthquake engineering (PBEE) methodology proposed by the Pacific Earthquake Engineering Research Center (PEER) has become an exceedingly robust approach and is widely used in the seismic risk estimation of structures. In this method, a critical step is to select an appropriate intensity measure (IM) to better link the seismic hazard with the structural demand, which determines the veracity of the evaluation results. Similar to the PEER PBEE methodology, a performance-based tsunami engineering framework for the risk assessment of structures subjected to sequential earthquake and tsunami events has recently been put forward. Based on this framework, a number of tsunami intensity measures (TIMs) have been adopted in previous research, wherein only a few studies focus on identifying the optimal TIM for deriving the fragility functions of buildings. However, for coastal bridges, which are crucial components of the offshore lifeline system, to the best of the authors' knowledge the optimal IM for risk assessment under cascading shaking-tsunami hazards has not been studied yet. To this end, a typical continuous concrete highway bridge is taken as the example, and its numerical model is built on the OpenSees analysis platform. Subsequently, twenty-two pairs of horizontal ground motions are selected from the FEMA P-695 far-field ground motion set and scaled to consider different shaking intensity scenarios. Meanwhile, the hydrodynamic tsunami loads are generated from an advanced three-dimensional (3D) Computational Fluid Dynamics (CFD) model of the bridge. On this basis, the sequential earthquake-tsunami loadings are formed by concatenating the ground motion and tsunami time series, and a total of 330 (22 ground motion-tsunami series × 15 earthquake intensities) nonlinear time history analyses are conducted to obtain the bridge responses. A set of commonly used seismic and tsunami intensity measures are taken as candidates, and a complete multi-criteria method is presented for the IM selection, from which the optimal IM for the probabilistic performance assessment of coastal bridges subjected to successive earthquake-tsunami loadings is identified. This paper provides a preliminary reference for the application of suitable IMs in the earthquake-tsunami multi-hazard risk assessment of coastal highway bridges.
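
For illustration, one common criterion in such multi-criteria IM selection is "efficiency": the record-to-record dispersion of a log-linear demand model ln(EDP) = a + b ln(IM), with lower dispersion indicating a better IM. The sketch below uses synthetic placeholder data, not the paper's 330 analyses:

```python
# Sketch: rank candidate IMs by the dispersion of a cloud-analysis fit.
import numpy as np

def im_efficiency(im, edp):
    """Return (b, beta): slope and residual dispersion of ln(EDP)=a+b*ln(IM)."""
    x, y = np.log(im), np.log(edp)
    b, a = np.polyfit(x, y, 1)
    beta = np.std(y - (a + b * x), ddof=2)   # record-to-record dispersion
    return b, beta

rng = np.random.default_rng(1)
n = 330                                      # e.g. 22 records x 15 intensities
pga = rng.lognormal(-1.0, 0.6, n)            # candidate IM 1 (seismic)
depth = rng.lognormal(0.5, 0.5, n)           # candidate IM 2 (tsunami depth)
drift = 0.01 * pga**0.9 * rng.lognormal(0, 0.3, n)   # synthetic demand

for name, im in [("PGA", pga), ("inundation depth", depth)]:
    b, beta = im_efficiency(im, drift)
    print(f"{name}: slope b = {b:.2f}, dispersion beta = {beta:.2f}")
```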

09:50
Rui Teixeira (University College Dublin, Ireland)
Beatriz Martinez-Pastor (University College Dublin, Ireland)
Maria Nogal (Delft Technical University, Netherlands)
Alan O'Connor (Trinity College Dublin, Ireland)
A multi-fidelity framework for operational adaptation of engineering systems
PRESENTER: Rui Teixeira

ABSTRACT. Adaptation is expected to play a major role in the functioning of future societies. Climate change and other emerging threats characterised by large uncertainties have shown that engineering systems, to be resilient, need to be capable of adapting their operation and recovering their functionality when subject to perturbations. Significant research has recently been conducted on this topic, with optimal decision-making schemes being proposed for the recovery and adaptation of different systems under disruptive scenarios. One aspect that has received limited attention in adaptation and recovery is that of response times when decision-making schemes rely on so-called high-fidelity modelling techniques. Modelling of engineering systems is expected to grow progressively more complex as it becomes more accurate, and with such complexity come additional modelling costs. The promise of optimal or efficient adaptation and recovery may be hindered by this: even decision-making that relies on relatively fast modelling, i.e., modelling times of the order of seconds, may become costly if multiple decision variables and constraints are considered. The present work proposes a framework to address system adaptation decision-making at different fidelities. It assumes that when a perturbation impacts a system and it loses its stationary condition, analysis time should be fast and zero response times enabled. On the other hand, when stationary conditions are in place, there is no such requirement, as the system's operation is expected to be easier to predict. To enable zero-optimal response times, system analysis time is divided into slow-time and fast-time. A multi-fidelity framework is then introduced to support decision-making, activating different fidelities as a function of response-time requirements. Metamodels are assumed to always represent a lower-fidelity version of a more complex model, and have a natural synergy with the idea of zero response times. Application of the proposed framework is discussed for traffic network system analysis. Results show that virtually zero-optimal adaptation responses can be attained with high accuracy, indicating that the usage of different fidelities is expected to play a key role in enabling fully adaptive systems, one of the major goals of future systems.
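
A deliberately simple sketch of the fidelity-switching idea (our reading of the framework, not the authors' code): a cheap metamodel answers in fast-time right after a perturbation, while the expensive model is reserved for slow-time, stationary conditions:

```python
# Sketch: route queries to a low- or high-fidelity model by system state.
import time

def high_fidelity(state):          # placeholder for an expensive model
    time.sleep(1.0)                # e.g. a full traffic assignment run
    return sum(state) * 1.05

def metamodel(state):              # pre-trained surrogate: near-zero cost
    return sum(state)              # crude stand-in for a fitted emulator

def respond(state, perturbed):
    if perturbed:                  # fast-time: zero-response requirement
        return metamodel(state)
    return high_fidelity(state)    # slow-time: accuracy over speed

print(respond([1.0, 2.0], perturbed=True))   # fast, approximate
print(respond([1.0, 2.0], perturbed=False))  # slow, accurate
```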

10:10
Adel Mottahedi (Faculty of Mining, Petroleum and Geophysics Engineering, Shahrood University of Technology, Shahrood, Iran)
Ali Nouri Qarahasanlou (Faculty of Engineering, Imam Khomeini International University, Qazvin, Iran)
Abbas Barabadi (Department of Technology and Safety, UiT The Arctic University of Norway, Tromsø, Norway)
Organizational Resilience Estimation: Application of Expert Judgment
PRESENTER: Adel Mottahedi

ABSTRACT. Recently, the study of resilience has become more significant because people are more aware of the consequences of natural and human-made disasters. The welfare of our communities relies heavily on continuous access to the vital services supplied by critical infrastructure systems (CIS). However, the continuous operation of these CIS can be adversely affected by different disruptive events. To survive these disruptions, being resilient is very important for CIS. Generally, the resilience of a CIS can be classified into two parts: soft resilience and hard resilience. Hard resilience represents the behavior of the technical part of the CIS, and soft resilience represents the people and the organization running the CIS before, during, and after the disruption. In this paper, the focus is on the soft part of resilience, that is, organizational resilience. Resilient organizations advance despite experiencing situations that are unexpected, uncertain, often adverse, and usually unstable. From an organizational perspective, an organization's resilience is driven by four generic indicators: sense of ownership, flexibility, creativity, and initiative. Many factors influence these indicators, with different importance levels (weights). A practical index-based methodology is introduced to estimate the organizational resilience index (ORI) by adopting these generic indicators and influencing factors. In the developed methodology, expert judgment is used to quantify the effect of the influencing factors. Furthermore, fuzzy set theory is used to capture the uncertainty and bias of expert judgment. Finally, the application of the proposed methodology is illustrated using a real case study.
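
As a toy illustration of the aggregation step (weights, ratings and the triangular-fuzzy representation are all invented here), the four generic indicators can be combined into a fuzzy ORI and defuzzified by the centroid:

```python
# Sketch: weighted aggregation of triangular fuzzy indicator ratings.
import numpy as np

indicators = ["sense of ownership", "flexibility", "creativity", "initiative"]
weights = np.array([0.3, 0.3, 0.2, 0.2])               # hypothetical weights

# One triangular fuzzy rating (low, mode, high) per indicator, on [0, 1]
ratings = np.array([[0.5, 0.6, 0.8],
                    [0.4, 0.5, 0.7],
                    [0.6, 0.7, 0.9],
                    [0.3, 0.5, 0.6]])

fuzzy_ori = weights @ ratings           # weighted average: still triangular
ori = fuzzy_ori.mean()                  # centroid of a triangle = (a+b+c)/3
print(f"fuzzy ORI = {np.round(fuzzy_ori, 3)}, crisp ORI = {ori:.3f}")
```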

10:30
Hossein Nasrazadani (ETH Zurich, Switzerland)
Bryan Adey (ETH Zurich, Switzerland)
Saviz Moghtadernejad (ETH Zurich, Switzerland)
Alice Alipour (Iowa State University, United States)
A simulation-based methodology to assess resilience enhancing interventions for transport systems: A retention basin example

ABSTRACT. This paper proposes a simulation-based methodology to evaluate the resilience of infrastructure systems considering multiple intervention scenarios. The proposed methodology features probabilistic models that are used to simulate: 1) the spatiotemporal formation of hazard events, e.g., heavy rainfall causing flooding; 2) the physical and functional impacts on individual infrastructure components, followed by their performance as a system; and 3) the implementation of response and restoration measures. It also features models that characterize interventions and simulate their effects on the models mentioned above. The output of the simulations is a list of metrics, e.g., the reduction in direct and indirect consequences, that can be used to evaluate the effects of interventions. The proposed methodology takes into account the uncertainties related to hazard occurrence and their impact on infrastructure systems in the evaluation of interventions, which is a major advancement over existing studies that use static hazard maps. The methodology is demonstrated by using it to evaluate the benefits of three candidate stormwater retention basins for enhancing the resilience of a road network in Switzerland subject to heavy rainfall, flooding, and landslides. The example provides insight into the data required to conduct such a comprehensive analysis at the presented level of detail. The proposed methodology serves as a decision support tool to facilitate the assessment and, hence, the planning of resilience enhancing interventions.

09:30-10:50 Session 9C: S.02: Artificial intelligence and machine learning for reliability analysis and operational reliability monitoring of large-scale systems I
Chair:
Ji-Eun Byun (Technical University of Munich (TUM), Germany)
Location: CQ-007
09:30
Felix Waldhauser (European Organization for Nuclear Research (CERN), Germany)
Hamza Boukabache (European Organization for Nuclear Research (CERN), Switzerland)
Martin Dazer (Institute of Machine Components (IMA) - University of Stuttgart, Germany)
Daniel Perrin (European Organization for Nuclear Research (CERN), Switzerland)
Wavelet-based Noise Extraction for Anomaly Detection Applied to Safety-critical Electronics at CERN
PRESENTER: Felix Waldhauser

ABSTRACT. In order for systems to fulfill safety-critical functions, it is crucial to keep their failure rates at an extremely low level through appropriate maintenance measures. If operational data is available, data-driven methods can be used to monitor the system's condition and to detect malfunctions. This requires the identification of characteristics related to the system's degradation and the detection of failure precursors. Planning maintenance actions based on failure predictions helps to reduce unexpected failures and to increase system availability and reliability. In this paper, an anomaly detection process is presented enabling remote, data-driven condition monitoring based on detected failure precursors and system malfunctions.

Electronic radiation protection systems are safety-critical and essential for protecting people and the environment from unjustified exposure to radiation. At CERN, the in-house developed and manufactured CROME system is responsible for reliably monitoring the ambient dose rate. Since a variety of measurement variables is continuously logged to databases, data-driven maintenance strategies can be employed for CROME devices. This is especially important since many devices are located in restricted, high-radiation areas where engineers cannot access them, and remote condition monitoring is required.

The proposed process combines noise extraction using wavelet transforms and unsupervised anomaly detection algorithms to increase the detectability of anomalous system behavior of safety-critical electronics. Detected anomalies can then be used for condition monitoring or system improvement. In the context of data processing, useful features for distinguishing between anomalous and normal observations are developed to improve the performance and reliability of anomaly detection algorithms. This includes the quantification of the noise extracted from raw signals using the configured wavelet transforms. Due to the lack of guidelines in the literature on how to configure the wavelet transform, a signal classification process is developed to compare various configurations by analyzing the performance of a gradient boosted decision tree algorithm. This algorithm is trained to detect synthetically modified samples based on statistical measures calculated for extracted noise signals. The proposed signal classification process allows the selection of the most appropriate configuration of the wavelet transform for a given noise extraction use case, relying on real signal samples while offering the possibility to customize the signal modification.

An Isolation Forest algorithm and a long short-term memory autoencoder model are used as anomaly detection algorithms to detect outliers. These are applied to two types of datasets: the raw measurement data and datasets with statistical features describing the extracted noise signals and other characteristics. Thereby, the discrete signal data is represented as samples of equal time duration. The modular anomaly detection process is embedded in a cloud-based data infrastructure which makes it possible to customize both the selection of input data and the definition of the required processing steps according to use-case-specific requirements. It has been found that the model-based Isolation Forest is suitable for detecting signal samples with unusual characteristics based on statistical features. The autoencoder model performs well at detecting time-dependent anomalies such as unexpected deviations or unusual correlations between measurement variables, but also identifies spikes with extreme values.

The anomaly detection process presented in this paper proposes a novel and integrative combination of wavelet transforms and unsupervised algorithms to improve the detectability of a broad variety of anomalies. It makes it possible to include noise-related features in the analysis and is particularly appropriate when neither typical failure precursors nor the type of anomalies of interest are known. In this case, the process can be used to detect rare and deviant data events, and thanks to its adaptability it is applicable to a variety of use cases. However, detected samples still need to be analyzed manually to determine whether they represent actual anomalies or merely infrequent behavior.

Both data-driven maintenance strategies and wavelet transforms for noise extraction are often applied in a variety of research fields, although separately. Admittedly, anomaly detection methods are commonly used for condition monitoring but rarely for safety-critical electronics especially in combination with wavelet-based noise detection. Moreover, features used in this paper describing statistical signal characteristics are widely used for signal characterization and specification, e.g., in the field of brain wave signals. Hence, the presented combination of these methods approaches the identification of anomalies related to noise in signals and proposes a routine to configure the wavelet transform optimally for a given noise extraction use case.
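
A hedged sketch of the core pipeline described above, assuming PyWavelets and scikit-learn; the wavelet family, window length and feature set are placeholders rather than CROME settings:

```python
# Sketch: wavelet-based noise extraction, windowed statistical features,
# and Isolation Forest outlier scoring on a synthetic signal.
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 60, 6000)) + 0.05 * rng.standard_normal(6000)
signal[4000:4100] += 0.5 * rng.standard_normal(100)    # injected anomaly

# Noise extraction: discard the approximation, keep the detail coefficients
coeffs = pywt.wavedec(signal, "db4", level=4)
coeffs[0] = np.zeros_like(coeffs[0])
noise = pywt.waverec(coeffs, "db4")[: len(signal)]

# Statistical features on fixed-length windows of the extracted noise
win = 200
windows = noise[: len(noise) // win * win].reshape(-1, win)
X = np.column_stack([windows.std(axis=1),
                     kurtosis(windows, axis=1),
                     np.abs(windows).max(axis=1)])

scores = IsolationForest(random_state=0).fit(X).decision_function(X)
print("most anomalous window:", scores.argmin())       # expect window 20
```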

09:50
Yu Chen (University of Liverpool, UK)
Edoardo Patelli (University of Strathclyde, UK)
Ben Edwards (University of Liverpool, UK)
Michael Beer (Leibniz Univ. of Hannover, Germany)
Jaleena Sunny (University of Liverpool, UK)
Uncertainty quantification over spectral estimation of strong motion processes under missing data
PRESENTER: Yu Chen

ABSTRACT. The analysis of structures under random dynamic excitations, such as ground motions, requires realistic stochastic modeling of the excitations. The power spectral density function (PSDF) thus provides a useful representation of the ground motion processes. However, spectral estimation of the PSDF becomes challenging when only limited and partial recordings are available. In this paper we propose a framework to estimate the power spectrum of the underlying stochastic processes under missing data. Firstly, to exploit additional information besides the incomplete recording, simulated strong motions are generated by a stochastic finite-fault model, with its region-specific parameters (source, attenuation, and site parameters) modeled as probability distributions. Then a Bayesian neural network is constructed to probabilistically learn the temporal patterns from such uncertain time-series data. More specifically, epistemic uncertainties on the model parameters of the Bayesian neural network are learnt via variational inference. Thanks to the probabilistic merit of the Bayesian neural network, an ensemble of reconstructed realizations can be obtained, which leads to a probabilistic power spectrum, with each frequency component represented by a probability distribution. This framework is of great importance to research areas such as stochastic structural dynamics, where accurate PSDFs are needed to characterize engineering excitation processes but the available ground motion recordings are incomplete.
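
A sketch of the last step under our own assumptions (random placeholder realizations standing in for the Bayesian-neural-network reconstructions): estimate one PSD per realization with Welch's method and summarize each frequency bin by percentiles:

```python
# Sketch: turn an ensemble of reconstructed realizations into a
# probabilistic power spectrum (per-frequency percentile bands).
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, n_realizations, n_samples = 100.0, 200, 4096
ensemble = rng.standard_normal((n_realizations, n_samples))  # placeholder
                                                             # reconstructions
f, psds = welch(ensemble, fs=fs, nperseg=512, axis=-1)

lo, med, hi = np.percentile(psds, [5, 50, 95], axis=0)
print(f"{len(f)} frequency bins; e.g. at {f[10]:.2f} Hz: "
      f"median PSD {med[10]:.3e} (90% band [{lo[10]:.3e}, {hi[10]:.3e}])")
```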

10:10
Ji-Eun Byun (Technical University of Munich, Germany)
Daniel Straub (Technical University of Munich, Germany)
Development of physics-informed neural networks for network reliability analysis
PRESENTER: Ji-Eun Byun

ABSTRACT. Infrastructure networks, such as transportation networks and utility distribution networks, play a fundamental role in sustaining a society's functionality both in normal and disruptive situations; therefore, it is crucial to assess their reliability and resilience against hazardous events. Such assessments remain challenging, though, as real-world systems consist of a large number of components with complex interdependence. This large scale and system complexity incur a high computational cost in evaluating a system's state from component states, posing a challenge for reliability analysis, which requires many iterations of system analysis. There is a potential to significantly reduce computational cost by making rapid predictions using surrogate techniques such as artificial neural networks (ANNs) instead of exact system models. ANNs can provide accurate results when correctly trained. However, such training still remains challenging, especially with an increasing size of systems, as it requires a large amount of training data. To address these issues, we propose a physics-informed neural network (PINN) to improve ANN training by the use of physical relationships associated with system operation. Specifically, we focus on systems that are represented as a graph, where interactions between components take place only through neighboring links and nodes. By informing an ANN of the dependence between components as described by the graph topology, the training process can be expedited and made less sensitive to an increase in system size. The efficiency and applicability of the proposed method are demonstrated by analyzing a transportation network, where ANNs are employed to replace traffic assignment analysis.
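
As a minimal illustration of the graph-informed idea (ours, not the authors' PINN formulation): a neural layer whose weight matrix is masked by the graph adjacency, so each node's state is updated only from its topological neighbours:

```python
# Sketch: adjacency-masked layers restrict information flow to neighbours.
import torch
import torch.nn as nn

class GraphMaskedLayer(nn.Module):
    def __init__(self, adjacency: torch.Tensor):
        super().__init__()
        n = adjacency.shape[0]
        self.register_buffer("mask", adjacency + torch.eye(n))  # self-loops
        self.weight = nn.Parameter(torch.randn(n, n) * 0.1)

    def forward(self, x):                    # x: (batch, n_nodes)
        return torch.relu(x @ (self.weight * self.mask))

# 4-node line graph: 0-1-2-3
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
net = nn.Sequential(GraphMaskedLayer(A), GraphMaskedLayer(A), nn.Linear(4, 1))
x = torch.rand(8, 4)                         # batch of component states
print(net(x).shape)                          # torch.Size([8, 1])
```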

10:30
Chao Ren (INSA Rouen Normandie, France)
Younes Aoues (INSA Rouen Normandie, France)
Didier Lemosse (INSA Rouen Normandie, France)
Eduardo Souza De Cursi (INSA Rouen Normandie, France)
Structural reliability assessment of offshore wind turbine supports by combining adaptive kriging and artificial neural network
PRESENTER: Younes Aoues

ABSTRACT. Jacket structures are becoming the main support for larger wind turbines in deep waters, and reliability assessment of these structures, accounting for parameter uncertainties, becomes vital [1]. However, advanced modeling of the jacket structure to consider joint flexibility and stress concentration makes the reliability analysis of the jacket more complicated. Approximation reliability methods such as FORM/SORM are not suitable for highly nonlinear problems or problems with multiple most probable points. Crude Monte Carlo simulation is impracticable with computationally expensive numerical models, such as those required for an offshore wind turbine jacket. Even though variance reduction techniques such as Importance Sampling and Subset Simulation have been developed, the computational cost remains high and impractical for rare event problems. In order to obtain accurate and efficient reliability analyses of complex engineering problems, surrogate-assisted reliability analysis has become increasingly important in the last decade. The basic idea is to replace the performance function by constructing a surrogate model, such as a Response Surface, Artificial Neural Network, Support Vector Machine, Polynomial Chaos Expansion or Kriging. Active learning methods are widely used to construct adaptive surrogate models: starting from a few sample points, the surrogate is updated efficiently at each iteration, driven by an active learning function, until convergence [2]. In this paper, a jacket model is developed that considers joint flexibility by using an advanced substructuring technique. All the joints are modeled with shell elements and then reduced to super-elements, and the stress concentration of the joints in the jacket model is evaluated. An adaptive surrogate combining an Artificial Neural Network and Kriging is applied to evaluate the probability of failure of the offshore wind turbine jacket under the ultimate limit state, considering 15 random variables. Moreover, the advanced modelling of the braced joints of the jacket allows the local stress concentration to be considered. Finally, the probability of failure of the jacket is estimated, accounting for the local stress concentration, using the adaptive surrogate model combining Artificial Neural Network and Kriging. A comparison with the probability of failure estimated using only beam modeling of the jacket shows the interest of considering the stress concentration in the reliability assessment. Furthermore, the efficiency of the surrogate-based active learning approach for evaluating the reliability of complex simulation models is demonstrated.

References: [1] R.O. Ivanhoe and L. Wang and A. Kolios. Generic framework for reliability assessment of offshore wind turbine jacket support structures under stochastic and time dependent variables. Ocean Engineering, Vol. 216, pp. 107691, 2020. [2] B. Echard, N. Gayton, and M. Lemaire. AK-MCS: an active learning reliability method combining kriging and Monte Carlo simulation. Structural Safety, 33(2):145–154, 2011.
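
A compact AK-MCS-style active learning loop in the spirit of [2], with a cheap analytic limit state standing in for the jacket finite-element model; a plain GP surrogate replaces the paper's ANN-Kriging combination for brevity:

```python
# Sketch: adaptive surrogate reliability analysis with the U learning
# function (g < 0 means failure); the limit state here is a toy function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                                   # stand-in for the FE model
    return 3.0 - x[:, 0] - 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_mc = rng.standard_normal((20_000, 2))     # Monte Carlo population
train = rng.standard_normal((12, 2))        # small initial design
y = g(train)

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(train, y)
    mu, sigma = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)   # U learning function
    if U.min() >= 2.0:                          # stopping criterion of [2]
        break
    best = U.argmin()                           # enrich at the most doubtful point
    train = np.vstack([train, X_mc[best]])
    y = np.append(y, g(X_mc[best:best + 1]))

print("estimated Pf =", (mu < 0).mean())
```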

09:30-10:50 Session 9D: S.12: Dynamic risk assessment and emergency techniques for energy system II
Chairs:
Renyou Zhang (Beijing Institute of Petrochemical Technology, China)
Mimi Zhang (Trinity College Dublin, The University of Dublin, Ireland)
Location: CQ-106
09:30
Tanja Eraerds (GRS, Germany)
Jan Soedingrekso (GRS, Germany)
Joerg Peschke (GRS, Germany)
Martina Kloos (GRS, Germany)
Josef Scheuer (GRS, Germany)
Prime implicant identification in the dynamic process of a steam generator tube rupture scenario
PRESENTER: Tanja Eraerds

ABSTRACT. A steam generator tube rupture (SGTR) is a typical failure scenario to be analysed in a nuclear power plant (NPP). Due to the inherent importance of the process dynamics, it is a suitable candidate for a dynamic probabilistic safety analysis (Dynamic PSA). A dynamic PSA performed with the GRS tool MCDET (Monte Carlo Dynamic Event Tree) has shown the abundance of probabilistic information which can be extracted if the complex dynamic process is properly modelled. MCDET makes it possible to consider discrete uncertain parameters by the dynamic event tree (DET) approach and continuous uncertain parameters by applying Monte Carlo simulation in combination with the DET approach. This provides the opportunity to identify new, previously unnoticed event sequences leading to undesired system states. One of the questions arising is how this information can be fed back to a classic PSA.

Such feedback may lead to improved PSA models, retaining those parts of the classic event trees not affected by system dynamics while extending and modifying those parts for which new event sequences have been identified. In this way the advantage of a classic PSA, i.e., the analysis of a whole system under consideration with comparably moderate computational effort, could be kept, while adding more realistic modelling to the treatment of inherent dynamic processes.

The prime implicants of dynamic event trees provide such a link between dynamic and classic PSA. Extracted from MCDET results, they can in turn be used to generate classic event trees or subtrees. In this context, prime implicants are the minimal sets of characteristic discrete conditions leading to the undesired end state of a system. In this paper it is demonstrated how the information produced in the MCDET analysis of an SGTR scenario is analysed, using machine learning algorithms and an adapted prime implicant algorithm, to extract prime implicants. In addition, it is shown how these prime implicants are translated into the event tree logic of a classic PSA using an updated version of the GRS script-based PSA tool pyRiskRobot.
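
One way to picture the rule-extraction step (our illustration with synthetic data, not GRS's algorithm): fit a decision tree mapping discrete branching conditions to end states and read candidate implicants off the paths that end in the undesired state:

```python
# Sketch: decision-tree rules as prime-implicant candidates.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Three discrete branching conditions per sequence (0/1), e.g. valve fails,
# pump fails, operator action late; end state 1 = undesired
X = rng.integers(0, 2, size=(500, 3))
y = ((X[:, 0] & X[:, 1]) | X[:, 2]).astype(int)   # hidden ground truth

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["valve_f", "pump_f", "op_late"]))
# Paths ending in "class: 1" are implicant candidates: here {op_late}
# and {valve_f, pump_f}, matching the ground truth (minimization follows).
```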

09:50
Tunc Aldemir (The Ohio State University, United States)
Reliability and Safety Analysis of Dynamic Systems Using Markov/Cell-To-Cell Mapping Technique

ABSTRACT. The Markov/cell-to-cell mapping technique (CCMT) is a systematic procedure to model the dynamics of both linear and non-linear systems in terms of transitions among cells that partition the system state space. Markov/CCMT has been used for the reliability and safety analysis of control systems, as well as for state/parameter estimation, diagnostics, and accident management in dynamic systems. An overview of Markov/CCMT is presented and some applications to different engineering systems are briefly reviewed.
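
A minimal 1-D cell-to-cell mapping sketch (our toy dynamics, not from the paper): estimate the cell-to-cell transition matrix by sampling the dynamics from each cell, then propagate the cell probability vector as a Markov chain:

```python
# Sketch: build and propagate a CCMT transition matrix on a partitioned
# state space for a simple noisy drift process.
import numpy as np

rng = np.random.default_rng(0)
edges = np.linspace(0.0, 1.0, 11)             # 10 cells on [0, 1]

def dynamics(x):                              # noisy drift toward x = 0.8
    return np.clip(x + 0.1 * (0.8 - x) + 0.02 * rng.standard_normal(x.shape),
                   0.0, 1.0 - 1e-9)

T = np.zeros((10, 10))
for i in range(10):                           # sample each cell uniformly
    x0 = rng.uniform(edges[i], edges[i + 1], 5_000)
    j = np.digitize(dynamics(x0), edges) - 1
    T[i] = np.bincount(j, minlength=10) / len(j)

p = np.zeros(10); p[0] = 1.0                  # start certain in cell 0
for _ in range(50):                           # propagate the Markov chain
    p = p @ T
print("stationary cell probabilities:", np.round(p, 3))
```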

10:10
Valentin Rychkov (EDF R&D, France)
Claudia Picoco (EDF R&D, France)
Model-based software engineering techniques for dynamic reliability assessment
PRESENTER: Valentin Rychkov

ABSTRACT. Despite obvious conceptual advantages, dynamic reliability methods are still far from broad industrial application. The very small market for industrial applications of dynamic reliability methods makes it difficult to develop and maintain specific analysis tools. In this paper we present an application of an industrial tool from the model-driven software engineering domain that implements the statechart concept in the context of dynamic reliability assessment. The main motivation behind this work is to show that a dynamic reliability model can be developed as a piece of software, taking advantage of existing tools that leverage decades of software development experience.

10:30
Tingting Luan (Beijing Institute of Petrochemical Technology, China)
Lijia Zhang (Beijing Institute of Petrochemical Technology, China)
Qinfei Xu (Beijing Institute of Petrochemical Technology, China)
Deyue Liu (Beijing Institute of Petrochemical Technology, China)
Xinyi Zhang (Beijing Institute of Petrochemical Technology, China)
Mingyue Deng (Beijing Institute of Petrochemical Technology, China)
Research on fire risk assessment method of automobile exhibition activities based on Bayesian network
PRESENTER: Tingting Luan

ABSTRACT. Automobile exhibition activities attract large numbers of participants and carry high safety risks. In the event of an accident, mass casualties and serious social impact can easily result. In view of the lack of real-time and dynamic characteristics in traditional risk assessment methods, a dynamic risk assessment method based on a Bayesian network is proposed in this paper. The first part establishes the fire index system of an auto show according to the fire evolution stages; the second part constructs the fire accident tree model of an auto show; the third part constructs the Bayesian model; finally, combined with the example of automobile exhibition activities in Beijing, the dynamic risk is calculated to verify the scientific rationality of this evaluation model, so as to provide an important reference for safety risk early warning and emergency preparedness of automobile exhibition activities.
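
A toy version of the idea, with an invented three-node structure and made-up probabilities: a small Bayesian network evaluated by direct enumeration, then updated with real-time evidence:

```python
# Sketch: Ignition -> Fire <- Crowding, evaluated by enumeration.
ignition = {True: 0.05, False: 0.95}            # P(ignition source active)
crowding = {True: 0.30, False: 0.70}            # P(dense crowd present)
p_fire = {(True, True): 0.20, (True, False): 0.08,
          (False, True): 0.01, (False, False): 0.001}

def prob_fire(evidence=None):
    """P(fire | evidence), evidence like {'crowding': True}."""
    num = den = 0.0
    for i in (True, False):
        for c in (True, False):
            if evidence and evidence.get("crowding", c) != c:
                continue                        # inconsistent with evidence
            w = ignition[i] * crowding[c]
            num += w * p_fire[(i, c)]
            den += w
    return num / den

print(f"prior P(fire) = {prob_fire():.4f}")
print(f"P(fire | crowding) = {prob_fire({'crowding': True}):.4f}")
```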

09:30-10:50 Session 9E: Mathematical Methods in Reliability and Safety
Chair:
Nicolae Brinzei (University of Lorraine, CRAN UMR 7039, France)
Location: LG-20
09:30
Xian-Xun Yuan (Toronto Metropolitan University, Canada)
Mahesh Pandey (University of Waterloo, Canada)
Gamma Process Based Value of Information Analysis - An Exposition
PRESENTER: Mahesh Pandey

ABSTRACT. This paper presents a model to quantify the economic value of information (VOI) gained by inspections of components that are critical to the safety of an engineering infrastructure system. The problem is modelled with component degradation following a stochastic gamma process. A novel feature of the paper is the consideration of an intricate interaction between the parameter and temporal uncertainties associated with the gamma process, which has a significant effect on VOI. Previous research on VOI mostly considered either the updating of parameters or the updating of the degradation state, but ignored the simultaneous interaction of the two effects. This paper provides a fully analytical exposition of this interaction in a stochastic-process-driven VOI problem that is common in the inspection and maintenance optimization area.
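
The paper's treatment is analytical; as a numerical caricature under invented numbers, the VOI of a perfect mid-life inspection can be estimated by two-stage Monte Carlo over a gamma process with an uncertain rate:

```python
# Sketch: VOI = expected cost without inspection minus expected cost with
# a perfect inspection, for gamma degradation with parameter uncertainty.
import numpy as np

rng = np.random.default_rng(0)
n, horizon, t_insp, threshold = 100_000, 20, 10, 10.0
C_fail, C_replace = 100.0, 20.0

rate = rng.gamma(4.0, 0.05, n)                      # uncertain process rate
x_insp = rng.gamma(rate * t_insp, 1.0)              # state at inspection
x_end = x_insp + rng.gamma(rate * (horizon - t_insp), 1.0)
fail = x_end > threshold

# No inspection: one decision for the whole fleet
cost_prior = min(C_replace, C_fail * fail.mean())

# Perfect inspection: replace whenever the observed state exceeds a
# threshold x*, with x* optimized over a grid (a simple policy class)
grid = np.quantile(x_insp, np.linspace(0.01, 0.99, 99))
costs = [np.where(x_insp > x, C_replace, C_fail * fail).mean() for x in grid]
cost_post = min(costs)

print(f"expected cost without/with inspection: {cost_prior:.2f} / {cost_post:.2f}")
print(f"value of information ~ {cost_prior - cost_post:.2f}")
```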

09:50
Peter Zeiler (Esslingen University of Applied Sciences, Germany)
On the Influence of Market-Specific Sales Volumes and Load Spectra of Different Applications on Operational Reliability with Confidence Interval

ABSTRACT. Before a new product is launched and sold on the market, it must be ensured that profits are generated over its life cycle. This means that financial risks must be avoided or limited. These financial risks can result from high warranty costs or from a loss of reputation if customer expectations or requirements regarding reliability are not met. Therefore, in addition to functionality, performance and design, product development also aims to achieve the targeted reliability according to customer or market requirements. Usually, a certain reliability of the product is required over the intended service life with a certain level of confidence. In addition, various services can be developed based on the reliability of a product, such as warranty contracts for specific periods, warranty extensions or maintenance contracts. Required quantities of spare parts can be defined to ensure their availability during a specific period, often specified by legal regulations. These services can be application specific or apply to the overall market of the product. For this, operational reliability must reflect a specific application or the overall market. In the case of the latter, the various market-specific applications must be taken into account. In addition, the sales volumes of the various applications in a market are usually different. Typically, there may be applications that have a very high share of the total sales volume. Often, there are also applications for special operating conditions with relatively low sales volumes. Consequently, the overall operational reliability of a product in a market should represent both the diversity of applications and their market-specific sales volumes. The reliability of a new product must be demonstrated before it is placed on the market in order to avoid the risks mentioned above and to ensure a valid design of the services. This demonstration can be based on experiments or calculations or a combination of both. As the time to market becomes shorter and shorter, the time for reliability demonstration also becomes shorter and shorter. Therefore, reliability demonstration is often based on accelerated life tests. These accelerated tests aim to shorten the test duration, e.g. by testing at higher load levels. In-service reliability is then predicted based on the accelerated test results. Usually, the load level in operation is lower than the test levels in the development phase. The load on products in a market can vary widely due to different climatic conditions, applications and conditions of use. This means that a wide variety of load spectra can be observed when operating a large volume of products. Typically, the load spectra of a market population show a significant variation of stress profiles. Often these load profiles can be grouped together to represent specific applications. The variety of load spectra of different applications, as well as their market-specific sales volumes, must be taken into account in overall operational reliability calculations. This variation in load spectra is often neglected when a stress profile is combined with a probabilistic life-stress model for reliability inference (Nelson 2004, Mettas 2005, Yang 2007, Sun et al. 2014, Zhu et al. 2015). The approach presented in (Zeiler and Eric 2020) is capable of considering a sample of load spectra. In this work, it is shown how a sample of load spectra can be considered for operational reliability calculation with confidence interval. 
However, no approach has yet been found in the literature that can additionally consider the market-specific sales volumes of these applications. In this paper, the approach of (Zeiler and Eric 2020) is extended to comprehensively consider the variety of load spectra of different applications as well as their market-specific sales volumes for the calculation of the overall operational reliability with confidence interval. The modules of this extended approach are described. A case study is conducted to investigate the impact of market-specific sales volumes and load spectra of different applications on operational reliability. The results of several example evaluations and sensitivity analyses are shown and discussed.

References

Mettas, A. (2005). Reliability Predictions based on Customer Usage Stress Profiles. Proceedings of the Annual Reliability and Maintainability Symposium.
Nelson, W. (2004). Accelerated Testing: Statistical Models, Test Plans and Data Analyses. Wiley-Interscience.
Sun, Q., Dui, H. N., & Fan, X. L. (2014). A statistically consistent fatigue damage model based on Miner's rule. International Journal of Fatigue, 69, 16-21.
Yang, G. (2007). Life Cycle Reliability Engineering. John Wiley & Sons, Inc.
Zeiler, P., & Eric, A. (2020). Reliability at Use Condition Considering the Statistical Uncertainty and Distribution of Life-Stress Model and a Sample of Load Spectra. Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference.
Zhu, S. P., Huang, H. Z., Li, Y., Liu, Y., & Yang, Y. (2015). Probabilistic modeling of damage accumulation for time-dependent fatigue reliability analysis of railway axle steels. Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit, 229(1), 23-33.

10:10
Timo Frederik Horeis (Institute for Quality and Reliability Management GmbH, Germany)
Johannes Heinrich (Institute for Quality and Reliability Management GmbH, Germany)
Fabian Plinke (Institute for Quality and Reliability Management GmbH, Germany)
A Self-Adapting Reconfiguration Process for The Failure Management of Highly Automated Vehicles

ABSTRACT. Current and future developments of highly automated vehicles face the challenge of designing fail-operational systems without violating economics-driven cost restrictions. Self-adapting failure management processes at runtime are therefore considered, to keep costs reasonable while achieving a sufficient level of reliability, availability, and safety. The aim is to regain the system's safe and available status as fast as possible after a failure occurs, by implementing detection, isolation, switching, and reconfiguration measures. These self-organized measures introduce additional fallback levels by optimizing the usage of the system's resources, and thus significantly reduce the vehicle's costs compared to, e.g., highly redundant systems. However, implementing and executing this process is tedious, especially the reconfiguration of the vehicle's applications onto the system resources, e.g., computing nodes. Depending on the vehicle's current status, the solution space may contain many possible system configurations that regain the availability and safety of the vehicle. As a result, various characteristics must be considered to determine the best-suited system configuration, and a valid solution must be found under a given time constraint. Therefore, this paper defines a system model and formulates a mapping problem for the reconfiguration procedure to maintain the availability and safety of the vehicle. Furthermore, a process is suggested to determine, within a given timeframe, a Pareto front of the most suitable system configurations that maintain the system's safe operation.

10:30
Thibault Montigaud (LGM, France)
Frédéric Deschamps (LGM, France)
Bastien Malbert (LGM, France)
Géraldine Paillart (LGM, France)
Comparison of quantitative evaluations by FTA-type analytical approach and MBSA-type simulation approach by Petri nets

ABSTRACT. MBSA (Model Based Safety Assessment) approaches are becoming more and more common in engineering programs [1], [2]. MBSA, or more broadly MBMA (Model Based Mission Assessment) [3], provides the opportunity to represent the behaviour of systems more faithfully, to consider the impact of time using sequential approaches (which cut-set approaches cannot), and finally to quantify a complete set of behaviours rather than a single average case under a complex set of assumptions. However, the behavioural validation of MBSA models is still a major challenge, and therefore confidence in the quantitative results obtained remains a difficult question. On the other hand, analytical approaches such as the one proposed by FTA (Fault Tree Analysis), although offering many more possibilities in modern software, are not able to model all the complexity of the systems. LGM has strong know-how and experience in RAMT-S (Reliability, Availability, Maintainability, Testability - Safety) as well as MBSA and is therefore regularly called upon to produce best industrial practice guides on these subjects. It is within these activities of producing various sectoral best-practice guides on the quantification of feared events for safety that LGM has been able to carry out various demonstrations and comparisons of analytical approaches and statistical approaches using simulations, on "toy cases" and representative use cases. These analyses covered both frequency evaluation approaches and probabilistic approaches such as unavailability calculations, CFI (Conditional Failure Intensity) and UFI (Unconditional Failure Intensity), in maximum and average cases, considering such diverse issues as wear-out, early-life ("period of youth") behaviour, failure to start, and the consideration of repairs and/or periodic test intervals. All these best practices have been applied on several industrial projects. These results have made it possible to make recommendations regarding the modelling and methodologies to be employed, depending on the input and the kind of quantification sought, and to observe the relative error between the MBSA and FTA approaches. The recommendations are supported by representative use cases or "toy cases". This publication aims to share these recommendations more widely and to have them challenged by a wider community of our peers.

[1] Pierre LE COM, Pierre GAUTIER, Antoine LE ROY, Sylvain PASQUET, Thaïs LEBOISSELIER. "Advantage of MBSA approach for the IP transition of the French Airspace Command & Control System". Lambda Mu 22, Le Havre, France. October, 2020.

[2] Julien VIDALIE, Michel BATTEUX, Jean-Yves CHOLEY, Faïda MHENNI, Mohamed-Sami KENDEL. “Typology of the differences Between Model-Based System Engineering (MBSE) and Safety Assessment (MBSA) models: Analysis of a Reference System”. Lambda Mu 22, Le Havre, France. October, 2020.

[3] Isabelle CONWAY, Silvana RADU, Naoki ISHIHAMA, Lui WANG. “Model Based Development for Spaceflight Assurance”. TRISMAC 2021, Tokyo, Japan, May, 2021.
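
As a toy version of such a comparison (ours, not one of LGM's use cases): the steady-state unavailability of a two-component parallel repairable system, computed FTA-style analytically and by Monte Carlo simulation of the repairable behaviour standing in for a Petri-net run:

```python
# Sketch: analytical vs. simulated unavailability of a 2-out-of-2-failure
# (parallel) repairable system with exponential failure and repair times.
import numpy as np

rng = np.random.default_rng(0)
lam, mu, t_check, n_hist = 1e-2, 1e-1, 1000.0, 20_000

def down_at(t_check):
    """Simulate one repairable component; is it down at time t_check?"""
    t, up = 0.0, True
    while True:
        dur = rng.exponential(1 / lam if up else 1 / mu)
        if t + dur >= t_check:
            return not up          # state during the covering interval
        t, up = t + dur, not up

both_down = np.mean([down_at(t_check) and down_at(t_check)
                     for _ in range(n_hist)])
q = lam / (lam + mu)               # per-component steady-state unavailability
print(f"FTA-style analytical Q = {q * q:.3e}")
print(f"Monte Carlo estimate Q = {both_down:.3e}")   # noisy: finite n_hist
```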

09:30-10:50 Session 9F: Joint event: International Workshop on Autonomous Systems (IWASS)
Chairs:
Marilia Ramos (University of California Los Angeles, United States)
Christoph Thieme (SINTEF Digital, Norway)
Location: LG-21
09:30
Sarah F. S. Borges (Aeronautics Institute of Technology, Brazil)
Moacyr M. Cardoso Jr. (Aeronautics Institute of Technology, Brazil)
Diogo S. Castilho (Flight Test and Research, Brazil)
Safety Analysis of Evtol Landing in Urban Centers

ABSTRACT. The operation of Electric Vertical Take-Off and Landing (eVTOL) aircraft is scheduled to begin in 2024. Most aircraft will be flown by at least one pilot, but there are also fully autonomous models such as Wisk's eVTOL Cora, the CityAirbus, and the EH216. Billions of dollars have already been invested, and studies and projects are underway to make this revolution in aviation happen. In these scenarios, new safety issues cannot be left out. This research article aims to identify hazards and causal scenarios that could lead to losses from bird strikes during an eVTOL landing in urban centers. The System-Theoretic Process Analysis (STPA) method is used to identify hazards, losses, loss scenarios, and safety requirements. To better understand the scenarios, Rasmussen's Skills-Rules-Knowledge framework and Endsley and Kaber's hierarchy of automation levels are considered. The STPA is focused on the approach for landing on helipads at the top of buildings. The level of automation considered was Shared Control, in which monitoring, generation of alternatives, and implementation can be done by the pilot or by a computer, while the selection of the landing alternative can only be carried out by the pilot. Helicopter pilots participated in the refinement of the analysis. Three Unsafe Control Actions are presented along with the related loss scenarios and mitigation requirements, revealing the importance of in-depth studies before actual operation.

09:50
Sheng Ding (University of Stuttgart, Germany)
Niloy Chakraborty (University of Stuttgart, Germany)
Andrey Morozov (University of Stuttgart, Germany)
Inertial Measurement Unit Sensor Faults Detection for Unmanned Aerial Vehicles using Machine Learning
PRESENTER: Sheng Ding

ABSTRACT. Unmanned Aerial Vehicles (UAVs) are playing inescapable roles in many facets of engineering, and the increasing demand for their use makes it necessary to address potential safety issues. In this research, a generic non-linear dynamic model of a UAV is adapted. The Inertial Measurement Unit (IMU) is an essential UAV sensor and one of the most safety-critical UAV components: several seconds of IMU malfunction can lead to the crash of the drone. To address this, a unified representation of sensor-level faults, such as stuck-at, package drop, bias/offset, and noise, is presented using Simulink-based Fault Injection (FI) blocks. The UAV's IMU sensors, namely the three-axis accelerometers and three-axis gyroscopes, together with control commands, are selected as the data source. The model is repeatedly simulated to collect data while maintaining data quality as well as a balanced ratio between healthy and faulty data. The paper presents results of a Random Forest for accelerometer and gyroscope fault classification. The results of the experiments are based on extensive training and a comparative analysis of test performance between the implemented algorithms. Our study reports promising test accuracy and F1 scores for the fault classification of the accelerometer and gyroscope sensors.
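
A hedged re-creation of the data pipeline (ours, not the authors' Simulink setup): inject representative sensor faults into a clean gyroscope-like signal, window it, and train a Random Forest to classify the fault type:

```python
# Sketch: fault injection into synthetic IMU windows + Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_window(fault):
    x = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.02 * rng.standard_normal(100)
    if fault == 1: x[:] = x[20]                          # stuck-at
    if fault == 2: x += 0.5                              # bias/offset
    if fault == 3: x += 0.3 * rng.standard_normal(100)   # excess noise
    return x

y = rng.integers(0, 4, 2000)                             # 0 = healthy
X = np.array([make_window(f) for f in y])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```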

10:10
Aspasia Pastra (World Maritime University, Sweden)
Tafsir Johansson (World Maritime University, Sweden)
Towards a Harmonized Framework for Vessel Inspection via Remote Techniques
PRESENTER: Aspasia Pastra

ABSTRACT. Remote inspection techniques (RIT) for inspecting the steel structures of ships and floating offshore units are changing the landscape of ship inspection and hull cleaning. Unmanned Aerial Vehicles perform global visual inspections, ultrasonic thickness measurements and close-up surveys for ships undergoing intermediate and renewal surveys. Magnetic crawlers can conduct ultrasonic thickness measurements and perform hull cleaning, whereas Remotely Operated Vehicles can perform underwater surveys. Moving forward, efforts to maintain good environmental stewardship, especially at the EU level, will require not only the seamless integration of RIT but also a guarantee that all techno-regulatory elements vital to the semi-autonomous platform are streamlined into policy through multi-stakeholder cooperation. The aim of this extended abstract is to present some of the findings of research conducted by the World Maritime University-Sasakawa Global Ocean Institute within the framework of the European Union H2020 BugWright2 project (www.bugwright2.eu/). The project aims to change the European landscape of robotics for infrastructure inspection and maintenance. The findings concern the main elements that need to be considered for semi-autonomous platforms to form a harmonized regulatory blueprint.

10:30
Tor Stålhane (Norwegian University of Science and Technology, Norway)
Thor Myklebust (SINTEF ICT, Norway)
Trust case for autonomous vehicles
PRESENTER: Tor Stålhane

ABSTRACT. The TrustMe project is developing a safety case for autonomous vehicles. A safety case is mostly based on information from the developers and refers to one or more relevant safety standards. However, trust is not the same as reliability or safety. While reliability is based on data analysis and statistics, trust is a person-to-person or person-to-thing relationship. Thus, we also need a trust case: in order to make self-driving buses a success, they need to be considered trustworthy. We started out with a rather simple relationship model to explain trust, mainly based on the Technology Acceptance Model (TAM), a literature review, and the results from two focus groups conducted in cooperation with the local bus service provider – see [1].

However, two new focus groups and a new survey, based on 54 persons, showed that this model was too simple. The main new issue that surfaced was the users' need for information from the vehicle in order to feel safe – situational awareness. Although [2] claims that situational awareness "can guide reliance without necessarily impacting trust", we have chosen to stick to the definitions we made in [1] and claim that "trust is confidence in or reliance on some person, organization or quality". Thus, situational awareness is a factor that influences trust.

The paper will discuss the following issues:
• Our first focus groups and the resulting trust case
• How two new focus groups and a survey changed our understanding of people's trust in self-driving vehicles
• Our improved trust case based on a survey, two new focus groups and the work of Hoff and Bashir
• The new trust case's influence on how we may create public trust in self-driving vehicles
• Our view on how the interaction between model building and data collection creates research goals and new models

The conclusion chapter will describe where we are research-wise, where we want to go and what we hope to achieve.

References
1. Stålhane, T. and Myklebust, T.: Trust Case and the Link to Safety Case, SAFE 2021.
2. Hoff, K.A. and Bashir, M.: Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust, Human Factors, 2015.

09:30-10:50 Session 9G: Prognostics and System Health Management III: Machine learning
Chair:
Olga Fink (ETH Zurich, Switzerland)
Location: CQ-009
09:30
Tuan Le (Roberval Laboratory, University of Technology of Compiègne, France)
Hai Canh Vu (Roberval Laboratory, University of Technology of Compiègne, France)
Nassim Boudaoud (Roberval Laboratory, University of Technology of Compiègne, France)
Zohra Cherfi-Boulanger (Roberval Laboratory, University of Technology of Compiègne, France)
Amélie Ponchet Durupt (Roberval Laboratory, University of Technology of Compiègne, France)
Ho Si Hung Nguyen (Faculty of Electrical Engineering, University of Science and Technology, The University of Danang, Da Nang, Viet Nam)
A deep learning approach for Control Chart Patterns (CCPs) prediction
PRESENTER: Tuan Le

ABSTRACT. This paper presents a novel approach for predicting different patterns of control charts. In statistical quality control, Control Chart Patterns (CCPs) are very meaningful information since they directly reflect process states. There are six common CCPs: normal, cyclic, increasing trend, decreasing trend, upward shift, and downward shift. Except for the normal pattern, all other patterns indicate that the process being monitored is not functioning correctly and requires adjustment. Knowing the system state or CCPs in advance and taking timely maintenance action helps reduce defective products and incurred costs. In order to predict CCPs, a Long Short-Term Memory (LSTM) model, a Convolutional Neural Network (CNN) model, and a hybrid CNN-LSTM model were developed. The models' performance was tested on a simulated dataset containing 600 control charts. The obtained results show that the hybrid CNN-LSTM model outperformed the standalone deep learning models (LSTM and CNN) in predicting both normal and mixed CCPs.
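
A small sketch under our own assumptions (synthetic pattern generator; Keras layer sizes chosen arbitrarily) of a hybrid CNN-LSTM classifier for the six patterns named above:

```python
# Sketch: generate the six classical CCPs and fit a Conv1D -> LSTM model.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
T = 60                                          # points per control chart

def make_ccp(label):
    t, x = np.arange(T), rng.standard_normal(T)          # normal pattern
    if label == 1: x += 2.0 * np.sin(2 * np.pi * t / 15)  # cyclic
    if label == 2: x += 0.1 * t                           # increasing trend
    if label == 3: x -= 0.1 * t                           # decreasing trend
    if label == 4: x += 3.0 * (t >= T // 2)               # upward shift
    if label == 5: x -= 3.0 * (t >= T // 2)               # downward shift
    return x

y = rng.integers(0, 6, 600)                     # 600 charts, as in the study
X = np.array([make_ccp(c) for c in y])[..., None]

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(T, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {acc:.2f}")
```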

09:50
Nour El Houda Benlaribi (Noteworthy AI Inc, United States)
Leïla Kloul (DAVID Laboratory, University of Versailles, France)
Michel Batteux (IRT SystemX, France)
Tracking the best Machine Learning strategy for hard drive failure prediction
PRESENTER: Michel Batteux

ABSTRACT. In order to reduce the consequences of hard drive failures, which can lead to data loss, several failure prediction models have been proposed in the literature. In this paper, we study the effect of integrating methods designed for time series with a classification model, since each parameter can be presented as a time series and such methods can be used to generate missing data. We use exponential smoothing and ARMA (Autoregressive Moving Average) as time series methods, and random forest as the classification model. We use SMART (Self-Monitoring, Analysis and Reporting Technology) attributes as parameters for the classification model. To assess the performance of the selected techniques, we compute the precision of the predictions, defined as the number of successfully predicted failures divided by the total number of predictions, and the recall, defined as the number of failures successfully predicted divided by the total number of failures observed. Our study relies on operational data published by the Backblaze Company for the year 2014. This unbalanced dataset, collected from over 47000 hard drives covering 81 models from 5 manufacturers, contains more than 12 million records with only 2206 records marked as failures.
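
A hedged sketch of the pipeline the abstract outlines: per-drive exponential smoothing to fill gaps in the SMART series, then a random forest, scored with the precision and recall definitions above. The column names, file name, split date, and smoothing factor are all assumptions:

# Sketch: per-drive exponential smoothing to fill missing SMART values,
# then a random forest classifier; names and parameters are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

df = pd.read_csv("backblaze_2014.csv")          # hypothetical file name
smart_cols = [c for c in df.columns if c.startswith("smart_")]

# Exponentially smoothed values stand in for gaps in the daily records.
for col in smart_cols:
    smoothed = (df.groupby("serial_number")[col]
                  .transform(lambda s: s.ewm(alpha=0.3).mean()))
    df[col] = df[col].fillna(smoothed)
df[smart_cols] = df[smart_cols].fillna(0)       # any remaining leading gaps

train, test = df[df.date < "2014-10-01"], df[df.date >= "2014-10-01"]
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
clf.fit(train[smart_cols], train["failure"])

pred = clf.predict(test[smart_cols])
# precision = correctly predicted failures / all predicted failures
# recall    = correctly predicted failures / all observed failures
print(precision_score(test["failure"], pred),
      recall_score(test["failure"], pred))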

10:10
Hadis Hesabi (Laval University, Canada)
Thierry Jung (Senior SME, APM and Digital Solutions, GE Renewable Energy, Canada)
Mustapha Nourelfath (Laval University, Canada)
Sofiane Achiche (Polytechnique Montreal, Canada)
Fault Diagnosis of Power Transformers Based on Machine Learning Approaches
PRESENTER: Hadis Hesabi

ABSTRACT. The power transformer, a piece of essential equipment for generating, transmitting, and distributing power, is continuously under the impact of thermal and electrical stresses during operation. Dissolved gas analysis is one of the most effective techniques to examine the health condition of power transformers. However, forecasting dissolved gas content in power transformers is complicated due to its non-linearity, in addition to the small size and high dimensionality of training datasets, combined with maintenance managers needing to deal with different types of data from diverse sources. Furthermore, these various types of data require processing with different diagnostic schemes. Since preventing faults is essential and traditional manual fault diagnosis is time-consuming and costly, using machine learning approaches as a new research direction can lead to timely and accurate fault diagnosis of power transformers. This paper proposes using a support vector machine based on dissolved gas analysis data. The support vector machine is known for its robustness, good generalization capability, and unique global optimum solutions, particularly with limited data. To highlight the performance of the proposed support vector machine, five other machine learning approaches, including Naive Bayes, decision tree, random forest, K-nearest neighbours, and logistic regression, are implemented. The results indicate that the support vector machine displays the best performance compared to the other machine learning algorithms. Furthermore, it demonstrates high accuracy (96%) while maintaining fast computation time for all stages in the proposed multistage fault diagnosis system.
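
A minimal sketch of a DGA-based SVM classifier of this kind; the gas list, file name, label column, and hyperparameters are illustrative assumptions rather than the paper's setup:

# Sketch: an SVM classifies transformer condition from dissolved-gas-
# analysis (DGA) features; names and hyperparameters are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

gases = ["H2", "CH4", "C2H6", "C2H4", "C2H2"]   # typical DGA gases
df = pd.read_csv("dga_records.csv")              # hypothetical file

X_tr, X_te, y_tr, y_te = train_test_split(
    df[gases], df["fault_type"], test_size=0.2, stratify=df["fault_type"])

# Feature scaling matters for RBF SVMs; a pipeline keeps the train and
# test transformations consistent.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
svm.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, svm.predict(X_te)))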

10:30
Stefan Brunner (ZHAW Zurich University of Applied Sciences, Switzerland)
Carmen Mei-Ling Frischknecht-Gruber (ZHAW Zurich University of Applied Sciences, Switzerland)
Monika Reif (ZHAW Zurich University of Applied Sciences, Switzerland)
Christoph Walter Senn (ZHAW Zurich University of Applied Sciences, Switzerland)
Deep Gaussian Mixture Model - A Novelty Detection Method for Time Series
PRESENTER: Stefan Brunner

ABSTRACT. Various application areas require initiating safety-related failure reactions very quickly to bring the system into a safe state. Classically, sensor data is used for this purpose, and if the output value of the sensor exceeds a threshold (once or several times, depending on the implementation), the safety-related failure reaction is initiated. Reading in and processing the sensor information takes a certain time. Furthermore, for most systems, there is no deterministic information about the types of states and, in particular, all combinations of subsystem states that will lead to a malfunction of the system in the near future. To offer a possibility to recognise more quickly (also for complex systems) that the system could enter a dangerous state, based on certain patterns observed in advance, our contribution compares different methods that can recognise such system behaviours more quickly by means of machine learning. Using historical or generated sensor data, time series are available in each case as a basis for decision-making. Therefore, in this paper, we focus on methods for novelty detection that can use stationary or non-stationary time-series information. There already exist various traditional and deep novelty detection methods which can cope more or less well with the problem that in most cases only information about the healthy states of a system is known. Moreover, some of these methods are more oriented towards one type of time series, stationary or non-stationary. In this contribution, we evaluate and compare the performance of these different methods for stationary and non-stationary time series. Thereby, we show that each of these methods has certain advantages and disadvantages, depending on the time series that is observed. We therefore present the so-called Deep Gaussian Mixture Model (DGMM). Our proposed deep novelty detection method is able to predict faulty and healthy states in stationary and non-stationary time series. It consists of a combination of an autoencoder and a Gaussian Mixture Model (GMM) to utilize the advantages of each of these methods for different components of the time series. Results of experiments on both synthetic and measured data for stationary and non-stationary time series support the application of our Deep Gaussian Mixture Model for novelty detection on time series.
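
A minimal sketch of the autoencoder-plus-GMM idea, under stated assumptions (the paper's actual DGMM architecture is not detailed here): healthy windows are compressed by an autoencoder, a GMM is fitted to the latent codes, and a low log-likelihood flags novelty.

# Sketch of an autoencoder + GMM novelty detector; sizes, component
# count, and threshold are assumed, not the paper's configuration.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.mixture import GaussianMixture

WINDOW, LATENT = 64, 8
inp = layers.Input(shape=(WINDOW,))
z = layers.Dense(LATENT, activation="relu")(inp)     # encoder
out = layers.Dense(WINDOW, activation="linear")(z)   # decoder
autoencoder = models.Model(inp, out)
encoder = models.Model(inp, z)
autoencoder.compile(optimizer="adam", loss="mse")

healthy = np.random.randn(1000, WINDOW)   # placeholder healthy windows
autoencoder.fit(healthy, healthy, epochs=10, verbose=0)

# Fit the GMM only on healthy latent codes, since in most cases only
# the healthy states of the system are known.
gmm = GaussianMixture(n_components=3).fit(encoder.predict(healthy))

def is_novel(window, threshold=-20.0):
    """Flag a window as novel if its latent log-likelihood is low.
    The threshold is an assumption, to be calibrated on validation data."""
    ll = gmm.score_samples(encoder.predict(window[None, :]))[0]
    return ll < threshold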

09:30-10:50 Session 9H: Maritime and Offshore: Organizational Factors & Safety Management
Chair:
Trond Kongsvik (Norwegian University of Science and Technology, Norway)
Location: CQ-105
09:30
José Cristiano Pereira (Universidade Católica de Petrópolis - UCP, Brazil)
Namir Furtado Vieira Júnior (Universidade Católica de Petrópolis - UCP, Brazil)
Alexandre Magno Ferreira de Paula (Universidade Católica de Petrópolis - UCP, Brazil)
The use of the Kirkpatrick Assessment Method to reduce the risk of technical training failure, improve its effectiveness and, consequently, the quality of Maritime Equipment Maintenance Processes - A Case Study.

ABSTRACT. Effective maintenance of marine equipment is crucial to reducing accident rates. Consequently, the need for investments to ensure the quality of the services is essential. Maintenance performance can be improved by developing human potential through technical training, the definition and creation of skills in employees, and the adoption of tools that ensure the effectiveness of this training. Technical training may fail, and poorly trained employees can impact the quality of services and products and fail to meet customer expectations. The lack of a robust effectiveness evaluation process may lead to failure in the training program. Research shows that companies, in general, cannot verify the effectiveness of their training. This fact was observed in a North American offshore maritime support company with a branch in Brazil, showing the need for improvement. The objective of this study is to demonstrate how Kirkpatrick's four levels of assessment (Reaction, Learning, Behavior, and Results) can be used to manage training in the maritime support company, showing their applicability as a tool for continuous quality improvement and reduced risk of failure in training. Kirkpatrick's model, known for involving simple questions, is applied in different contexts as a tool for continuous improvement in training management. A case study was carried out on an AHTS maritime support vessel. Employees and their respective leaders completed a survey, and operational and quality data were reviewed to obtain the necessary information. The results confirm that it is possible to guarantee the effectiveness of technical training using the proposed method, demonstrating that it is adequate for each assessment level. The study shows that Kirkpatrick's four assessment levels can improve the training program and reduce the risk of training failure and of operational failure due to inadequate training. The conclusion is that the study is significant since the proposed method allows training process optimization and risk reduction, and permits decision-makers to assign funds to critical activities that can impact the quality of the training process and improve safety. The present study will augment the knowledge of operations managers, safety managers, and maintenance and safety engineers, and help in the decision-making process. For the academic world, the results can serve as a basis for future studies, given the importance of the topic for the quality management of organizations and the maritime sector.

09:50
Caroline Kristensen (SINTEF Digital, Norway)
Siri Mariane Holen (SINTEF Digital, Norway)
Gunnar Lamvik (SINTEF Digital, Norway)
Eivind H. Okstad (SINTEF Digital, Norway)
Ranveig Kviseth Tinmannsvik (SINTEF Digital, Norway)
Design of a simulator-based emergency preparedness training concept, Case: Sea-based aquaculture

ABSTRACT. Contingency training in sea-based aquaculture is becoming an increasingly important part of emergency preparedness. As in other industries, this is especially connected with new technologies being introduced to support critical emergency-preparedness functions, as well as communication technology and new ways of interaction between involved parties in crisis situations. The use of simulators and virtual reality (VR) adapted to the relevant physical environments is, from the authors' point of view, a useful approach for contingency training as part of emergency preparedness. Such means make it possible to practice activities and decisions in realistic emergency preparedness situations that are difficult to carry out in a real physical environment (Baldauf et al., 2016). Simulator centres adapted for training in both normal operations and emergency situations exist in Norway today. These centres are mostly aimed at the maritime industry (see Wahl & Kongsvik, 2008) but do not cover the specific needs of the aquaculture industry when it comes to emergency preparedness. In connection with the earlier MarinSim project in SINTEF, a simulator-based platform for training on operational-risk scenarios in sea-based aquaculture was developed, but it did not cover emergency preparedness (Holmen et al., 2017). Some other existing courses and training offered to the maritime sector might be relevant to personnel in sea-based aquaculture, but none of these courses or training facilities cover the handling of the special hazards and accidents typically seen in the aquaculture industry (Holen et al., 2021). Therefore, one of the work packages of an ongoing research project aims at developing a simulator-based training concept adapted to the special needs identified in sea-based aquaculture (Innovation project, The Research Council of Norway: 309305). The chosen methodology to achieve the goal of this part of the project contains the following activities: 1) planning of a specific contingency exercise at a training simulator, 2) developing a 'script' (based on a task analysis) for the specific exercise and observing its execution, and 3) deciding on criteria or a method to evaluate the exercise in the aftermath, as well as carrying out such an evaluation. These activities will be carried out in close collaboration with key industrial partners. As one important preparation, the project team needs to discuss and agree on an appropriate training scenario, or different scenarios, to cover the most critical emergency-preparedness needs of the industry. This paper documents the process and knowledge gained by the project team taking part in the planning work with industrial partners leading up to the contingency exercise, observing its execution, and carrying out the evaluation afterwards. As mentioned, the planning focuses on the scenario development and preparation at the simulator centre, including the scenario script. As supplementary background, a literature study of published research covering the use of technology, organization, and implementation of contingency training (or emergency drills) in an industrial context is added. Based on project experience and knowledge in SINTEF, an overall status of simulator-based training as seen from various industries will also be established.
By carrying out this pilot of a simulator-based contingency exercise, the industry partners, in collaboration with the research partners, will establish a basis for further testing, evaluation, and development of contingency training fitted to the needs of the aquaculture industry.

10:10
Ingrid Bouwer Utne (Department of Marine Technology, Norwegian University of Science and Technology (NTNU), Norway)
Arve Dimmen (The Norwegian Coastal Administration, Norway)
In the aftermath of the Viking Sky incident: Cruise ship safety in Norway and potential risk reduction measures

ABSTRACT. On March 23, 2019, the cruise ship Viking Sky, with 1373 persons onboard, almost grounded in severe weather conditions at Hustadvika, Norway. The near-miss incident had major accident potential and clearly demonstrated the serious consequences that may result from loss of power and propulsion on a cruise ship close to the coastline in strong onshore winds and rough seas. This incident, combined with an increase in cruise ship traffic in Norwegian waters, calls for risk reduction measures to be identified, evaluated, and implemented for the industry and society. It is not feasible to have sufficient search and rescue resources to handle worst-case scenarios with large cruise ships, and therefore proactive mitigation measures are necessary. This paper presents the challenges with cruise ship safety and emergency preparedness and gives recommendations for risk reduction measures. The paper is based on a White Paper prepared by a committee appointed by the Norwegian Government, with members (including the authors) and contributions from industry, authorities, and academia.

10:30
Valtteri Laine (Aalto University, Finland)
Osiris Valdez Banda (Aalto University, Finland)
Floris Goerlandt (Dalhousie University, Canada)
Towards a risk maturity model for the maritime authorities: A literature review on recent approaches in different industrial sectors
PRESENTER: Valtteri Laine

ABSTRACT. A number of risk maturity models have been introduced in the literature and in professional contexts for different industrial sectors. These models have proven to be useful when evaluating organizational risk management performance and steering it towards a higher level. However, no research has been conducted to provide a risk maturity model for the maritime industry, despite its potential benefits to this field. Therefore, the aim of this study is to take the first steps towards closing this gap by addressing the recent work in this area. To this end, we have conducted an extensive literature review that focuses on state-of-the-art research on risk maturity models in various industrial sectors. As a result of this process, we present a synthesis of the current approaches and discuss their applicability for designing a risk maturity model for the maritime authorities. Although the results of this review still need to be further scrutinized and more research is needed to meet the model end-users' criteria, they form a sound basis for future work in this context.

09:30-10:50 Session 9I: S.06 C: Safety and Reliability in Road and Rail Transportation: Safety Management & HF
Chair:
Vikram Pakrashi (University College Dublin, Ireland)
Location: CQ-107
09:30
Ishbel Macgregor-Curtin (Irish Rail, Ireland)
Nora Balfe (Irish Rail, Ireland)
Maria Chiara Leva (Technical University Dublin, Ireland)
Fatigue Risk Management: Current Practices and Challenges

ABSTRACT. This paper reviews academic literature, legislation and current industry regulations and guidance documents on fatigue risk management in the transport sector and the railway industry in particular. From the research, it is clear that fatigue represents a safety hazard that should be managed as such.

Both academic literature and industry guidance agree that hours of service (HoS) limitations implementing traditional working time legislation (EC, 1993) are a prescriptive tool insufficient to manage the complexity of fatigue risk in railway organisations, particularly where worker roles include safety-critical tasks or where workers work shifts and/or overtime. Both academia and industry advocate a risk-based, defences-in-depth approach (reflecting current health and safety legislation) (Dawson & McCulloch, 2005), implemented in the form of a fatigue risk management system (FRMS), as a more appropriate method to manage fatigue risk (Gander et al., 2011; Moore-Ede, 2010; Jones et al., 2005). An FRMS treats fatigue as a hazard and manages fatigue-related risk holistically, with interdepartmental cooperation. An FRMS consists of: 1) an FRMS policy that complies with the relevant regulatory requirements and industrial agreements; 2) an education and awareness training programme; 3) a fatigue reporting mechanism with associated feedback; 4) procedures and measures for monitoring fatigue levels; 5) procedures for reporting, investigating and recording incidents that are attributable wholly or in part to fatigue; and 6) processes for evaluating information on fatigue levels and fatigue-related incidents, undertaking interventions and evaluating the effects of those interventions (Gander et al., 2011).

There are a number of difficulties associated with the introduction and implementation of FRMS (Williamson et al., 2011; Butler & Bell, 2017). One is the tailoring they require to ensure appropriate measures are put in place for the size, type, complexity and risk profile of the individual organisation (Cheng & Tain, 2020). A second is their complexity and the administrative burden they can represent (Mawhood, 2013; Bourgeois-Bougrine, 2020). An FRMS requires continuous gathering, monitoring and effective analysis of fatigue-related data. This data is primarily gathered through subjective measures such as self-assessment, through PVT tests, or through assessment of rosters with biomathematical models (BMM) against arbitrary thresholds. To date, there is a lack of advanced technological physiological monitoring and measuring tools and of objective indicators of fatigue. There is also a lack of an agreed industry standard for measuring fatigue levels and for how they can be objectively assessed (Bjegojevic et al., 2021).

There are also difficulties in implementing an FRMS in relation to organisational culture, maturity and responsibility. Fatigue, caused by all waking activities (Gander et al., 2011), is both detrimental to individuals' long-term health and wellbeing (Grandner, 2017; Lerman et al., 2012; Barnes & Watson, 2019) and represents an organisational cost resulting from staff absence, staff turnover, and the reputational and financial impact of high-profile accidents (Hafner et al., 2016; Hidden, 1989; Young & Steel, 2017; Bowler & Gibson, 2015). It is therefore in the best interest of both the organisation and its workers to manage fatigue together, in close co-operation and in an atmosphere of shared responsibility and just culture. However, in order to do so, there must be a shared understanding of and commitment to the FRMS at all levels of the organisation; it must be fully aligned with organisational goals and integrated across departments, with its effectiveness measured through key performance indicators in a cycle of continuous learning and improvement.

This paper will present the research and guidance available from the academic and industrial literature in relation to FRMS, and discuss the challenges associated with practical application.

09:50
Tomas Kertis (Siemens mobility, s.r.o., Czechia)
Dana Prochazkova (Czech Technical University, Czechia)
Radek Rehak (Siemens mobility s.r.o., Czechia)
Railway Safety Development in the Czech Republic, Recent Accidents and Lesson Learnt
PRESENTER: Radek Rehak

ABSTRACT. The Czech railway system, with its huge network of tracks, including international tracks within the European railway system, has a long tradition and, unfortunately, also experience with a number of very severe accidents. All railway accidents have big economic consequences and impacts on human lives and health. Railway safety is developed and led by European directives and agreements (e.g., the fourth railway package) and by national law and regulations. Ongoing safety improvement is a crucial part of the whole human effort, since technologies are becoming more and more complicated and, when introduced into practice, they also have a lot of weaknesses, i.e., causes of risks. The presented work focuses on safety development via an overview of current rail safety law and standards and a comparison of causes of accidents. It summarizes the results of a critical judgement of common causes of accidents since 2016. Based on those results, it provides a proposal of measures for safety improvement of the railway system.

10:10
Chao He (University of Duisburg-Essen, Germany)
Dirk Soeffker (University of Duisburg-Essen, Germany)
Identification of human driver critical behaviors and related reliability evaluation in real time
PRESENTER: Dirk Soeffker

ABSTRACT. 1. Background: The development of automation is shifting the role of humans from active controlling to passive monitoring. Human operators should maintain situation awareness and manually take control when automation is incapable of dealing with a problem. In human-machine systems, the role played by humans is becoming more and more important, as the proportion of human-related accidents is increasing. In the traffic context, the US National Highway Traffic Safety Administration (NHTSA) stated that 94 % of traffic accidents are related to human factors. Human errors in the driving context are widely investigated, and many human driver error taxonomies have been proposed. Within the literature on human error, three perspectives are dominant: Norman's (1981) error categorization; Reason's (1990) classification into slips, lapses, mistakes and violations; and Rasmussen's (1986) skill, rule, and knowledge error classification. These classifications of human errors and their applications in the driving context improve the understanding of human error mechanisms in the situated driving context.

2. Goal of the work: In previous works, the authors provided a human performance reliability score (HPRS), which was applied to driving data collected from a driving simulator using the modified fuzzy-based CREAM (cognitive reliability and error analysis method) approach. The driving behaviors and the mechanisms of human error underlying the corresponding HPRS values were not analyzed. In this contribution, the classification of human driver errors and their contributing conditions in driving will be reviewed. The driving behaviors and the mechanisms of human error, together with the continuously calculated values, will be analyzed to investigate what really happens. Human driver reliability will be evaluated especially in the situated context, i.e. in dynamically changing situations (on a second timescale). The newly developed approach provides a dynamic measure and therefore allows critical situations to be identified dynamically during operation in real time. As an example, the supervision of interacting human drivers will be shown.

3. Applied methods: In this contribution, human error classification methods are reviewed, and human driver errors and their contributing conditions in the driving context are studied. The modified fuzzy-based CREAM approach proposed in previous works is applied for real-time human reliability evaluation of drivers.
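
Purely to illustrate the flavour of a fuzzy, continuously evaluated reliability score (the authors' modified fuzzy-based CREAM is not specified here), a toy sketch in which common performance conditions (CPCs) rated in [0, 1] are fuzzified and combined; the memberships, anchors, and CPC names are invented placeholders:

# Toy sketch of a fuzzy CPC-based reliability score; all membership
# functions, anchor values, and CPC names are invented placeholders.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def hprs(cpcs):
    """cpcs: dict of CPC name -> rating in [0, 1] (1 = fully adequate)."""
    score = 0.0
    for rating in cpcs.values():
        low = tri(rating, -1.0, 0.0, 0.6)    # degraded condition
        high = tri(rating, 0.4, 1.0, 2.0)    # adequate condition
        # Defuzzify per CPC: weighted average of anchor reliabilities
        score += (0.2 * low + 0.9 * high) / max(low + high, 1e-9)
    return score / len(cpcs)

# Evaluated, e.g., once per second from simulator signals:
print(hprs({"time_of_day": 0.8, "traffic_density": 0.3, "visibility": 0.9}))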

10:30
Stephen Crabbe (Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, Germany)
Eduardo Villamor Medina (ETRA, Spain)
Katharina Roß (Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, Germany)
Corinna Köpke (Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, Germany)
Katja Faist (Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, Germany)
Uli Siebold (CuriX AG, Switzerland)
Eros Cazzato (CuriX AG, Switzerland)
Anett Mádi-Nátor (Cyber Services Plc., Hungary)
Eli Ben-Yizhak (Elbit Systems C4I & Cyber, Israel)
Ido Peled (Elbit Systems C4I & Cyber, Israel)
Alper Kanak (Ergünler Co. R&D Center, Erarge, Turkey)
Niyazi Ugur (Ergünler Co. R&D Center, Erarge, Turkey)
Marco Tiemann (Innova Integra Ltd., UK)
Marie-Hélène Bonneau (International Union of Railways, France)
Kaci Bourdache (Laurea University of Applied Sciences, Finland)
Stelios C. A. Thomopoulos (National Center for Scientific Research “Demokritos”, Greece)
Christos Kyriakopoulos (National Center for Scientific Research “Demokritos”, Greece)
Konstantinos Panou (National Center for Scientific Research “Demokritos”, Greece)
Antonio De Santiago Laporte (Metro de Madrid, Spain)
Emmanuel Matsika (Future Mobility Group – NewRail, Newcastle University, UK)
Raphael David (Future Mobility Group – NewRail, Newcastle University, UK)
Emiliano Costa (RINA, Italy)
Giulia Siino (RMIT Europe, Spain)
Sujeeva Setunge (RMIT University, Australia)
Mojtaba Mahmoodian (RMIT University, Australia)
Nader Naderpajouh (The University of Sydney, Australia)
Davide Ottonello (Stam S.r.l., Italy)
Tatiana Silva (Tree Technology, Spain)
Andreas Georgakopoulos (WINGS ICT SOLUTIONS, Greece)
Nelly Giannopoulou (WINGS ICT SOLUTIONS, Greece)
SAFETY4RAILS INFORMATION SYSTEM (S4RIS) PLATFORM DEMONSTRATION AT MADRID METRO SIMULATION EXERCISE
PRESENTER: Stephen Crabbe

ABSTRACT. SAFETY4RAILS is the acronym for the European Union Horizon 2020 co-funded innovation project entitled “Data-based analysis for safety and security protection for detection, prevention, mitigation and response in trans-modal metro and railway networks”. Its focus is to support the increase of security and resilience against combined cyber-physical threats, including natural hazards, to railway and metro infrastructure. Its objectives target capabilities to support the characteristics of resilient systems, resilience being represented by cycles containing phases of identification, protection, detection, response and recovery [1] (or similarly named phases), even if in practice it is not always possible to consider these phases sequentially.

These capabilities are being achieved through the increase in the Technology Readiness Levels (TRLs) of, presently, eighteen tools and their combination in an overall platform: the SAFETY4RAILS Information System (S4RIS) platform. The eighteen tools brought to the project consist of earlier research results and/or products from other domains. The capabilities include: design optimisation, e.g. to mitigate blast effects; cryptography between infrastructure node pairs and secure gateways for confidentiality and integrity; a blockchain-based data integrity solution; quantitative and qualitative risk assessment; cost-benefit analysis; Open Source Intelligence (OSINT) for the identification of potential threats such as malware and phishing campaigns; Artificial Intelligence (AI) based analytics, for example for anomaly detection in information technology and operational technology (IT/OT) systems; a decision support system; simulation of cyber/physical threats, including cascading effects within rail and metro networks and from or to other critical infrastructures; and agent-based modelling, for example to simulate the behaviour of crowds and to optimise the operational organisation of detection methods. The combination of operationally promising selections of the tools is aimed at improving both the functions available to end-users and the overall accuracy and precision of insights presented to them in their command and control centres. The S4RIS platform architecture and its specific functionalities are customised developments within the project. Core to the approach of the S4RIS platform is a Distributed Messaging System (DMS) and the fusing and analysis of structured and unstructured data.
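
As a conceptual illustration of the publish/subscribe pattern that a distributed messaging system of this kind rests on — not the actual S4RIS DMS, whose implementation is not described in this abstract — a minimal sketch:

# Generic pub/sub illustration only: tools publish detections to named
# topics, and fusion/analysis components subscribe to them. Topic and
# field names are invented; this is not the S4RIS implementation.
from collections import defaultdict
from typing import Callable

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict):
        for handler in self.subscribers[topic]:
            handler(message)

bus = MessageBus()
# A fusion component listens to anomaly detections from several tools
bus.subscribe("anomaly.detected",
              lambda m: print("fuse:", m["tool"], m["severity"]))
# An IT/OT monitoring tool publishes a structured detection event
bus.publish("anomaly.detected",
            {"tool": "it-ot-monitor", "severity": "high"})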

An ESREL paper in 2021 offered a very first look into the SAFETY4RAILS project and the S4RIS platform, as well as some of the tools included in the platform [2]. This paper describes the architectural solution implemented for S4RIS in the last year and the demonstration of representative capabilities from the first simulation exercise with Madrid Metro at the beginning of 2022. The simulation exercise was built around the phases of the resilience cycle represented above (identification, protection, detection, response and recovery) for threat scenarios identified as of interest by end-user partners in the project.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 883532. The information appearing in this abstract has been prepared in good faith and represents the views of the authoring organisations. Neither the Research Executive Agency, nor the European Commission are responsible for any use that may be made of the information contained in this abstract.

1. Department of Communications, Climate Action & Environment, NIS Compliance Guidelines for Operators of Essential Services (OES), August 2019, p.8 and p.22.
2. Miller, N. et al., “A Risk and Resilience Assessment Approach for Railway Networks”, Proceedings of the 31st European Safety and Reliability Conference (ESREL 2021), pp. 2071-2078, ISBN: 978-981-18-2016-8, Research Publishing Services, Singapore, 2021, DOI: 10.3850/978-981-18-2016-8_402-c.

09:30-10:50 Session 9J: S.10: Human-Robot collaboration: The New Scenario for a Safe Interaction II
Chair:
Mario Di Nardo (Università di Napoli, Italy)
Location: LG-22
09:30
Budi Hidayat (PT. PEMBANGKITAN JAWA BALI, Indonesia)
Muchammad Jati Nugroho (PT. PEMBANGKITAN JAWA BALI, Indonesia)
Egga Bahartyan (PT. PEMBANGKITAN JAWA BALI, Indonesia)
Rifky Raymond (PT. PEMBANGKITAN JAWA BALI, Indonesia)
Miftahul Jannah (PT. PEMBANGKITAN JAWA BALI, Indonesia)
IMPLEMENTATION OF COMPUTER VISION USING INTELLIGENT CUSTOM OBJECT DETECTION SOLUTIONS TO IMPROVE ASSET, RISK AND SAFETY MANAGEMENT SYSTEM IN SEVERAL POWER PLANT
PRESENTER: Budi Hidayat

ABSTRACT. PT. PEMBANGKITAN JAWA BALI, as a power generation company, has adopted facing a new business horizon as the theme and direction for meeting the unprecedented business challenges in the electricity industry. We are positioning the company to take part in the global trend through several strategic programs, among them asset optimization, improving Enterprise Risk Management (ERM), the safety management system, and Enterprise Asset Management (EAM), through innovation, creativity and the adoption of new technology.

On the other hand, computer vision has been expanding at a rapid pace over the last decade, reaching a level equivalent to human vision; it is now possible to emulate human vision and perform complex visual tasks faster and even more effectively than humans do. This paper discusses a novel approach to implementing computer vision in asset management, risk management, and the safety management system, revolutionizing various segments using what we call "Intelligent Custom Object Detection Solutions", which we have been developing over the last 3 years.

This approach involves enhancements for Sense (media detection), Think (algorithms), and Act (notification and reporting), which can be tailored to asset, risk, and safety management system needs. The case study involves implementation at 5 major power plants in Indonesia, with more planned for 2022-2023, covering a multi-billion-dollar asset base spread over a variety of locations.

09:50
Fabien Sechi (Institute for Energy Technology (IFE), Norway)
Yonas Zewdu Ayele (Institute for Energy Technology (IFE), Norway)
Security Implications of Social Robots in Public Space – A Systematic Literature Review
PRESENTER: Fabien Sechi

ABSTRACT. Social robots are increasingly becoming ubiquitous, from healthcare to our homes. State-of-the-art social robots (autonomous or semi-autonomous) and AI are typically designed in a way that they adapt and optimise their behaviour over time as more knowledge of their environment is gained [1,2]. This entails some form of data gathering, storage, and processing that is used as a basis to improve the robot's behaviour. If we assume that the AI of social robots will soon evolve, adapt and learn from its context and optimise the user experience, a question is how to assure the security of the end-users. Moreover, what are effective security policies that protect end-users? Finally, how shall these security policies be governed? A challenge is that the capabilities of modern AI and social robots (for public spaces) evolve rapidly, and there is a lack of knowledge on how to assure by design a continued level of security for end-users who interact with such systems. Further, in most of the available social robot and cyber security literature, the security implications of social robots in public spaces have received less attention. The overarching purpose of this paper is thus to understand and scrutinize the security principles and guidelines for social robots operating in public spaces. In this study, we have performed a systematic literature review (SLR) to map the current state of the art and state of practice. We answered the following research questions: i) what are the security, safety, and privacy principles and guidelines suitable for application to social robots in public spaces; ii) what legal differences in principles and guidelines exist between robots in public and private spaces. For the method, we adhered to the Kitchenham guidelines [3]. For selection, we used the Web of Science (WOS) database, built a specific search string, applied inclusion criteria as part of this string, and excluded studies meeting at least one of three exclusion criteria or scoring too poorly on seven quality criteria. The literature included journal publications, conference papers and proceedings, book excerpts, industry reports, and white papers. The initial findings showed that most of the available frameworks related to social robots are safety-based rather than security-based. Furthermore, the current standards could be revised to encompass a broader cybersecurity scope, such as adequate legal and privacy aspects of social robots, specifically in the development phase of both software and hardware components.

References: [1] UiO, “Vulnerability in the Robot Society (VIROS)”, University of Oslo, 2019. [2] IRI, “Social Cognitive Robotics in the European Society (SOCRATES)”, https://www.iri.upc.edu/project/show/171 [3] Kitchenham, B.: Procedures for performing systematic reviews. Tech. rep., 2004

10:10
Giulio Paolo Agnusdei (University of Salento, Italy)
Valerio Elia (University of Salento, Italy)
Maria Grazia Gnoni (University of Salento, Italy)
Fabio Fruggiero (University of Basilicata, Italy)
Digital twins and collaborative robotics: a SWOT-AHP analysis to assess sustainable applications

ABSTRACT. Digital twins, complex infrastructures able to connect physical systems with virtual ones in a bi-directional way, are promising enablers of production system replication in real time. In the manufacturing field, cooperation and collaboration between humans and robots (properly, cobots) in a shared environment is spreading. Digital twins and cobots are becoming fundamental tools to support humans at the workplace. This study aims at evaluating the benefits as well as the criticalities of applying digital twin technology to cobot implementation within manufacturing operations. The adopted hybrid methodology combines SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis and AHP (Analytical Hierarchy Process) to assess the sustainability of digital twin and cobot implementation in a specific workplace by analyzing economic as well as safety and environmental impacts. The main findings report that the application of digital twins and cobots may improve safety at the workplace by reducing hazards. Furthermore, the potential integration of digital twins and cobots represents an effective solution to overcome the weaknesses and threats of the correlated systems separately conceived. The potential contribution of using digital twins in designing and managing these applications could help researchers and technicians. The results have practical implications as they allow for the application of optimal innovative solutions in the manufacturing and re-manufacturing sector, with an extended domain for further research.
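
For readers unfamiliar with the AHP step of a SWOT-AHP analysis, a short sketch: priority weights for the four SWOT groups are taken from the principal eigenvector of a pairwise comparison matrix, followed by a consistency check. The comparison values below are invented examples, not the study's judgments:

# AHP sketch: priority weights from the principal eigenvector of a
# pairwise comparison matrix; the comparison values are invented.
import numpy as np

factors = ["Strengths", "Weaknesses", "Opportunities", "Threats"]
# A[i, j] = how much more important factor i is than factor j (Saaty scale)
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # normalized priority weights

# Consistency ratio (CR < 0.1 is the usual acceptance rule); RI = 0.90
# is Saaty's random index for a 4x4 matrix.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print(dict(zip(factors, w.round(3))), "CR =", round(ci / 0.90, 3))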

10:30
Silvia Carra (Italian Workers’ Compensation Authority (INAIL), Italy)
Luigi Monica (Italian Workers’ Compensation Authority (INAIL), Italy)
Giuseppe Vignali (University of Parma, Department of Industrial Engineering, Italy)
Sara Anastasi (Italian Workers’ Compensation Authority (INAIL), Italy)
Mario Di Nardo (Università di Napoli Federico II, Italy)
MACHINE SAFETY: A DECISION-MAKING FRAMEWORK FOR ESTABLISHING THE FEASIBILITY OF THE COLLABORATIVE USE
PRESENTER: Silvia Carra

ABSTRACT. A successful decision-making process, aimed at establishing the convenience and the safety adequacy of the collaborative use of machines in today's Industry 4.0, should take into account a global, complexity-based vision of each process or working environment. Practical constraints arising from industrial experience cannot be neglected either. System dynamics analysis methods, applied to complex interactions, can be useful in this sense [1]. At the same time, the regulatory component is fundamental; it also relates to the entire certification process that companies have to face when implementing a human-machine interaction system. The present study aims to outline the main construction steps of a new complexity-based decisional framework. It is expected to support companies during the decision-making process for establishing the feasibility, in terms of safety, of the collaborative use of machines in the Industry 4.0 era. A previous work by the authors [2] extracted from the literature several decisional approaches related to safety assessment and management in environments with human-machine interaction, as well as safety regulatory constraints related to new emerging risks. Starting from this scientific basis, an abstract representation of decisional levels in companies through nested subsystems is initially proposed. Then, a possible structured tool, which could easily be used by industrial practitioners even during early design stages, is outlined. With respect to other recently proposed decision-making frameworks for safety including human factors [3], the present work is characterized by the leading role given to compliance with European safety regulation and machinery technical standards. The existing regulations are also considered in view of their probable future updates in the Industry 4.0 context.

Partial bibliography

[1] Adriaensen, A., W. Decré and L. Pintelon (2019). “Can complexity-thinking methods contribute to improving occupational safety in industry 4.0? A review of safety analysis methods and their concepts”. Safety, 5(4), art. n. safety5040065. [2] Carra, S., L. Monica and G. Vignali (2021). “Decision making approaches for safety purposes in working environments with Human-Technology Interaction”. Proceedings of the 31st European Safety and Reliability Conference (ESREL), Angers, France, 19-23 September 2021. [3] Di Martino, Y., S.E. Duque, G. Reniers and V. Cozzani (2021). “Making the chemical and process industries more sustainable: innovative decision-making framework to incorporate technological and non-technological inherently safer design (ISD) opportunities”. Journal of Cleaner Production, 296, article number 126421.

09:30-10:50 Session 9K: Nuclear Industry safety practice
Chair:
Sebastian Martorell (Universitat Politècnica de València, Spain)
Location: CQ-010
09:30
Ji Hyeon Shin (Ulsan National Institute of Science and Technology, South Korea)
Jae Min Kim (Ulsan National Institute of Science and Technology, South Korea)
Seung Jun Lee (Ulsan National Institute of Science and Technology, South Korea)
Visualizing Key Features of Nuclear Power Plants in Abnormal Situation
PRESENTER: Ji Hyeon Shin

ABSTRACT. When one of the thousands of components in a nuclear power plant develops an abnormal problem, the plant condition can deteriorate and cause severe safety issues. Therefore, operators have to diagnose the abnormality in complex plant systems and carry out the appropriate operating procedure. These tasks not only have to be performed within a given time but also impose a high task load, based on a large amount of plant parameter and alarm information. Recently, artificial neural networks have been studied to support operator diagnosis and reduce human error. However, to use such model results in an operator support system, an efficient way of presenting them needs to be studied. To this end, we plot the cause of the abnormal event on a nuclear power plant map. We arrange the pixels corresponding to each main parameter in the form of the plant map. A convolutional neural network was trained with these imaged parameter data sets. The diagnosis of the trained model was interpreted using the guided gradient-weighted class activation mapping (guided Grad-CAM) method, and the position of the diagnostic cause was visualized in red on the plant map. By providing the causal component position through this improved support system, operators can diagnose faster and more safely.
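
A simplified Grad-CAM sketch conveys the visualization step; the guided-backpropagation refinement used in the paper is omitted for brevity, and the layer name and class index are assumptions:

# Simplified Grad-CAM: a heatmap over the plant-map pixels showing which
# regions drive a CNN's abnormality diagnosis. Names are assumptions.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_idx):
    """Return a 0..1 heatmap over plant-map pixels for one diagnosis class."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    # Channel weights = global-average-pooled gradients
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Hypothetical usage; "conv2d_2" and class 5 are placeholders:
# heat = grad_cam(trained_cnn, plant_map_image, "conv2d_2", class_idx=5)
# Pixels near 1.0 would be drawn in red on the plant map.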

09:50
Essi Immonen (VTT Technical Research Centre of Finland Ltd, Finland)
Joonas Linnosmaa (VTT Technical Research Centre of Finland Ltd, Finland)
Atte Helminen (VTT Technical Research Centre of Finland Ltd, Finland)
Jarmo Alanen (VTT Technical Research Centre of Finland Ltd, Finland)
Benchmark Exercise on Nuclear Safety Engineering Practices
PRESENTER: Essi Immonen

ABSTRACT. The development and utilization of large and complex systems, such as nuclear power plants (NPPs), require a rigorous and well-organized approach to keep managing the project in a safe and economically feasible manner through its long life span, in many cases now approaching 60 years. This is supervised by the safety authorities, who review and assess the fulfilment of the plants' safety criteria. Over time, as more knowledge of the technical and physiological limitations of the systems, materials, humans, or environment becomes available, the safety criteria and requirements are updated to correspond with it. These changing requirements can also force modifications to the plant, thus becoming another driving factor for constant change in plant systems. The nuclear industry has extensive safety analysis methods to take care of the safety requirements and to analyze, evaluate and justify the safety of the plant.

However, managing the interaction between the main elements of safety design (safety requirements, safety analyses and plant design) is a complicated process, which needs to be integrated across many disciplines, methods, and processes. This integration is typically handled in the safety engineering practices. Thus, efficiency can be pursued through better safety engineering practices that handle changes in any of the main elements of safety design. This raises the need for a rigorous and well-organized approach to design and operation. Even though each country has its own country-specific nuclear safety requirements, which have led to different safety engineering practices, they still share the same goal of showing the fulfilment of the safety requirements in the plant design.

To accelerate the implementation of best safety engineering practices, a benchmark exercise on safety engineering practices (BESEP) is being conducted among several EU countries. This will help to identify the most efficient practices to support safety margin determination against external hazards and safety requirement verification, aiding the licensing process of nuclear power plant new builds and upgrades. The outcome of the benchmark exercise is especially beneficial to countries planning to build new nuclear power plants but without previous experience in the implementation of safety engineering practices. Experiences and results from the comparison of different analysis methods can be used to improve the licensing processes of nuclear power plant new builds and upgrades. The results will support safety margin determination and requirements verification by giving guidance on how to improve the flow of information between different safety analysis methods and how to create a graded approach for the deployment of more sophisticated safety analysis methods, including upgraded and validated simulation tools. The graded approach aims to maintain a balance between the plant-level risk originating from different external hazards and the resources and level of detail allocated for the analysis of each external hazard.

This paper introduces the concepts and the methodology for the benchmark exercise and reports the initial results from the project's first years. First, we explain the reference model for the safety engineering process, leaning heavily on the principles of model-based systems engineering and aimed at balancing the interaction between the management of safety requirements, plant design and system safety analyses. Then we present an example case study, where the benchmarking of the actualization of this process will take place. An important part of the process is the management, allocation and elaboration of the related safety requirements; thus, we explain how the requirements for each of the case studies have been handled. Lastly, the role of safety analyses and their relation to the safety margins concept and related failure tolerance analyses is described.

10:10
Xueli Gao (Risk, Safety and Security Department, Institute for Energy Technology, Norway)
Peter Karpati (Risk, Safety and Security Department, Institute for Energy Technology, Norway)
Application of the Structured Safety Argumentation Approach Guidance on the Halden Safety Fan
PRESENTER: Xueli Gao

ABSTRACT. The OECD Halden Reactor Project (HRP) has in recent years been performing research on the safety demonstration of Digital Instrumentation & Control (DI&C). The Halden Reactor Project has been in operation since 1958 and is the oldest NEA joint project. It brings together an important international technical network in the area of nuclear safety, ranging from fuel reliability and the integrity of reactor internals to plant control/monitoring and human factors. As a part of this research, we have studied how structured argumentation can be applied in the field of safety assurance in a systematic and assessable way [1, 2, 3]. Based on our experiences with case studies and related literature, a high-level Structured Safety Argumentation Approach Guidance (SSAAG) was outlined to aid the application of structured argumentation for safety demonstration [4]. The guidance intends to support the development of systematic argumentation of safety in the context of the whole system development process at a generic level. A part of the SSAAG has thereafter been applied to the Halden Safety Fan (HSF) system development case. The HSF is being developed by IFE in a continuous effort to offer a safety-relevant case with full documentation for case studies such as ours [5, 6]. The study's objective was to gain practical experience with the SSAAG and identify improvement possibilities.

Our experience with applying a part of the guidance shows that the SSAAG has the potential to be useful as a general framework for organizing safety argument construction, with further improvements and extensions. As far as we know, such high-level guidance is not otherwise available for structured safety argumentation.

References

1. Karpati, P., Olsen, S. A., Gran, B. A., Sechi, F., Hauge, A. A., “Structured Safety Argumentation – APR 1400 Case Study”, HWR-1248, OECD Halden Reactor Project, 2019.
2. Karpati, P., Airrila, M., Edvardsen, S. T., “Structured Safety Argumentation for Decommissioning: The InStrucT Prototype Tool and a Case Study”, HWR-1253, OECD Halden Reactor Project, 2019.
3. Gao, X., Gran, B.A., “Survey on the Way of Practice for Safety Demonstration of DI&C in Different Industries”, HWR-1284, OECD Halden Reactor Project, September 2020.
4. Karpati, P., Gao, X., Hauge, A.A., Gran, B.A., “Guidance on Applying Structured Argumentation for Safety Demonstration”, HWR-1283, OECD Halden Reactor Project, 2020.
5. Gran, B.A., et al., “Halden Safety Fan – Context Description and System Specification”, HWR-1289, 2020.
6. Sechi, F., Hauge, A.A., Sirola, M., Olsen, S. A., Linnosmaa, J., Sarshar, S., “Early Stage Safety Assessment Using a System Model as Input”, HWR-1287, 2020.

10:30
Isabel Marton (Universitat Politècnica de València, Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Spain)
Sebastián Martorell (Universitat Politècnica de València, Departamento de Ingeniería Química y Nuclear, GRUPO MEDASEGI, Spain)
Ana Sánchez (Universitat Politècnica de València, Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Spain)
Sofia Carlos (Universitat Politècnica de València, Departamento de Ingeniería Química y Nuclear, GRUPO MEDASEGI, Spain)
José Felipe Villanueva (Universitat Politècnica de València, Departamento de Ingeniería Química y Nuclear, GRUPO MEDASEGI, Spain)
STUDY OF THE IMPACT OF OBSOLESCENCE ON THE RELIABILITY OF NUCLEAR POWER PLANT SAFETY EQUIPMENT
PRESENTER: Isabel Marton

ABSTRACT. Nuclear energy is currently a fundamental pillar of the energy transition due to its robustness, guaranteeing security of supply and neutrality of greenhouse gas emissions. Therefore, it is necessary to guarantee safety margins and economic viability over the horizon set by the National Integrated Energy and Climate Plan 2021-2030. In the case of Spanish nuclear plants (Generation 3), under the scenario laid down in the national climate plan, most plants will be quite close to reaching 40 years of operation, which is their design life. A Periodic Safety Review (PSR) should be carried out to obtain approval for the operation of the plant for an additional period (normally 10 years), which would effectively mean approval of extended life to operate the plant beyond its design life, known as Long Term Operation (LTO). In these PSRs, the plants should bear in mind specific aspects such as aging management and equipment obsolescence. Regulatory organizations (CSN) and companies in the sector agree on the need to develop methods and tools for pro-active obsolescence management, and all recognize that it rests on three main pillars: identification, prioritization of obsolescence problems, and planning of solutions adopting the most appropriate strategy for the management of each technical obsolescence issue. Most of the work published in the literature regarding the prioritization of obsolescence problems is based on the use of experts to consider multiple criteria. Risk-based prioritization is considered relevant in this context, although practical applications that consider the risk dimension as a decision criterion are lacking. Therefore, the objective of this paper is to study the effect of introducing the risk-informed dimension of obsolescence into the set of decision-making criteria to be used by the expert panel. To estimate the risk impact of technological obsolescence of safety-related equipment, it is necessary to develop a RAM model which includes not only equipment aging, the effectiveness of maintenance and the efficiency of surveillance testing, but also the effect of obsolescence in an explicit way; the tuning of RAM model parameters using estimations from historical data is also necessary. An application case based on a motor-operated valve of a nuclear power plant is presented.
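
As an illustration of the kind of age-dependent RAM model the abstract calls for — not the authors' formulation — a sketch with a linear aging term, imperfect maintenance via proportional age reduction (PAR), and a multiplicative obsolescence factor; all parameter values are invented:

# Illustrative age-dependent failure-rate model: linear aging, imperfect
# maintenance (PAR), and an obsolescence multiplier. Values are invented.
def effective_age(t, maintenance_times, eps=0.7):
    """PAR model: each maintenance removes a fraction eps of the age
    accumulated since the previous maintenance."""
    w, last = 0.0, 0.0
    for m in sorted(mt for mt in maintenance_times if mt <= t):
        w = (1 - eps) * (w + (m - last))
        last = m
    return w + (t - last)

def failure_rate(t, maintenance_times, lam0=1e-5, alpha=1e-7, obs=1.2):
    """lambda(t) = obsolescence factor * (base rate + aging * eff. age)."""
    return obs * (lam0 + alpha * effective_age(t, maintenance_times))

# Example: yearly maintenance over 40 years (in hours); rate at t = 30 y.
mt = [8760 * k for k in range(1, 40)]
print(failure_rate(8760 * 30, mt))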

11:10-12:50 Session 10A: Community resilience & social vulnerability
Chair:
Maria Nogal (Delft University of Technology, Netherlands)
Location: CQ-006
11:10
Marianna Loli (University of Surrey, UK)
George Kefalas (Draxis Environmental, Greece)
Elena Bouzoni (Grid Engineers, Greece)
Guillermo Diaz Fanas (World Bank, United States)
Stergios Mitoulis (University of Surrey, UK)
Leon Kapetas (Resilient Cities Network, Netherlands)
Integrating social vulnerability into climate adaptation of urban transport in Maputo, Mozambique
PRESENTER: Marianna Loli

ABSTRACT. In Mozambique, extreme weather events have a measurable impact on social and economic growth. While the country is exposed to a range of natural hazards, including cyclones, droughts, and wildfires, flooding is recognized as the primary concern. Triggered by cyclones and intense rain events, floods have become more frequent in recent years (e.g., Figure 1a) and are expected to follow this trend due to climate change. Recently, major floods following the passage of cyclones Idai and Kenneth (2019) caused 603 fatalities, leaving 1641 people injured and 2.5 million people in need of humanitarian aid, of which 1.3 million were children. At the same time, the country is urbanizing at a rapid pace. As a result, its economic and social development has become highly dependent on the commercial and industrial activity concentrated in its capital, Maputo, and the surrounding cities and districts (Matola, Marracuene, and Boane) that form the Greater Maputo Area (GMA). Yet population growth and the associated rising demand for transport have not been matched by the needed investment in environmentally sustainable and inclusive transportation services. As a result, accessibility to jobs, health, and education facilities for the poorest population in the GMA is among the lowest in Africa, with one third of public transport users reporting that flooding hinders their access to jobs on rainy days. A methodical investigation of flood risk for the existing road network in Matola and Maputo has been conducted in the framework of a major urban development project planned by the Government of Mozambique. Assessment of flood hazard exposure has been carried out using the Flood Hazard Index framework and available GIS data of historic flood records. Baseline calculations have classified hazard into five classes, from very low to very high, showing that a significant proportion of the road network (approximately 30%) lies in areas of high to very high hazard exposure (Figure 1b). In addition to hazard susceptibility, the assessment of flood risk has considered socioeconomic vulnerabilities, according to the multi-layered Index for Risk Management (INFORM) model, based on the distribution of the low-income population and the accessibility of vulnerable groups to health and education services (Figure 1c-d). The paper presents a novel implementation of the INFORM framework for the analysis of flood-induced transportation disruptions in a region where the humanitarian impact of such events can be disproportionate. The outcome of this study is expected to be useful for the prioritization of climate adaptation interventions to enhance the resilience and inclusivity of urban transport systems in the GMA.
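
A toy sketch of the prioritization logic described — a five-class hazard rating combined with an INFORM-style vulnerability score per road segment; the thresholds and the multiplicative combination rule are illustrative assumptions only:

# Toy prioritization: classify a normalized Flood Hazard Index (FHI)
# into five classes and combine it with a vulnerability index. The
# bins and combination rule are assumptions, not the study's method.
import pandas as pd

segments = pd.DataFrame({
    "segment_id": [1, 2, 3],
    "fhi": [0.15, 0.55, 0.85],          # normalized Flood Hazard Index
    "vulnerability": [0.7, 0.4, 0.9],   # INFORM-style index in [0, 1]
})

labels = ["very low", "low", "medium", "high", "very high"]
segments["hazard_class"] = pd.cut(
    segments["fhi"], bins=[0, 0.2, 0.4, 0.6, 0.8, 1.0], labels=labels)

# Simple multiplicative risk proxy: hazard exposure x social vulnerability
segments["priority"] = segments["fhi"] * segments["vulnerability"]
print(segments.sort_values("priority", ascending=False))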

11:30
Ingo Schönwandt (Institute for the Protection of Terrestrial Infrastructure, German Aerospace Center (DLR), Germany)
Jens Kahlen (Institute for the Protection of Terrestrial Infrastructure, German Aerospace Center (DLR), Germany)
Daniel Lichte (Institute for the Protection of Terrestrial Infrastructure, German Aerospace Center (DLR), Germany)
Implementing societal values as drivers for performance indicators to improve resilience analysis of critical infrastructure
PRESENTER: Ingo Schönwandt

ABSTRACT. The resilience of critical infrastructures is assessed with key performance indicators that are unavoidably based on the underlying societal values of the stakeholders. Though societal values are under constant change and critically determine the resilience management of critical infrastructures, they are difficult to consider in decision-making approaches. This research presents a proof-of-concept approach to highlight the relevance of societal values for decision-making and resilience management. Previous research proposed using abstract worldviews to solve the complex decision problem presented by the coupled human-nature system described by the common lake model. By replacing the abstract worldviews with a reduced set of societal values, we establish a formalized relationship between the societal values and the lake model. We show that even slight changes in societal values can lead to significantly different behavior of the lake model. Though the approach is extremely simplified, it serves to highlight the sensitivity of decision problems to changes in societal values.
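
For concreteness, a sketch of the classic shallow-lake dynamics commonly used as "the lake model" benchmark; b and q below are standard benchmark values, and the two loadings are invented to show how a small shift in the accepted loading (one way a societal value can enter the model) tips the system:

# Shallow-lake benchmark dynamics: phosphorus X responds to loading a,
# natural removal b, and nonlinear internal recycling. b and q are
# standard benchmark values; the loadings are invented for illustration.
def lake_step(x, a, b=0.42, q=2.0):
    return x + a - b * x + x**q / (1 + x**q)

# A small change in the accepted loading tips the lake from a clear
# (oligotrophic) state into irreversible eutrophication.
for a in (0.02, 0.05):
    x = 0.0
    for _ in range(500):
        x = lake_step(x, a)
    print(f"loading a={a}: steady-state phosphorus ~ {x:.3f}")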

11:50
Christian Foussard (IHEIE - MINES Paris - PSL University, France)
Wim Van Wassenhove (IHEIE - MINES Paris - PSL University, France)
Cédric Denis-Remis (IHEIE - MINES Paris - PSL University, France)
Taking public concerns into account as a risk management criterion. A case study.

ABSTRACT. On September 26, 2019, a large-scale fire affected the Lubrizol industrial site in Rouen, France. Despite the effectiveness of the emergency response in bringing the blaze under control, the accident gave rise to numerous critical reactions from the public, widely reported in the media. This event is an opportunity to question the consideration of public concerns as a risk management criterion. Reading the collection of various declarations and positions is particularly complex and leaves an impression of unintelligibility. In the context of this accident, this paper presents the value of a specific reading grid, derived from a methodological framework proposed by Ortwin Renn (Renn et al., 2002), that organizes the discourse and restores the intelligibility necessary for analysis and comprehension. This reading grid has three levels, and our comprehensive case study tends to validate the hypothesis that links incomprehension and tensions between stakeholders to a shift in the levels of discourse between the interlocutors.

Renn, O., Kastenholz, H., & Leiss, W. (2002). Guidance document on risk communication for chemical risk management (Series on Risk Management: Environment, Health and Safety Publications, Vol. 16).

12:10
Trine Stene (SINTEF AS, Norway)
Trond Kongsvik (NTNU, Norway)
The Relevance of Resilience Engineering and Community Resilience for Future Maritime Transport Systems
PRESENTER: Trine Stene

ABSTRACT. Even though resilience perspectives are relatively new in safety studies, the resilience concept is increasingly reported in the safety literature. The concept of system resilience is important and popular, indeed hyper-popular over the last few years (Woods, 2015). A variety of definitions are used, and it is applied in different research areas. Maritime transport systems are becoming increasingly interconnected, automated, and complex. The paper will present the MARMAN (Maritime Resilience Management of an Integrated Transport System) project, financed by the Research Council of Norway for the period 2021-2024. Implementation and application of connected and autonomous vessels involves different degrees of autonomy and different forms of Intelligent Transport Systems (ITS) in the Maritime Transport System (MTS) domain. This will increase complexity, change the interconnections between actors and change the way of working (Stene & Fjørtoft, 2020). However, no systematic documentation of the resilience potential of autonomous transport systems is currently available (Schröder-Hinrichs et al., 2016). The resilience concept is used in many contexts, such as healthcare, aviation, the chemical and petrochemical industry, nuclear power plants, and railways. The concept represents a proactive management approach and principles for handling both normal operations and unexpected events. Organisational practices differ between countries and sectors, including which aspects and variables are emphasized. The purpose of this paper is to develop a framework for addressing future challenges for the MTS when implementing autonomous vessels. The framework will be based on perspectives mainly represented by Resilience Engineering in addition to Community Resilience. This will include key performance indicators (KPIs) related to a future resilient MTS, i.e. integration, management, and cooperation between actors in (a) the maritime sea leg and (b) the ports and terminals. Resilience Engineering (RE) has become increasingly used as a theoretical approach to new societal challenges and brittleness related to, e.g., safety and disaster management. Community resilience in broad terms refers to the ability of localised communities (usually geographically defined areas) to respond, cope and adapt to change through communal actions (Cretney, 2015).

References
Cretney, R.M. (2015). Local responses to disaster: The value of community-led post-disaster response action in a resilience framework. Disaster Prevention and Management, 25, 27-40.
Schröder-Hinrichs, J.-U., Praetorius, G., Graziano, A., Kataria, A. and Baldauf, M. (2016). Introducing the Concept of Resilience into Maritime Safety. In: P. Ferreira, J. van der Vorm, D. Woods (eds.), Proceedings: 6th Symposium on Resilience Engineering: Managing resilience, learning to be adaptable and proactive in an unpredictable world, 22nd-25th June 2015, Lisbon, Portugal (pp. 176-182). Sophia Antipolis.
Stene, T.M. & Fjørtoft, K. (2020). Are Safe and Resilient Systems less Effective and Productive? In: Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference, edited by P. Baraldi, F. Di Maio and E. Zio. Research Publishing, Singapore. ISBN/DOI: 978-981-14-8593-0.
Woods, D.D. (2015). Four concepts for resilience and the implications for the future of resilience engineering. Reliability Engineering and System Safety 141, 5-9. http://dx.doi.org/10.1016/j.ress.2015.03.018

12:30
Christine Große (Mid Sweden University, Sweden)
A glimpse of sustainability culture? Reflecting the sustainability concept in society's resilience

ABSTRACT. Sustainability has been important for society's resilience for far longer than just the past few decades. Already about 300 years ago, the necessity of treating natural resources in a sustainable way was a matter of concern. In the aftermath of the Thirty Years War (1618-1648), the efforts to sustain the forests of Europe, which supported the livelihood of the majority of the people at that time, substantiated the sustainability concept. Turning to present conditions, electricity has been acknowledged as the backbone of modern society because it is central to the other sectors of infrastructure that are critical to society's functionality, survival and progression. This paper aims to enhance both the knowledge about the history of the sustainability concept and the understanding of the role of a sustainability culture for societal resilience. To this end, the paper reflects historical challenges and considerations about sustainability in today's issues of society's resilience. As representatives of critical resources, the argument will draw on the importance of sustaining forests and wood in the past and mirror this in the importance of an undisturbed electricity supply in the present. The study applies the concept of culture as its theoretical framework. Culture, as an expression of collective memory, illuminates civilisations and human actions in both the past and the present. It offers a multidimensional orientation and action system that can be described through perceptum (artefact), understood with the help of exemplum (mental model) and explained by conceptum (concept), which together generate norma (patterns) for actio (action). The results inspire a cultural orientation and action system in the context of sustainability by highlighting societal challenges around essential resources in both the past and present. The paper contributes insights into the main difficulty with sustainability: it forms a complex concept of systemic nature due to myriad influences. This complex system is difficult to comprehend and, furthermore, it exceeds the sphere and lifetime of individual actors. Regardless, each individual is part of the system, which means that sustainability is also about individual responsibility. The elusive, systemic interactions mean that humanity needs to create a culture of sustainability to collectively learn from history, integrate sustainability thinking into everyday life and work with creativity and courage for the future.

11:10-12:50 Session 10B: S.08: Resilience-informed decision-making to improve complex infrastructure systems II
Chair:
Bryan Adey (ETH Zurich, Switzerland)
Location: CQ-008
11:10
Santhosh Tv (Bhabha Atomic Research Centre, Mumbai, India)
Edoardo Patelli (University of Strathclyde, Glasgow, UK)
Gopika Vinod (Bhabha Atomic Research Centre, Mumbai, India)
A SAFETY-BASED RESILIENCE QUANTIFICATION FRAMEWORK FOR CRITICAL SYSTEMS
PRESENTER: Santhosh Tv

ABSTRACT. Resilience is a concept that, of late, has attracted significant interest in almost every field of science and engineering. Holling (1973) initially introduced the concept of resilience to describe a system's ability to recover from external threats. Since then, many researchers and practitioners have proposed their own definitions of resilience, while evaluation methods to quantify the resilience of critical systems are still largely missing (Woods, 2006). This has also invited criticism of ambiguous definitions, vague performance metrics, and unrealistic applications to many engineering problems (Santhosh and Patelli, 2020). In principle, after a disruptive event a system may be restored through many possible options to regain its performance, and each recovery path has its own success probability. Based on this concept, Santhosh and Patelli (2021) proposed an approach to quantify a global resilience metric for a critical system with many such recovery options. However, as safety is paramount and it is practically impossible to demonstrate it through an acceptable risk derived from traditional risk analysis covering all maximum credible events, it is highly likely that unknown threats will challenge safety at some point during operation. Hence, for critical infrastructure, performance-resilience alone cannot be the primary objective to qualify a system as robust; the system should also satisfy a safety-resilience objective. This paper presents a safety-based resilience quantification framework for critical infrastructure. The approach not only assesses the system from a performance-resilience perspective but also integrates the safety element into the quantification of the global resilience metric. The proposed approach has been applied to a case study of a nuclear power plant, and global resilience metrics with and without the safety element have been computed. It is important to note that resilience is an essential component of risk assessment and not an independent measure of risk or safety. Rather, it provides a complementary way to improve the safety of critical systems. Hence, resilience is a characteristic of the system and not the ultimate performance to maximize. This is partly in agreement with Woods' (2006) argument that one can only measure the potential for resilience, not resilience itself.
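A probability-weighted metric over recovery paths, in the spirit described above, might be sketched as follows; the notation is illustrative and not necessarily the paper's.

```latex
% Hedged sketch: R_i is the performance-resilience of recovery path i,
% p_i its success probability, and S_i in [0,1] a safety factor that
% penalizes paths violating the safety-resilience objective.
\[
  R_{\mathrm{global}} \;=\; \sum_{i=1}^{m} p_i \, S_i \, R_i ,
  \qquad \sum_{i=1}^{m} p_i \le 1 .
\]
```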

References

1. Holling, C. S. (1973). Resilience and stability of ecological systems. Annu Rev Ecol Syst (4), 1-23.
2. Woods, D.D. (2006). Essential characteristics of resilience. In: Hollnagel E, Woods D, Leveson N, editors. Resilience engineering: concepts and precepts. Burlington, VT: Ashgate Publishing Company.
3. Santhosh, T.V. and Edoardo Patelli (2020). Resilience Engineering: Principles, Methods and Applications to Critical Infrastructure Systems. In E. N. (ed.), Reliability-Based Analysis and Design of Structures and Infrastructure. CRC Press/Taylor & Francis Publisher.
4. Santhosh, T.V. and Edoardo Patelli (2021). A Resilience Evaluation Framework for Complex and Critical Systems. 31st European Safety and Reliability Conference, 19-23 September 2021, Angers, France.

11:30
Salvatore Francesco Greco (ETH Zürich, Switzerland)
Andrej Stankovski (ETH Zürich, Switzerland)
Blazhe Gjorgiev (ETH Zürich, Switzerland)
Giovanni Sansavini (ETH Zürich, Switzerland)
Multi-hazard security assessment of transmission systems: a data-driven probabilistic framework for cascading failure analysis

ABSTRACT. Cascading failures contribute significantly to the total demand not served in safety-relevant blackout events occurring in transmission systems worldwide. Cascades in the network can be triggered by multiple, concurrent contingencies that originate from different types of external (e.g. extreme weather events) and internal (e.g. random component failures) hazards. Given the stochastic nature of these hazards, probabilistic approaches are required to assess the vulnerability of transmission systems to such threats, complementing the existing deterministic approaches in the security assessment of transmission systems. Framed within this research direction, the present project aims at providing a probabilistic modelling framework to comprehensively assess the risk profile of a given transmission system subject to multiple external and internal hazards. The framework will combine established PRA tools (e.g. hazard maps, fragility curves) with state-of-the-art quantitative models (e.g. Bayesian inference, machine learning algorithms) to estimate data-driven failure probability values and uncertainty bounds for system components, and assess the impacts of component outages on the system via cascading failure analysis. The proposed framework will allow for in-depth analyses of the security of supply of transmission systems, supporting operators in the identification of system vulnerabilities and in the prioritization of risk mitigation strategies to minimize the consequences of accidental events for the connected end-users.
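As an illustration of the PRA building blocks named above, the sketch below evaluates a lognormal fragility curve for a transmission component under sampled hazard intensities; the median capacity, dispersion, and hazard model are hypothetical values, not calibrated data.

```python
import numpy as np
from scipy.stats import norm

def fragility(im, theta=45.0, beta=0.4):
    """Lognormal fragility curve: probability of component failure given an
    intensity measure im (e.g. wind gust speed in m/s); theta is the median
    capacity and beta the logarithmic dispersion (hypothetical values)."""
    return norm.cdf(np.log(im / theta) / beta)

rng = np.random.default_rng(1)
gusts = np.clip(rng.gumbel(loc=25.0, scale=6.0, size=10_000), 1e-6, None)
p_fail = fragility(gusts)                       # failure probability per event
outages = rng.random(10_000) < p_fail           # sampled Bernoulli outages
print(f"mean component failure probability: {outages.mean():.4f}")
```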

11:50
Gabriele Baldissone (Politecnico di Torino, Italy)
Micaela Demichela (Politecnico di Torino, Italy)
Antonello Barresi (Politecnico di Torino, Italy)
Davide Fissore (Politecnico di Torino, Italy)
Francesca Bosco (Politecnico di Torino, Italy)
Rapid archival document risk assessment methodology

ABSTRACT. A large economic, historical and artistic heritage is contained in public archives, while company archives contain documents of legal relevance. The documents held in archives can be exposed to various risks, e.g. fire, pests and others. One of the most significant risks that archival documents can incur is flooding. Flooding may originate from meteorological water or from water of artificial origin, e.g. failure of the fire-fighting system. The flooding of archives can cause the destruction of documents and the running of inks, or can favor the growth of molds, with consequences both for the archival heritage and for the safety of workers. For these reasons it is important to extend the risk assessment of archives to the risk of flooding. This paper introduces a methodology for assessing the risk of archive flooding. The proposed methodology takes into account the possible damage caused by flooding (e.g. the economic value or the rarity of the documents), the probability of occurrence, and the recovery time needed to limit the damage. It also allows taking into account the recovery techniques that research advances are making available, such as freeze-drying and the use of essential oils.
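A minimal sketch of how such a scoring scheme could look in practice is given below; the factor scales, weights, and archive entries are hypothetical, chosen only to illustrate combining damage, probability of occurrence, and recovery into a single index.

```python
def flood_risk_index(damage, probability, recovery_time, recoverable=True):
    """Hypothetical rapid risk index: each factor is scored 1-5. damage covers
    economic value/rarity; recovery_time is the delay before treatment; an
    available recovery technique (freeze-drying, essential oils) discounts it."""
    score = damage * probability * recovery_time
    return 0.7 * score if recoverable else score

archives = {"municipal records": (4, 3, 2, True),
            "rare manuscripts":  (5, 2, 4, False)}
for name, factors in sorted(archives.items(),
                            key=lambda kv: -flood_risk_index(*kv[1])):
    print(f"{name}: risk index {flood_risk_index(*factors):.1f}")
```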

12:10
Omar Kammouh (Delft University of Technology, Netherlands)
Ahmadreza Marandi (Eindhoven University of Technology, Netherlands)
Claudia Fecarotti (Eindhoven University of Technology, Netherlands)
Maintenance grouping strategy for multi-component interconnected systems: a scalable optimization approach
PRESENTER: Omar Kammouh

ABSTRACT. The well-being of modern societies depends on the functioning of their infrastructure networks. During their service lives, infrastructure networks are subject to different stresses (e.g., deterioration, hazards, etc.). Maintenance is performed to ensure the continuous fulfillment of the infrastructure’s functional goals. To guarantee a high level of infrastructure availability and serviceability with minimal maintenance costs, preventive maintenance planning is essential.

Unlike corrective maintenance, preventive maintenance allows maintenance activities to be planned in advance, thus facilitating the optimal grouping of maintenance activities. Maintenance grouping can be highly advantageous in complex multi-component systems, such as interconnected infrastructure networks. It enables maintenance setup costs to be shared and the frequency of scheduled and unscheduled downtimes to be reduced. However, there can also be negative economic consequences due to the increased frequency of implementing some activities, or the waste of remaining useful life if preventive thresholds for component replacement are not optimized.

Finding the optimal grouping strategy of maintenance activities is an NP-hard problem that is well studied in the literature and for which various economic models and optimization approaches have been proposed. This research focuses on a new efficient optimization algorithm to cope with the maintenance grouping problem of interconnected multi-component systems. We propose a scalable two-step maintenance grouping approach based on a clustering technique. The clustering technique is formulated using Integer Linear Programming, which guarantees convergence to globally optimal solutions of the considered problem. The proposed optimization approach is formalized into a structured mathematical model that can account for the interactions between multiple infrastructure networks and the impact on multiple stakeholders (e.g., society and infrastructure operators). It can also accommodate different types of intervention, such as maintenance, removal, and upgrading.
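A minimal sketch of the grouping idea, under the simplifying assumptions of a single network, a shared setup cost per time window, and toy activity costs, could be formulated with PuLP as below; the names and values are illustrative, not the paper's model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

acts, windows = range(4), range(6)
setup = 100                                       # shared setup cost per opened window
cost = {(i, t): 10 + 3 * abs(t - 2 * i)           # toy timing cost per activity/window
        for i in acts for t in windows}

x = LpVariable.dicts("x", (acts, windows), cat=LpBinary)  # activity i done in window t
z = LpVariable.dicts("z", windows, cat=LpBinary)          # window t opened at all

prob = LpProblem("maintenance_grouping", LpMinimize)
prob += lpSum(setup * z[t] for t in windows) + \
        lpSum(cost[i, t] * x[i][t] for i in acts for t in windows)
for i in acts:
    prob += lpSum(x[i][t] for t in windows) == 1  # schedule each activity exactly once
    for t in windows:
        prob += x[i][t] <= z[t]                   # grouping shares the setup cost

prob.solve()
for t in windows:
    grouped = [i for i in acts if x[i][t].value() == 1]
    if grouped:
        print(f"window {t}: activities {grouped}")
```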

We show the performance of the proposed approach using a demonstrative example. Results reveal a substantial reduction in net costs. In addition, the optimal intervention program obtained in the analysis shows repetitive patterns, which indicates that a rolling horizon strategy could be adopted so that the analysis is only performed for a small time window.

12:30
Divya Jayakumar Nair (The University of New South Wales, Australia)
Chence Niu (The University of New South Wales, Australia)
Tingting Zhang (The University of New South Wales, Australia)
Vinayak V. Dixit (The University of New South Wales, Australia)
Transportation resilience optimization at the pre-event stage by using an integrated computable general equilibrium model
PRESENTER: Chence Niu

ABSTRACT. Since disruptive events can cause negative impacts on a city's regular traffic order and economic activities, it is crucial that a transport network be resilient against disasters to prevent significant economic losses and ensure regular social, economic, and traffic order. However, using transport metrics alone for resilience improvement can only provide a limited view of transport pre-investments. This study tackles the problem of resilient road pre-investment with the aim of optimizing the resilience of traffic systems from an economic perspective. First, we use the Shapley value to determine critical candidate links that need to be upgraded. Second, we propose the Economic-based Network Resilience Measurement (ENRM) as a performance indicator to evaluate network-level resilience from the economic perspective. Third, a bi-level multi-objective optimization model is formulated to identify the optimal capacity improvement for candidate critical links, where the objectives of the upper-level model are to minimize the ENRM and the pre-enhancement budget. The lower-level model is the integrated computable general equilibrium (CGE) model, which includes a CGE sub-model that captures economic impacts and a traffic sub-model that optimizes travellers' behaviours under user equilibrium conditions. A genetic algorithm is used to solve the proposed bi-level model. A case study of the optimization framework is presented using a simplified Sydney network. Results suggest that a higher budget can help reduce the ENRM and improve transportation resilience. However, Pareto-optimality is observed, and the marginal utility decreases with an increase in the investment budget. Further, the results also show that investment returns are higher in severe disasters. This study will help transport planners and practitioners optimize resilience pre-event investment strategies by capturing a wider range of project impacts and evaluating their economic impacts under general equilibrium rather than the partial economic equilibrium often assumed in traditional four-step transport planning.
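The sketch below illustrates the upper-level search in a heavily simplified, single-objective form: a small genetic algorithm allocates capacity upgrades to candidate links under a budget, with a stand-in surrogate in place of the CGE-based lower level; all functions, weights, and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_links, budget = 5, 10.0
sensitivity = np.array([1.5, 0.8, 2.0, 0.5, 1.2])   # made-up link importance weights

def enrm(upgrade):
    """Stand-in surrogate for the economic resilience measure; in the paper
    this value would come from solving the lower-level CGE/traffic model."""
    return 100.0 / (1.0 + upgrade @ sensitivity)

def fitness(upgrade):
    penalty = 1e3 * max(0.0, upgrade.sum() - budget)  # soft budget constraint
    return enrm(upgrade) + penalty

pop = rng.uniform(0, budget / n_links, (30, n_links))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:10]]            # truncation selection
    pop = np.clip(parents[rng.integers(10, size=30)]
                  + rng.normal(0.0, 0.3, (30, n_links)), 0.0, None)
best = min(pop, key=fitness)
print("upgrade per link:", np.round(best, 2), "ENRM:", round(enrm(best), 2))
```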

11:10-12:50 Session 10C: S.02: Artificial intelligence and machine learning for reliability analysis and operational reliability monitoring of large-scale systems II
Chair:
Ji-Eun Byun (Technical University of Munich (TUM), Germany)
Location: CQ-007
11:10
Katharina Rombach (ETH Zurich, Switzerland)
Gabriel Michau (Stadler Rail & ETH Zurich, Switzerland)
Kajan Ratnasabapathy (ETH Zurich, Switzerland)
Lucian-Stefan Ancu (Swiss Federal Railways, Switzerland)
Wilfried Bürzle (Swiss Federal Railways, Switzerland)
Stefan Koller (Swiss Federal Railways, Switzerland)
Olga Fink (ETH Zürich, Switzerland)
Contrastive Feature Learning for Fault Detection and Diagnostics in Railway Applications

ABSTRACT. A railway is a complex system comprising multiple infrastructure and rolling stock assets. To operate the system safely, reliably, and efficiently, the condition of a multitude of components and systems needs to be monitored. To automate this process, data-driven fault detection and diagnostics models can be employed. In practice, however, the performance of data-driven models can be compromised if the training dataset is not representative of all possible future conditions. We propose to approach this problem by learning a feature representation that is, on the one hand, invariant to operating or environmental factors but, on the other hand, sensitive to changes in the asset's health condition. We evaluate how contrastive learning can be employed on supervised and unsupervised fault detection and diagnostics tasks given real condition monitoring datasets within a railway system - one image dataset from infrastructure assets and one time-series dataset from rolling stock assets. First, we evaluate the performance of supervised contrastive feature learning on a railway sleeper defect classification task given a labeled image dataset that is collected by a diagnostic vehicle. Second, we evaluate the performance of unsupervised contrastive feature learning without access to faulty samples on an anomaly detection task given a railway wheel dataset that is collected by wayside monitoring systems (equipped with strain gauge sensors). Here, we test the hypothesis of whether a feature encoder's sensitivity to degradation is also sensitive to novel fault patterns in the data. Our results demonstrate that contrastive feature learning improves the performance on the supervised classification task regarding sleepers compared to a state-of-the-art method. Moreover, on the anomaly detection task concerning the railway wheels, the detection of shelling defects is improved compared to state-of-the-art methods.
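As a concrete illustration of contrastive feature learning (not the authors' exact setup), the sketch below implements the standard NT-Xent loss in PyTorch for a batch of positive pairs, e.g. two augmented views of the same condition-monitoring sample.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss: z1 and z2 are (N, d) embeddings of two views
    of the same N samples; each embedding's positive is its counterpart and
    the remaining 2N - 2 embeddings in the batch act as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / tau                                # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# toy usage: 8 samples, 16-dimensional embeddings from some encoder
loss = nt_xent(torch.randn(8, 16), torch.randn(8, 16))
```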

11:30
Mohammad Najafi Juybari (Department of Industrial and Systems Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran)
Piero Baraldi (Energy Department, Politecnico di Milano, Via Lambruschini 4, 20156, Milan, Italy)
Antonio Palermo (Alma Mater Studiorum Università di Bologna, Via Zamboni, 33, 40126, Bologna, Italy)
Ali Eftekhari Milani (Wind Energy Department, TU Delft, Kluyverweg 1, 2629 HS, Delft, Netherlands)
Alessandro Marzani (Alma Mater Studiorum Università di Bologna, Via Zamboni, 33, 40126, Bologna, Italy)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Wrapper Selection of Features for Fault Diagnostics of Truss Structures
PRESENTER: Piero Baraldi

ABSTRACT. Truss structures are used in power systems to support pipelines and auxiliary equipment like pumps, utility stations, manifolds, firefighting equipment, and first-aid stations. The collapse of truss structures supporting pipelines carrying dangerous liquids or gases, such as those used in the petrochemical and chemical industries, can trigger accident chains. The diagnosis of damage in truss structures is therefore important to avoid catastrophic events with severe consequences. In this context, we develop a method for fault diagnostics of truss structures. The method, which exploits the power spectral densities (PSD) derived from measured structural accelerations, is based on the two steps of feature selection and data classification. The feature selection task, which aims at identifying the set of features to be used as input to the diagnostic system, is here performed by a wrapper approach based on Multi-Objective Genetic Algorithms (MOGAs). The selected features are fed to a k-nearest neighbor (KNN) classifier for the identification of the damage scenario of the truss structure. The developed fault diagnostic method is validated on several damage scenarios numerically simulated for an aluminum tower structure. The results show that the proposed approach is able to correctly recognize the damage scenario with a limited number of misclassifications.
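To illustrate the wrapper principle, the sketch below scores candidate feature masks by cross-validated KNN accuracy, with a simple single-objective genetic algorithm standing in for the paper's MOGA and random data standing in for the PSD features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))                 # stand-in for PSD-derived features
y = (X[:, 0] + X[:, 3] > 0).astype(int)        # toy damage-scenario labels

def fitness(mask):
    """Wrapper objective: classifier accuracy minus a subset-size penalty."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(5),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return acc - 0.005 * mask.sum()

pop = rng.integers(0, 2, (20, 30))
for gen in range(30):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[-8:]][rng.integers(8, size=20)].copy()
    flips = rng.random(pop.shape) < 0.05       # bit-flip mutation
    pop[flips] ^= 1
best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best))
```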

11:50
Anna Varbella (ETHZ, Switzerland)
Blazhe Gjorgiev (ETHZ, Switzerland)
Giovanni Sansavini (ETHZ, Switzerland)
Deep learning for online cascading failures prediction: a comparison of graph neural networks and feed forward neural networks
PRESENTER: Anna Varbella

ABSTRACT. Past events have revealed that widespread power blackouts are mostly the result of cascading failures in the power grid. Therefore, understanding the underlying mechanisms of cascading failures can help in developing strategies to minimize the risk of such events. Moreover, real-time detection of precursors to cascading failures would help operators take the necessary measures to prevent their propagation through the grid. Currently, the well-established probabilistic and physics-based models of cascading failures offer low computational efficiency, restricting them to offline use. In this work, we have developed a simulation-driven deep learning methodology for online estimation of the risk of cascading failures. We utilize a detailed AC physics-based cascading failure simulation model to generate cascading failure scenarios under different operating conditions. The dataset is generated to obtain a sample space covering a large set of power grid states that are labeled as either safe or unsafe. Each sample of the dataset includes a randomly generated contingency and bus and branch conditions, i.e., the net power at each bus, the voltage magnitudes and phase angles, and the power flows. We use the synthetic data to train deep learning architectures, namely Feed-forward Neural Networks (FNN) and Graph Neural Networks (GNN). With the development of GNNs, increased interpretability and improved performance are achieved with graph-structured data. Furthermore, thanks to their inductive property, a trained GNN layer can generalize to graphs of different sizes. Since power grids are complex networks, they can be represented mathematically by graphs, making the use of GNNs a convenient option. Indeed, the GNN dataset carries information on the grid topology and structure after the simulated contingencies, in addition to bus and branch states. The proposed architectures have been trained on the IEEE-39 bus test grid and the Swiss transmission grid. A comparison between FNN and GNN is made in terms of training speed and generalization ability. Moreover, the GNNs' inductive capability is tested when switching from one test grid to another.
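A minimal sketch of the GNN side of the comparison, assuming PyTorch Geometric, is given below: two graph-convolution layers over bus features followed by graph-level pooling into a safe/unsafe logit pair; the layer sizes and feature set are illustrative.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GridClassifier(torch.nn.Module):
    """Graph-level safe/unsafe classifier for a power-grid snapshot: nodes are
    buses (features such as net power, voltage magnitude, phase angle) and
    edges are branches of the post-contingency topology."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 2)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.head(global_mean_pool(h, batch))   # one logit pair per grid
```

Because the convolution weights are independent of the graph size, the same trained layers can in principle be applied to grids of different sizes, which is the inductive property the abstract refers to.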

12:10
Pegah Rokhforoz (ETH Zurich, Switzerland)
Olga Fink (EPFL, Switzerland)
Multi-agent maintenance scheduling of generation unit in electricity market using safe deep reinforcement learning algorithm
PRESENTER: Pegah Rokhforoz

ABSTRACT. Improving the maintenance scheduling of generation units in an electricity market would increase the safety and reliability of the system. This problem can be modeled as a multi-agent bi-level decision-making problem with safety constraints. At the first level, the units, modeled as agents, decide on their maintenance scheduling and are responsible for satisfying the safety constraints. At the second level, the independent system operator (ISO) clears the market and calculates the electricity price while ensuring that the demand of the system is satisfied. Incomplete information about other units' decisions and the requirement to satisfy safety and demand constraints make this problem particularly challenging. This paper proposes a safe reinforcement learning algorithm for generation unit maintenance scheduling in a competitive electricity market environment. In this problem, each unit aims to find a preventive maintenance schedule which retains its reliability and satisfies the safety constraints. Bi-level optimization and reinforcement learning are potential candidates for solving this problem. However, bi-level optimization and reinforcement learning cannot handle the challenges of incomplete information and safety constraints, respectively. To handle these challenges, we propose a safe deep reinforcement learning algorithm which combines reinforcement learning with a predicted safety filter. In the proposed method, the reinforcement learning algorithm tackles the challenge of incomplete information by getting feedback from the environment and learning the strategies of the other agents. In addition, the predicted safety filter guarantees that the safety constraints are satisfied and handles the challenge of critical safety constraints. We evaluate the performance of the proposed algorithm on the IEEE 30-bus system. We compare the results of the proposed algorithm with a state-of-the-art Q-learning algorithm. The results demonstrate that the proposed approach can satisfy the system safety constraints and increase the profit of the units.
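The sketch below illustrates the general idea of a predicted safety filter wrapped around epsilon-greedy action selection (not the paper's specific algorithm): actions the filter predicts to be unsafe are masked out before selection, and all names are hypothetical.

```python
import numpy as np

def filtered_action(q_values, state, predicted_safe, eps=0.1, rng=None):
    """Epsilon-greedy choice restricted to actions the safety filter accepts.
    q_values: Q(state, a) per action; predicted_safe(state, a) -> bool."""
    rng = rng or np.random.default_rng()
    allowed = [a for a in range(len(q_values)) if predicted_safe(state, a)]
    if not allowed:
        allowed = [0]                    # fall back to a designated safe action
    if rng.random() < eps:
        return int(rng.choice(allowed))  # explore within the safe set only
    return max(allowed, key=lambda a: q_values[a])
```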

12:30
Amel Belounnas (RICE GRTgaz, France)
Florent Brissaud (RICE GRTgaz, France)
Elodie Rousset (RICE GRTgaz, France)
Using artificial intelligence algorithms to identify factors of methane leaks from gas transmission assets
PRESENTER: Florent Brissaud

ABSTRACT. Background: GRTgaz has launched a major program for improving the assessment and the reduction of methane emissions due to leaks from its gas transmission network and facilities. The industrial assets of GRTgaz notably include thousands of gas delivery units, each containing several pipes, gas pressure regulators, filters, shut-off valves, relief valves, manual valves… The leak detection campaign on the gas delivery units therefore requires significant resources and a lot of time. Targeting the assets that are the most likely to leak is thus an important challenge for improving the campaign efficiency, bringing forward the implementation of corrective measures, and reducing the methane emissions.

Aims: Field data processing methods based on AI are investigated for analysing the factors relating to the items, their use and their environment, in order to identify the assets that are the most likely sources of methane leaks. The available field data are: knowledge about the industrial assets; maintenance activities performed for repairing "external leak" failure modes; and up-to-date results of the leak detection campaign. A major issue is the characteristics of the input data, notably regarding the assets. These factors can be binary (i.e. yes or no), numerical (e.g. size, pressure…) or textual (e.g. constructor, type…). In addition, most of the factors are not known for all the assets (i.e. incomplete data) and erroneous values are inevitable (incorrect filling of the database). Considering only the assets for which the factors are fully known and reliable would eliminate a large part of the fleet, making the reduction of methane emissions inefficient. It is therefore required that the proposed methods can deal efficiently with these constraints.

Methods: Considering the background and aims, two field data processing methods based on AI algorithms are investigated: Bayesian networks implemented by a commercial software tool, and gradient boosting implemented by an open-source software library for Python. Bayesian networks are implemented by BayesiaLab, a commercial software tool; version 10, released in 2021, is used for the present study. Both discrete (including binary and textual) and continuous values are handled. However, continuous values need to be discretized; a genetic algorithm is used to perform this task automatically (nine other algorithms are also available, plus a manual mode). Moreover, when the discrete values of a given factor are numerous, they need to be aggregated into a smaller number of "sets" (e.g. about five values, depending on the quantity of data). Missing values are inferred by a structural Expectation Maximization (EM) algorithm (other algorithms, including entropy-based, static or dynamic imputations, are also available). First, unsupervised structural learning is performed, using the maximum spanning tree algorithm (five other algorithms are also available). This tool is very convenient for investigating the relationships between factors. Each factor is depicted by a node and is linked to the "most dependent" other factors by arrows. An "automatic mapping" allows drawing a planar network where the size of each node is proportional to its "force" (i.e. degree of dependency with linked nodes). In addition, variable clustering can group the factors into "classes", which constitute kinds of "families" where factors are strongly dependent. Second, supervised learning is performed, using the naïve Bayes algorithm (seven other algorithms are also available). This tool models the relationship of each factor with a selected "target" which is, in our case, the leak rate. It is then possible to depict the "total" or "direct" effect of the factors on the "target", using various illustrations: networks, curves, histograms, graphs… Finally, inference is used to estimate the leak rate (or the probability of a leak) based on the factors.

The other investigated method is extreme gradient boosting implemented by XGBOOST, an open-source software library for Python (and other programming languages). It is used to perform supervised learning to estimate an objective function. For the present study, a Cox regression model is selected (among several other options) for modelling the leak rate. The decision tree algorithms implemented by XGBOOST naturally deal with discrete, continuous and missing values. For the readability of the results, a Shapley additive explanations (SHAP) library is used as a complement to XGBOOST. Starting from a mean value, the SHAP value represents the positive or negative effect of each factor (given the value of this factor). Dedicated graphs show the average impact of each factor on the target, and the impact of each value of the factors.
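A minimal sketch of this second pipeline, under assumed data (a factor matrix X possibly containing missing values, and times-to-leak t with censored observations encoded as negative values, as XGBoost's survival:cox objective expects) and assuming SHAP's TreeExplainer accepts the resulting booster, is:

```python
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 8))                     # stand-in for asset factors
X[rng.random(X.shape) < 0.2] = np.nan              # incomplete data is tolerated
t = np.exp(1.0 - 0.5 * np.nan_to_num(X[:, 0]) + rng.gumbel(size=1000))
t[rng.random(1000) < 0.4] *= -1                    # negative label = right-censored

dtrain = xgb.DMatrix(X, label=t)
booster = xgb.train({"objective": "survival:cox", "eta": 0.1, "max_depth": 4},
                    dtrain, num_boost_round=200)

explainer = shap.TreeExplainer(booster)            # per-factor effect on leak risk
shap_values = explainer.shap_values(X)
print("mean |SHAP| per factor:", np.abs(shap_values).mean(axis=0).round(3))
```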

Results: The investigation of the two methods shows that both are effective in analysing the leak factors of the assets and in identifying the assets that are most likely to leak, even with data of different natures (discrete, continuous…) and missing values. The Bayesian networks implemented by BayesiaLab are very convenient because a dedicated software tool can perform all the suitable analyses and provide illustrations of the results; however, it is a commercial tool. The gradient boosting implemented by XGBOOST is also powerful. It is an open-source library, but it requires more experience in data handling and programming, and a SHAP library is required to obtain illustrations of the results. Considering the identification of the assets that are most likely to leak, the two methods do not provide the same results. However, because of the "risk-based" approach, these results should only be considered as indicators for optimizing a methane emission reduction policy. Therefore, the results of both methods are used for further campaigns of leak detection. The feedback collected in the following months will then be used for evaluating the actual "success rate" of each method for the identification of "leaky assets".

11:10-12:50 Session 10D: Cyber Physical Systems
Chair:
Mary Ann Lundteigen (Norwegian University of Science and Technology, Norway)
Location: CQ-106
11:10
Hanne Kristine Rødsethol (IFE, Norway)
Yonas Zewdu Ayele (IFE, Norway)
Social Robots in Public Space: Use Case Development

ABSTRACT. A social robot operating in a public space can potentially gather and analyse large quantities of personal information, in the same way as popular social media platforms and digital assistants deployed on many smart phones do. However, a user can choose not to visit a social platform or use a digital assistant, and can choose not to give consent to its stated terms of use, thereby retaining a high degree of control over what is willingly shared. To fill this gap, this study investigates two fundamental issues: i) are users willing to share personal information with a social robot in a public space? and ii) what kind of information are end-users willing to share? In this study a questionnaire-based assessment with potential end-users is carried out, and the insight from these findings will inform the use case development when piloting the first prototype of a social robot in a public space. The primary purpose of this study is to seek information on the expectations and perceptions of end-users, including negative attitudes, towards social robots in public spaces. From the survey results it can be deduced that the predominant end-user concerns relate mainly to design: one challenge will be to design the social robot, or the concept surrounding it, in a way that informs and reassures users that a social robot deployed in a public space is safe and secure.

11:30
Stephen Creff (IRT SystemX, France)
Michel Batteux (IRT SystemX, France)
Characterizing behavioral modeling in Systems and Safety Model-Based Engineerings and their overlap for consistency checking
PRESENTER: Stephen Creff

ABSTRACT. Due to the complexity of today's systems, the modeling of a system is multi-concern and multi-viewpoint in its very essence. Systems Engineering and Safety Assessment are two engineering domains that currently follow model-based approaches to conceive the system at the same level of abstraction. The overall consistency between the different models contributing to the system design is a key element of the realization. Each concern must align with common assumptions. Models are made of two kinds of constructs: structural and behavioral ones. The structural consistency challenge between model-based Systems Engineering and Safety Assessment has already been specifically addressed in a generic way. What about behavioral consistency? Before considering any behavioral consistency checking, the characteristics of the behavioral representations in each domain and their potential overlap must be identified. Therefore, in this article we propose to characterize behavioral modeling in systems and safety model-based engineering and to provide some keys to identifying their overlap for forthcoming consistency checking.

11:50
Sanja Mrksic Kovacevic (University of Stavanger, Norway)
Frederic Bouder (University of Stavanger, Norway)
The world of AI algorithms: Challenges for uncertainty communication

ABSTRACT. The use of artificial intelligence (AI) algorithms is proliferating across various application areas. Smart cities, the oil and gas industry, the automotive industry, and healthcare are only a few of the fields benefiting from AI algorithms' rapid development. Policymakers have faced new challenges in recent years, including a greater focus on evidence-based uncertainty analysis and increasing uncertainty communication. Their efforts to communicate the uncertainties associated with rapid technological developments are commendable, but also challenged by a variety of obstacles. In the present paper, we examine how the risk field can assist policymakers in overcoming challenges in uncertainty communication connected to AI algorithms. We look at examples of medical AI algorithms to gain an overview of the present situation. Following the identification of the challenges, we draw on risk-field principles that may assist policymakers in this endeavor.

12:10
Youba Nait Belaid (CentraleSupélec, France)
Anne Barros (CentraleSupelec, France)
Yiping Fang (CentraleSupélec, France)
Zhiguo Zeng (CentraleSupélec, France)
Anthony Legendre (EDF R&D, France)
Patrick Coudray (EDF R&D, France)
Enhanced Power and Communication Modeling in Cyber-Physical Distribution Grids for Resilience-based Optimization

ABSTRACT. Evolving smart grid (SG) services for demand-side applications, markets, and various stakeholders rely heavily on Information and Communication Technologies (ICTs). Yet this technological leap has induced high complexity in the grid, due to various power-ICT interdependencies. Managing this complexity has been very challenging over the last decade, as prototyping and tools to faithfully replicate SG dynamics and all the involved interactions with ICTs remain thus far out of reach. Advanced attempts have considered co-simulation of both power and ICT infrastructures using domain-specific software, resulting in a relatively good description but an additional outlay for synchronization and the handling of different time scales. For SG studies that require a low level of detail and adopt a systemic view, like resilience evaluation, modeling is better suited to shed light on the paramount features. Smart grid modeling is generally electric-system oriented, dominated by power flow analysis and with very little consideration of ICTs. The availability of telecommunication points-of-interest is considered in this work to capture the interdependence between the power and ICT domains of the distribution grid. The integrated modeling inherently avoids the extra inter-domain synchronization overhead. Different telecommunication settings are therefore compared for the fault localization, isolation, and service restoration (FLISR) function. An application of the joint modeling is successfully illustrated in the case of resilience-based power service restoration under extreme event failure scenarios.

12:30
Antonio Estepa (University of Seville, Spain)
Rafael Estepa (University of Seville, Spain)
Adolfo Crespo Márquez (University of Seville, Spain)
Johan Wideberg (University of Seville, Spain)
Jesús Díaz Verdejo (Universidad de Granada, Spain)
Smart Detection of Cyberattacks in IoT servers: Application to smart lighting and other smart city applications.
PRESENTER: Antonio Estepa

ABSTRACT. It is not uncommon for assets to be operated and managed via application servers that offer a web interface. In such a scenario, confidentiality, integrity, and availability threats can be detected using machine-learning anomaly detection techniques.

This work introduces a system for protecting assets by detecting cyberattacks targeted at the asset management system, including credential stealing. To this end, we detect anomalies in HTTP transactions using probabilistic finite-state automata. Our system has been tested with a dataset that includes a week of real-life traffic (access log) from a Smart Lighting system in operation. Our results showed a precision of almost 80% and a specificity of 99.9%.
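The sketch below conveys the flavor of the approach with a first-order Markov model over tokenised request paths, a simplified stand-in for the probabilistic finite-state automata used in the paper; the tokenisation, smoothing, and thresholding are illustrative.

```python
import math
from collections import defaultdict

class RequestModel:
    """Score HTTP request paths by average transition log-likelihood under a
    first-order Markov model learned from normal traffic."""
    def __init__(self, alpha=1e-3):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.alpha = alpha                      # crude additive smoothing

    def fit(self, paths):
        for p in paths:
            toks = ["<s>"] + p.strip("/").split("/") + ["</s>"]
            for a, b in zip(toks, toks[1:]):
                self.counts[a][b] += 1

    def score(self, path):
        toks = ["<s>"] + path.strip("/").split("/") + ["</s>"]
        ll = 0.0
        for a, b in zip(toks, toks[1:]):
            row = self.counts[a]
            total = sum(row.values())
            ll += math.log((row[b] + self.alpha) / (total + self.alpha * 1000))
        return ll / (len(toks) - 1)

model = RequestModel()
model.fit(["/api/lamps/42/status", "/api/lamps/7/status", "/api/zones/3"])
print(model.score("/api/lamps/42/status"))      # familiar -> high score
print(model.score("/admin/../../etc/passwd"))   # unseen -> low score, flag it
```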

The same solution can be used for other smart city applications, such as traffic lights and tunnel lighting, which can be controlled and maintained by the same system.

11:10-12:50 Session 10E: Uncertainty Analysis
Chair:
Simon Wilson (Trinity College Dublin, Ireland)
Location: LG-20
11:10
Hugh Kinnear (University College London, UK)
Alejandro Diazdelao (University College London, UK)
Branching Subset Simulation
PRESENTER: Hugh Kinnear

ABSTRACT. Subset Simulation (SuS) is a Markov chain Monte Carlo method that was initially conceived to compute small failure probabilities in structural reliability problems. This is done by iteratively sampling from nested subsets of the input space of a performance function. SuS has since been adapted to serve as a sampler in other realms such as optimisation, Bayesian updating and history matching. In all of these contexts, either the geometry of the input domain or the nature of the corresponding performance function can cause SuS to suffer from ergodicity problems. This paper proposes an enhancement to SuS called Branching Subset Simulation (BSuS). The proposed framework uses a nearest-neighbours algorithm and Voronoi diagrams to partition the input space, and recursively begins BSuS anew in each partition. It is shown that BSuS is less likely than SuS to suffer from ergodicity problems and has improved sampling efficiency.
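For orientation, here is a minimal sketch of standard SuS (not the proposed BSuS) estimating P(g(X) > 0) for a standard normal input; the random-walk Metropolis step is a simplification of the component-wise modified Metropolis sampler usually employed.

```python
import numpy as np

def subset_simulation(g, d, n=1000, p0=0.1, rng=None):
    """Minimal Subset Simulation estimate of P(g(X) > 0) for X ~ N(0, I_d)."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal((n, d))
    y = np.apply_along_axis(g, 1, x)
    prob = 1.0
    for _ in range(50):                              # cap on the number of levels
        if np.mean(y > 0) >= p0:                     # failure level reached
            return prob * np.mean(y > 0)
        keep = np.argsort(y)[-int(p0 * n):]          # seeds: the best p0*n samples
        b = y[keep].min()                            # intermediate threshold
        prob *= p0
        xs, ys = list(x[keep]), list(y[keep])
        while len(xs) < n:                           # Metropolis moves within {g > b}
            i = rng.integers(len(xs))
            cand = xs[i] + 0.8 * rng.standard_normal(d)
            gc = g(cand)
            if np.log(rng.random()) < 0.5 * (xs[i] @ xs[i] - cand @ cand) and gc > b:
                xs.append(cand); ys.append(gc)
            else:
                xs.append(xs[i].copy()); ys.append(ys[i])
        x, y = np.array(xs), np.array(ys)
    return prob * np.mean(y > 0)

print(subset_simulation(lambda v: v[0] - 3.0, d=2))  # exact value ~ 1.35e-3
```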

11:30
Julien Demange-Chryst (ONERA/Institut de Mathématiques de Toulouse, France)
Jérôme Morio (ONERA/DTIS, France)
François Bachoc (Institut de Mathématiques de Toulouse, France)
Shapley effect estimation in reliability-oriented sensitivity analysis with correlated inputs by importance sampling

ABSTRACT. Many physical systems are schematically described by a relation of the form Y = phi(X), where the d-dimensional input vector X is random and the output Y is determined through the deterministic function phi. A common application is the analysis of a black-box model: phi represents a numerical code and X can be regarded as the external conditions in which the calculation is done. A finite element code in structural engineering is a common example of such a model, whose complexity makes it impossible to study analytically. Moreover, calls to the code are assumed to be expensive and can therefore be made only in limited numbers. For safety and/or certification reasons, the reliability analysis of the system is a crucial step. Without loss of generality, the failure event can be described as a threshold exceedance event {Y>t}. It is generally a rare event, and it is essential to estimate its probability accurately. Crude Monte Carlo sampling techniques are not suited to such an estimation when the failure probability is low, because their convergence requires too many calls to the function phi. More efficient methods such as subset sampling or importance sampling [2] have been developed and largely address this issue.

Another major step in the study of a computer model is sensitivity analysis, which consists in studying the influence of each input component of X on the variability of the output Y, for example in order to reduce the problem dimension by fixing non-influential components to nominal values. There are many local and global sensitivity analysis tools, including the well-known Sobol' indices. Sobol' indices [5] are global sensitivity analysis indices that quantitatively evaluate the influence of each input variable on the variability of the output. However, these indices are no longer suitable when the input variables are correlated: even if it is still possible to compute them, they no longer allow the origin of the variability of the output to be clearly identified. To address this issue, by analogy with game theory, it has recently been proposed to consider the Shapley effects for global sensitivity analysis [4]. Nevertheless, their estimation is difficult since naive methods require a high computational effort. Recent improvements have led to a drastic reduction of this effort thanks to estimators requiring only a single i.i.d. input/output N-sample distributed according to the input distribution [1].

We aim here to combine both analyses in order to perform a target reliability-oriented sensitivity analysis of the system, which consists in studying the influence of each input variable on the occurrence of the failure of the system when the input variables are correlated. This is a challenging task because it aims to obtain reliability-oriented sensitivity analysis results while minimising the computational cost after the estimation of the failure probability. The estimation of the target Shapley effects, i.e. the Shapley effects applied to the quantity of interest 1{phi(X)>t}, is of interest because it allows the identification of the components of X that are most influential on the occurrence of the failure of the system. These indices and first estimation schemes were introduced recently in [3]. Estimators requiring only a single i.i.d. input/output N-sample distributed according to the input distribution have also been proposed, which make it possible to estimate the target Shapley effects without additional calls to the function after the estimation of the failure probability by Monte Carlo. Nevertheless, these existing estimators are not suited to the case of a low failure probability, because their convergence then requires too high a computational effort, being based on Monte Carlo sampling from the input distribution.

In this communication, we present new importance-sampling-based estimators of the target Shapley effects which are more efficient than the existing ones when the failure probability is low. Importance sampling is a well-known method in reliability analysis for estimating a low failure probability more efficiently. In the same way, the principle here consists in rewriting the target Shapley effects according to an auxiliary sampling distribution. The corresponding new estimators then require samples drawn according to the auxiliary distribution, and lead to a significant variance reduction when the auxiliary distribution is adapted to the problem. Moreover, we also introduce importance-sampling-based estimators requiring only a single i.i.d. input/output N-sample distributed according to the auxiliary distribution, which make it possible to estimate the target Shapley effects efficiently when the failure probability is low, without additional calls to the function after the estimation of the failure probability by importance sampling. In addition, we prove theoretically that using, as the auxiliary distribution, the optimal auxiliary distribution for estimating a failure probability by importance sampling, which is the input distribution restricted to the failure domain, improves the estimation of the target Shapley effects in comparison to the existing estimators. Recalling that, if the reliability analysis has been carried out efficiently by importance sampling, the samples should be drawn according to an auxiliary distribution close to the optimal one, this last result justifies that it is numerically beneficial to reuse the available sample to estimate the target Shapley effects with these less expensive estimators. Finally, we illustrate the practical interest of the proposed estimators on the Gaussian linear case and on a more complicated real physical example.
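As a minimal illustration of the reliability-analysis side whose samples the new estimators reuse, the sketch below estimates a small failure probability by importance sampling with a mean-shifted Gaussian auxiliary density; in practice the shift would come from a search for the design point rather than being supplied by hand.

```python
import numpy as np
from scipy.stats import multivariate_normal

def is_failure_probability(phi, t, d, mu, n=10_000, rng=None):
    """Estimate P(phi(X) > t) for X ~ N(0, I_d) by sampling from N(mu, I_d)
    and reweighting by the likelihood ratio w(x) = f(x) / q(x)."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal((n, d)) + mu
    log_w = (multivariate_normal(np.zeros(d)).logpdf(x)
             - multivariate_normal(mu).logpdf(x))
    fail = np.apply_along_axis(phi, 1, x) > t
    return np.mean(fail * np.exp(log_w))

# toy check: P(X1 + X2 > 5) = Phi(-5 / sqrt(2)) ~ 2.0e-4; mu set at the design point
print(is_failure_probability(lambda v: v[0] + v[1], 5.0, 2, mu=np.array([2.5, 2.5])))
```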

[1] Baptiste Broto, François Bachoc, and Marine Depecker. Variance reduction for estimation of Shapley effects and adaptation to unknown input distribution. SIAM/ASA Journal on Uncertainty Quantification, 8(2):693–716, 2020.

[2] James Bucklew. Introduction to rare event simulation. Springer Science & Business Media, 2004.

[3] Marouane Il Idrissi, Vincent Chabridon, and Bertrand Iooss. Developments and applications of Shapley effects to reliability-oriented sensitivity analysis with correlated inputs. Environmental Modelling & Software, 143:105115, 2021.

[4] Art B. Owen. Sobol' indices and Shapley value. SIAM/ASA Journal on Uncertainty Quantification, 2(1):245–251, 2014.

[5] Ilya M Sobol. Sensitivity analysis for non-linear mathematical models. Mathematical modelling and computational experiment, 1:407–414, 1993.

11:50
Alessio Faraci (Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France)
Pierre Beaurepaire (Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France)
Nicolas Gayton (Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France)
Review on Python toolboxes for Kriging surrogate modelling
PRESENTER: Alessio Faraci

ABSTRACT. In recent years, various computer codes have reached a high level of sophistication, enabling extremely accurate simulations of the physical behavior of a given system. This increase in accuracy occurs, however, at the expense of time efficiency. Despite the computing power achieved with current technologies, computationally intensive analyses (e.g. optimization problems, reliability or sensitivity analyses) remain a time-consuming problem in modern engineering [1]. To date, a widely used approach to deal with this issue, based on learning from data, relies on approximating physical models by easy-to-evaluate mathematical functions known as metamodels or surrogate models. As a result, the emulator has the attraction of being fast to evaluate.

In this context, metamodels based on Kriging (a.k.a. Gaussian Process (GP)) regression have gained momentum in computational sciences, playing a crucial role in the expansion of machine learning tools [2]. This success is mainly related to their features: (a) the prediction is exact at the experimental design points under suitable assumptions; (b) they provide a prediction of the model outcome at a given point (Kriging mean); (c) they provide a local measure of the prediction error (Kriging variance). As a matter of fact, a large and growing body of literature has exploited the latter peculiarity to locally reduce uncertainties by adaptively adding extra points to the design of experiments (a.k.a. active-learning methods), improving Kriging's accuracy and efficiency. The benefit of this approach explains the success of its diffusion in many fields of application, from aerospace design to earthquake engineering, through materials science and mechanical engineering, leading to the release of several GP toolboxes over the last two decades.

The specific objective of this study is to investigate the use of different Python libraries for Kriging metamodeling purposes. To date, very little has been published on this topic [3], and no large-scale studies have been conducted; a systematic understanding of how different Python toolboxes perform remains largely missing. Consequently, the role of this work is to set out a consistent review of the major frameworks used in the engineering field, with the aim of filling this gap. In particular, two primary aims are addressed: (a) to compare the various settings available for each library; (b) to ascertain how they perform and differ under similar assumptions.

Specifically, the main features, options and input parameters, as well as the optimization algorithms, the estimation methods, the trend's functional basis and the correlation models available for each toolbox, are described in a detailed, comprehensive comparative table providing a general overview of the investigated Python packages. The toolboxes are then compared, revealing dissimilarities in their behavior on different data sets and on different case studies based on practical structural dynamics and FEM problems. These differences can essentially be traced to how each toolbox estimates the hyper-parameters through its numerical optimization algorithm. In particular, a formal comparison is carried out on the prediction accuracy and on the estimate of the prediction error, by means of cross-validation, mean squared errors and error matrices for different threshold values. Furthermore, the behavior of the toolboxes with increasing model dimensionality is investigated. A well-known issue of Kriging metamodels lies in their computational complexity for large data sets, implying time-consuming computations. Consequently, the minimum size of the experimental design needed to reach the same accuracy across the different packages is also examined. Finally, a comparison of computational cost is carried out, constituting the last type of discrepancy among the studied tools.
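As a flavor of what such toolboxes expose, here is a minimal Kriging fit with scikit-learn, one widely used Python implementation (whether it is among the packages reviewed is not stated in the abstract); the toy response and kernel settings are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (30, 2))                  # design of experiments
y = np.sin(6.0 * X[:, 0]) + X[:, 1] ** 2            # toy expensive-model output

kernel = ConstantKernel(1.0) * Matern(length_scale=[0.2, 0.2], nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                              n_restarts_optimizer=10).fit(X, y)

x_new = np.array([[0.5, 0.5]])
mean, std = gp.predict(x_new, return_std=True)      # Kriging mean and error measure
print(f"prediction {mean[0]:.3f} +/- {1.96 * std[0]:.3f}")
```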

This investigation aims to be of value to practitioners wishing to be aware of the capability and reliability of the multitude of open-source packages available today.

----

References:

[1] Biegler, L., Biros, G., Ghattas, O., Heinkenschloss, M., Keyes, D., Mallick, B., Marzouk, Y., Tenorio, L., van Bloemen Waanders, B., & Willcox, K. (2011). Large-scale inverse problems and quantification of uncertainty.

[2] Rasmussen, C. E. (2003). Gaussian processes in machine learning. Summer school on machine learning, 63–71.

[3] Erickson, C. B., Ankenman, B. E., & Sanchez, S. M. (2018). Comparison of Gaussian process modeling software. European Journal of Operational Research, 266(1), 179–192.

12:10
Maria Böttcher (TU Dresden, Germany)
Wolfgang Graf (TU Dresden, Germany)
Michael Kaliske (TU Dresden, Germany)
Robustness evaluation using information reduction measures for polymorphic uncertain quantities to support decision making procedures for structural design
PRESENTER: Maria Böttcher

ABSTRACT. This contribution focuses on the consideration of the uncertainties of input parameters, e.g. material, geometry and actions, within the numerical design process of a structure. Given an adequate modeling of these uncertainties using polymorphic uncertain variables, the objective here is to extract information regarding the robustness of the design from the uncertain output quantities. For this purpose, suitable information reduction measures (IRMs) are selected to quantify the design's performance as well as its robustness. In this contribution, a detailed discussion of the different characteristics of the uncertain quantities expressed by various IRMs is given. In addition, the question of how to treat nested IRMs in the case of polymorphic uncertain variables is addressed, including an investigation of their origin in the aleatory and/or epistemic uncertainty of the input variables. A suggestion on how to improve the interpretability of the resulting IRMs is demonstrated with an example of a simplified structural design process.

12:30
Alejandro Diazdelao (University College London, UK)
Subset Simulation for Bayesian Updating: Stopping Strategies

ABSTRACT. Subset Simulation was developed to solve reliability problems involving complex systems by iteratively sampling from progressively more restricted subsets of the parameter space, that is, rare events. The algorithm has evolved to be efficient and robust to the number of uncertain parameters. Over the years, it has been extended to other problems, such as global optimisation. More recently, an analogy has been established between the Bayesian updating problem and a reliability problem, which opened up the possibility of an efficient solution by Subset Simulation. The formulation, called BUS (Bayesian Updating with Structural reliability methods), relies on the rejection principle to produce samples from a target posterior distribution. As a by-product, it computes the model evidence, which naturally allows for model selection. However, the method crucially depends on a stopping condition, which balances accuracy with computational cost. This work presents a study of qualitative and quantitative stopping conditions for BUS and proposes a new, simpler condition.
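
As a minimal sketch of the rejection principle underlying BUS (with a hypothetical Gaussian prior and likelihood; the Subset Simulation machinery and the stopping conditions studied in the paper are not reproduced here):

    # BUS by plain rejection: accept a prior sample theta if u < L(theta)/c
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = np.array([1.2, 0.8, 1.1])              # hypothetical observations

    def log_like(theta):                          # Gaussian likelihood, sigma known
        return stats.norm(theta, 0.5).logpdf(data).sum()

    log_c = log_like(data.mean())                 # c >= max likelihood (exact here)

    posterior, trials = [], 20000
    for _ in range(trials):
        theta = rng.normal(0.0, 2.0)              # sample from the prior
        if np.log(rng.uniform()) < log_like(theta) - log_c:
            posterior.append(theta)

    evidence = np.exp(log_c) * len(posterior) / trials  # by-product: model evidence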

11:10-12:50 Session 10F: Human Factors and Human Reliability: HRA modelling & applications
Chair:
Andreas Bye (IFE, Institute for Energy Technology, Norway)
Location: LG-21
11:10
Gueorgui Petkov (Kozloduy NPP, Bulgaria)
Context Awareness for Uncertainty Reduction in PSA and HRA

ABSTRACT. Uncertainty in both DSA and PSA (a static PSA, and even more so a dynamic PSA) still poses some fundamental problems, and therefore the correct coupling and interaction of deterministic and stochastic models are of primary importance. PSA and DSA models should be developed in at least two directions to reduce their uncertainties:
• The first direction is decreasing the 'lack of knowledge about an underlying deterministic context-free reality' and presenting the risk as 'an uncertainty for which an explicit probability description is known.' This involves expanding the PSA and DSA models and their interaction by including multiple out-of-service or unavailable components, dependencies on shared structures, systems and components, environment-dependent factors, and sets of checks and test procedures. Such expansion is strongly dependent on the calculation time needed to cover a large number of time points (local and global times). As in static PSA models, the uncertainties are characterized as epistemic if the modeler sees a possibility to reduce them by gathering more data or by refining models, or they are categorized as aleatory if the modeler does not foresee the possibility of reducing them.
• The second direction is decreasing the 'lack of knowledge about the ambiguous deterministic or even non-deterministic reality,' presenting the contextual influence on risk as 'an uncertainty for which an implicit probability description can be found.' This means that the 'contextual influence' is present in the sense that 'human interventions and tests do influence the underlying situation in an ambiguous or non-determined way.' Such an ambiguous situation cannot be described in depth by an explicit/objective probability alone. Implicit/subjective probability could additionally be used to describe the presence of such context awareness.
The paper presents the opportunities offered by the context quantification procedure of the PET method for explicit and implicit modeling of dependencies and for reducing uncertainty in PSA and HRA.

11:30
Karl Johnson (University of Strathclyde, UK)
Caroline Morais (Brazilian Regulatory Agency of Petroleum, Gas and Biofuel, Brazil)
Lesley Walls (University of Strathclyde, UK)
Edoardo Patelli (University of Strathclyde, UK)
A data driven approach to elicit causal links between performance shaping factors and human failure events
PRESENTER: Karl Johnson

ABSTRACT. Within the field of human reliability analysis (HRA), there is an acknowledged demand to move further towards data-driven models. There have been several independent research projects focused on gathering the required empirical data, to support existing theoretical models used in HRA, as well as to allow the use of probabilistic tools, such as Bayesian Networks, to model such data. However, with regard to Bayesian Networks, there is a reliance upon expert elicitation to design the structure of the network; that is, the causal links between the considered factors are determined by an expert, with the data used only to estimate the conditional probability tables. This work aims to provide a methodology/framework to elicit causal links between performance shaping factors from data, producing an HRA model constructed entirely from data, with the ability to integrate the knowledge provided by experts. The Multi-Attribute Technological Accidents Dataset (MATA-D) has been used as the data source; the model is therefore produced under a framework based on the Cognitive Reliability and Error Analysis Method (CREAM). The model is produced through a combination of information theory and structure-learning algorithms for Bayesian Networks. The proposed model/methodology aims to support experts in their evaluation of human error probability, and to reveal causal links between performance shaping factors that may not otherwise have been considered.
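
As a rough illustration of the data-driven link-elicitation idea (scoring pairwise dependence between factors with mutual information as a screening step; the column names and data are hypothetical, and the paper's actual combination of information theory and structure learning is richer than this):

    # Rank candidate edges between binary factors by mutual information
    import numpy as np
    import pandas as pd
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(1)
    df = pd.DataFrame(rng.integers(0, 2, size=(500, 3)),
                      columns=["fatigue", "distraction", "wrong_action"])

    edges = []
    cols = list(df.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            edges.append((a, b, mutual_info_score(df[a], df[b])))

    for a, b, mi in sorted(edges, key=lambda e: -e[2]):
        print(f"{a} -- {b}: MI = {mi:.4f}")   # top pairs become candidate links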

11:50
Marilia Ramos (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Caroline Morais (Agency for Petroleum, Natural Gas and Biofuels (ANP), Brazil, Brazil)
Nicola Paltrinieri (Norwegian University of Science and Technology (NTNU), Norway)
Integration of Human Reliability into Quantitative Risk Analysis in the Chemical Process Industry: Advances, Gaps, and Opportunities
PRESENTER: Marilia Ramos

ABSTRACT. Quantitative Risk Analysis (QRA) is widely used for risk management in chemical process industries (CPIs). Despite the contribution of human error to accidents, incorporation of Human Reliability Analysis (HRA) within QRAs is not the norm in CPIs. HRA and QRA practitioners could benefit from formalized and detailed guidance on best practices for extending QRAs with an explicit treatment of human error.

12:10
Sun Yeong Choi (Korea Atomic Energy Research Institute, South Korea)
Yochan Kim (Korea Atomic Energy Research Institute, South Korea)
Estimation of Human Error Probability based on Variation between Data Sources
PRESENTER: Sun Yeong Choi

ABSTRACT. The purpose of this paper is to propose a method of estimating the nominal HEP (Human Error Probability) by considering the differences between data subsets such as plant, MCR (Main Control Room) type, scenario, etc., without considering PSFs (Performance Shaping Factors). This method can be applied when the collected human error data contain only general information. To this end, a parameter estimation method used for component reliability and IE (Initiating Event) frequency was applied for nominal HEP quantification. It uses empirical Bayes estimation with a beta-binomial model or a gamma-Poisson model when a difference between data subsets exists, and a Bayesian analysis with JNID (Jeffreys Non-informative Prior Distribution) when it does not. Based on the method, two case studies, for binomial data and Poisson data, were performed. Poolability tests for the case studies were conducted for MCR types. When the collected data are limited, the nominal HEP of interest, such as by plant and scenario, can be obtained using the method presented in this paper.
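
A minimal sketch of the empirical Bayes step for the beta-binomial case (the counts, starting point and optimizer are illustrative assumptions): the beta prior is fitted to the between-subset variation by maximizing the marginal likelihood, and each subset's HEP is then its posterior mean.

    # Empirical Bayes with a beta-binomial model over data subsets
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import betaln, gammaln

    k = np.array([2, 0, 5, 1])        # human errors per data subset (assumed)
    n = np.array([40, 25, 60, 30])    # opportunities per data subset (assumed)

    def neg_log_marginal(params):
        a, b = np.exp(params)          # log-parameterisation keeps a, b > 0
        ll = (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
              + betaln(k + a, n - k + b) - betaln(a, b))
        return -ll.sum()

    res = minimize(neg_log_marginal, x0=np.log([1.0, 10.0]), method="Nelder-Mead")
    a, b = np.exp(res.x)
    posterior_mean_hep = (a + k) / (a + b + n)   # shrinks subsets toward the pool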

12:30
Luca Podofillini (Paul Scherrer Institute, Switzerland)
Vinh Dang (Paul Scherrer Institute, Switzerland)
Developing Bayesian Networks from Operational Events Analyses and Expert Judgment: a Human Reliability Application
PRESENTER: Luca Podofillini

ABSTRACT. This extended abstract addresses one of the primary hurdles for the development of Bayesian Belief Networks (BBNs): the quantification of their Conditional Probability Distributions (CPDs) from scarce data. The presented process strives for a traceable integration of expert knowledge and data, the latter in the form of retrospective analyses of human failures. The process combines the functional interpolation method for populating CPDs with Bayesian updating to adjust the BBN response to the available evidence. The BBN supports the analysis of Errors of Commission via the Commission Error Search and Assessment (CESA) method. This extended abstract is extracted from the work of Podofillini et al. (2022).

11:10-12:50 Session 10G: Prognostics and System Health Management IV: Remaining Useful Life
Chair:
Bruno Castanier (Université d'Angers, France)
Location: CQ-009
11:10
Miguel A. C. Michalski (University of São Paulo, Brazil)
Arthur H. A. Melani (University of Sao Paulo, Brazil)
Renan F. da Silva (University of São Paulo, Brazil)
Gilberto F. M. Souza (University of São Paulo, Brazil)
Remaining Useful Life Estimation based on an Adaptive Approach of Autoregressive Integrated Moving Average

ABSTRACT. Condition-based maintenance (CBM) is a maintenance strategy that has become increasingly popular, as it aims to monitor the current condition of physical assets, allowing maintenance actions to be performed only when a performance loss or imminent failure is detected. When implemented correctly, this strategy leads to fewer unplanned downtime events and better prioritization of maintenance time, reducing its total cost and increasing the reliability and availability of monitored systems and equipment. For the CBM strategy to be applied to a system or equipment, a process of data acquisition and monitoring, fault detection, diagnosis, and prognosis must be implemented according to its operational context. In this scenario, Fault Detection, Diagnosis, and Prognosis (FDDP) becomes an important feature of any CBM strategy, being responsible for analyzing the monitoring data available for all physical assets, checking for any abnormal behavior (symptoms) of the system that indicates the presence of faults, identifying the source and location of the fault, and anticipating the time when the system or equipment will no longer perform its designated function. When the FDDP process indicates the presence of a fault in the monitored system or equipment, maintenance needs to decide what actions should be taken to mitigate, eliminate or otherwise deal with the fault and its future consequences. For a more assertive decision to be taken, in addition to confirming the existence of the fault, those responsible for the system or equipment maintenance must assess the fault severity and forecast the system's Remaining Useful Life (RUL). By definition, the RUL is the time interval between the moment a fault (or potential failure) is detected and the moment a failure occurs and repair is required. Based on the RUL forecast, maintenance teams can optimize the scheduling of maintenance actions for the failing component, making the most of the system's productive capacity and, at the same time, minimizing the consequences of an unwanted failure. Therefore, the RUL prediction must be at the same time accurate, i.e., its expected value as close as possible to the actual time at which the failure occurs, and precise, i.e., the predicted RUL probability density function having low kurtosis values. Several authors propose different techniques to predict the RUL through prognostic analysis. Currently, the approaches to calculate the RUL from a single variable may be divided into four categories: approaches based on physical models, approaches based on statistical models, approaches based on Artificial Intelligence (AI), and hybrid approaches. More than half of the studies found in the literature address statistical-model-based approaches, among which, in recent years, nearly a third address Wiener process models. Wiener process models are usually presented as a drift term plus a diffusion term following Brownian motion. Compared with random coefficient models, for example, such models can describe the temporal variability of degradation processes, being effective in modeling non-monotonic processes by assuming that the random noise follows a Brownian motion. However, as they rely on the Markov property, i.e., the assumption that the future state depends only on the current state, independently of past behavior, such models do not always work in real applications.
On the other hand, Autoregressive (AR) models (another, though less explored, statistical-model-based approach: only about 7% of the works addressing these methods consider it) assume that the future value of a time series is a linear function of previous observations plus random errors. An enhanced version of the AR models, the AR Moving Average (ARMA) model, has also been considered by some researchers to predict the RUL. The main advantage of these approaches is their computational simplicity. However, their performance relies heavily on trend information from historical observations, which may lead to inaccurate predictions over time in some applications. A generalization of the ARMA model, the Autoregressive Integrated Moving Average (ARIMA) method, is currently one of the most important methods for time series analysis, and it can be used to identify complex patterns in data and generate predictions. An ARIMA model predicts future values in the time series as a linear combination of its past values, involving a combination of three types of processes: an autoregressive (AR) process, a differencing process to remove integration (I), and a moving-average (MA) process. Thus, to verify the ability of ARIMA models to predict the RUL of engineering systems, this work presents an adaptive approach based on these models. The proposed method is applied to monitored data from a hydroelectric power plant. In this paper, not only is the RUL recalculated as new monitored data arrive from the faulty system, but a new ARIMA model is also generated at each iteration, taking advantage of its computational simplicity and low computational demand. The results obtained from the proposed method are compared with those obtained from the application of a Wiener process model for the same inputs and conditions, taking into account two main aspects: accuracy and precision. Several metrics are considered for this comparison, and the results demonstrate that, for the considered case study, the proposed method has advantages over the Wiener process models, mainly concerning the precision of the results.
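
A minimal sketch of the adaptive idea using statsmodels (the degradation series, model order and failure threshold are placeholders, not the hydroelectric case-study data): at each step the ARIMA model is refitted and the RUL is read off the first threshold crossing of the forecast.

    # Refit ARIMA on the current history; RUL = first forecast threshold crossing
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    degradation = np.cumsum(0.05 + 0.02 * rng.standard_normal(200))  # toy drift
    THRESHOLD = 12.0

    def rul_estimate(history, horizon=500):
        fitted = ARIMA(history, order=(1, 1, 1)).fit()   # new model each iteration
        forecast = fitted.forecast(steps=horizon)
        crossing = int(np.argmax(forecast >= THRESHOLD)) # first index above limit
        return crossing if forecast[crossing] >= THRESHOLD else None

    print("RUL estimate (steps):", rul_estimate(degradation))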

11:30
Koushiki Mukhopadhyay (University of Strathclyde, UK)
Bin Liu (University of Strathclyde, UK)
Tim Bedford (University of Strathclyde, UK)
Maxim Finkelstein (University of the Free State, Republic of South Africa, University of Strathclyde, UK)
Remaining useful life estimation of degrading systems continuously monitored by degrading sensors

ABSTRACT. We consider degrading engineering systems that operate in a varying environment. The external environment, along with internal aging processes, causes deterioration not only of the main systems but also of the monitoring devices (sensors). Since accurate information is crucial for predicting system health and for the subsequent decision-making, considering the effect of sensor degradation is highly important to obtain justified reliability characteristics of systems, such as the remaining useful life (RUL). Although the concept of sensor degradation has been introduced previously in the literature, RUL estimation in this setting, and parameter estimation in the presence of sensor degradation, have not been studied in detail. To fill the gap, this study aims to estimate the RUL of a system that is continuously monitored by a degrading sensor. In this work, to distinguish sensor degradation from that of the main system, an additional calibration sensor is used that can accurately inspect the system health condition at certain points in time. Subsequently, a maximum a posteriori estimation technique is employed to estimate the parameters of the system degradation process, and maximum likelihood estimation is used to estimate the parameters of the sensor degradation. A Kalman filter is then used to estimate the system and sensor states, followed by the system RUL evaluation. A numerical example with simulated data is employed to illustrate the effectiveness of the proposed method. It is shown through the numerical study that neglecting sensor degradation can result in significant errors in RUL estimation, which can further impact the subsequent maintenance decisions.
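
A minimal sketch of the filtering idea with an augmented state carrying both the system degradation and a sensor bias (all matrices and noise levels are illustrative stand-ins, not the paper's model):

    # Kalman filter over an augmented [system degradation, sensor bias] state
    import numpy as np

    F = np.eye(2)                  # x[0]: system degradation; x[1]: sensor bias
    H = np.array([[1.0, 1.0]])     # measurement = degradation + sensor bias
    Q = np.diag([0.01, 0.001])     # process noise: system drifts faster than sensor
    R = np.array([[0.05]])         # measurement noise

    x, P = np.zeros((2, 1)), np.eye(2)

    def kalman_step(x, P, z):
        x_pred, P_pred = F @ x, F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    for z in [0.1, 0.25, 0.42, 0.55]:                 # hypothetical readings
        x, P = kalman_step(x, P, np.array([[z]]))
    print("estimated degradation vs. sensor bias:", x.ravel())

The calibration-sensor inspections described in the abstract would enter as occasional extra measurement rows observing the degradation state alone, which is what makes the two state components distinguishable.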

11:50
Yufei Gong (Troyes University of Technology, France)
Khac Tuan Huynh (Troyes University of Technology, France)
Yves Langeron (Troyes University of Technology, France)
Antoine Grall (Troyes University of Technology, France)
A Learning Approach for Remaining Useful Lifetime Prognosis of Stochastically Deteriorating Feedback Control Systems
PRESENTER: Yufei Gong

ABSTRACT. Real-time prognosis of the remaining useful lifetime (RUL) of stochastically deteriorating feedback control systems (FCS) has recently attracted growing interest in the Prognostics and Health Management field. Such prognosis usually relies on degradation modeling using stochastic processes. In [1] and [2], Langeron et al. describe the observed degradation of a critical component in a controlled drilling unit as a Gamma-Poisson process. In [3], Mo et al. use Wiener processes to model the degradation of multiple actuators in a closed-loop cooling system. Similar stochastic processes are also used by the authors of [4-6] in the context of unobservable degradation. Notwithstanding, this approach is not fully rigorous, because the failure of one or several individual components does not necessarily lead to the failure of the whole FCS. To overcome this obstacle, Xiao et al. [7] propose to link the stability margin of a controlled solar energy platform with the inverse Gaussian degradation process of an aging valve. By considering the stability margin as a degradation index of the entire platform, the authors successfully determine a maximum allowable level of valve degradation that corresponds to the system failure. Unfortunately, this approach is still component-based, and identifying such a relevant component in a complex FCS is obviously not an easy task. In this paper, instead of a study at the component level, we consider the whole deteriorating FCS (even one with a complex structure and multiple components) as a black box, and we seek its degradation index. Because of the fault-tolerance property of FCS, the system degradation is mostly masked in the first stage and is strongly exhibited near failure. It is not easy to capture such a phenomenon with conventional stochastic processes, and hence the system RUL prognosis is no longer feasible by probabilistic calculation or Monte Carlo simulation as usual. To remedy this, a learning-based prognostic approach for the system RUL is proposed here. Assuming an available dataset of system degradation levels and associated RULs, we fit the RUL population with a parametric probability distribution and learn the functions that map its parameters to the associated degradation level. In this way, we can derive the RUL distribution in real time when a new degradation level is given by an inspection. More concretely, we apply the above approach to predict the RUL of a stochastically deteriorating stabilization loop control device in an inertial platform subject to a proportional-integral-derivative (PID) controller. Without loss of generality, we assume that the hidden damage of the device follows either a state-independent monotonic Gamma process or a state-dependent fluctuating Wiener process. Briefly, the Gamma process is a simple, monotonically increasing, time-dependent stochastic process, while the Wiener process is influenced by both time and system state and has both positive and negative increments. In this case, the input reference applied to the FCS and its output become the only available information. Thus, the maximum gain of the FCS, derived from its transfer function, is employed as a degradation index (DI). The closed-loop transfer function between system output and input contains all the FCS information; therefore, all internal degradation features are incorporated in our DI. However, the statistical features of the DI are too complex to be modeled by existing stochastic processes.
Thus, we fit the probability density function of the RUL distribution with a Birnbaum-Saunders distribution (BSD). This choice is motivated by the fact that the first passage times of the Wiener process degradation model and the Gamma process degradation model can be approximated by a BSD [8]. Based on a given failure threshold, we first form several groups of failure times based on different current DI states and fit their RUL distributions with BSDs. Then, applying maximum likelihood estimation, we estimate all their parameters. Next, we build a mapping between the current DI states and each of the two corresponding parameters. According to the characteristics of the two obtained mappings, a segmented piecewise-polynomial method [9] is applied to fit these curves. Goodness of fit is validated by t-tests and KS-tests [10]. Thus, we can prognosticate the RUL of a deteriorating FCS at any current health state. In summary, we only use the input and output of the FCS, which suffers from internal degradation phenomena, to design a DI. This DI enables us to monitor the health state of the entire system at any time, and from the learned mapping functions we can prognosticate the system RUL given any current DI value.

References:
[1] Langeron, Y., Grall, A., Barros, A. (2015). A modeling framework for deteriorating control system and predictive maintenance of actuators. Reliability Engineering & System Safety, 140, 22–36.
[2] Langeron, Y., Grall, A., Barros, A. (2017). Joint maintenance and controller reconfiguration policy for a gradually deteriorating control system. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, 231(4), 339–349.
[3] Mo, H., Xie, M. (2015). A dynamic approach to performance analysis and reliability improvement of control systems with degraded components. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 46(10), 1404–1414.
[4] Nguyen, D.N., Dieulle, L., Grall, A. (2015). Remaining useful lifetime prognosis of controlled systems: a case of stochastically deteriorating actuator. Mathematical Problems in Engineering, ID 356916, 16 pages.
[5] Obando, D.R., Martinez, J.J., Bérenguer, C. (2021). Deterioration estimation for predicting and controlling RUL of a friction drive system. ISA Transactions, 113, 97–110.
[6] Si, X., Ren, Z., Hu, X., Hu, C., Shi, Q. (2019). A novel degradation modeling and prognostic framework for closed-loop systems with degrading actuator. IEEE Transactions on Industrial Electronics, 67(11), 9635–9647.
[7] Xiao, X., Mo, H., Dong, D., Ryan, M. (2021). Reliability analysis of aging control system via stability margins. Journal of Manufacturing Systems. doi: 10.1016/j.jmsy.2020.12.010.
[8] Hong, L., Ye, Z. (2017). When is acceleration unnecessary in a degradation test? Statistica Sinica, 27(3), 1461–1483.
[9] Duan, J., Wang, Q., Wang, Y. (2021). HOPS: A fast algorithm for segmenting piecewise polynomials of arbitrary orders. doi: 10.1109/ACCESS.2021.3128902.
[10] James, G., et al. (2013). An Introduction to Statistical Learning. New York: Springer.
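
A minimal sketch of the fitting step described in the abstract above, using scipy's 'fatiguelife' (Birnbaum-Saunders) distribution and a simple polynomial map in place of the segmented piecewise polynomials (the DI levels and RUL samples are synthetic stand-ins):

    # Fit a BSD per DI group, then map DI level -> distribution parameters
    import numpy as np
    from scipy.stats import fatiguelife

    rng = np.random.default_rng(3)
    di_levels = np.array([0.2, 0.4, 0.6, 0.8])          # hypothetical DI states
    shapes, scales = [], []
    for di in di_levels:
        rul_samples = fatiguelife.rvs(c=0.5, scale=100 * (1 - di), size=200,
                                      random_state=rng)  # stand-in for real data
        c, loc, scale = fatiguelife.fit(rul_samples, floc=0)  # MLE per group
        shapes.append(c)
        scales.append(scale)

    shape_map = np.polyfit(di_levels, shapes, deg=2)
    scale_map = np.polyfit(di_levels, scales, deg=2)
    # at run time: evaluate the maps at the inspected DI for the RUL distribution
    c_now, scale_now = np.polyval(shape_map, 0.5), np.polyval(scale_map, 0.5)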

12:10
Jie Liu (Norwegian University of Science and Technology, Norway)
Yiliu Liu (Norwegian University of Science and Technology, Norway)
Shen Yin (Norwegian University of Science and Technology, Norway)
Jørn Vatn (Norwegian University of Science and Technology, Norway)
Evaluation of remaining useful life prediction models from a resilience perspective
PRESENTER: Jie Liu

ABSTRACT. In maintenance planning, the remaining useful life of components is critical. However, it is difficult to determine whether or not a prediction model is reliable and trustworthy. There is no clear guidance on how to evaluate a remaining useful life prediction model, since its outputs cannot be directly observed in the physical system. Furthermore, missing or outlier data may cause the models to make incorrect predictions. To improve the reliability and robustness of such models, the concept of resilience is introduced to evaluate the performance of remaining useful life prediction models. This research presents a definition of the resilience of remaining useful life prediction models and then provides methods for evaluating them. The paper can serve as a starting point for further research at the intersection of resilience and remaining useful life prediction.

11:10-12:50 Session 10H: S.01: Advances in Well Engineering Reliability and Risk Management: data collection and quantitative methods
Chair:
Isis Didier Lins (Universidade Federal de Pernambuco, Brazil)
Location: CQ-105
11:10
João Mateus Santana (Federal University of Pernambuco, Brazil)
Beatriz Cunha (Federal University of Pernambuco, Brazil)
Diego Aichele (Federal University of Pernambuco, Chile)
Rafael Azevedo (Federal University of Pernambuco, Brazil)
Marcio Das Chagas Moura (Federal University of Pernambuco, Brazil)
Caio Maior (Federal University of Pernambuco, Brazil)
Isis Didier Lins (Federal University of Pernambuco, Brazil)
Renato Mendes (Abrisco, Brazil)
Everton Lima (Petrobras, Brazil)
Enrique Droguett (University of California, United States)
PetroBayes: An effortless software to perform Bayesian reliability estimation

ABSTRACT. Reliability estimation is paramount to predict costs, plan maintenance, and estimate system availability in most industries, especially Oil and Gas (O&G). However, a shortage of specific historical data is common for high-reliability devices because of restrictions related to equipment supplier rights, costs, or even the infeasibility of collecting data. In this context, the Offshore and Onshore Reliability Data (OREDA) project represents a collaboration between several O&G companies for sharing reliability information and building a generic historical database. The assessment of the variability distribution of the non-homogeneous data from OREDA serves as a prior distribution for system-specific Bayesian reliability assessments. The OREDA analysis approach assumes that the system has a constant failure rate (i.e., exponentially distributed times to failure); a more appropriate approach, however, would consider time-dependent failure rates, often related to a Weibull distribution. An additional challenge lies in the fact that the prior data are given as paired entries (k, t), where k is the number of failures over an observation time t for each subpopulation, instead of as failure times, as is usual in Weibull analysis. These features increase the solution's complexity, requiring more advanced statistical methods, numerical procedures, and computational resources, which may limit its use by decision-makers and field experts who are unfamiliar with Bayesian approaches. In this context, we have developed PetroBayes, a user-friendly software tool to perform Bayesian reliability estimation, enabling the user to choose between the Exponential and the Weibull distribution to describe the behavior of the system under analysis. In the background, the prior distribution is determined from the population variability of OREDA non-homogeneous data via Empirical Bayes. The posterior distribution is estimated by updating the prior beliefs with system-specific failure data (censored or not), requiring Markov Chain Monte Carlo for the numerical solution. System-specific reliability measures can be inferred from the posterior and displayed to the user in tables, written reports and interactive images, with the visual information allowing a straightforward interpretation. PetroBayes requires only consumer-grade hardware and can be hosted on a remote server, avoiding heavy computational load on the user's hardware. We present a case study of the proposed software considering several systems and the OREDA generic database.
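
As a rough illustration of the exponential-case update that such a tool automates (a conjugate gamma prior on the failure rate, e.g. fitted to OREDA population variability, updated with system-specific (k, t) evidence; the Weibull case requires MCMC, and all numbers below are placeholders):

    # Conjugate gamma-Poisson update from OREDA-style (k, t) evidence
    from scipy import stats

    a_prior, b_prior = 1.2, 4.0e5     # gamma prior: shape, rate [1/h] (assumed)
    k_obs, t_obs = 3, 8.76e4          # system-specific failures, exposure hours

    a_post, b_post = a_prior + k_obs, b_prior + t_obs

    posterior = stats.gamma(a_post, scale=1.0 / b_post)
    print("posterior mean failure rate [1/h]:", posterior.mean())
    print("90% credible interval:", posterior.interval(0.9))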

11:30
Luiz Fernando Oliveira (DNV, Brazil)
Joaquim Domingues (DNV, Brazil)
Danilo Colombo (Petrobras, Brazil)
Failure and Intensity Rates Applied to the Evaluation of the Frequency of Blowout of Wells in Operation

ABSTRACT. Background: Well integrity management is a fundamental activity of oil and gas production. Maintaining well integrity means preserving the ability to prevent the occurrence of uncontrolled well leaks (blowouts). This is a well lifecycle activity, from design to decommissioning, passing through construction, operation, and maintenance.

Objectives of this Paper: From the viewpoint of risk and reliability, the main indicators used for well integrity are the probability of a blowout during the well lifetime and the instantaneous frequency of blowout. The first is more important for the improvement of the design of the well, while the second is more important for operational decisions. Traditional risk/reliability assessment methods were developed to be used in the design stage and therefore are not really adequate for operational decisions, mainly because they are developed to give average results during the whole lifetime of the well. At the design stage the analyst can only conjecture about what is going to actually happen during operation, therefore the design model cannot take into account the true conditions of the system at a certain date (at t = t) during the operational phase.

In this paper, we discuss some aspects of a reliability model developed to be used for decision-making during the operational phase, which presents two main conceptual differences from existing quantitative models:
• First, the evaluation is performed not from t = 0 (the design approach), but rather from t = t (the operational approach); by that we mean that the evaluator is within the operational phase.
• Second, from the viewpoint of integrity, the well is not evaluated as a safety system subjected to a low-demand regime, but rather as one that is subjected to continuous demand; the oil is trying to force its way out to the environment at all times.
The combination of these two aspects indicates that the instantaneous frequency of blowout is the most adequate indicator to answer the questions posed by the situation of interest: 1) what is the probability of blowout today or within the next time steps? 2) a component failure has just been detected; what is its impact on the current blowout risk of the well? 3) how is this risk going to evolve within the next weeks, months, or even within the next couple of years? 4) if I have a maximum risk criterion, for how long can I operate the well before the criterion is reached? Our model uses the known operational conditions to calculate the reliability indicator at t, namely: 1) the operational and maintenance history of all components since the beginning of operation of the well (when they were repaired or replaced); 2) for monitored components, whether they are operating or not at t; 3) for periodically tested components, the dates of their last tests and the respective test results. Therefore, the results of the model for the current date are accurate and reflect the current known conditions. Based on that, we can also calculate with good precision the impact of a failure which has just been detected on the well integrity indicator today. To make predictions of future results, we use a simplifying assumption: that the conditions we observe today will not change from today to the future date of interest; it is considered that the analyst is moving in time with the system. With this, we are able to do the calculations using Weibull failure distributions in addition to exponential ones.

Using Intensity Rates and Hazard Rates as Main Indicators of the Model: Four different reliability frequency concepts are discussed in the literature: the failure rate, the failure density, the hazard rate (also called Vesely's failure rate), and the failure intensity. Although aware of the possible conceptual differences between failure rate and hazard rate, we adopt Vesely's definition of failure rate, which is similar to the hazard rate for repairable components. From the design viewpoint, the failure intensity, w(t), is mostly used as the frequency of component/system failures at t; it gives the unconditional probability of occurrence of a failure between t and t+dt, calculated from t = 0. From the operational viewpoint, it is shown in this paper that using failure rates for components, cut sets and systems as main indicators gives more meaningful results for operational decision-making.

Results: The frequency of blowout for an oil well in operation is dominated by the leak frequency of second-order cut sets because of the traditional two-barrier requirement in use by the industry. We have extensively studied all six possible second-order cut sets formed by pairs of the three component types used in our model: 1) monitored, 2) periodically tested, and 3) non-repairable. We show that using hazard rates instead of intensity rates for the cut sets gives results that are consistent with what would be expected for an operational model. The results are very good when the component failure rates are of the order of 10^-7/h or lower, which is the case for the great majority of well barrier elements. For component failure rates of the order of 10^-6/h (the case for a few elements), the results tend to be above the expected results, indicating that the predictions of our model for such cases are conservative (better than the other way around). There are no elements with failure rates of order 10^-5/h or higher. A procedure has been implemented to bound the growth of the results with time for cut sets combining a high-failure-rate component with a low-failure-rate component. Since the time horizon involved in the solution of the most important operational issues related to well integrity is significantly shorter than the 30-year typical lifetime of a well, it is shown here that our model is well suited for application to decision-making regarding well integrity problems during operation.
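
A minimal sketch of a second-order cut-set frequency for one monitored and one periodically tested component, using textbook approximations rather than the authors' operational model (all rates and intervals are assumed values):

    # w_cs(t) ~ lam1*q2(t) + lam2*q1: each component fails while the other is down
    import numpy as np

    LAM_MON, MTTR = 2.0e-7, 24.0       # monitored component [1/h], repair time [h]
    LAM_TEST = 5.0e-7                  # periodically tested component [1/h]

    def q_monitored():                 # steady-state unavailability under repair
        return LAM_MON * MTTR / (1.0 + LAM_MON * MTTR)

    def q_tested(t, t_last_test):      # unavailability grows since the last test
        return 1.0 - np.exp(-LAM_TEST * (t - t_last_test))

    def cutset_frequency(t, t_last_test):
        return LAM_MON * q_tested(t, t_last_test) + LAM_TEST * q_monitored()

    # evaluated "at t = t", i.e., from the known state during operation
    print(cutset_frequency(t=50000.0, t_last_test=48000.0))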

11:50
Guido Difederico (Politecnico di Milano, Italy)
Ahmed Shokry (Ecole polytechnique, France)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
Giorgio Fighera (Eni s.p.a, Italy)
Emanuele Vignati (Eni s.p.a, Italy)
Long Short-Term Memory ANNs for Fast Assessment of Water Injection Policies in Oil and Gas Reservoirs
PRESENTER: Enrico Zio

ABSTRACT. Accurate and reliable reservoir models are used in almost all phases of oil and gas field management, from initial discovery, geological assessment, well placement, production optimization and field enhancement, to final abandonment. The development of a physics-based model for a reservoir is an effort-intensive and time-consuming task, not only because of the need to integrate complex phenomena, such as geophysics, fluid dynamics, chemistry, and petrophysics, but also due to the large number of uncertain parameters to be tuned. As a result, in some cases such models are unavailable. Moreover, even when such models are available, their use can be computationally demanding, especially for online tasks such as production optimization, where many forward evaluations are needed in a very short amount of time. Consequently, many research efforts have been directed towards the use of machine learning for building data-driven reservoir models, based either on data generated by physics-based models (for model approximation purposes) or on real data collected from the field. Many works employ feedforward Artificial Neural Networks (ANNs) in the form of Nonlinear Autoregressive eXogenous (NARX) models to capture the reservoir dynamics. Recently, a few works have proposed the use of Long Short-Term Memory (LSTM) ANNs to model reservoir behavior, with superiority over feedforward ANNs due to their powerful capability to capture temporal and spatial patterns in multivariate time series. However, they have been applied mainly to small-scale fields and using simulation data with unrealistically simplified characteristics (e.g., large amounts of data with information on all the possible dynamic conditions). Considering the task of oil and gas field production optimization, this paper investigates the use of LSTM ANNs for the development of dynamic models predicting future oil and water production rates as a function of the water injection rates. These models are useful for the fast assessment of water injection policies with respect to their economic and environmental impacts. First, a procedure for data generation is presented, which imposes realistic operational and economic constraints on the reservoir model to generate input-output patterns (i.e., water injection rates and corresponding water and oil production rates). Then, the generated patterns are used to train an ensemble of LSTM ANNs, each one able to predict the future trajectory of the oil or water production rate at each production well, as a function of the water injection rates at all the injection wells. The effectiveness of the proposed framework is validated by application to the well-known case of the Olympus field, a 3D reservoir model with complex geological properties, including seven injection and eleven production wells. The results show very good prediction accuracy and a significant reduction in computational time, demonstrating that, in specific reservoir engineering problems, data-driven models are interesting alternatives to full-physics numerical simulations for speeding up decision-making.
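
A minimal sketch of one ensemble member, an LSTM mapping a window of injection rates at all injectors to a production-rate forecast for one well (shapes, sizes and data are placeholders, not the Olympus setup):

    # LSTM: injection-rate window -> future production rates for one well
    import numpy as np
    import tensorflow as tf

    N_INJECTORS, WINDOW, HORIZON = 7, 36, 12

    inputs = tf.keras.Input(shape=(WINDOW, N_INJECTORS))
    hidden = tf.keras.layers.LSTM(64)(inputs)
    outputs = tf.keras.layers.Dense(HORIZON)(hidden)   # future production rates
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")

    X = np.random.rand(500, WINDOW, N_INJECTORS).astype("float32")  # toy patterns
    y = np.random.rand(500, HORIZON).astype("float32")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)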

12:10
Sergio Cofre-Martel (Department of Mechanical Engineering, University of Maryland College Park, United States)
Enrique Lopez Droguett (Department of Civil Engineering, University of California Los Angeles, United States)
Mohammad Modarres (Department of Mechanical Engineering, University of Maryland College Park, United States)
Physics-Informed Neural Networks for Remaining Useful Life Estimation of a Vapor Recovery Unit System

ABSTRACT. Prognostics and health management (PHM) has become one of the main research fields in the reliability community. PHM seeks to build end-to-end frameworks capable of extracting, analyzing, and processing sensor monitoring data to train diagnostics and prognostics models. The deployment of these data-driven models (DDMs) for remaining useful life (RUL) estimation in industry is still rare and limited. Most deep learning (DL) techniques lack interpretability within their structure, therefore relying on post-hoc (post-model) interpretability tools rather than intrinsic (in-model) interpretability. This hinders the transparency and trust that users may have in the model's predictions (Carvalho, Pereira, and Cardoso 2019) and is thus undesired. Furthermore, most of these DL models are trained and validated using benchmark datasets generated in simulated or controlled experimental setups. As such, it is likely that DL architectures developed under these conditions will perform poorly when adapted to real-world scenarios. To address these challenges, we have previously presented a physics-informed neural network (PINN) framework for RUL estimation (Cofre-Martel, Lopez Droguett, and Modarres 2021a), which was tested on the C-MAPSS dataset. The work presented here seeks to validate this PINN-RUL framework on a real case study consisting of a vapor recovery unit (VRU) located at an offshore oil production platform.
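
As a rough sketch of the physics-informed ingredient, a composite training loss combining a data term with a generic physics-residual penalty (the network, data and residual below are stand-ins, not the framework of Cofre-Martel et al.):

    # PINN-style loss = data misfit + weight * physics-residual penalty
    import torch

    net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    x = torch.rand(256, 8, requires_grad=True)      # sensor snapshots (toy data)
    rul_true = torch.rand(256, 1)

    for _ in range(100):
        rul_hat = net(x)
        data_loss = torch.mean((rul_hat - rul_true) ** 2)
        # generic residual: penalise predicted RUL increasing along feature 0,
        # standing in for the real physics constraint of the framework
        grads = torch.autograd.grad(rul_hat.sum(), x, create_graph=True)[0]
        physics_loss = torch.mean(torch.relu(grads[:, 0]) ** 2)
        loss = data_loss + 0.1 * physics_loss
        opt.zero_grad(); loss.backward(); opt.step()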

12:30
Luciano M. de Almeida (Universidade Federal Fluminense, Brazil)
Danilo Colombo (Universidade Federal Fluminense, Brazil)
Rodolfo Cardoso (Universidade Federal Fluminense, Brazil)
Using a Part Stress-Based Model to Assess the Coverage Factor of Partial Tests of Blowout Preventer for Test Scheduling Optimization

ABSTRACT. In recent decades, the search for new oil reserves has advanced to the sea and into ever deeper waters. The combination of a harsh environment and high operational complexity increases the risk of these activities. Therefore, the performance of barrier elements in oil wells is critical to ensuring the safety and integrity of these systems. Hence, for safe operations in oil wells, maintaining two physical, independent and sequential barriers is standard practice in the industry and a requirement of regulatory bodies, to make sure the well is properly sealed and to prevent the occurrence of leaks. One barrier element stands out in the construction of, and in interventions for maintenance or abandonment in, marine wells: the blowout preventer (BOP), a safety system whose function is to hydraulically isolate the well and prevent the uncontrolled leakage of hydrocarbons, the so-called blowout. The BOP is subjected to a routine of periodic tests to ensure its integrity. The periodic tests can be function tests or pressure tests. Both imply the unavailability of the BOP, which increases the total time of operation and the well intervention costs. Consequently, it is important to balance risks and costs when programming periodic tests of the BOP. The proper application of partial tests can help optimize risks and costs by allowing the periodic tests to be planned with regard to their frequency and scope. In this paper, a reliability model based on part stress analysis is proposed to assess the coverage factor of BOP pressure tests in subsea wells. Incorporating the coverage factor into the quantitative risk analysis for BOP test programming allows the application of partial tests and their associated gains, from both safety and economic perspectives. In addition to the risk model, a cost function is proposed, which incorporates the cost of testing, the cost of maintenance, the cost of failure and the cost of risks. Through the application of the reliability model, associated with the cost function, it is possible to carry out optimized planning of BOP tests during interventions in offshore wells.
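
A minimal sketch of the risk/cost trade-off that such a cost function encodes for the test interval (first-order unavailability approximation, valid for lambda*tau << 1; all figures are hypothetical):

    # Balance testing cost against unavailability-driven risk over interval tau
    from scipy.optimize import minimize_scalar

    LAM = 1.0e-5        # effective BOP failure rate [1/h] (assumed)
    C_TEST = 5.0e4      # cost per pressure test (assumed)
    C_RISK = 1.0e4      # expected loss rate while protection is down (assumed)

    def cost_rate(tau):
        avg_unavail = 0.5 * LAM * tau            # first-order approximation
        return C_TEST / tau + C_RISK * avg_unavail

    res = minimize_scalar(cost_rate, bounds=(24.0, 8760.0), method="bounded")
    print("cost-optimal test interval [h]:", res.x)

A coverage factor below one would lengthen the effective interval for the failure modes a partial test does not reveal, shifting this optimum.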

11:10-12:50 Session 10I: S.06 D: Risk Assessment in Road and Rail Transportation
Chair:
Paolo Gardoni (University of Illinois at Urbana-Champaign, United States)
Location: CQ-107
11:10
Insaf Sassi (Institut de recherche technologique Railenium, France)
Mohamed Ghazel (COSYS-ESTAS, Univ Gustave Eiffel, IFSTTAR, Univ Lille, France)
El Miloudi El Koursi (COSYS-ESTAS, Univ Gustave Eiffel, IFSTTAR, Univ Lille, France)
Statistical Model Checking for On-board Train Integrity Safety and Performance Analysis
PRESENTER: Insaf Sassi

ABSTRACT. Railway signaling systems are in continuous progress to ensure competitiveness and cope with the evolution of the railway industry and market needs. Among the objectives is to increase the capacity of the European rail network, in particular by enabling moving-block operation in a cost-effective way. Therefore, traditional signalling systems relying on track circuits or axle counters for train position detection and train integrity determination have to be substituted by on-board modules which must ensure that the train is moving safely and intact during its journey, i.e., that no wagon is lost. In fact, a lost vehicle may lead to severe scenarios, such as a train collision. Besides, using an on-board control-command system for the train integrity functionality transfers more responsibility, in terms of train operation safety, from infrastructure managers to railway operators. In this context, a new on-board train integrity (OTI) function, compliant with the European Train Control System (ETCS), is proposed in Shift2Rail projects to help tackle the new challenges [1]. The functional specifications of the OTI are given in a dedicated deliverable [1] as a list of requirements expressed in natural language and semi-formal models (high-level UML State Machines (SM) and a set of Sequence Diagrams (SD)). Formal models described in [2] have been developed, bringing model checking into play to examine various types of safety and functional properties with the aim of ensuring the completeness, correctness and unambiguity of the OTI specifications. This automatic formal verification technique allows for exhaustively checking the system behavior based on an automaton-like model, hence making it possible to provide some safety evidence. Namely, in our framework we use the timed automata notation [4] supported by the UPPAAL tool [3]. Besides, the OTI is considered an interactive, distributed system which behaves stochastically, where OTI modules exchange status messages to evaluate the train integrity. Probabilistic aspects such as the message loss probability, transmission delays, and the failure probability of the used technologies and communication network are integrated in the analysis. Thus, quantitative properties should be considered and specified to check system performance using probabilistic formal methods. In this context, statistical model checking (SMC) within UPPAAL [5] offers valuable advantages for investigating a number of properties. SMC consists in obtaining statistical evidence (with a predefined confidence level) of the quantitative properties to be checked, using a sufficient number of simulations of a given system model. This paper proposes an analysis of the OTI performance and safety indicators that has revealed many important aspects of the OTI implementation and safety performance. Sensitivity analysis has shown which parameters and factors should be taken into consideration in order to understand the evolution of some indicators. Thanks to SMC, the detection time of integrity loss, the probability of false alarms and the false negative rate, which represents the rate of misdetection of a loss of integrity (the hazardous scenario), are investigated. This study highlights the impact of the quality of service of the underlying communication network, in terms of message loss rate, as well as of the sensors' reliability, on the OTI performance.
It provides guidelines for choosing the system configuration parameters, mainly the OTI timers, ensuring an acceptable level of availability as well as of safety.

References:
1. Deliverable 4.1 (2020). Train integrity concept and functional requirements specifications. Technical report, Shift2Rail, X2RAIL-2 WP4 project.
2. Sassi, I., Ghazel, M., El-Koursi, E.-M. (2021). Formal modeling of a new on-board train integrity system ETCS compliant. In: ESREL 2021, 31st European Safety and Reliability Conference.
3. Behrmann, G., David, A., Larsen, K. G. (2006). A Tutorial on Uppaal 4.0, update of November 28, 2006.
4. Alur, R., Dill, D. L. (1994). A theory of timed automata. Theoretical Computer Science, 126, 183–235.
5. David, A., Larsen, K. G., Legay, A., Mikucionis, M., Poulsen, D. B. (2018). Uppaal SMC Tutorial, January 2018.
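
As a rough illustration of the statistical evidence behind SMC (a generic Hoeffding sampling argument, not UPPAAL's exact sequential scheme), the number of simulation runs needed to estimate a property probability to within epsilon, with confidence 1 - delta, is:

    # N >= ln(2/delta) / (2 * epsilon^2) runs for an (epsilon, delta) guarantee
    import math

    def smc_runs(epsilon, delta):
        return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

    # e.g., estimating a misdetection rate to +/- 0.01 with 99% confidence
    print(smc_runs(epsilon=0.01, delta=0.01))   # about 26,500 runs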

11:30
Mohammed Chelouati (IRT Railenium, France)
Abderraouf Boussif (IRT Railenium, France)
Julie Beugin (COSYS-ESTAS, Univ Gustave Eiffel, IFSTTAR, France)
El Miloudi El Koursi (COSYS-ESTAS, Univ Gustave Eiffel, IFSTTAR, France)
A framework for risk-awareness and dynamic risk assessment for autonomous trains

ABSTRACT. The development of autonomous transportation has attracted enormous attention recently; it started in the automotive industry and is now flourishing in the railway sector due to various benefits (railway capacity, mobility, and safety) [1]. At the same time, autonomous trains need to operate while maintaining an acceptable level of safety. To ensure this level, the onboard Autonomous Driving System (ADS) must be aware of the environmental and operational conditions and respond to the actions of other entities (obstacles, other trains, trackside, …). Conventionally, the safe operation of trains (with drivers) is assured in two phases: (i) a safety demonstration performed during the design and development phase, and (ii) a situation awareness process continuously performed by the driver during the operational phase [2]. The former consists in identifying and reducing the risks to an acceptable level with respect to the defined operational conditions, while the latter consists in examining and evaluating the risks dynamically (by the driver) in reaction to the real operational conditions and then deciding on the most appropriate safe action to carry out [3]. Note that the concept of situation awareness was first introduced in the Human Factors research field as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future" [4]. Situation awareness therefore includes three tasks: (i) perceiving relevant elements within the environment, (ii) understanding what those elements mean, especially when considered collectively with regard to the system's goal, and (iii) predicting the future situation/actions of the environment/system, respectively. For an autonomous train (with a high level of autonomy), the situation/risk awareness and dynamic risk assessment processes are incumbent upon the onboard ADS. Indeed, the onboard ADS should be able to perform its functions safely in all unforeseeable situations and operational conditions. To achieve this goal, the ADS must integrate a dynamic risk assessment layer into its high-level control/decision-making architecture [5]. Through strong interactions with the perception, planning, and control units, such a layer continuously updates the probability estimates for the occurrence of railway hazards. In this paper, we address the challenge of dynamically assessing the risks related to the major railway hazardous events (collision, derailment, fire, …) in autonomous train operations. We propose a framework allowing the onboard ADS to continuously perform the situation awareness process and provide run-time probability estimates for the occurrence of railway hazards while accounting for (internal and external) environment perception. Concretely, we first present the concept of situation and risk awareness, and its model as part of the safe decision-making process in the context of conventional trains with onboard drivers. Then, we propose a risk-awareness and dynamic risk assessment framework as part of the ADS decision-making architecture. Finally, we discuss and evaluate the relevance of the framework through two operational safety functions (collision avoidance and environment monitoring). Keywords: Autonomous trains, Railway safety, Safety assurance, Risk awareness, Dynamic risk assessment.

References

[1] D. Trentesaux et al., "The Autonomous Train", in 2018 13th Annual Conference on System of Systems Engineering (SoSE), Paris, June 2018, pp. 514–520. doi: 10.1109/SYSOSE.2018.8428771.
[2] O. McAree, J. M. Aitken, and S. M. Veres, "Towards artificial situation awareness by autonomous vehicles", IFAC-PapersOnLine, vol. 50, no. 1, pp. 7038–7043, July 2017. doi: 10.1016/j.ifacol.2017.08.1349.
[3] F. Khan, S. J. Hashemi, N. Paltrinieri, P. Amyotte, V. Cozzani, and G. Reniers, "Dynamic risk management: a contemporary approach to process safety management", Current Opinion in Chemical Engineering, vol. 14, pp. 9–17, Nov. 2016. doi: 10.1016/j.coche.2016.07.006.
[4] M. Endsley, "Situation Awareness in Aircraft Systems: Symposium Abstract", Proceedings of the Human Factors Society Annual Meeting, vol. 32, pp. 96–96, Oct. 1988. doi: 10.1177/154193128803200220.
[5] T. Parhizkar, I. B. Utne, and J.-E. Vinnem, Online Probabilistic Risk Assessment of Complex Marine Systems, Springer Series in Reliability Engineering, 2022.
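
A minimal sketch of the run-time updating idea, a recursive Bayes update of a hazard probability from successive imperfect perception outputs (the prior, detection and false-alarm rates are illustrative only, not values from the framework):

    # Recursive Bayes update of P(hazard) from noisy detector outputs
    P_HAZARD = 1.0e-4      # prior per-section hazard probability (assumed)
    P_DET = 0.95           # detection probability given a hazard (assumed)
    P_FA = 0.02            # false-alarm probability given no hazard (assumed)

    def bayes_update(p, detected):
        like_h = P_DET if detected else 1.0 - P_DET
        like_nh = P_FA if detected else 1.0 - P_FA
        return like_h * p / (like_h * p + like_nh * (1.0 - p))

    p = P_HAZARD
    for reading in [True, True, False]:     # consecutive perception frames
        p = bayes_update(p, reading)
    print("updated hazard probability:", p)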

11:50
Grethe Lillehammer (Bane NOR, Norway)
Morten Gustavsen (Institute for Energy Technology, Norway)
Geert van Loopik (CGE Risk, Netherlands)
Semi-automated bow-tie diagrams for optimizing safety analysis in the railway infrastructure

ABSTRACT. The European Union Agency for Railways is working to move Europe towards a sustainable and safe railway system without frontiers. Important tools to realize this goal are the Technical Specification for Interoperability (TSI) and the Common Safety Method for Risk evaluation and Assessment (CSM-RA). The TSI is focused on achieving interoperability by specifying the elements of essential requirements for safety, technical compatibility, reliability, and availability amongst others. The CSM-RA complements this by describing a common, mandatory, European risk management process for the rail industry. Neither the TSI nor the CSM-RA offer recommendations concerning specific tools or techniques to be used. Bow-tie risk analysis has emerged as a powerful approach for risk management as it integrates accident scenarios with causes and consequences. However, developing bow-tie diagrams is typically done based on manual data inputs for specific objects (i.e., hazards, events, threats, barriers, consequences, and escalation factors). By using existing data as a basis for automation, there is a great potential to streamline the process of developing bow-tie diagrams and increase their application areas.

Bane NOR, the Norwegian railway infrastructure manager, is required to maintain an overall risk and emergency preparedness for its infrastructure. In recent years, Bane NOR has developed a large generic bow-tie model for barrier-based risk management of railway tunnels. This model reflects TSI requirements and is based on 27 risk assessments of tunnels, as well as multiple meetings with domain experts. The goal of the bow-tie model is to streamline risk and emergency preparedness analysis of tunnels by reusing hazards and results from previous analyses, and to clearly separate hazards already handled by the current code of practice from hazards that need to be explicitly risk-assessed.

This work describes a novel research and development effort seeking to exploit information and knowledge already stored in digital Building Information Models (BIM) to enable software to semi-automate the generation of bow-tie diagrams that are tailored to local scenarios and constraints for railway tunnels. The research proposes a human-in-the-loop process, where the software concept generates new bow-tie diagrams that domain experts verify before use. For example, there is a correlation between the length of a tunnel and the probability and consequence of an accident, resulting in stricter requirements in the TSIs for longer tunnels. Therefore, extracting the specific length of a tunnel from a BIM repository enables automatic adjustment of bow-tie diagrams, adapting the diagram to reflect safety management regulations and thus optimizing safety and risk management.
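
A minimal sketch of the human-in-the-loop tailoring concept (the attribute names, length threshold and added barrier are hypothetical examples, not Bane NOR's model or the actual TSI values):

    # Tailor a generic bow-tie from a BIM attribute; an expert verifies the result
    TSI_LONG_TUNNEL_M = 1000          # illustrative threshold only

    def tailor_bowtie(generic_bowtie, bim_record):
        bowtie = dict(generic_bowtie)
        if bim_record["length_m"] > TSI_LONG_TUNNEL_M:
            bowtie["barriers"] = bowtie["barriers"] + ["emergency lighting"]
        return bowtie                 # handed to a domain expert for verification

    generic = {"hazard": "fire in tunnel", "barriers": ["detection", "ventilation"]}
    print(tailor_bowtie(generic, {"name": "Tunnel A", "length_m": 1500}))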

The work is exemplified using a large generic bow-tie diagram for tunnels, but the concept is applicable for many objects and areas, such as bridges, level crossings, storage tracks, and railway stations. The research effort has identified feasibility, specified a software architecture concept, and provided recommendations on how this concept can be applied for large industrial railway projects.

This research was sponsored by the Research Council of Norway in partnership with Bane NOR, Norwegian Public Roads Administration, Cowi, and Multiconsult.

12:10
Rim Saddem-Yagoubi (Univ. Gustave Eiffel, IFSTTAR, Univ. Lille, France)
Muhammad-Usman Sanwal (Mälardalen University, Sweden)
Simone Libutti (University of Naples Federico II, Italy)
Massimo Benerecetti (University of Naples Federico II, Italy)
Julie Beugin (Univ. Gustave Eiffel, IFSTTAR, Univ. Lille, France)
Francesco Flammini (Mälardalen University, Sweden)
Mohamed Ghazel (Univ. Gustave Eiffel, IFSTTAR, Univ. Lille, France)
Bob Janssen (EULYNX, Germany)
Stefano Marrone (University of Campania Luigi Vanvitelli, Italy)
Fabio Mogavero (University of Naples Federico II, Italy)
Roberto Nardone (University of Naples Parthenope, Italy)
Adriano Peron (University of Naples Federico II, Italy)
Cristina Seceleanu (Mälardalen University, Sweden)
Valeria Vittorini (University of Naples Federico II, Italy)
Formal Modeling for Safety and Performance Evaluation of ERTMS/ETCS Level 3: The PERFORMINGRAIL Project

ABSTRACT. Within modern railways, Moving Block (MB) signalling systems represent the most efficient approach to ensuring train separation. MB is a central concept in ERTMS/ETCS L3 (European Railway Traffic Management System / European Train Control System Level 3), a European standard for interoperable railways. Compared to traditional fixed-block signalling, MB allows for substantial capacity gains at reduced cost, while improving availability, as the trackside equipment is substantially reduced. A set of specifications for MB operation has been proposed in previous research, but additional activities are needed to define detailed specifications for a safe and high-performing implementation of ETCS L3. In this respect, railway safety standards recommend the use of formal modelling and verification techniques to guarantee behavioural correctness and to validate safety requirements. However, several challenges must be tackled to make formal methods usable in industry, owing to modelling difficulties and scalability issues with complex systems and scenarios. The work reported in this paper has been developed within the EU-funded PERFORMINGRAIL project. We present a methodology showing how high-level MB specifications expressed in SysML can be transformed into reusable parametric formal models, in order to enable automated verification and performance evaluation of MB systems. We apply the methodology to selected ETCS L3 scenarios for illustrative purposes.

12:30
Chris Harrison (RSSB, UK)
Application of the Safety Risk Model to better understand localised risk

ABSTRACT. One of the main challenges in identifying efficient and sustainable safety solutions to rail industry problems is the granularity at which such assessments can be made. General proposals for change in the industry will affect the whole network in some way and can therefore prove very costly compared with their perceived benefit when calculated globally. If such proposals can be made more specific and assessed at a much more localised level, for example for a particular route, then it is possible to identify areas where targeted application of proposals can be demonstrated to bring safety benefit where application at a national level does not. Achieving this requires a switch from the current suite of national assessment models to more localised modelling. The Safety Risk Model (SRM) provides a network-wide risk profile for the mainline railway in Great Britain (GB). It has underpinned the industry's evidence- and risk-based approach to safety management for the best part of two decades. The model has recently undergone a fundamental overhaul of its structure and calculation methodology. The main aims were to build a new, more flexible model based on a modular approach that would enable more localised risk modelling, and a framework that could more easily evolve and be enhanced based on user need. The work is complete and a localised model is now in place. The rail industry is now able to use this to make more granular assessments to help identify sustainable solutions, as safety is one element that needs to be considered when deciding whether to implement proposals. This paper presents an overview of this work, explaining the new modelling approach along with the rebuild work and the benefits it will bring. Applications of the model to examples where it has been used to identify localised solutions will be presented, along with a vision of how the model will develop and evolve to meet future needs in addressing sustainability.
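
The economic logic behind localised assessment can be illustrated with a toy calculation. All figures below are invented; the SRM's actual structure and values are far richer.

```python
# Minimal illustration (all figures invented) of the abstract's point: a safety
# measure whose cost exceeds its benefit network-wide can still be justified on
# one route once risk is modelled at a localised level.
routes = {                      # route: expected harm (weighted injuries) per year
    "Route A": 0.50,            # high local risk
    "Route B": 0.05,
    "Route C": 0.02,
}
MONETISED_BENEFIT_PER_UNIT_RISK = 2.0e6   # illustrative valuation
RISK_REDUCTION = 0.4                      # fraction of risk removed by the measure
COST_PER_ROUTE = 3.0e5                    # annualised cost of applying the measure

for route, risk in routes.items():
    benefit = risk * RISK_REDUCTION * MONETISED_BENEFIT_PER_UNIT_RISK
    verdict = "justified" if benefit > COST_PER_ROUTE else "not justified"
    print(f"{route}: benefit {benefit:,.0f} vs cost {COST_PER_ROUTE:,.0f} -> {verdict}")

# Network-wide the measure fails (total benefit 456,000 vs total cost 900,000),
# but on Route A alone it passes: 0.5 * 0.4 * 2e6 = 400,000 > 300,000.
```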

11:10-12:50 Session 10J: S.10: Human-Robot collaboration: The New Scenario for a Safe Interaction III
Chair:
Mario Di Nardo (Università di Napoli, Italy)
Location: LG-22
11:10
Naomi Kamoise (Lab-STICC CNRS UMR 6285, University of South Brittany, France)
Clément Guérin (Lab-STICC CNRS UMR 6285, University of South Brittany, France)
Mohammed Hamzaoui (Lab-STICC CNRS UMR 6285, University of South Brittany, France)
Nathalie Julien (Lab-STICC CNRS UMR 6285, University of South Brittany, France)
Using Cognitive Work Analysis to deploy collaborative digital twins: application to predictive maintenance
PRESENTER: Naomi Kamoise

ABSTRACT. The digital twin is a technology that expands manufacturers' possibilities thanks to its various applications, particularly predictive maintenance. In the context of Industry 4.0, technical agents such as digital twins tend towards greater autonomy. Indeed, one of their characteristics is a partial or total capacity to act on their physical counterpart. It is therefore necessary to consider the digital twin and the human agent as cooperating agents. From this perspective, decision loops between the digital twin and the human agent are necessary when deploying the digital twin. The JUPITER project aims to develop a digital twin for predictive maintenance of the SCAP platform, a manufacturing line. The objective of this paper is to show how Cognitive Work Analysis (CWA) can contribute to defining the allocation of decision-making activity between the digital twin and the human. Two CWA analyses were applied: the Work Domain Analysis (WDA) and the Social Organization and Cooperation Analysis with its Contextual Activity Template tool (SOCA-CAT). The functions identified by the WDA highlighted the allocation decision between the cooperating agents, and the SOCA-CAT contextualised this allocation. The allocation of decision-making activity was considered in terms of the level of automation between the digital twin and the human agent.

11:30
Elena Stefana (Department of Mechanical and Industrial Engineering, University of Brescia, Italy)
Daniele Ghidoni (Department of Mechanical and Industrial Engineering, University of Brescia, Italy)
Federico Fanizza (Department of Mechanical and Industrial Engineering, University of Brescia, Italy)
Filippo Marciano (Department of Mechanical and Industrial Engineering, University of Brescia, Italy)
Paola Cocca (Department of Mechanical and Industrial Engineering, University of Brescia, Italy)
Nicola Paltrinieri (Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Norway)
TOWARDS MACHINE LEARNING APPLICATION FOR SAFETY IN CONFINED SPACES: CREATING AN INCIDENT DATABASE
PRESENTER: Elena Stefana

ABSTRACT. In Occupational Safety and Health (OSH), a confined space is a space that (a) is large enough and so configured that an employee can bodily enter and perform assigned work, (b) has limited or restricted means of entry or exit, and (c) is not designed for continuous occupancy. It generally represents a high-risk area, where the tolerance for error is small and multiple hazardous conditions are frequently present. The fatalities and severe injuries occurring in such spaces are recorded in various separate national and international incident databases that are characterized by different structures and provide descriptions with varying levels of detail. This limits the possibility of applying advanced data-driven approaches based on Machine Learning (ML) techniques, which allow extracting knowledge from data, analyzing causal relationships in the scenarios, identifying the most relevant factors contributing to injury severity, and predicting the type and severity of the consequences. This limitation is confirmed by the scarcity of scientific contributions using ML techniques to assess and treat risks in confined spaces. Moreover, to the best of our knowledge, there are few indications on how to create a single incident database, suitable for ML applications, from different data sources in the OSH domain. In this context, this paper addresses the first steps of an ML-based approach for the analysis of confined space incidents: data collection and pre-processing, attribute selection, identification of features and labels, definition of a specific taxonomy, and creation of the ML database. Data were collected from some of the main and most up-to-date incident databases, including the websites and archives of the US Occupational Safety and Health Administration (OSHA), the US National Institute for Occupational Safety & Health (NIOSH), and the Italian National Institute for Insurance against Accidents at Work (INAIL). About 3000 incidents from 1985 to 2021 were selected using specific queries and a set of keywords, and were manually reviewed by the authors. These incidents were then critically analyzed and described in terms of event date, location, industry, confined space type, performed tasks, characteristics of the involved workers (e.g. age, gender, role), primary and secondary causes, number of fatalities, number of injured workers, and injury severity. Starting from the created database, possible ML algorithms able to support risk management in confined spaces will be discussed.
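
A minimal sketch of the harmonisation step might look as follows in Python with pandas. The field names and records are invented; the taxonomy in the paper is much richer.

```python
# Sketch (column names and records invented) of mapping heterogeneous OSHA/INAIL
# records onto one shared taxonomy so that features and a severity label exist
# for later ML work.
import pandas as pd

osha_rec = {"date": "2019-06-03", "space": "storage tank",
            "cause": "oxygen deficiency", "deaths": 1}
inail_rec = {"data": "2017-09-12", "tipo_spazio": "silo",
             "causa": "engulfment", "decessi": 2}   # Italian fields, translated values

def to_common_schema(rec: dict, source: str) -> dict:
    """Map a source record onto the shared taxonomy (simplified to four attributes)."""
    if source == "OSHA":
        return {"event_date": rec["date"], "confined_space_type": rec["space"],
                "primary_cause": rec["cause"], "fatalities": rec["deaths"]}
    if source == "INAIL":
        return {"event_date": rec["data"], "confined_space_type": rec["tipo_spazio"],
                "primary_cause": rec["causa"], "fatalities": rec["decessi"]}
    raise ValueError(f"unknown source: {source}")

df = pd.DataFrame([to_common_schema(osha_rec, "OSHA"),
                   to_common_schema(inail_rec, "INAIL")])
# Derive a label for supervised learning: here simply fatal vs non-fatal.
df["label_severity"] = (df["fatalities"] > 0).map({True: "fatal", False: "non-fatal"})
print(df)
```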

11:50
Alex Schouten (TU Dublin, Ireland)
Victor Hrymak (TU Dublin, Ireland)
Improving the reliability of visual inspections conducted by environmental health & safety professionals on large construction sites
PRESENTER: Alex Schouten

ABSTRACT. Introduction The European Union construction sector is crucially important from an economic perspective, but it remains a hazardous work environment. Recent data from Eurostat illustrate that in 2018, construction represented the largest sectoral cause of EU workplace fatalities, at over 21%. Since Framework Directive 89/391/EEC was introduced over thirty years ago, EU legislation has required a preventative philosophy to underpin the process of minimising accidents, injuries and ill health in workplaces. Central to achieving this preventative requirement is risk assessment, a process that requires hazards to be identified prior to their evaluation and control. A crucial component of risk assessment on construction sites is therefore the reliability of the visual inspections relied upon for hazard identification. However, the actual reliability with which safety professionals observe construction site hazards has been shown to vary, resulting in hazards not being identified and not being appropriately controlled. This has unfortunately resulted in fatal accidents and injuries on construction sites. Recent research has begun to investigate visual inspection performance and reliability by environmental health and safety professionals in terms of observing construction site hazards. This study presents early results on visual inspection reliability at a large European data centre under construction. The research question considered was whether a recently trialled, novel visual search method could improve on the reliability of the existing visual search methods used by environmental health and safety professionals on the construction site under analysis.

Methodology A large, 30-hectare data centre under construction in an EU member state was recruited to participate in this study. The construction site employed between 300 and 500 workers. At the time of the study, the site was at various stages of completion, with one unit operational, two units nearing completion, and two units used as general storage for site materials while awaiting further construction. The method used by the lead researcher to observe construction site hazards was systematic visual inspection. This behavioural visual search method consists of three key steps. The first is to break down each room or area under analysis into its constituent constructional elements: each individual wall, the ceiling and the floor. The second step is to iteratively select a particular element for individual observation. The third step is to observe the entirety of the selected element by applying a visual eye-scan pattern that begins at the top left corner of the element and tracks to the right until the next element is reached. Visual search is then redirected to the left-hand side of the element, underneath the area already observed, and the process continues until the element has been fully observed. A useful analogy is to describe systematic visual search as reading the element in the same way as one would read a page of a book: starting at the top of the page and moving left to right until the whole page has been read. In effect, a visual search overlay is imagined which guides an eye-scan pattern across the element. In this manner, systematic visual search ensures meticulous and exhaustive observation of the element under analysis, without missing any observable hazards present. The lead researcher conducted 45 systematic visual inspections on the construction site between September 2021 and January 2022. The researcher also had access to the construction site's safety data for the same period, including the results of visual inspections conducted by all environmental health and safety professionals employed on the site. In this manner, a comparison was made between the number of hazards identified by the lead researcher using the systematic visual inspection method and by the environmental health and safety professionals employed on the site.
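
Purely as an illustration, the three-step scan order described above can be written down as a small Python generator. The room decomposition and grid granularity are invented.

```python
# Toy rendering (structure invented) of the three-step systematic visual search:
# decompose a room into elements, select each element in turn, then scan it
# top-left to bottom-right like reading a page.

def scan_element(element_name: str, rows: int, cols: int):
    """Yield fixation points across one element in reading order."""
    for row in range(rows):              # from the top of the element downwards
        for col in range(cols):          # left to right within each band
            yield (element_name, row, col)

room_elements = ["wall N", "wall E", "wall S", "wall W", "ceiling", "floor"]  # step 1
for element in room_elements:            # step 2: iterate elements one at a time
    for fixation in scan_element(element, rows=3, cols=4):   # step 3: exhaustive scan
        pass  # in the field: observe and record any hazard at this fixation point
```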

Results The results were unequivocal in terms of visual inspection reliability. The mean number of observable hazards identified per inspection by the lead researcher using the systematic visual inspection method was 28.4 (SD = 6.10). In contrast, the mean number of observable hazards identified per inspection by site environmental health and safety professionals was 5.7 (SD = 6.50). The difference was highly statistically significant, with a large effect size as measured by Cohen's d.
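
As a rough cross-check, assuming equal group sizes and a pooled standard deviation (the paper's exact computation may differ), the reported means and SDs imply a very large effect size:

```python
# Back-of-envelope effect size from the reported summary statistics, assuming
# equal group sizes and a pooled SD (an assumption, not stated in the abstract).
mean_sys, sd_sys = 28.4, 6.10     # systematic visual inspection (lead researcher)
mean_ehs, sd_ehs = 5.7, 6.50      # site EHS professionals

pooled_sd = ((sd_sys**2 + sd_ehs**2) / 2) ** 0.5    # ≈ 6.30
cohens_d = (mean_sys - mean_ehs) / pooled_sd        # ≈ 3.6
print(f"Cohen's d ≈ {cohens_d:.1f}")  # well above the ~0.8 'large' convention
```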

Discussion Two key findings emerge from this study. The first is that missing observable hazards during construction site visual inspections occurs and occurs with regularity. This is in contrast with a widely held but erroneous assumption that current visual inspections have a level of intrinsic accuracy that can be relied on. The second finding is that the number of construction site hazards observed by environmental health and safety professionals can be increased by using this systematic visual inspection method as described above.

Conclusion The number of observable hazards identified on construction sites can be increased by using systematic visual inspection. Visual search reliability can therefore be improved, which will be of clear benefit to the 18 million workers currently employed on European construction sites. In addition, the systematic visual inspection method described above is easily learnt and should be considered for use by all environmental health and safety professionals.

12:10
Wouter Steijn (TNO, Netherlands)
Teun Sluijs (TNO, Netherlands)
Coen van Gulijk (TNO, Netherlands)
Dolf van der Beek (TNO, Netherlands)
SD-model for industrial human-robot interaction safety
PRESENTER: Wouter Steijn

ABSTRACT. Although the penetration of smart robotics on the work floor is still relatively low (at least in the Netherlands), there is a general expectation that it will increase in the near future, with more service robots being put into service in different industries. With this in mind, we published a taxonomy of relevant determinants of safe human-robot interaction in 2020. The current study builds upon this taxonomy to develop a System Dynamics (SD) model that could help with monitoring and analysing human-robot interaction (HRI). SD sees a system as a complex and dynamic whole, which in this case concerns the interplay between the human, the robot and their environment. The dynamics arise through interconnections that affect each other, forming multicausality over time.

The presented model uses literature that describes the relationships between relevant HRI factors and supports them with experimental data or a theoretical foundation. The starting point was the literature from our previous report, supplemented by using the reference lists and citations of those papers to identify additional ones.

The resulting SD model consists of 25 nodes and 40 relationships. The nodes comprise human factors, organizational factors, technological factors, and four occupational safety and health outcome factors.

The presented model is based on scientific evidence gathered from existing literature. By combining evidence for individual links between factors relevant to optimizing the safety, efficiency, and sustainability of human-robot interaction, a complex and elaborate network of nodes and relationships is formed. The resulting SD model could potentially be used to monitor and analyze existing human-robot applications in industrial settings. By including estimates of the strength of each relationship, the effects of, for example, additional training or an increased work speed can be simulated on the outcome factors of interest: safety, efficiency, and sustainability. Future work will focus on validating the model by means of a Delphi panel. Additionally, once the Delphi panel reaches (some degree of) consensus on the final model, standardized measurement instruments will be identified for each factor to gather data on various robot applications in real-world settings and further validate and substantiate the SD model.
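
To show what "simulating the effect of additional training" could mean mechanically, here is a deliberately tiny SD-style sketch. The nodes, links and strengths are invented and bear no relation to the 25-node model's actual content.

```python
# Invented four-node, linearised SD fragment: nudge one exogenous factor
# (training) and propagate effects along weighted links to a safety outcome.
links = {                     # (source, target): assumed signed link strength
    ("training", "operator_skill"): 0.8,
    ("work_speed", "operator_skill"): -0.3,
    ("operator_skill", "safe_interaction"): 0.6,
    ("work_speed", "safe_interaction"): -0.4,
}
exogenous = {"training", "work_speed"}   # set by the scenario, never updated

def propagate(levels: dict, steps: int = 50, dt: float = 0.1) -> dict:
    """Euler-style update: each endogenous node relaxes toward its weighted inflow."""
    nodes = {n for pair in links for n in pair}
    levels = {n: levels.get(n, 0.0) for n in nodes}
    for _ in range(steps):
        inflow = {n: 0.0 for n in nodes}
        for (src, dst), w in links.items():
            inflow[dst] += w * levels[src]
        for n in nodes - exogenous:
            levels[n] += dt * (inflow[n] - levels[n])
    return levels

baseline = propagate({"training": 0.5, "work_speed": 0.5})
trained = propagate({"training": 0.9, "work_speed": 0.5})
print(f"safe_interaction: {baseline['safe_interaction']:.2f} -> "
      f"{trained['safe_interaction']:.2f} with extra training")
```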

12:30
Marianna Madonna (INAIL, Italy)
Luigi Monica (INAIL, Italy)
Sara Anastasi (INAIL, Italy)
Human-robot interaction in the perspective of the new Regulation on machinery products
PRESENTER: Marianna Madonna

ABSTRACT. The human-machine interface is traditionally the critical element in the design of machines, as it must take into account anthropometric measures, principles of cognitive ergonomics, and aspects of stress and fatigue. This design complexity has increased with the technological evolution brought by Industry 4.0, which has led to direct human-machine collaboration, as with collaborative robots designed to share tasks and workplaces with humans. The advent of collaborative robots allows new modes of interaction between operator and robot: they cooperate in executing numerous complex and high-precision tasks, improving physical ergonomics for the operator, who is replaced by the robot in heavy and repetitive operations. However, these technological advances in robotics require the robot to move from isolated environments to dynamic, barrier-free ones in order to cooperate with humans in carrying out complex tasks. As a result, the current machinery safety legislation, Directive 2006/42/EC, contains a number of gaps regarding the safety risks arising from this new interaction, and in general from Industry 4.0 enabling technologies, that need to be addressed. Thus, a new Regulation on machinery products was proposed in April 2021, which will also take into account risks of contact and the psychological stress that may be caused by interaction with machines designed to operate with varying levels of autonomy. In this scenario, ergonomics has also undergone an evolution: while it was traditionally concerned with physical and cognitive aspects in a physical world, it now turns to robotic, intelligent and autonomous systems and to the digital world. In this paper, the new issues of human-robot collaboration are considered, and the way in which Human-Centered Design (HCD) aims to optimize human well-being and the overall performance of the human-robot system is outlined. The paper also shows how the ergonomic requirements of the new Regulation on machinery products can be met in the design of collaborative robots.

11:10-12:50 Session 10K: Nuclear Industry: Safety Analyses
Chair:
Marko Cepin (University of Ljubljana, Slovenia)
Location: CQ-010
11:10
Jan Jiroušek (State Office for Nuclear Safety, Czechia)
Dana Procházková (Czech Technical University in Prague, Czechia)
IDENTIFICATION, ANALYSIS AND MANAGEMENT OF RISKS DURING THE REFILLING OF THE TEMELÍN NUCLEAR POWER PLANT WITH COOLING WATER
PRESENTER: Jan Jiroušek

ABSTRACT. Nuclear power plants belong to both national and European critical infrastructure, and high demands are placed on them in terms of both integral and nuclear safety. These requirements can be met only if the technical equipment is designed, manufactured and maintained in accordance with international standards and national legislation. The operator of a nuclear installation must have in place a management system based on a high safety culture. The safety of a nuclear power plant is technically ensured by: an emergency shutdown system (e.g. at an unexpected increase in the temperature of the primary circuit, the absorption clusters automatically drop, absorbing the neutrons in the reactor core and thereby stopping the fission reaction); emergency cooling systems; emergency power supply for pumps; and radiation protection systems. Because sources of external and internal risk change over time, it is necessary to consider even low-probability risk scenarios. Handling critical situations requires a quick and effective response, i.e. appropriate technical equipment, clear procedures and their management, qualified personnel, and clearly distributed responsibilities. Analysis of data on the operation of nuclear power plants showed that one of the problems that creates critical situations is a long-term power blackout. The article identifies the risks associated with the existing way of refilling cooling water for the cooling-down of a unit at the Temelín nuclear power plant during a long-term blackout. It analyzes the operating conditions and identifies critical points of the cooling water supply process, i.e. problems of the critical safety function of the external infrastructure. The paper lists the prepared unit cooling scenarios and their risks. It mainly deals with the extreme scenario based on the application of the feed & bleed method described in paper [1] and identifies critical points of the entire process at which cooling could be disrupted. A risk management plan for the critical points in question has been prepared. For preventive purposes, the paper proposes measures on external infrastructure that increase the autonomy of the power plant and allow maximum prolongation of the cooling-down period, thereby minimizing the impacts on both the power plant and its surroundings.

11:30
Jung Sung Kang (Ulsan National Institute of Science and Technology, South Korea)
Seung Jun Lee (UNIST, South Korea)
A framework of safety margin simulation for optimized emergency operation in nuclear power plants
PRESENTER: Jung Sung Kang

ABSTRACT. When an emergency occurs in a nuclear power plant, the plant is tripped and operators enter the emergency operation mode. Unlike in normal and abnormal operation, the operators are under time pressure and stress. Considering the consequences of a nuclear power plant accident, it is very important to prevent human error. As a strategy to increase human reliability, the concept of the emergency operation procedure (EOP) has been adopted. EOPs are divided into paper-based and computer-based procedures. Paper-based procedures, having been developed from analyses performed in advance, may not accurately reflect the current plant status; final decisions are therefore made based on the operator's knowledge of the current plant situation. Since computer-based procedures are also developed from the paper-based ones, they have similar characteristics: real-time plant information is used only at a simple level, such as valve open/close status, while the operator identifies the equipment status recorded in the procedure and analyzes the accident situation. By analogy, a paper-based procedure is a paper map and a computer-based procedure is a digital map. Taking a step further, it is possible to expand to the concept of navigation, which identifies the current location using GPS and guides the optimal route. Likewise, the procedure can be developed into an intelligent procedure that checks the current state and suggests the optimal operation to the operator. Two key methodologies are needed for this. The first is a quantitative evaluation method that can assess which operation is better. The second is a prediction method that forecasts the future state by combining the tasks to be performed with the current state of the plant. This paper proposes a safety margin concept, defined as an area (the product of the target-variable residual value and the emergency operation execution time), for this optimal operation. Initial event scenarios are set, and the major tasks in each scenario are analyzed and grouped. The optimal operation is then derived by calculating the safety margin, and optimal-operation case studies based on this definition are conducted.
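
One possible formalisation of this area-type safety margin (the notation is ours, not necessarily the authors'): with a safety limit x_lim on the target variable, a predicted trajectory x(t), and an emergency-operation execution window of length T starting at t_0,

```latex
\[
  \mathrm{SM}
  \;=\; \int_{t_0}^{t_0+T} \bigl( x_{\mathrm{lim}} - x(t) \bigr)\, \mathrm{d}t
  \;\approx\; \underbrace{\bigl( x_{\mathrm{lim}} - \bar{x} \bigr)}_{\text{residual value}}
  \times \underbrace{T}_{\text{execution time}} ,
\]
```

with the optimal operation being the task sequence that maximises SM, i.e. keeps the target variable as far from its limit as possible for as long as possible.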

11:50
Michal Cihlář (Czech Technical University in Prague, Czechia)
Dana Procházková (Czech Technical University in Prague, Czechia)
Pavel Zácha (Czech Technical University in Prague, Czechia)
Jan Prehradný (Research Centre Řež, Czechia)
Václav Dostál (Czech Technical University in Prague, Czechia)
Martin Mareček (Research Centre Řež, Czechia)
Reliability of Molten Salt Static Corrosion Tests
PRESENTER: Michal Cihlář

ABSTRACT. Generally, salts are chemical compounds of acids and alkalis. In nature, they occur as crystalline substances with the following properties: very strong ionic bonds among the particles; high boiling and melting points; they conduct an electric current in solution or as a melt, but not in the solid state; and they can be separated from solution by crystallization. The uses of molten salts are very broad. This paper focuses on specific molten liquid salts that have found applications in nuclear reactor cooling, solar power plants, and energy storage systems. The disadvantages of molten salts are: high melting temperatures (so they can, for example, freeze inside pipelines); considerable aggressiveness towards structural materials; and cost. In selecting the best possible liquid salt for Generation IV nuclear reactor cooling, the following requirements apply: high boiling point; low vapor pressure; high specific heat; high thermal conductivity; high density at low pressure; low viscosity; low neutron absorption cross-section (for primary circuit media); and low corrosion aggressivity towards structural materials. The high corrosion aggressivity of molten salts must be addressed before commercial deployment is possible, as it severely reduces the safety of structural components and thus the nuclear and integral safety of the installation. Corrosion of structural materials is a significant problem in many environments. In most cases, resistance to corrosion is achieved by a passivating oxide layer on the material's surface; for molten salts, this method cannot be used because the layer is not stable [1]. For this reason, it is essential to investigate the effect of molten salts on structural materials, i.e., the vulnerability of materials in molten salt environments, the impacts of corrosion on structural materials caused by molten salts, and the changes over time of various structural materials in this highly aggressive environment. Commonly, this research is conducted in several stages: static corrosion tests first, followed by dynamic tests that better simulate the actual environment. Corrosion tests of structural materials in liquid salts are demanding in terms of finance, time, and organization, so it is not feasible to perform a large number of experiments with hundreds or thousands of samples. These limits place high demands on the repeatability and reliability of experimental results. In the present paper, an analysis of the whole process of static corrosion tests is made, from the preparation and conduct of the experiment to the cleaning and analysis of the tested samples. Each part of the process is divided into individual steps, which are assessed in terms of safety, reliability, and repeatability of the obtained results. Based on this assessment, critical steps are identified and, depending on technical and financial possibilities, technical or organizational changes are recommended to maximize the safety, reliability, and repeatability of the results. Examples of the results of corrosion monitoring experiments under different technical and organizational conditions, such as the length of the experiment and the molten salt temperature, are given in the paper.
The results are obtained in different liquid salt environments (e.g., NaF-NaBF4, LiF-BeF2) and with different structural materials, from stainless steels to nickel alloys to molybdenum-nickel superalloys. Their evaluation suggests that molten salt corrosion experiments are a complex problem in which the combination of conditions under which corrosion occurs plays an immense role. Therefore, for a comprehensive investigation, we are currently building an experimental molten salt loop in which the dynamic corrosion effects of liquid salts on various structural materials can be investigated. [1] Young, D. J., High Temperature Oxidation and Corrosion of Metals, Oxford: Elsevier Science, 2008.

12:10
Ediany Araújo (Department of Nuclear Engineering, Polytechnic School, Federal University of Rio de Janeiro, Brazil)
Maximiano Martins (Graduate Program of Nuclear Engineering, COPPE, Federal University of Rio de Janeiro, Brazil)
Danielle Teixeira (COPPE, Graduate Program of Nuclear Engineering, Federal University of Rio de Janeiro, Brazil)
Paulo Fernando Frutuoso E Melo (Graduate Program of Nuclear Engineering, COPPE, Federal University of Rio de Janeiro, Brazil)
A Monte Carlo Uncertainty Propagation on the Accident Rate of a Plant Equipped with a Single Protection Channel Considering Truncated Probability Distributions

ABSTRACT. In many plants, the demand rate on protective channels may be high, so it is important to analyze the plant accident rate as a function of the plant demand rate [1-2]. To fix ideas, we consider a plant equipped with a single protection channel whose demand, failure, and repair times follow exponential distributions, that is, the equipment is in its useful life [1]. However, we may not know the parameters of these exponential distributions precisely, owing to variability in these parameters; it is common that only parameter ranges are available, giving a lower and an upper value for the parameter at hand [3]. This possible lack of plant-specific data may give rise to plant-to-plant data variability, because one typically uses data from similar plants, which might be subject to Bayesian updating when empirical evidence becomes available [3]. In this sense, the available information may be translated into probability distributions, and it may be necessary to perform an uncertainty analysis on the plant accident rate in order to assess the impact of the lack of knowledge of the parameters of the exponential distributions used. On the other hand, we may not know what probability distribution should be used to model this lack of knowledge, which is why goodness-of-fit tests are useful for determining what distribution to use in each case [4]. Initially, we consider a lognormal distribution to model the uncertainty on each parameter used for the analysis of the plant accident rate. The reason for this choice is that demand, failure and repair rates may generally be written as m × 10^n, where m is a real number with 0 < m < 10 and n is an integer. In this sense, n may be considered to follow a normal distribution, so the rate itself may be considered to follow a lognormal distribution [5]. However, it is not reasonable to allow, for example, a failure rate to assume values from zero up to infinity (the domain of a lognormally distributed random variable), so it is necessary to consider truncated versions of the probability distributions used for modelling the variability of the exponential distribution parameters [6]. We discuss this topic, present the truncated lognormal distribution and its properties, and perform the uncertainty analysis by means of the Monte Carlo method, using the inverse transform method [7] to generate random numbers for the distributions used. The use of truncated distributions imposes some considerations (related to the new mean and standard deviation of the chosen distribution) when performing the Monte Carlo uncertainty propagation, and these are discussed in the article. We present and discuss the results in terms of a histogram. The spread of the results is highlighted, and considerations on the mean accident rate and its standard deviation are presented.

References
1. Oliveira, L. F. & Amaral Netto, J. D., 1987. Influence of the demand rate and repair rate on the reliability of a single-channel protective system, Reliability Engineering, 17, pp. 267-276.
2. Oliveira, L. F., Youngblood, R. & Frutuoso e Melo, P. F., 1990. Hazard rate of a plant equipped with a two-channel protective system subject to a high demand rate, Reliability Engineering and System Safety, 28, pp. 35-58.
3. Modarres, M., 2006. Risk analysis in engineering: techniques, tools, and trends, Taylor & Francis, Boca Raton, FL.
4. Soong, T. T., 2004. Fundamentals of probability and statistics for engineers, Wiley, Chichester, UK.
5. WASH-1400, 1975. Reactor Safety Study: an assessment of accident risks in U.S. commercial nuclear power plants, Report NUREG-75/014 (WASH-1400), Nuclear Regulatory Commission, Washington, DC.
6. Zaninetti, L., 2017. A left and right truncated lognormal distribution for the stars, Advances in Astrophysics, Vol. 2, No. 3, pp. 197-213.
7. Ross, S., 2012. Simulation, Academic Press, San Diego.
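
A minimal sketch of the sampling-and-propagation mechanics follows (standard library only). The parameters, truncation bounds and the final accident-rate expression are placeholders, not the single-channel model of reference [1] above.

```python
# Inverse-transform sampling from a truncated lognormal, propagated by Monte Carlo.
import math
import random

def truncated_lognormal(mu: float, sigma: float, lo: float, hi: float) -> float:
    """Inverse-transform sample of a lognormal restricted to [lo, hi]."""
    def F(x: float) -> float:   # lognormal CDF via the error function
        return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))
    u = F(lo) + random.random() * (F(hi) - F(lo))   # uniform on [F(lo), F(hi)]
    a, b = lo, hi                                   # invert F numerically by bisection
    for _ in range(60):
        m = 0.5 * (a + b)
        if F(m) < u:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

random.seed(1)
samples = []
for _ in range(10_000):
    demand = truncated_lognormal(math.log(1e-1), 0.5, 1e-2, 1.0)    # demands per hour
    failure = truncated_lognormal(math.log(1e-4), 0.7, 1e-5, 1e-3)  # failures per hour
    repair = truncated_lognormal(math.log(1e-1), 0.5, 1e-2, 1.0)    # repairs per hour
    unavailability = failure / (failure + repair)   # channel fractional dead time
    samples.append(demand * unavailability)         # placeholder accident-rate formula
samples.sort()
mean = sum(samples) / len(samples)
print(f"mean accident rate ≈ {mean:.3e}, 95th percentile ≈ {samples[9500]:.3e}")
```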

12:30
José Ordóñez Ródenas (Departamento de Ingeniería Química y Nuclear, Grupo MEDASEGI, Universitat Politècnica de València, Spain)
Jose Felipe Villanueva López (Departamento de Ingeniería Química y Nuclear, Grupo MEDASEGI, Universitat Politècnica de València, Spain)
Sofía Carlos Alberola (Departamento de Ingeniería Química y Nuclear, Grupo MEDASEGI, Universitat Politècnica de València, Spain)
Sebastián Martorell Alsina (Departamento de Ingeniería Química y Nuclear, Grupo MEDASEGI, Universitat Politècnica de València, Spain)
Isabel Martón Lluch (Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Universitat Politècnica de València, Spain)
Ana Isabel Sánchez Galdón (Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Universitat Politècnica de València, Spain)
Rafael Mendizabal Sanz (Consejo de Seguridad Nuclear (CSN), Spain)
Javier Ramón Camarma (Consejo de Seguridad Nuclear (CSN), Spain)
Study of the cooling conditions through the secondary of a steam generator using a mobile pump in an extended SBO

ABSTRACT. The MEDASEGI group of the Universitat Politècnica de València, together with the Consejo de Seguridad Nuclear of Spain, is developing and validating deterministic analysis methodologies for design extension conditions following the requirements established by the IAEA SSG-2 guide. This guide refers to DEC-A scenarios (Design Extension Conditions without severe damage to the nuclear fuel), which are on the limit of severe accidents but for which there is still a safety margin to act through adequate emergency operating procedures. The work presented in this paper focuses on the behavior of a typical three-loop PWR nuclear power plant in the event of an extended SBO (station blackout): loss of the external power supply together with unavailability of the emergency diesel generators and no recovery of external power. This scenario is simulated using the thermal-hydraulic code TRACE to study the evolution of the accident and the feasibility of a compensatory measure to prevent core damage and lead the plant to a safe state. Cooling through the secondary side of a steam generator using a mobile pump is considered as the compensatory measure. One of the keys to ensuring core integrity at the beginning of the transient is the auxiliary feedwater system (AFWS) on the secondary side, in particular the availability of one turbo-pump, to ensure the capability of the secondary side to act as a heat sink. This system ceases to operate when its associated battery is depleted. At that instant or before, a mobile pump system could come into operation to prevent the steam generator from emptying, extending the time available to recover external electrical power. However, operation of the mobile pump requires prior depressurization of the secondary side of at least one steam generator. The instant at which the secondary is depressurized is essential to prevent the primary pressure from increasing to the point of opening the relief valves, with the consequent loss of inventory. The depressurization level sets both the new secondary and primary operating pressures. Moreover, the start-up of the mobile pump system and its correct operation also depend on the conditions of the secondary when it is required, so the depressurization level must also be chosen considering the operating parameters of this system. In this work, therefore, the time at which depressurization of the secondary side of the steam generator starts and the depressurization conditions prior to the operation of the mobile pump are considered. A sensitivity analysis is carried out to conclude on the time available to start such a measure and the corresponding cooling conditions required to keep the core under control during the different phases of the transient.

14:00-15:20 Session 11A: Asset management
Chair:
Christophe Berenguer (Grenoble Institute of Technology, France)
Location: LG-22
14:00
Theo Cousino (RICE GRTgaz, France)
Florent Brissaud (RICE GRTgaz, France)
Leïla Marle (RICE GRTgaz, France)
Laurent Doyen (Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France)
Olivier Gaudouin (Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France)
Estimation of industrial assets ageing and maintenance efficiency with interval censored data
PRESENTER: Theo Cousino

ABSTRACT. 1. Introduction GRTgaz owns and operates the longest high-pressure natural gas transmission network in France. Its industrial assets include more than 32,500 km of pipeline, 26 compressor stations and about 5,000 pressure reduction units. The R&D center of GRTgaz (RICE) is developing tools to model these assets in order to optimize their management, particularly in terms of maintenance policies. These tools are based on reliability distributions that account for equipment aging and on maintenance models [1].

Intrinsic aging is modelled using probability distributions that assume maintenance-free operation. Maintenance effects lie between "perfect" and "no effect"; this is known as "imperfect maintenance". Many imperfect maintenance models exist in the literature; among them, GRTgaz selected the ARA virtual age models as relevant for its industrial applications [2, 3, 4].

Equipment reliability estimation is based on statistical methods for analyzing failure and maintenance data. However, these data are not always precisely known. For example, instead of observing a failure time with certainty, the only information available is that it occurred between two known dates (e.g. inspection dates). This is called interval censoring. The treatment of censored data is classic when the data are realizations of independent random variables and follow the same distribution [5]. It is less so for random point processes, and especially for virtual age models.

2. Methods and results A first experiment was carried out: the censored dates were repositioned fictitiously (according to two different rules) and, using simulated data, the loss of estimation quality when fictitious dates are used instead of real dates was assessed. The simulation results suggest that the amount of information present in the interval-censored data should make it possible to obtain good-quality estimates. Moreover, placing the censored dates randomly within the interval in which they are defined (first rule) yields estimates very close to those obtained without censoring. In our case of interval censoring, the fact that failures are not observed immediately delays the induced corrective maintenance action until the next preventive maintenance date. We therefore defined new models consistent with these assumptions and ran a new experiment, with estimation quality similar to the previous one.
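
A compact sketch of this simulation setting follows. All parameters are invented; the baseline here is Weibull and the repair effect is applied multiplicatively to the whole virtual age, in the spirit of the ARA models of [2, 4]. Estimation itself is omitted.

```python
# Simulate imperfect-repair failure times, then apply the experiment's first rule:
# draw a fictitious date uniformly inside each inspection interval.
import math
import random

BETA, ETA, RHO = 2.5, 10.0, 0.6   # invented Weibull ageing and maintenance efficiency
random.seed(7)

def time_to_next_failure(virtual_age: float) -> float:
    """Weibull inversion: calendar time to next failure given the current virtual age."""
    e = -math.log(random.random())   # Exp(1) increment of the cumulative hazard
    v_next = ETA * ((virtual_age / ETA) ** BETA + e) ** (1.0 / BETA)
    return v_next - virtual_age

# True failure dates under ARA-style imperfect repairs: V -> (1 - RHO) * V.
t, v, true_failures = 0.0, 0.0, []
while True:
    dt = time_to_next_failure(v)
    if t + dt >= 50.0:
        break
    t, v = t + dt, v + dt
    true_failures.append(t)
    v *= (1.0 - RHO)

inspections = [5.0 * k for k in range(11)]   # failures only observed at inspections

def reposition(failure_time: float) -> float:
    """First rule: place the censored date uniformly in its inspection interval."""
    lo = max(i for i in inspections if i <= failure_time)
    hi = min(i for i in inspections if i > failure_time)
    return lo + random.random() * (hi - lo)

fictitious = [reposition(f) for f in true_failures]
print(len(fictitious), "censored failure dates repositioned")
```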

3. Conclusion For simulated data, we were able to obtain correct parameter estimates in the presence of interval censoring if a large amount of data is provided. For real data, an additional problem will be the choice of the maintenance model.

4. References
[1] B.H. Lindqvist, "On the statistical modeling and analysis of repairable systems", Statistical Science, 21 (4), 532-551, 2006.
[2] L. Doyen, O. Gaudoin, "Classes of imperfect repair models based on reduction of failure intensity or virtual age", Reliability Engineering and System Safety, 84 (1), 45-56, 2004.
[3] L. Doyen, O. Gaudoin, "Imperfect maintenance in a generalized competing risks framework", Journal of Applied Probability, 43 (3), 825-839, 2006.
[4] L. Doyen, O. Gaudoin, "Modelling and assessment of aging and efficiency of corrective and planned preventive maintenance", IEEE Transactions on Reliability, 60 (4), 759-769, 2011.
[5] J.P. Klein, M.L. Moeschberger, "Survival analysis: techniques for censored and truncated data", Springer, 2003.

14:20
Alexander Bakker (Delft University of Technology / Rijkswaterstaat, Netherlands)
Leslie Mooyaart (Delft University of Technology, Netherlands)
Evert Jan Hamerslag (Rijkswaterstaat, Netherlands)
Laurie van Gijzen (Rijkswaterstaat, Netherlands)
Probabilistic maintenance and asset management of storm surge barriers under rapidly changing circumstances
PRESENTER: Alexander Bakker

ABSTRACT. The Netherlands has been applying probabilistic maintenance and operations to its storm surge barriers for more than 15 years. This risk-based approach enables the asset manager to continuously maintain the required performance level while optimizing maintenance costs. A probabilistic reliability analysis (PRA) supports continuous monitoring of the actual performance level and the search for improvements where necessary. The PRA is a highly detailed analysis that uses fault-tree and event-tree-like techniques to address all relevant risks that might threaten the structure's performance. The high level of detail enables efficient assessment of the consequences of (temporary) changes such as longer repair times, higher failure rates or a smaller operational team.
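
The fault-tree arithmetic underlying such a PRA can be shown in miniature. The structure and probabilities below are invented; real barrier PRAs contain far more basic events and model dependencies explicitly.

```python
# Toy fault-tree fragment for "barrier fails to close on demand",
# assuming independent basic events (a strong simplification).
def and_gate(*probs):   # redundancy: all components must fail
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):    # any single cause is sufficient
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

p_fail_on_demand = or_gate(
    1e-4,                      # control system fails to issue the close command
    and_gate(5e-3, 5e-3),      # both redundant drive trains fail
    2e-4,                      # operating team fails to act in time
)
print(f"P(barrier fails to close on demand) ≈ {p_fail_on_demand:.2e}")

# A temporary change (e.g. one drive train out for maintenance) is assessed by
# replacing and_gate(5e-3, 5e-3) with 5e-3 and re-evaluating the top event.
```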

As a consequence of sea level rise, socio-economic changes and aging, discrepancies between desired and actual performance occur more frequently and grow faster than before. To anticipate these discrepancies, more rigorous system adjustments are needed. Yet the high level of detail that has proven so useful for probabilistic maintenance also reduces the transparency, accessibility and adjustability of the risk models, which complicates the efficient exploration of large system adjustments.

In this presentation, we analyse how the high level of detail reduces usability and explore how the models might be adjusted to minimize this effect.

14:40
Tomás Grubessich (Universidad Técnica Federico Santa María, Chile)
Raúl Stegmaier (Universidad Técnica Federico Santa María, Chile)
Pablo Viveros (UTFSM, Chile)
Fredy Kristjanpoller (UTFSM, Chile)
A general framework proposal for modelling complex production systems considering maintenance and operative factors under uncertainty. Case study in the mining industry

ABSTRACT. The modelling of complex systems is a powerful tool for analysing systems, understanding behaviours and/or projecting a system under different scenarios. This is essential for making better decisions on issues such as continuous improvement, new investments and the evaluation of operational changes, among many others. However, the potential of this tool is realised only when the model can effectively represent the real behaviour of the system, meaning that the model results are coherent and consistent with the real system under similar conditions. In this sense, factors arise that affect the quality of the model and cause the projected results not to be achieved when the changes are executed.

Some of these factors relate to the complexity of the system under study and the uncertainty that arises. This complexity is typical of real systems, where many assets interact in multiple ways and uncertainty always manifests itself. On the complexity side, there may be conditions that are difficult to represent, related to the type of input (some of whose characteristics could influence the operation of the system), or multiple operating alternatives involving many assets that interact in different ways in pursuit of a higher level of performance. Another factor in complex systems is the interaction of different areas, each of which makes decisions seeking to maximize its own indicators. The presence of stockpiles can create greater independence between areas while hiding sources of inefficiency in the system. To face the above, models are needed that represent the system as closely as possible to reality. These models range from very simple to very complex, with differing closeness to reality, which is related to the time and cost required to create them.

Considering the above, this paper presents a general framework proposal for modelling complex production systems in which different areas interact and uncertainty manifests in various ways. The proposal guides the process of creating the model, seeking to fulfil the objectives established by the project while using the available resources efficiently. Maintenance and operations perspectives are considered along with the strategic vision of the business. The proposal is developed under a project scheme, defining objectives, scope and assigned resources, among other elements.

On the other hand, the proposal is based on other methodologies for the identification and generation of valuable knowledge for the purposes of the project and other studies related to the generation of indicators. All of the above will be of great help when deciding the best strategies for creating the model.

Finally, all of the above is applied in a case study from the Chilean mining industry.

15:00
Ian Els (University of Pretoria, South Africa)
Jacobus Krige Visser (University of Pretoria, South Africa)
Application of industry standards and management commitment to asset management in a typical petrochemical company

ABSTRACT. Over the last few decades, the role of maintenance has evolved significantly from merely fixing broken assets to operating an engineered system that aims to improve asset integrity and reliability. The asset management knowledge area considers the whole life cycle of assets, including how they are acquired, operated, maintained, and divested, as well as the total life cycle cost in decision making. Several standards have been developed over the last decade to assist and guide industries in implementing asset management systems aimed at improving asset reliability, overall equipment integrity and safety, as well as reducing the maintenance and operational costs of manufacturing companies. The International Organization for Standardization (ISO) 5500x suite of asset management standards is a collaborative effort to standardise several different asset management standards and guidelines. Organisations like the Global Forum on Maintenance and Asset Management (GFMAM) have further developed frameworks that can assist industries in improving their asset management and maintenance management systems. Several factors are important when managing an asset portfolio, including the maintenance strategies and tactics applied as well as operational parameters. A balance is required between production volume and asset integrity, especially for continuous production facilities like those found in the petrochemical industry. This study focused on a facility in the petrochemical industry with a good safety and financial record. The company has several world-class systems and technologies implemented as part of its operational and maintenance processes. However, occasional disruptions to the production process are experienced due to various failures of instrumentation, electrical or mechanical equipment. These disruptions lead to unplanned downtime and production losses. The objective of this study was to identify potential causes of unplanned breakdowns and to investigate whether the asset management strategy and systems implemented lack essential components which, if corrected, could improve equipment reliability and assist in preventing unplanned interruptions. The investigation aimed to establish whether the asset management system implemented is adequate and whether competent and experienced personnel diligently apply the associated strategies. The study evaluated the company's asset management and maintenance standards and compared them to the latest international standards. Data were collected by means of a survey. The population comprised personnel of the organisation involved in or contributing to asset management. Due to the nature of the research topic, a non-probability, purposive sampling strategy was used. The sample consisted of responses received from a targeted population within specified role categories across two major business units of the company. About 1300 employees were identified as possible respondents, and 102 persons from operations, maintenance, engineering, and management completed the questionnaire. The results indicated that a very thorough asset management system, on par with the latest international standards, has been implemented. However, some areas were identified where the asset management framework is not diligently implemented or applied. A further insight revealed during the study concerns the concept of asset management maturity.
It was found that areas with a mature implementation of the asset management framework achieved better overall asset performance, and that a collaborative working approach existed between their departments and disciplines. It was also noted that a consistent and dedicated management approach is required to elevate the maturity of all areas to an acceptable level. Further research is required to quantify and evaluate the asset management maturity levels of different departments. The study found that asset performance, as judged by the respondents, varied, with several probable causes. Highly unreliable areas may be due to inadequately designed or incorrect equipment being installed, or to aged equipment requiring capital investment and engineering solutions to resolve and improve. Based on the data gathered, it is evident that there are different goals in different disciplines, and even between different hierarchies of the same discipline. Such differing goals could lead to friction and a lack of collaboration among the workforce. A strong and committed leadership team is therefore required to ensure that all departments are aligned in their goals and contribute to the common vision and mission of the company.

14:00-15:20 Session 11B: S.16: Risk and resilience analysis for the low-carbon energy transition
Chair:
Giovanni Sansavini (ETH Zurich, Switzerland)
Location: CQ-006
14:00
Peter Burgherr (Paul Scherrer Institut (PSI), Switzerland)
Eleftherios Siskos (Paul Scherrer Institut (PSI), Switzerland)
Rebecca Lordan-Perret (University of Basel, Switzerland)
Matteo Spada (Zurich University of Applied Sciences (ZHAW), Switzerland)
Christopher Mutel (Paul Scherrer Institut (PSI), Switzerland)
A Multi-Criteria Decision Model for the Assessment of Sustainability and Governance Risks of Tailings Dams
PRESENTER: Peter Burgherr

ABSTRACT. In January 2019, the Brumadinho tailings dam collapse in Brazil resulted in 270 deaths and extensive damage to forest, agricultural and river ecosystems. Although it was the most catastrophic event of the past few years, smaller-scale accidents at tailings dams occur rather regularly. Until recently, publicly available information and data on the characteristics of tailings dams, their corresponding risk management practices, and the records of historical disasters were limited and distributed among multiple sources. However, the Brumadinho disaster triggered numerous proposals and activities to increase transparency related to risks and, more generally, sustainability. This push for transparency has included increasing pressure from major institutional investors to ensure that their portfolio assets comply with environmental, social and governance criteria. What is lacking is a coherent aggregation and analysis of these transparency measures.

Against this background, the current study seeks to create a global sustainability comparison of tailings dams at a country level. For this purpose, harmonized data from multiple input sources are combined with an iterative, multi-stage process to support evidence-based decision-making. Specifically, a comprehensive set of criteria and indicators is developed that conceptually adheres to the framework of the Sustainable Development Goals (SDG) and includes, among others, the impact on the environment, accident risks, socio-political and governance aspects. The environmental indicators stem from the life-cycle inventory database ecoinvent (e.g. toxic waste, land use and biodiversity), while indicators on accident risks cover the expected fatality and spill rates, as well as the worst-case scenario consequences. The socio-political and governance indicators are derived from national statistical authorities and trusted international organizations (e.g. “indigenous peoples land” from the International Labour Organization, “political stability” and “ease of doing business” from the World Bank).

The sustainability of tailings dams across countries is assessed with the aid of a dedicated Multi-Criteria Decision Analysis (MCDA) framework based on an outranking sorting approach (ELECTRE-TRI). First, the sustainability categories are defined and the corresponding indicators quantified. Next, the preferential information, such as thresholds, criteria weights and vetoes, is elicited from domain experts. Finally, the sorting results, together with the intermediate degrees of credibility, are calculated and visualized. The clustering of the countries in the sustainability classes is assessed and analyzed in the form of a benchmark, leading to the main conclusions and specific recommendations.
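
For readers unfamiliar with outranking-based sorting, the following heavily simplified Python sketch conveys the mechanics of an ELECTRE-TRI-style pessimistic assignment. The criteria, weights, thresholds and profiles are invented, and discordance is reduced to a single hard veto.

```python
# Simplified ELECTRE-TRI-style sorting: compare a country against class profiles
# and assign it to the highest class whose lower profile it outranks.
criteria = ["accident_risk", "governance", "environment"]  # scaled so higher = better
weights = {"accident_risk": 0.4, "governance": 0.35, "environment": 0.25}
Q, P, VETO = 0.05, 0.15, 0.5      # indifference / preference / veto thresholds
LAMBDA = 0.75                     # credibility cut level
profiles = {"good_vs_medium": 0.7, "medium_vs_poor": 0.4}  # same value on each criterion

def partial_concordance(g_a: float, g_b: float) -> float:
    """Degree to which 'a is at least as good as b' holds on one criterion."""
    d = g_b - g_a
    if d <= Q:
        return 1.0
    if d >= P:
        return 0.0
    return (P - d) / (P - Q)      # linear interpolation between the thresholds

def outranks(country: dict, profile: float) -> bool:
    if any(profile - country[c] >= VETO for c in criteria):  # veto blocks outranking
        return False
    c_global = sum(weights[c] * partial_concordance(country[c], profile)
                   for c in criteria)
    return c_global >= LAMBDA

def assign(country: dict) -> str:  # pessimistic assignment, top profile first
    if outranks(country, profiles["good_vs_medium"]):
        return "good"
    if outranks(country, profiles["medium_vs_poor"]):
        return "medium"
    return "poor"

print(assign({"accident_risk": 0.8, "governance": 0.75, "environment": 0.6}))  # good
print(assign({"accident_risk": 0.5, "governance": 0.3, "environment": 0.45}))  # medium
```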

14:20
Babette Tecklenburg (German Aerospace Center - Institute for the Protection of Maritime Infrastructures, Germany)
Alexander Gabriel (German Aerospace Center - Institute for the Protection of Maritime Infrastructures, Germany)
Frank Sill Torres (German Aerospace Center - Institute for the Protection of Maritime Infrastructures, Germany)
A scenario based threat assessment using Bayesian networks for a high voltage direct current converter platform

ABSTRACT. Adaptation to climate change poses new challenges for energy production. The overarching goal is to minimize the emission of greenhouse gases; therefore, the electricity mix of individual countries needs to be restructured from a fossil-based to a mainly renewables-based mix. To support this paradigm change, the German government has set an expansion target for the offshore wind industry of 20 GW by 2030, which corresponds to an increase of 260%, and the European Union has announced support measures with a volume of 800 billion euros. For these reasons, the amount of offshore wind energy can be expected to increase in the medium term. An offshore wind farm (OWF) consists of multiple wind turbines and an offshore substation: the energy produced by the wind turbines is forwarded to the offshore substation. A high voltage direct current converter platform (HVDCC) converts the electricity of multiple offshore wind farms (OWFs) from alternating to direct current and transmits it to the shore. HVDCCs therefore play a significant role, because when an HVDCC stops operating, the energy production of multiple OWFs cannot be transmitted to the shore. In recent years a few attacks against energy production and storage facilities have taken place; in one reported incident, a break-in at an onshore substation led to a blackout. Attacks by climate activists against oil and gas infrastructures are also known in the maritime domain: in 1995 Greenpeace activists occupied Brent Spar to protest against ocean dumping, and in Nigeria in 1998 over 100 unarmed and peaceful protesters occupied an oil production platform to highlight environmental and distribution issues. To be capable of acting in such situations, operating companies need to know which scenarios they must prepare against and whether a given scenario poses a threat to their own processes or infrastructure.

The aim of this paper is to present an approach that quantifies a selected threat scenario. The method combines the Functional Resonance Analysis Method (FRAM) with Bayesian networks. First, a threat scenario is defined and described verbally; this description is the foundation of a FRAM model. The functions and aspects of the FRAM are derived from the scenario description and included in the model. The FRAM model allows the user to develop a deeper understanding of the scenario, because FRAM uses six aspects (input, time, control, output, precondition and resource) to describe under which circumstances and restrictions each function (a task or activity) can be executed, and thereby guides the user. The next step of the approach is to transfer the FRAM into a Bayesian network. First, the network structure is defined: the nodes and edges are derived from the functions and aspects as well as their relations to each other. Second, the probabilities are included in the Bayesian network; possible data sources are, for example, databases, literature and expert knowledge. After this initialization it can be determined whether the process or facility is threatened. As a case study for this approach, the scenario “unauthorized access to an HVDCC” is studied.
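The transfer step can be pictured with a minimal sketch, assuming the pgmpy library; the functions, aspects and probability values below are invented placeholders rather than the paper's model:

```python
# FRAM functions/aspects become nodes and edges of a Bayesian network.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([
    ("PerimeterControl", "UnauthorizedAccess"),  # control aspect -> function
    ("CrewPresence", "UnauthorizedAccess"),      # precondition aspect -> function
])

# Conditional probability tables (illustrative numbers only).
cpd_perimeter = TabularCPD("PerimeterControl", 2, [[0.95], [0.05]])  # 0=intact, 1=breached
cpd_crew = TabularCPD("CrewPresence", 2, [[0.30], [0.70]])           # 0=present, 1=absent
cpd_access = TabularCPD(
    "UnauthorizedAccess", 2,
    [[0.999, 0.98, 0.90, 0.60],   # P(no access | parent combination)
     [0.001, 0.02, 0.10, 0.40]],  # P(access | parent combination)
    evidence=["PerimeterControl", "CrewPresence"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_perimeter, cpd_crew, cpd_access)
assert model.check_model()

# After initialization, query the threat level given observed evidence.
posterior = VariableElimination(model).query(
    ["UnauthorizedAccess"], evidence={"PerimeterControl": 1}
)
print(posterior)
```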

14:40
Nadhir Kahlouche (World Maritime University, Sweden)
Serdar Yildiz (World Maritime University, Sweden)
Anish Hebbar (World Maritime University, Sweden)
Jens-Uwe Schröder-Hinrichs (World Maritime University, Sweden)
Enhancing Maritime Safety in the Era of Decarbonization: A Safety Barrier Analysis
PRESENTER: Anish Hebbar

ABSTRACT. Successful maritime transport decarbonization will require managing emerging safety risks. The properties of alternative fuels present unique safety challenges compared to conventional fuels, and their differing technological and regulatory maturity levels further complicate the picture. Effective risk management in the era of shipping decarbonization will therefore require a proactive response to emerging hazards alongside the continuation of established approaches for well-known hazards. This paper aims to shed light on maritime safety barriers as part of the risk management system in the maritime industry, and on their ability to prevent, control, and mitigate the hazards emerging from the use of ammonia as an alternative fuel. The paper first discusses the safety implications of using ammonia as a marine fuel on board ships. Drawing on data from previous incidents involving ammonia in various industries, a Hazard Identification (HAZID) analysis is conducted to identify and assess the potential safety threats, and their causes and consequences, that may arise from the use of ammonia in shipping. A safety barrier analysis is then conducted using the Bowtie method, a scenario-based risk management tool, to identify possible weaknesses and the barriers that need to be integrated into the maritime system. This study could benefit relevant stakeholders in the maritime industry, including shipowners and policymakers, in decarbonizing shipping without jeopardizing maritime safety.

15:00
Paolo Gabrielli (ETH Zurich, Switzerland)
Jordi Schweitzer Campos (ETH Zurich, Switzerland)
Viola Becattini (ETH Zurich, Switzerland)
Marco Mazzotti (ETH Zurich, Switzerland)
Giovanni Sansavini (ETH Zurich, Switzerland)
Design and multi-objective optimization of resilient carbon capture, transport and storage supply chains
PRESENTER: Paolo Gabrielli

ABSTRACT. Limiting global warming to 1.5 °C requires a drastic reduction of anthropogenic greenhouse gas emissions (mostly carbon dioxide, CO2), to reach net-zero CO2 emissions by 2050. Carbon capture and storage (CCS) will most likely be needed to achieve this ambitious target by limiting fossil-based emissions from hard-to-abate sectors (i.e., cement, iron and steel, chemical production, and waste treatment), as well as by removing CO2 from the atmosphere when combined with direct air capture and bioenergy.

Early CCS projects adopted a point-to-point model, consisting of a single large emitter (e.g., a fossil-based power plant) located at a reasonable distance from a CO2 storage site. However, most recently, CCS hubs and clusters have been developed to aggregate CO2 streams captured at different facilities for conditioning, transport and storage. Examples of such hubs in Europe are those being developed around the North Sea (e.g., in Norway, in the Netherlands, and in the UK) and in Iceland. These hubs will enable significant reductions in unit costs of CO2 transport and storage through economies of scale, thus limiting investment risks and unlocking opportunities for CCS implementation at small CO2 sources. Therefore, the optimal design and development of carbon capture, transport and storage (CCTS) supply chains and networks is a topic of the utmost urgency and practical relevance.

This study explores the design of resilient carbon capture, transport and storage (CCTS) supply chains aimed at decarbonizing industrial emitters, and focuses on the resilience of the CO2 transport network. The resilience of a CCTS supply chain is defined as its ability to mitigate the effect of failures on its function to successfully capture, transport and permanently store CO2 during a time horizon of interest, and its quantification relies on the concept of expected carbon not stored. A mixed-integer linear program is used to determine the optimal location and size of carbon capture units and the optimal structure of the CO2 transport network that complies with a specified level of resilience, while minimizing the cost or the CO2 emissions of the supply chain.
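The shape of such a mixed-integer linear program can be sketched with PuLP; the toy network, costs, failure rates and resilience bound below are invented placeholders, not the authors' model:

```python
# Toy CCTS design MILP: where to build capture units and how to route CO2
# so that expected carbon not stored stays below a resilience bound.
import pulp

emitters = {"WtE_A": 120.0, "WtE_B": 80.0}                      # capturable CO2 [kt/yr]
arcs = {("WtE_A", "Storage"): 5.0, ("WtE_B", "Storage"): 7.0}   # unit transport cost
capture_cost = 60.0                                             # cost per kt captured
max_expected_carbon_not_stored = 10.0                           # resilience bound [kt/yr]
arc_failure_rate = {a: 0.02 for a in arcs}                      # assumed failure probabilities

prob = pulp.LpProblem("cc_ts_design", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", emitters, cat="Binary")
flow = pulp.LpVariable.dicts("flow", arcs, lowBound=0)

# Cost objective: capture plus transport.
prob += (pulp.lpSum(capture_cost * emitters[e] * build[e] for e in emitters)
         + pulp.lpSum(arcs[a] * flow[a] for a in arcs))
# Captured CO2 flows out of every built site.
for e in emitters:
    prob += pulp.lpSum(flow[a] for a in arcs if a[0] == e) == emitters[e] * build[e]
# Resilience: expected flow lost to arc failures must stay below the bound.
prob += (pulp.lpSum(arc_failure_rate[a] * flow[a] for a in arcs)
         <= max_expected_carbon_not_stored)
# Require at least one capture unit so the problem is non-trivial.
prob += pulp.lpSum(build[e] for e in emitters) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], {e: build[e].value() for e in emitters})
```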

Although general, the proposed optimization and analysis framework is presented and demonstrated with reference to a specific objective, i.e., the decarbonization of the Swiss waste-to-energy sector from 2025 to 2050. The trade-off between minimum cost and maximum resilience, and the sensitivity of resilient CCTS supply chains to cost, emissions and reliability parameters, are investigated.

14:00-15:20 Session 11C: S.24: Artificial Intelligence, Meta-Modeling and Advanced Simulation for the Analysis of the Computer Models of Nuclear Systems I
Chair:
Nicola Pedroni (politecnico di torino, Italy)
Location: CQ-007
14:00
Seung Geun Kim (Korea Atomic Energy Research Institute, South Korea)
Jaehyun Cho (Korea Atomic Energy Research Institute, South Korea)
Comparative Study of Explainable Artificial Intelligence Methods in Nuclear Field
PRESENTER: Seung Geun Kim

ABSTRACT. To support operators’ decision-making in nuclear power plants, numerous deep neural network-based models have been developed. However, the explainability of deep neural network models is insufficient for direct application in safety-critical fields, and various explainable artificial intelligence methods have been proposed to cope with this explainability problem. Since each explainable artificial intelligence method has its own advantages and disadvantages, comparative studies should be conducted to identify which method is suitable for a specific domain and data. In this study, various explainable artificial intelligence methods are applied to a simple accident diagnosis model based on time series data, and their explanatory abilities are compared quantitatively via perturbation analysis. The results reveal significant differences between the explanatory abilities of explainable artificial intelligence methods, and therefore comparative studies on such methods are required for every deep neural network application in the nuclear field.
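The perturbation-analysis scoring idea can be sketched as follows; the model, data and attribution values are hypothetical stand-ins, not the study's diagnosis network:

```python
# Score an attribution method by perturbing the inputs it ranks as most
# important and measuring how much the model output changes.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Stand-in diagnosis model: a fixed linear scorer on a flattened window.
    w = np.linspace(0.0, 1.0, x.size)
    return float(w @ x.ravel())

x = rng.normal(size=(20, 4))            # one time-series sample (steps x sensors)
attribution = rng.normal(size=x.shape)  # placeholder saliency map from an XAI method

def perturbation_score(x, attribution, k=10, n_repeats=30):
    """Average output change after noising the k inputs ranked most important."""
    base = model(x)
    top = np.argsort(np.abs(attribution).ravel())[-k:]
    deltas = []
    for _ in range(n_repeats):
        xp = x.ravel().copy()
        xp[top] += rng.normal(scale=x.std(), size=k)  # perturb top-k entries
        deltas.append(abs(model(xp.reshape(x.shape)) - base))
    return float(np.mean(deltas))

# A faithful attribution should score higher than a random ranking.
print(perturbation_score(x, attribution),
      perturbation_score(x, rng.normal(size=x.shape)))
```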

14:20
Adolphus Lye (Institute for Risk and Uncertainty, University of Liverpool, UK)
Nawal Prinja (Prinja and Partners, UK)
Edoardo Patelli (Centre for Intelligent Infrastructure, University of Strathclyde, UK)
Probabilistic Prediction of Material Properties: A Review and Application of AI in the Nuclear Industry
PRESENTER: Adolphus Lye

ABSTRACT. This work is based on a recent feasibility study conducted in response to a challenge by GameChangers to devise digital technologies that support the development, deployment, operation and decommissioning of Advanced Nuclear Technologies. The work has two distinct objectives.

The first objective is to present a review of the state of implementation of Artificial Intelligence (AI) in the nuclear industry today. This provides a background overview of the challenges faced by the industry, from which the benefits of and opportunities for AI are discussed. The second objective is to demonstrate the implementation of an AI tool, using Artificial Neural Networks together with Uncertainty Quantification tools, to perform probabilistic prediction of material properties in nuclear reactors. The idea is to allow the prediction to account for the inherent variability of the material properties as well as the uncertainty associated with the lack of information due to sparse data. To achieve this, a Bayesian Neural Network is employed as the stochastic model for stochastic model updating; its purpose is to enhance the sparse data-set, which is then used to train and validate the Artificial Neural Networks that have been developed.
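One common way to approximate a Bayesian Neural Network is Monte Carlo dropout, sketched below; the network, inputs and data are illustrative assumptions, not the authors' model:

```python
# Probabilistic property prediction: keep dropout active at inference time
# and treat repeated stochastic forward passes as posterior samples.
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    def __init__(self, n_in=3, n_hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(n_hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

# Hypothetical inputs, e.g. temperature, fluence, alloy fraction.
x_train = torch.rand(64, 3)
y_train = x_train.sum(dim=1, keepdim=True) + 0.05 * torch.randn(64, 1)

model = MCDropoutNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_train), y_train)
    loss.backward()
    opt.step()

# The spread of the samples quantifies predictive uncertainty.
model.train()                         # keeps dropout on
x_new = torch.rand(1, 3)
samples = torch.stack([model(x_new) for _ in range(200)])
print(samples.mean().item(), samples.std().item())
```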

The eventual research seeks to illustrate the potential and benefits of implementing AI to address a nuclear industry challenge, as highlighted in the review.

14:40
Jae Min Kim (Ulsan National Institute of Science and Technology, South Korea)
Junyong Bae (Ulsan National Institute of Science and Technology, South Korea)
Seung Jun Lee (Ulsan National Institute of Science and Technology, South Korea)
Comparison of quantification methods for reflecting limiting conditions for operation during startup operation of nuclear power plants
PRESENTER: Jae Min Kim

ABSTRACT. Deep learning technology is being applied to various industries as computational processing power grows. Nuclear power plants are operated completely manually during the startup and shutdown operation modes, and research is being conducted on autonomous operation systems that reduce human error. Reinforcement learning is used to train agents to obtain an optimal policy that controls the necessary components to achieve their goals. If all states and components are configured as a single learning environment, it is difficult for the optimal policy to converge; it is therefore advantageous to configure the environment with simplified information, which leads to a multi-agent setting in which several agents intervene. Since individual agents do not consider the overall process or the operating policies of the other agents, executing each agent's actions as they are may not produce the expected results: for example, commands to open and to close a particular valve can be issued at the same time. This paper therefore proposes an action evaluation method to be added to an autonomous operation system in which multiple agents are used. To evaluate an action, it is necessary to understand how it will affect the plant. Accordingly, a plant parameter prediction model was trained to predict the future outcome of a selected action. In this paper, a regression model based on long short-term memory was trained for the valves used by two agents in the operation section selected for the comparative experiment. The input and output variables used for training the prediction model were selected based on the limiting conditions for operation, and key variables to be monitored during operation, such as reactor coolant system temperature, were added. When the actions proposed by the agents collide, an action is selected based on the variables predicted by this model. The comparative experiment confirmed which quantification method selects the action that best reflects the limiting conditions for operation. In conclusion, better operation is possible when conflicting actions of the autonomous operation system are resolved in this way than when no processing is performed. In future work, a generalized quantification method that can be extended to other operation sections will be studied. Furthermore, it will be possible to induce actions that achieve the goals of startup operation, increasing pressure and temperature more safely and quickly.
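A schematic of the prediction-based arbitration step follows; the architecture, sensor counts and action encoding are illustrative assumptions, not the paper's model:

```python
# LSTM regressor predicting next-step plant parameters from recent sensor
# history plus a candidate action; conflicting agent actions are compared
# by their predicted outcomes.
import torch
import torch.nn as nn

class PlantParameterPredictor(nn.Module):
    def __init__(self, n_sensors=8, n_actions=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors + n_actions, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)  # next-step sensor values

    def forward(self, history, action):
        # history: (batch, steps, n_sensors); action: (batch, n_actions)
        act = action.unsqueeze(1).expand(-1, history.size(1), -1)
        out, _ = self.lstm(torch.cat([history, act], dim=-1))
        return self.head(out[:, -1])   # prediction from the last hidden state

model = PlantParameterPredictor()
history = torch.randn(4, 30, 8)        # 4 samples, 30 time steps of 8 sensors
open_valve = torch.tensor([[1.0, 0.0]]).expand(4, -1)
close_valve = torch.tensor([[0.0, 1.0]]).expand(4, -1)

# When two agents' commands conflict, predict both outcomes and keep the one
# whose predicted variables stay within the limiting conditions for operation.
pred_open, pred_close = model(history, open_valve), model(history, close_valve)
print(pred_open.shape, pred_close.shape)
```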

14:00-15:20 Session 11D: S.33: Collaborative Intelligence in Manufacturing and Safety Critical Systems. The CISC and Teaming-aI EU projects I
Chair:
Hector Diego Estrada Lugo (Technological University Dublin, Ireland)
Location: CQ-106
14:00
Carlos Albarrán Morillo (Polytechnic of Turin, Italy)
Devesh Jawla (TU Dublin, Ireland)
Micaela Demichela (Polytechnic of Turin, Italy)
John Kelleher (TU Dublin, Ireland)
Safety-critical systems in the automotive sector: pros and cons in the current state-of-the-art of human performance assessment

ABSTRACT. Even though the possible replacement of workers by automation or robots seems to indicate a future reduction in human workforce needs, most manufacturing lines still require human interaction, e.g., for maintenance, inspections, and operations. Due to this interaction, the human-machine interface and human reliability are critical performance factors for identifying and dealing with safety-critical scenarios and for business continuity. This paper is based on a literature review of human performance assessment in the automotive sector, reporting the pros and cons of current human performance assessment and deployment methods and data at the state of the art. In the automotive sector, production systems are based on assembly lines whose automation is becoming increasingly complex, which calls for additional capabilities in the workers' operations and in analyzing safety-critical operations and making decisions. Artificial intelligence, which has made massive progress in science and technology, can play a relevant role here: it is expected to support decision-making for the management of safety-critical systems in an ever more rapid and effective way, exploiting shop-floor data from both equipment and operator monitoring. Moreover, independent artificial intelligence decisions, even in critical scenarios, will be the future target of the research. This work is an initial step in our research activity within the Collaborative Intelligence for Safety-Critical Systems (CISC) project. https://www.ciscproject.eu/

14:20
Inês F. Ramos (Università degli Studi di Milano - Unimi, Italy)
Gabriele Gianini (Università degli Studi di Milano - Unimi, Italy)
Ernesto Damiani (Università degli Studi di Milano - Unimi, Italy)
Neuro-Symbolic AI for Sensor-based Human Performance Prediction: Challenges and Solutions
PRESENTER: Inês F. Ramos

ABSTRACT. Recently, due to the rapid development of deep learning methods, there has been growing interest in Neuro-symbolic Artificial Intelligence, which takes advantage of both explicit symbolic knowledge and statistical sub-symbolic neural knowledge representations. In sensor-based human performance prediction (HPP) for safety-critical applications, where maintaining optimal human and system performance is a major concern, neuro-symbolic AI systems can improve sensor-based HPP tasks in complex working settings. In this paper, we focus on the advantages of hybrid neuro-symbolic AI systems, present the outstanding challenges, and propose possible solutions for HPP in safety-critical application domains.

14:40
Aayush Jain (Technological University of Dublin, Irish Manufacturing Research, Ireland)
Shakra Mehak (Technological University of Dublin, Pilz Ireland, Ireland)
Philip Long (Irish Manufacturing Research, Ireland)
John D. Kelleher (Technological University of Dublin, Ireland)
Michael Guilfoyle (Pilz ireland, Ireland)
Maria Chiara Leva (Technological University of Dublin, Ireland)
Evaluating Safety and Productivity Relationship in Human-Robot Collaboration
PRESENTER: Aayush Jain

ABSTRACT. Collaborative robots can improve ergonomics on factory floors while allowing a higher level of flexibility in production. The evolution of robotics and cyber-physical systems in size and functionality has enabled new applications that were never foreseen for traditional industrial robots. However, current human-robot collaboration (HRC) technologies are limited in reliability and safety, which are vital in risk-critical scenarios. Indeed, confusion about European safety regulations has led to situations where collaborative robots operate behind security barriers, negating their advantages and reducing overall application productivity. Despite recent advances, developing a safe collaborative robotic system for performing complex industrial or daily tasks remains a challenge, and the multiple influential factors in HRC make it difficult to define a clear classification for understanding the depth of collaboration between humans and robots. In this article, we review the state of the art in reliable collaborative robotic work cells and propose a reference model that combines influential factors such as robot autonomy, collaboration, and safety modes to redefine HRC categorization.

15:00
Andrés Alonso-Pérez (TU Dublin, Ireland)
Hector Diego Estrada Lugo (TU Dublin, Ireland)
Enrique Muñoz-de-Escalona (TU Dublin, Ireland)
Maria Chiara Leva (TU Dublin, Ireland)
Modifying a manufacturing task for Teamwork between humans and AI: initial data collection to guide requirements specifications

ABSTRACT. Recent advances in AI, above all machine and deep learning, have brought about unprecedented possibilities in automation, prediction and problem solving, with an impact on operators and on the way they work and interact with automation on the shop floor. While the expected effects focus on increasing the efficiency, flexibility and productivity of operations in the industrial and service sectors, there is justified scepticism towards implementation, due in part to the challenge of integrating AI into operators' current ways of working in a manner that actually supports the human in the loop. It is therefore time to consider the user's side from the employees' point of view in order to foster AI in a human-technology relationship. The present paper explores the preliminary steps taken in this direction, seeking a problem definition and suitable solutions for, firstly, improving human-automation interaction and, secondly, reducing time variability and improving efficiency in a milling process for large metal components of a wind turbine at a manufacturing facility. To complement this description, a data analysis of the manufacturing process status is provided. The analysed data sets contain general information on relevant parameters of the manufacturing system as well as the required inputs from the operators. The purpose of this report is to establish the basis on which a thorough operational description of the overall human-automation process is defined, together with the usefulness of better integrating the manual tasks within it. The operational description of the tasks is a key ingredient for achieving better requirements specifications and for enhancing the performance of the operators by increasing their situational awareness on the shop floor. Moreover, this task mapping can account for much of the missing information regarding the variability of execution time in the process, and can support the scheduling of manual activities for the operator to perform while the automated task does not need direct supervision.

14:00-15:20 Session 11E: System Reliability I
Chair:
Elena Zaitseva (University of Zilina, Slovakia)
Location: LG-20
14:00
Nikola Veljanovski (University of Ljubljana, Faculty of Electrical Engineering, Slovenia)
Marko Čepin (University of Ljubljana, Faculty of Electrical Engineering, Slovenia)
Probability Based Estimation of Reliability Indices in Power Systems
PRESENTER: Marko Čepin

ABSTRACT. Reliability indices are important parameters used in power systems primarily for determining the quality of the power supply and for allocating resources for restoration, improvement and further development of the power systems. The ongoing transition in power systems from deterministically to probabilistically based planning methods calls for probabilistic representations of the reliability indices. Two of the most widely used reliability indices in distribution power systems are the System Average Interruption Frequency Index (SAIFI) and the System Average Interruption Duration Index (SAIDI). Traditionally, both have been calculated as the average of the total number of interruptions, and of their total duration, per customer within the system. The objective of this work is to develop an extension of both methods that replaces the one-point average value with the most appropriate probability distribution function. Procedures have been developed to estimate the probability distribution functions of SAIFI and SAIDI instead of point average values. The data point of each outage, or of each group of outages, was evaluated against known distributions using the Kolmogorov–Smirnov test. The method and procedures were tested on a small illustrative example and then applied to a real regional power distribution system for selected years of operation. The results show a relatively good fit of the normal distribution function, with different standard deviations for different groups of SAIDI and SAIFI.
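The distribution-fitting and screening step can be sketched with scipy; the yearly SAIDI values below are invented numbers for illustration:

```python
# Fit candidate distributions to SAIDI data and screen them with the
# Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

saidi = np.array([95.0, 110.2, 87.4, 102.9, 98.1, 121.5, 93.3, 105.8])  # min/customer/yr

for name, dist in [("normal", stats.norm), ("lognormal", stats.lognorm)]:
    params = dist.fit(saidi)                        # maximum-likelihood fit
    stat, p = stats.kstest(saidi, dist.cdf, args=params)
    print(f"{name}: KS statistic={stat:.3f}, p-value={p:.3f}")
# A large p-value means the fitted distribution cannot be rejected, so it can
# replace the single-point SAIDI average in probabilistic planning.
```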

14:20
Vincent Couallier (Mathematical Institute of Bordeaux, France)
Using random step-stress models to analyse the reliability of systems with dormant failure of control subsystems: cases of the tampered hazard rate model and the general cumulative exposure assumption in the AFT model

ABSTRACT. In this paper, we use a probabilistic model for the reliability analysis of a repairable system in which one component has the function of controlling environmental constraints, thus providing the safety and usability conditions of the critical part of the system.

More specifically, each failure of the control component results in a sudden (possibly random) increase in the stress value, altering the reliability of the critical subsystem and reducing its remaining useful life. In the simplest case, this reduces to a step-stress profile of the critical subsystem in which both the change time and the stress value are unknown.

To take into account both inter-unit and in-time variability of stresses, we consider both Cumulative Exposure (CE) and Tampered Failure Rate (TFR) models to provide and compare closed-form expressions for reliability functions and mean time to failure (MTTF) under various hypotheses on the monitoring and maintenance policies of the system.

This work is well suited to the case of perfect monitoring of the critical system (observed failures) and hidden failures of the control component (observed at inspections triggered by a failure of the critical component).

Results are given for general reliability distributions, with a focus on the Weibull distribution, including the exponential case.
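A numerical sketch of a TFR step-stress reliability curve with a Weibull baseline follows; the change time is fixed here for simplicity, and all parameter values are illustrative only:

```python
# Tampered-failure-rate model: after the control component fails at tau,
# the hazard of the critical subsystem is multiplied by alpha > 1.
import numpy as np
from scipy.integrate import quad

beta, eta = 2.0, 1000.0   # Weibull shape and scale
tau, alpha = 300.0, 3.0   # change time and tampering factor

def cumulative_hazard(t):
    """Cumulative hazard of the TFR step-stress model."""
    t = np.asarray(t, dtype=float)
    h_before = (np.minimum(t, tau) / eta) ** beta
    h_after = alpha * ((np.maximum(t, tau) / eta) ** beta - (tau / eta) ** beta)
    return h_before + h_after

def reliability(t):
    return np.exp(-cumulative_hazard(t))

for t in (100.0, 500.0, 1000.0, 2000.0):
    print(f"R({t:.0f}) = {reliability(t):.4f}")

# MTTF by numerical integration of the reliability function.
mttf, _ = quad(reliability, 0.0, np.inf)
print("MTTF ~", round(mttf, 1))
```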

14:40
Emefon Ekerette Dan (Norwegian University of Science and Technology, Norway)
Yiliu Liu (Norwegian University of Science and Technology, Norway)
Performance Assessment of Redundancy Strategies of Systems Subject to External Shocks

ABSTRACT. Engineering systems degrade due to operational conditions and usage, and are also subject to random external shocks capable of causing abrupt failures. To maintain the desired system performance and enhance reliability and availability, systems are often equipped with redundant components, and different redundancy strategies are employed, including active and passive redundancy; the choice of strategy depends on the operational context and the desired performance. In this paper, we analyse the performance of different redundancy strategies considering component degradation and random external shocks. A piecewise deterministic Markov process is used to model the system and is compared with Monte Carlo simulation to approximate system unavailability under different frequencies of random external shocks.
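The Monte Carlo side of such a comparison can be sketched as follows, for an active 1-out-of-2 system with linear degradation, Poisson-arriving shocks, and no repair; all rates and thresholds are invented for illustration:

```python
# Monte Carlo estimate of mission unavailability for a redundant pair
# subject to gradual degradation plus abrupt shock-induced failures.
import numpy as np

rng = np.random.default_rng(42)
T, dt = 10_000.0, 1.0            # mission time and step [h]
deg_rate, threshold = 1e-3, 5.0  # degradation per hour, failure threshold
shock_rate, p_kill = 1e-4, 0.3   # shock frequency [1/h], per-component kill prob.

def simulate_once():
    level = np.zeros(2)          # degradation level of the two active components
    down = 0.0
    for _ in range(int(T / dt)):
        level += deg_rate * dt                       # continuous degradation
        if rng.random() < shock_rate * dt:           # a shock arrives this step
            level[rng.random(2) < p_kill] = threshold
        if np.all(level >= threshold):               # both failed: system down
            down += dt
    return down / T

runs = [simulate_once() for _ in range(500)]
print("estimated unavailability:", float(np.mean(runs)))
```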

15:00
Inmaculada T. Castro (Universidad de Extremadura, Spain)
Lucía Bautista Bárcena (Universidad de Extremadura, Spain)
Reza Ahmadi (Iran University of Science & Technology, Iran)
Maintenance of a parallel system using the state-dependent-mean-residual time

ABSTRACT. A system consisting of n identical components in parallel is analyzed. We assume that the system is less efficient when d components have failed, a condition called the defective state; if fewer than d components have failed, the system is working properly. Periodic inspections are performed on the system to evaluate its state and detect the defective state, so that preventive or corrective maintenance actions can be performed. Maintenance actions are based on the state-dependent mean residual life (SDMRL), the expected residual time to reach the defective state given the number of failed components at an inspection time; it is a good indicator for maintenance planning and for determining the health state of the system. Two models are described to optimize the proposed maintenance strategy, one of them including minimal repairs of the failed components. Numerical examples are computed to evaluate the expected cost rate and obtain the optimal parameters for both models.
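For intuition, the SDMRL has a simple closed form if one assumes independent exponential component lifetimes (an assumption made here only for illustration): with k failed components, the time to the next failure is exponential with rate (n-k)λ, so the expected time to reach d failures is a sum of such stage means.

```python
# State-dependent mean residual life for n parallel exponential components.
def sdmrl_exponential(n: int, d: int, k: int, lam: float) -> float:
    """Expected residual time from k failed components to the defective state d."""
    if k >= d:
        return 0.0
    return sum(1.0 / ((n - j) * lam) for j in range(k, d))

n, d, lam = 10, 4, 1e-3
for k in range(d + 1):
    print(f"failed={k}: SDMRL = {sdmrl_exponential(n, d, k, lam):,.1f} h")
# A maintenance rule might trigger preventive action when the SDMRL observed
# at inspection drops below the inspection interval.
```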

14:00-15:20 Session 11F: Joint event: International Workshop on Autonomous Systems (IWASS)
Chairs:
Marilia Ramos (University of California Los Angeles, United States)
Christoph Thieme (SINTEF Digital, Norway)
Location: LG-21
14:00
Claire Blackett (Institute for Energy Technology, Norway)
The Ethics of AI in Autonomous Transport

ABSTRACT. In recent years we have seen an enormous uptake in the use of artificial intelligence (AI) in society. There is no doubt that AI can have positive effects in, for example, advancing healthcare through the detection of diseases, or making everyday life easier through the provision of virtual assistants and recommendation systems. However, there are an increasing number of examples of widespread misuse and/or failure of AI technologies that give rise to questions about ethics and responsibility. For example, in 2018 it was disclosed that the consulting firm Cambridge Analytica used machine learning algorithms to harvest personal data from approximately 87 million Facebook users without their knowledge or consent and used this data to provide analytical assistance in the 2016 USA presidential election (Andreotta et al., 2021). Facebook was fined $5bn for violation of their users' privacy in this incident. In 2020, it was revealed that the facial recognition firm Clearview AI had used machine learning to scrape approximately 10 billion images of people from social media websites, again without the users' knowledge or consent, and sold this technology to law enforcement agencies for identification and surveillance purposes (Rezende, 2020). Clearview AI has been ordered to destroy all images belonging to individuals living in countries such as Australia and the UK, and investigations of the incident are ongoing. These are but two of several recent examples that have highlighted how AI can be misused in ways that raise ethical concerns about privacy, surveillance, bias, discrimination, and attempts to influence human behaviour. One could argue, in the Facebook and Clearview AI cases, that by using social media, or other publicly available technologies, users must expect and accept that personal data is being collected about them. However, this argument ignores the users' right to privacy and the fundamental principle of informed consent, i.e., that a user should have sufficient information to be able to make their own decision about whether to participate in, or opt out of, the data gathering exercise. Although the informed consent principle originated in healthcare to manage medical ethics and law, there is an increasing need for an equivalent principle to deal with the ethical and legal challenges of AI deployment in society (Pattinson et al., 2020). The issue of informed consent regarding the use of AI technologies becomes even more complex when the potential impact of technology misuse or failure extends beyond the immediate user, and especially in the transportation sector where the potential for physical harm to others may be greatest. Consider the spate of road accidents and fatalities involving the Tesla Autopilot driver assistance system, which raise serious doubts about the maturity of the AI and its readiness for deployment on public roads. Again, one could argue that by sitting behind the wheel of a “self-driving” car, the driver implicitly consents to the potential consequences of failure or misuse of the AI. However, if the car is driven on a public road and something goes wrong, there is a high likelihood that it will involve the occupants of another vehicle or other road users who did not consent to participation in the use of the AI technology.
By allowing the deployment of AI technologies in public situations without sufficient evidence of the safety and reliability of the technologies, we unwittingly participate in mass experimental testing of this new technology, often without knowing that the technology is being used or what the potential impact on us would be if it fails. This appears to be both irresponsible and unethical. In this paper, I will explore the issue of responsible and ethical deployment of AI in society in more detail, using examples from real-life transport accidents to illustrate what can happen when this goes wrong. I will argue that the misuse and/or misunderstanding of AI technology is seemingly a direct result of the technology developer/manufacturer's failure to adequately inform users about the presence, capabilities, and limitations of the technology. I will challenge the commonly used Levels of Automation (LOA) model and describe how it fails to consider human factors aspects, an omission that is becoming a critical issue as the potential impacts of AI misuse or failure continue to spread beyond the immediate user. Finally, I will consider ways in which organisations could adjust and change their behaviours to enable more responsible and ethical AI technology development practices in the future.

14:20
Hyungju Kim (University of South-Eastern Norway, Norway)
Henning Mathias Methlie (University of South-Eastern Norway, Norway)
A System-Theoretic Process Analysis for autonomous ferry operation: a case study of Sundbåten
PRESENTER: Hyungju Kim

ABSTRACT. The System-Theoretic Process Analysis (STPA) is a relatively new hazard analysis method developed to analyse modern, complex, socio-technical and software-intensive control systems. The main objective of this study is to apply STPA to autonomous ferry operation and discuss the advantages and limitations of the method. For this purpose, we investigated the hazard analysis methods required by current autonomous ship safety guidelines in Norway and discussed their limitations. We then conducted an STPA of autonomous ferry operation in a case study of the Sundbåten project, establishing a control structure for the fire hazard of the autonomous ferry that includes two human operators and two autonomous systems. The results showed that complex interactions between the human operators and the autonomous systems can lead to serious consequences even when there are no component failures. Based on the analysis results, we finally discuss the advantages of STPA for comprehensive hazard analysis of autonomous ferry operation.

14:40
Hyungju Kim (University of South-Eastern Norway, Norway)
Deepen Prakash Falari (University of South-Eastern Norway, Norway)
Real time risk analysis for autonomous ferry operation: A case study of Sundbåten
PRESENTER: Hyungju Kim

ABSTRACT. One of the key elements of successful autonomous ferry operation is safety, because untrained passengers would be on board. While a couple of safety guidelines for autonomous ships exist in Norway, they have limitations, and a new safety approach may be needed to ensure the safe operation of autonomous ferries. The main objective of this study is to emphasize the necessity of real-time risk analysis for autonomous ferry operation, and to demonstrate it in a case study of Sundbåten. For this purpose, we first investigated the safety guidelines for autonomous ships in Norway and compared their limitations with the advantages of real-time risk analysis. A preliminary real-time risk analysis model was then established by combining two different methods: Bayesian networks and fuzzy logic. The model was built on three risk themes, one of which was further developed through eleven risk influencing factors. The preliminary model successfully demonstrated the changing risk of the autonomous ferry, and remaining future work is suggested at the end of the study.

15:00
Camila Correa-Jullian (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
John McCullough (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Marilia Ramos (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Jiaqi Ma (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Enrique Lopez Droguett (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Ali Mosleh (Garrick Institute for the Risk Sciences, University of California Los Angeles (UCLA), United States)
Modeling Fleet Operations of Autonomous Driving Systems in Mobility as a Service for Safety Risk Analysis
PRESENTER: Marilia Ramos

ABSTRACT. System risk analysis and safety assessments of Autonomous Driving Systems (ADS) have mostly focused on the vehicle's functionality, performance, and interactions with other road users under various driving scenarios. However, as the deployment of ADS becomes more common, addressing the risks arising from fleet management operations becomes critical, such as the role of fleet operators in the context of Mobility as a Service (MaaS). In this work, we present a system breakdown of ADS remote operations and discuss the role and participation of fleet operators as entities separate from ADS developers. Selected high-level accident scenarios are analyzed, focusing on collision events caused by an ADS vehicle operating in unsafe conditions and on failed interventions by remote fleet management centers. In particular, key roles identified for the fleet operator include periodically performing inspection and maintenance procedures and acting as a safety barrier against the limitations of minimal risk condition (MRC) mechanisms.

14:00-15:20 Session 11G: Prognostics and System Health Management V: Markov models
Chair:
Antoine Grall (Troyes University of Technology, France)
Location: CQ-009
14:00
Christian Velasco-Gallego (University of Strathclyde, UK)
Iraklis Lazakis (University of Strathclyde, UK)
Analysis of time series imaging approaches for the application of fault classification of marine systems

ABSTRACT. Artificial Intelligence (AI) can enable better coordination between ships by enhancing decision-making processes through the optimisation of marine vessels' communication technologies and the gathering of information via the Internet of Ships (IoS). Although some efforts have been made to detect faults and malfunctions that can occur in marine systems, there is a lack of analysis and formalisation of fault identification (a.k.a. fault classification) approaches, whose aim is to provide a comprehensive description of any considered fault type and its nature. To contribute to this unexplored field within the shipping sector, an analysis of a time series imaging approach is performed, as such approaches have demonstrated their ability to identify fault patterns that cannot be perceived in the original time series data. As part of this analysis, a case study on the turbocharger exhaust gas outlet temperature parameter of a bulk carrier's main engine is also introduced.
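One common time-series imaging step can be sketched as follows, assuming the pyts library; the sensor windows are random placeholders standing in for the turbocharger temperature signal:

```python
# Encode time-series windows as Gramian Angular Field images that an
# ordinary image classifier could consume for fault identification.
import numpy as np
from pyts.image import GramianAngularField

rng = np.random.default_rng(7)
# 50 sliding windows of 128 samples each over the sensor signal.
X = rng.normal(size=(50, 128))

gaf = GramianAngularField(image_size=32, method="summation")
images = gaf.fit_transform(X)          # shape: (50, 32, 32)
print(images.shape)
# Each image can then be labelled with its fault class and used to train
# a convolutional classifier.
```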

14:20
Aibo Zhang (City University of Hong Kong, Hong Kong)
Zhiying Wu (Centre for Artificial Intelligence & Robotics,Hong Kong, Hong Kong)
Yukun Wang (School of Economics and Management, Tianjin Chengjian University, Tianjin 300384, China, China)
Min Xie (Department of Advanced Design and Systems Engineering, City University of Hong Kong, Hong Kong SAR, Hong Kong)
Performance analysis for a degrading system with Markov model
PRESENTER: Aibo Zhang

ABSTRACT. Most mechanical engineering systems deteriorate due to practical factors such as aging and usage, which results in performance degradation. Although deterioration is continuous in nature, it is not always practically possible to measure it, especially for subsea systems. In such cases the performance of these systems is represented by a finite number of states over their lifetime, with the various levels of degradation represented by these states. Generally, the degradation process is divided into two stages, the normal and the defective stage, so there are three possible states for the system: normal, defective, and failed. The Markov approach has been used heavily for the reliability analysis of such systems. However, most Markov models rely on accurately known, exponentially distributed transition rates, which is unrealistic in certain cases; in reality there is uncertainty in the transition rates owing to the limits of prior knowledge. This paper presents a multi-phase Markov model-based performance evaluation model for a three-state system with uncertain transition rates. The transition rates are assumed to follow Beta distributions with known lower and upper bounds. The average system unavailability over a given life cycle is used to quantify system performance.
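The evaluation idea can be sketched as follows; the Beta shape parameters, rate bounds and absorbing-failure assumption are illustrative choices, not the paper's:

```python
# Three-state Markov chain (normal -> defective -> failed) with transition
# rates sampled from scaled Beta distributions; unavailability is averaged
# over the life cycle and over the rate uncertainty.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
T, n_grid, n_samples = 8760.0, 200, 500   # one-year cycle [h]

def sample_rate(low, high):
    return low + (high - low) * rng.beta(2.0, 2.0)   # assumed Beta(2,2) shape

unavail = []
for _ in range(n_samples):
    l_nd = sample_rate(1e-4, 5e-4)   # normal -> defective
    l_df = sample_rate(5e-4, 2e-3)   # defective -> failed
    Q = np.array([[-l_nd,  l_nd,  0.0],
                  [0.0,  -l_df,  l_df],
                  [0.0,    0.0,  0.0]])    # failed state absorbing (no repair)
    times = np.linspace(0.0, T, n_grid)
    p_failed = [(np.array([1.0, 0.0, 0.0]) @ expm(Q * t))[2] for t in times]
    unavail.append(np.mean(p_failed))      # time-average over a uniform grid

print("mean unavailability:", np.mean(unavail), "+/-", np.std(unavail))
```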

14:40
Lisandro Arturo Jimenez-Roa (University of Twente, Netherlands)
Tom Heskes (Radboud University Nijmegen, Netherlands)
Tiedo Tinga (University of Twente, Netherlands)
Hajo J. A. Molegraaf (Rolsch Assetmanagement, Netherlands)
Marielle Stoelinga (University of Twente & Radboud University Nijmegen, Netherlands)
Deterioration modeling of sewer pipes via discrete-time Markov chains: A large-scale case study in the Netherlands

ABSTRACT. Sewer pipe network systems are an important part of civil infrastructure, and in order to find a good trade-off between maintenance costs and system performance, reliable sewer pipe degradation models are essential. In this paper, we present a large-scale case study in the city of Breda in the Netherlands. Our dataset has information on sewer pipes built since the 1920s and contains information on different covariates. We also have several types of damage, but we focus our attention on infiltrations, surface damage, and cracks; each damage type has an associated severity index ranging from 1 to 5. To account for the characteristics of sewer pipes, we defined 6 cohorts of interest. Two types of discrete-time Markov chains (DTMC), which we call Chain `Multi' and Chain `Single' (where Chain `Multi' contains additional transitions compared to Chain `Single'), are commonly used to model sewer pipe degradation at the pipeline level, and we want to evaluate which better suits our case study. To calibrate the DTMCs, we define an optimization process using Sequential Least-Squares Programming to find the DTMC parameters that minimize the root mean weighted square error. Our results show that for our case study there is no substantial difference between Chain `Multi' and Chain `Single', but the latter has fewer parameters and can be trained easily. Our DTMCs are useful for comparing the cohorts via their expected values: e.g., concrete pipes carrying mixed and waste content reach severe levels of surface damage more quickly than concrete pipes carrying rainwater, a phenomenon typically identified in practice.
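A toy version of the calibration step follows; the observed proportions, the squared-error loss, and the single-jump structure are invented simplifications of the approach described above:

```python
# Fit a single-jump ("Single"-style) DTMC over five severity states with
# scipy's SLSQP so that predicted state distributions at the inspected
# pipe ages match observed proportions.
import numpy as np
from scipy.optimize import minimize

n_states = 5
ages = np.array([10, 30, 60])                        # pipe ages [years]
observed = np.array([[0.70, 0.20, 0.07, 0.02, 0.01],
                     [0.35, 0.30, 0.20, 0.10, 0.05],
                     [0.10, 0.20, 0.30, 0.25, 0.15]])

def transition_matrix(p):
    """p[i] = yearly probability of moving from severity state i to i+1."""
    P = np.eye(n_states)
    for i, pi in enumerate(p):
        P[i, i] = 1.0 - pi
        P[i, i + 1] = pi
    return P

def loss(p):
    P = transition_matrix(p)
    err = 0.0
    for age, obs in zip(ages, observed):
        pred = np.linalg.matrix_power(P, age)[0]     # state 0 when built
        err += np.sum((pred - obs) ** 2)
    return err

res = minimize(loss, x0=np.full(n_states - 1, 0.05),
               method="SLSQP", bounds=[(1e-6, 0.5)] * (n_states - 1))
print(res.x.round(4), res.fun)
```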

15:00
Luc S. Keizers (University of Twente / Netherlands Defence Academy, Netherlands)
Richard Loendersloot (University of Twente, Netherlands)
Tiedo Tinga (University of Twente / Netherlands Defence Academy, Netherlands)
ATMOSPHERIC CORROSION PROGNOSTICS USING A PARTICLE FILTER
PRESENTER: Luc S. Keizers

ABSTRACT. Worldwide annual costs related to preventive and corrective maintenance actions for corrosion are estimated to be 2.5 trillion USD (Latif et al., 2019). Developing predictive maintenance concepts for corrosion may help reduce these costs: by predicting the degradation and associated failures, maintenance can be scheduled more efficiently and effectively. Several attempts have been made in corrosion science to develop generic corrosion models inferred from local or worldwide experimental data. However, the statistical nature of these models makes them applicable only to conditions similar to the experimental conditions, which is especially problematic for moving assets that operate in a wide variety of environments. Corrosion science also concerns the development of corrosion monitoring devices. These provide insight into the ongoing corrosion process, which in many cases is sufficient to take maintenance decisions. However, when ambient conditions change, future corrosion rates might also change considerably; in that case corrosion monitoring alone is inadequate, as a remaining useful life estimate cannot be given. A type of algorithm that can take advantage of both corrosion modeling and corrosion monitoring for prognostics is the particle filter. Despite the fact that corrosion is one of the most common and expensive failure mechanisms, the application of a particle filter to corrosion prognostics is not yet found in the literature. The core idea is to improve a corrosion model with real-time condition measurements. In this paper, a particle filter is applied to a dataset constructed from monthly exposure tests of carbon steel test specimens, published by the National Institute of Materials Science in Japan. The dataset includes varying ambient conditions, as clear seasonal effects are observed. The results show that the particle filter improves corrosion predictions thanks to the updated model parameters. Still, a considerable amount of uncertainty remained, because not all environmental effects were included in the model; however, the measurements compensate for these missing factors, and acceptable remaining useful life predictions were nevertheless obtained. Given that the method requires only a few input parameters and does not depend on large historical datasets, it is concluded that the particle filter is a promising algorithm for corrosion prognostics.

References: Latif, J., Khan, Z.A., Nazir, M.H., Stokes, K., & Plummer, J. (2019). Condition monitoring and predictive modelling of coating delamination applied to remote stationary and mobile assets. Structural Health Monitoring, 18(4), 1056-1073.

14:00-15:20 Session 11H: Oil and Gas Industry: Risk Assessment
Chair:
Luca Decarli (Eni, Italy)
Location: CQ-105
14:00
Fausta Delli Quadri (ISPRA, Italy)
Geneve Farabegoli (ISPRA, Italy)
HYDROCARBONS STORAGE TANKS: TECHNICAL EVALUATIONS RELATED TO MAJOR ACCIDENT SAFETY

ABSTRACT. This paper presents some technical evaluations of the adoption of safety requirements for hydrocarbon storage tanks within the activities of the Italian implementation of the Seveso Directive 2012/18/EU on the control of major-accident hazards (GU. (2015, July)). Some of these requirements, either technical or managerial, also come from the implementation of the BREF techniques (EC. (2006, July)) on emissions from storage, adopted in Italy under the IED (ex IPPC) Directive for hydrocarbon storage in the oil refining sector. The accident scenarios associated with the storage tanks, identified by the operator and characterized by a greater risk of contamination of the soil, subsoil and aquifer, are the following:
1. Structural collapse of the tank
2. Overfill
3. Pipeline leakage
4. Shell loss
5. Leakage from the bottom of the tank
Each of these scenarios was analyzed in detail, and further assessments and investigations were carried out by a Working Group (WG) of Competent Authorities (CA), in order to indicate the additional measures necessary for preventing/mitigating the risk of major accidents and to achieve the highest safety goals for the main oil storage facilities in the whole country. The WG availed itself of the technical-scientific support of representatives of each CA, comparing and sharing information on issues relating to Seveso and IED, in particular as regards the waterproofing of the containment basins and the double bottoms of the tanks. References to past events in some Seveso plants involving the accidental release of eco-toxic substances into surface waters from storage systems were also considered. Considerations on the technical documentation produced by the national oil companies, on technical plant aspects and on the safety management system were developed, focusing on risk analysis and on tank inspection and maintenance issues. Furthermore, safety aspects as well as hydrological and geological analyses of the underground storage tanks and critical site-specific environmental issues are taken into account in the paper for a more complete view of the topic.

14:20
Yiyue Chen (China University of Petroleum (Beijing), China)
Laibin Zhang (China University of Petroleum (Beijing), China)
Jinqiu Hu (China University of Petroleum (Beijing), China)
Xiaowen Fan (China University of Petroleum (Beijing), China)
Mingyuan Wu (China University of Petroleum (Beijing), China)
Ontology Model Construction of Long-Distance Oil and Gas Pipeline Emergency Case
PRESENTER: Yiyue Chen

ABSTRACT. Emergency response decisions for long-distance oil and gas pipelines are mainly based on an existing, complete set of response plans. Due to the complex and changeable features of pipeline accidents, a unified knowledge framework is needed which can not only record coupled pipeline accident information well, but also provide a reasoning basis for the intelligent recommendation of emergency responses. Based on modeling primitives, an ontology model of long-distance oil and gas pipeline emergency cases is established. According to the data requirements of response suggestions in different emergency stages, the concepts of feature and response are separated, together with the positive and negative feedback relations of the response concept. Finally, taking the 11.22 Qingdao Oil Pipeline Explosion incident as an example, the knowledge representation of the emergency case is established, and the applicability of the ontology model is quantitatively evaluated. The applicability calculation shows that the model can describe emergency cases in detail. By establishing the emergency ontology of long-distance pipelines, a unified expression of knowledge in this field is realized. It can be used as the basis for the subsequent construction of pipeline feature knowledge graphs and for emergency response recommendation.

14:40
Antonio Miranda (Naturgy, Spain)
Sebastián Martorell (Department of Chemical and Nuclear Engineering, MEDASEGI Research Group, Universitat Politècnica de València,, Spain)
Isabel Martón (Department of Statistics and Operational Research, MEDASEGI Research Group, Universitat Politècnica de València,, Spain)
Ana Isabel Sánchez (Department of Statistics and Operational Research, MEDASEGI Research Group, Universitat Politècnica de València,, Spain)
Reliability estimation of a Liquified Natural Gas bunkering operation supply
PRESENTER: Antonio Miranda

ABSTRACT. When operating with fuels, safety standards are always at stake. This is particularly true when it comes to introducing Liquified Natural Gas (LNG) as a new fuel in the maritime sector. LNG bunkering operations at port are nowadays tailored to each scenario. There are studies on the safety of the use of LNG and its consequences for people, infrastructure and the environment; conversely, little research has been conducted on the operational safety of this equipment, focusing on the continuity of supply. This document shows the efforts of several bunkering providers and university researchers, who came together to coin and introduce a valid reference for assessing the reliability of the onshore configurations adopted to carry out LNG bunkering operations. The objective of this work is to estimate the reliability of a bunkering operation according to the chosen configuration. Consequently, each defined bunkering configuration is assigned a unique Reliability Index (RI), indicating in advance the chances that a bunkering operation, configured as indicated, will conclude satisfactorily, that is, delivering the LNG to the ship. Preliminary work consisted of gathering both generic and field-specific failure rate data needed to feed the models. Two existing reliability analysis techniques have been used: 1. establishing the Reliability Block Diagram (RBD) of each configuration, TTS and MMTS; 2. elaboration of the event tree or Fault Tree Analysis (FTA) for each configuration. Next come the sensitivity and significance analyses to identify the weaker components of the ground system. As a result, prior to the operation, every ground system configuration is assigned an RI identifying the equipment most likely to fail. This information is useful for several purposes, such as providing spare stocks, considering the redesign of maintenance procedures, or eventually replacing components or even changing the configuration to a more reliable one on a cost-effective basis. In addition, equipment manufacturers will find the RI an unbiased feature with which to promote their equipment in the marketplace.
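A minimal reliability-block-diagram evaluation of the kind that could underlie such an index is sketched below; the block layout and the per-operation reliabilities are invented placeholders, not the configurations studied:

```python
# RBD evaluation: series blocks with one redundant pump pair.
from math import prod

def series(*r):             # all blocks must work
    return prod(r)

def parallel(*r):           # at least one block must work
    return 1.0 - prod(1.0 - ri for ri in r)

# Per-operation reliabilities of hypothetical ground-system blocks.
r_trailer, r_hose, r_pump_a, r_pump_b, r_esd = 0.999, 0.995, 0.98, 0.98, 0.999

ri = series(r_trailer, r_hose, parallel(r_pump_a, r_pump_b), r_esd)
print(f"Reliability Index of this configuration: {ri:.4f}")
# Significance screening: recompute the RI with each block set to 1.0 to see
# which component limits the operation the most.
```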

15:00
Simonetta Campana (Area Tecnica ARTA, Italy)
Romualdo Marrazzo (ISPRA - Istituto Superiore per la Protezione e la Ricerca Ambientale, Italy)
Cosetta Mazzini (PTR RIR ARPAE, Italy)
Liliana Panei (Dir DGIS MITE, Italy)
Aspects of NATECH risk assessment for the underground storage of natural gas
PRESENTER: Cosetta Mazzini

ABSTRACT. The aim of this paper is to explain the results of technical assessments carried out in accordance with Legislative Decree no. 105/15 (the Italian implementation of the Seveso III directive) regarding the identification of dangers and the evaluation of accident risk in underground natural gas storage facilities. Attention is focused on NaTech risk, i.e. risk relating to natural events or disasters which can cause one or more technological incidents such as fires, explosions and releases. The objective of this paper is to provide elements which can be used in the assessment of natural risk resulting from flooding, seismic activity, hydro-geological instability, etc. when carrying out the safety report evaluation of upper-tier establishments. To frame the issue, an overview is provided of Italian regulations and guidelines for the technical evaluation of the natural gas sector, with particular attention to the situation of Seveso establishments in Italy. The following main themes are developed: information about the structural organization of the company; information about the classification of the substances in accordance with the Seveso directive; the industrial safety of the establishments; and a methodological approach for risk analysis assessment with specific focus on NaTech risk analysis of establishments, in terms of preliminary analyses of events, identification of the events and accident scenarios, calculation of consequences, and safety and technical systems. Finally, some references are provided to identify the most "critical" parameters of the various risk analysis techniques which, if not adequately evaluated, can lead to invalid results, also taking into account the adoption of the correct safety measures, with the aim of limiting the consequences of an accident scenario.

14:00-15:20 Session 11I: Manufacturing Industry applications
Chair:
Marcin Hinz (University of Wuppertal, Germany)
Location: CQ-107
14:00
Dominik Brüggemann (University of Wuppertal, Germany)
Marcin Hinz (University of Wuppertal, Germany)
Stefan Bracke (University of Wuppertal, Germany)
An improvement in the application of semi-supervised learning to sparsely labelled data

ABSTRACT. Knowledge of the quality of a machine-manufactured product is crucial to its reliability throughout the product's use phase, and indispensable if a customer is to be assured of a certain quality standard. However, not every product's quality can be determined non-destructively. In such cases, machine learning methods are increasingly used to predict the quality of the product from production parameters and non-destructively measurable attributes. Due to the progress made in recent years, products can be reliably classified if the amount of training data is large enough; however, generating such training data often involves great effort and high costs. For this reason, the amount of data to be generated should be kept as small as possible while maintaining reliable classification by the machine learning algorithm. We therefore applied a modification of the Yarowsky algorithm, a method from the field of semi-supervised learning, in combination with deep neural networks (DNNs). The algorithm involves a stepwise expansion of the learning dataset: to expand it, we used samples that were assigned to a class with high confidence by the neural network. We conducted our experiments on a dataset containing production parameters of 3600 knives. The dataset features attributes of the surface topography determined by computer vision, together with gloss values; the gloss values serve as target variables and were divided into three classes. For the experiments, we used a neural network architecture previously determined to be very performant for this problem. The knowledge gained from our last publication on the topic was used to improve the application of the method, and by employing various modifications considered since that publication, the already good results could be further improved.
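A schematic self-training loop in the Yarowsky spirit follows; it uses scikit-learn on synthetic data, whereas the paper uses a DNN on the knife dataset, and the confidence threshold is an assumed value:

```python
# Stepwise expansion of the training set with high-confidence pseudo-labels.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:60] = True                                   # only 10% of labels known
y_work = np.where(labeled, y, -1)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
for round_ in range(5):
    clf.fit(X[labeled], y_work[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95              # confidence threshold
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    y_work[idx] = clf.classes_[proba[confident].argmax(axis=1)]  # pseudo-labels
    labeled[idx] = True
    print(f"round {round_}: training set grew to {labeled.sum()} samples")
```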

14:20
Agnieszka Tubis (Wroclaw University of Science and Technology, Poland)
Arkadiusz Żurek (Wroclaw University of Science and Technology, Poland)
Analysis of adverse events of the use of drones in the processes of material handling at a manufacturer from the chemical sector
PRESENTER: Arkadiusz Żurek

ABSTRACT. The development of the Logistics 4.0 concept, demographic changes, and increasing customer requirements regarding the speed of logistics services are driving greater interest in the use of autonomous vehicles in selected logistics processes. One of the most popular systems in use today is the Unmanned Aerial System, implemented primarily for routine operations or those associated with a threat to human life or health. The article aims to analyze the adverse events occurring in the internal flow processes of a multi-plant production company. A manufacturer from the chemical sector was selected for the analysis, as employees' contact with hazardous materials favors the search for solutions limiting human participation in routine logistics operations. The article presents the results of a five-stage analytical procedure, which included identifying potential adverse events, their effects, and the possible causes of their occurrence. The obtained results will form the basis for the proper preparation of the system and procedures for the implementation of drones in logistics processes, and for reducing the risk associated with operating this solution in the selected enterprise.

14:40
Junkai He (IRT-SystemX, France)
Selma Khebbache (IRT-SystemX, France)
Miguel F. Anjos (The University of Edinburgh, UK)
Makhlouf Hadji (IRT-SystemX, France)
Optimization of maintenance of complex manufacturing systems using remaining useful life prognostics
PRESENTER: Makhlouf Hadji

ABSTRACT. The maintenance of complex manufacturing systems is becoming increasingly critical to ensuring system availability and industrial productivity. This paper is concerned with using prognostic information for preventive maintenance planning in complex factories. Such a factory includes a series of processes, and each process is composed of one or more complex systems. A system incorporates different component types, and we consider redundant components used as backups for each type to guarantee system availability. In this paper, we consider the use of the prognostic Remaining Useful Life (RUL) to decide which component to operate, and hence to ensure the availability of systems such that the production of the factory is maximized over a given planning horizon. We design a mixed-integer linear programming model to minimize the overall production loss due to maintenance. A small example is presented to illustrate maintenance planning with the proposed approach. Extensive experimental results show the capability of our approach to reach optimal solutions for realistic instances.
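
A toy version of the scheduling idea can be written as a small MILP, for example with PuLP (assumed available); the components, horizon and constraint set below are invented for illustration and are far simpler than the paper's model:

    import pulp

    T = 6                          # planning horizon (periods)
    comps = ["c1", "c2"]           # redundant components of one type
    rul = {"c1": 2, "c2": 3}       # prognostic RUL in operating periods

    prob = pulp.LpProblem("rul_maintenance", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (comps, range(T)), cat="Binary")
    loss = pulp.LpVariable.dicts("loss", range(T), cat="Binary")

    for t in range(T):
        prob += pulp.lpSum(x[c][t] for c in comps) <= 1            # one operates at a time
        prob += pulp.lpSum(x[c][t] for c in comps) + loss[t] >= 1  # else production is lost
    for c in comps:
        prob += pulp.lpSum(x[c][t] for t in range(T)) <= rul[c]    # respect the RUL

    prob += pulp.lpSum(loss[t] for t in range(T))       # objective: minimise lost periods
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("lost periods:", int(pulp.value(prob.objective)))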

15:00
Marcin Hinz (University of Wuppertal, Germany)
Doha Meslem (University of Wuppertal, Germany)
Stefan Bracke (University of Wuppertal, Germany)
The application of k-means algorithm for the unsupervised analysis of surface topographies
PRESENTER: Marcin Hinz

ABSTRACT. Unsupervised learning is a type of machine learning that analyzes the provided data to uncover its patterns without requiring pre-specified target outputs. An algorithm is applied that learns from the data's patterns and provides an output, the patterns being found by, among other techniques, clustering the data. There is a vast literature on unsupervised learning, focused on explaining what unsupervised learning is, what algorithms are used and how to program them. Most current literature, however, revolves around understanding the algorithms rather than applying them to data to solve real-life problems; this saturates the theory while leaving a gap on the feasibility side. This paper aims to fill that gap, building a bridge between theory and practice in the field of unsupervised machine learning. The main idea of the paper is to perform a parameter analysis of an unsupervised learning algorithm, finding the combination of parameters that forms the clusters complying best with manufacturer-set criteria. The paper briefly discusses the theory behind the methods used, based on published research, but its main scope is the feasibility of the method. The data used in this paper are taken from a variety of cutlery samples with different surface topographies and divide into two main parts, mechanical and optical. The former consists of color, roughness and gloss values; the optical values consist of a variety of line and optical measurements obtained with the help of computer vision. For this purpose, three types of knives with different surface topographies were provided by the manufacturer. Surface topographies change as the production-process parameters are varied; consequently, optimizing these parameters improves the quality of the product. For analyzing the quality of the product, and hence of the accompanying optimization technique, unsupervised learning methods are used. The collected data on which the methods are applied are multivariate (around 40 variables), structured, numerical, and taken from the surfaces of the three knife types. The optical data are clustered using k-means, a distance-based unsupervised learning algorithm. The parameter study includes pre-processing the data using standardization and normalization techniques, along with tuning the k-means parameters, in order to find the set of parameters that yields the clusters most compatible with the manufacturer's set criteria, in this case the roughness values Ra; a sketch of such a study is given below. The data are thus clustered in an unsupervised fashion and, after a wide range of parameter possibilities has been studied, the algorithm is expected to provide an output fitting a certain prescribed (Ra) standard, thereby mimicking a supervised approach without training on labels, by studying the patterns instead. The significance of this study lies not only in reducing the present gap in the literature regarding practice, or in introducing a new perspective from which unsupervised learning can be approached, but also in its generic character, which allows the approach to be applied to other datasets extracted from finely ground surfaces.
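
The parameter study described above can be sketched as follows; the synthetic data, scalers and cluster counts are placeholders, and agreement with the manufacturer's Ra classes is scored here with the adjusted Rand index as one possible choice:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler, MinMaxScaler
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))            # ~40 surface variables per sample
    ra_class = rng.integers(0, 3, size=300)   # manufacturer-set Ra classes (3 levels)

    best = None
    for scaler in (StandardScaler(), MinMaxScaler()):     # standardize vs normalize
        Xs = scaler.fit_transform(X)
        for k in range(2, 7):                             # tune the number of clusters
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
            score = adjusted_rand_score(ra_class, labels) # agreement with Ra classes
            if best is None or score > best[0]:
                best = (score, type(scaler).__name__, k)
    print("best agreement:", best)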

14:00-15:20 Session 11J: S.18: Advanced tools and methods for occupational health and safety
Chair:
Leonardo Marrazzini (University of Pisa, Italy)
Location: CQ-008
14:00
Marcello Braglia (University of Pisa, Italy)
Marco Frosolini (University of Pisa, Italy)
Roberto Gabbrielli (University of Pisa, Italy)
Leonardo Marrazzini (University of Pisa, Italy)
Application of Lean Six Sigma techniques to the management and maintenance of special lifting equipment

ABSTRACT. This paper describes the application of Lean Six Sigma techniques in an energy technology company that provides solutions for energy and industrial customers worldwide. It outlines the systematic, step-by-step application of the Define, Measure, Analyze, Improve, and Control (DMAIC) cycle in the management and maintenance of special lifting equipment. Special lifting equipment and assembly aids are required to place/remove machines, systems, machine parts, and system parts in/from locations that are difficult to access, to move them using crane systems, to secure them tightly during transport to prevent hazards, to protect people and materials, and to guarantee that work is carried out economically and in a time-saving manner. This equipment is essential in carrying out production activities within the factory, and also falls within the essential health and safety requirements of the Machinery Directive 2006/42/EC, thus having to comply with legal obligations and be accompanied by the related documentation and procedures. Implementing DMAIC involved process mapping methods and rigorous data collection. Root cause analysis and Pareto diagrams were applied to track and archive the use and maintenance manuals and the EC Declarations of Conformity. In addition, a procedural standard was created regarding the transfer of equipment from suppliers and the delivery to the internal warehouses. Maintenance audits are also reported, and examples of recertification and disposal are illustrated. The application of proper tools and methods helped to identify, evaluate and mitigate risks, reduced the complexity of risk management, and improved the company's operational performance while maintaining a safe work environment.

14:20
Georgi Hrenov (Tallinn University of Technology, Estonia)
Conceptual Model for the Development of OHS Management in SMEs

ABSTRACT. Eleven Estonian small and medium-sized enterprises were investigated to identify the critical key elements of safety activities using the Method for Industrial Safety and Health Activity Assessment. The study was first conducted through interviews to clarify the role of key actors (such as employers, safety professionals, and safety representatives) in the administration of OHS and to explore perspectives on how to improve safety performance in SMEs. In the second phase, the qualitative findings were adopted and taken forward in the form of a questionnaire. Statistical analysis was performed using ANOVA, the principal component method, and independent t-tests. In organizations where management does not put safety first, employees follow suit in practice and do not try to follow safety policies. SME managers know little about the role of the employee representative and about the benefits of their activities. When safety representatives are elected only formally, this is practically irrelevant to OHS management, and often other employees are not informed of their rights and opportunities to hold the office. For the employer and the safety manager, the importance of the safety representative, who is aware of the problems in the work environment, often becomes apparent only in the event of an employee's injury or serious illness.

14:40
Victor Hrymak (TU Dublin, Ireland)
Improving the reliability of visual inspections conducted by fire and rescue services during familiarisation visits

ABSTRACT. Introduction Fire and rescue services commonly conduct proactive fire safety familiarisation visits to buildings within their geographical areas of responsibility. During such visits, fire crews typically look for existing fire safety hazards and relevant constructional details such as the presence of active and passive fire safety measures, the internal layout, and the location of exits, hydrants and fire panels. It is also common for fire crews to inform building management of any observed and relevant fire hazards in an informal and advisory capacity. During these important fire-prevention-centred visits, visual inspection is the predominant method used to identify existing fire hazards within the building under analysis. Therefore, the more fire hazards observed, the better prepared the fire and rescue service should be if they subsequently attend the building during an actual fire or rescue operation.

This frequent, widespread and global familiarisation practice by fire and rescue services raises two fundamental research questions that were investigated by this study. The first is how many existing fire hazards are typically observed during such familiarisation visits. The second is whether the reliability of visual inspection can be improved so as to increase the observation of in-situ fire hazards and thereby minimise the potential consequences of missing fire hazards during these visits.

Methodology A regional fire and rescue service in Ireland was recruited to take part in this study. During selected familiarisation visits, a researcher collated the forms used by fire crews to record fire hazards observed during their visual inspections. The experimental design involved one member of the fire crew being tasked with observing fire hazards using a novel visual inspection method, referred to here as systematic visual search. This method is designed to maximise hazard observation during visual inspections and has recently been used in studies by environmental health and safety professionals during industrial kitchen inspections, as well as by aircraft maintenance technicians for pre-flight visual inspections.

The systematic visual search method consists of three key steps. The first is to break down each room or area under analysis into its constituent constructional elements: each individual wall, the ceiling and the floor. The second step is to iteratively select a particular element for individual observation. The third step is to observe the entirety of the selected element by applying a visual eye-scan pattern that begins at the top left corner of the element and tracks to the right until the next element is reached. Visual search is then redirected to the left-hand side of the element, underneath the area already observed, and the process continues until the element has been fully observed. A useful analogy is to describe the systematic visual search method as reading the element in the same way as you would read a page of writing in a book: starting at the top of the page and moving left to right until the whole page has been read. In effect, a visual overlay is imagined which guides an eye-scan pattern across the element. In this manner, systematic visual search ensures the meticulous and exhaustive observation of the element under analysis, without missing any observable hazards present. The fire crew member recruited to use systematic visual search was the watch commander, and he was trained in its use as part of this study. This training was conducted in one three-hour session that included a demonstration from the researchers, after which he practised the method to a satisfactory standard and received feedback from the research team. During a six-month period in 2020, the watch commander, together with 17 different operational firefighting personnel, visited 23 separate premises for familiarisation purposes. The watch commander conducted all 23 familiarisation visits that were used in this study. The 17 firefighting personnel involved conducted an average of 2 familiarisation visits each (M=2.08, SD=0.85), using the same premises as the watch commander.

Results The mean number of fire hazards observed by fire crews using their normal custom and practice for visual inspection was 9.03 per visit (SD=4.39). In sharp contrast, the watch commander, who used systematic visual search, observed a mean of 28.87 fire hazards per visit (SD=10.72). In effect, just over three times as many observable hazards were elicited by using systematic visual search during familiarisation visits as by custom-and-practice visual inspection. These results were also highly significant, with a large effect size as measured by Cohen's d.
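
For the reported means and standard deviations, the effect size can be reconstructed as follows, assuming equal group sizes and the pooled-SD form of Cohen's d:

    import math

    m1, s1 = 9.03, 4.39     # custom-and-practice inspection
    m2, s2 = 28.87, 10.72   # systematic visual search
    pooled_sd = math.sqrt((s1**2 + s2**2) / 2)
    d = (m2 - m1) / pooled_sd
    print(round(d, 2))      # ~2.42, very large by conventional benchmarks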

Discussion Two key findings emerge from this study. The first is that missing observable fire hazards during familiarisation visits is a common occurrence. This finding will be a matter of concern from an operational fire and rescue service perspective. However, the literature demonstrates that, despite widespread thinking to the contrary, visual inspection remains an error-prone task that is difficult to do well. Decades of research in the visual psychophysics literature have confirmed that, due to the human cognitive limitations we all possess, the reliability of visual inspection cannot be taken for granted. In short, not seeing observable hazards during visual inspections should be an expectation, not an aberration. The second key finding is that by using systematic visual search during familiarisation visits, the number of observable fire hazards detected can be increased. In short, visual search reliability can be improved with the use of a behavioural visual search algorithm, as exemplified by systematic visual search.

Conclusion The number of observable hazards seen by fire crews during familiarisation visits can be increased by using systematic visual search. In doing so, fire and rescue services can improve their operational effectiveness through increased knowledge of fire hazards within the buildings visited. In addition, there does not appear to be any reason why systematic visual search cannot be extended and used whenever visual inspections are conducted for any fire or safety-related risk assessment, risk management or auditing purpose.

15:00
Martha Chadyiwa (University of Johannesburg, South Africa)
Juliana Kagura (University of the Witwatersrand, South Africa)
Aimee Stewart (University of the Witwatersrand, South Africa)
Occupational injuries in South African Parks and Nature Reserves, 2007-2019
PRESENTER: Martha Chadyiwa

ABSTRACT. This study describes the nature and incidence of occupational injuries among employees in South African parks and nature reserves and investigates whether the body part injured is correlated with the sex or age of the employee, or with the province in which the injury occurs. Electronic records containing occupational injury data from 2007 to 2019 were analysed. Records were retrieved from the electronic database maintained by the Department of Employment and Labour's Compensation Fund. Associations between occupational injuries, the body-part location of the injury and demographic variables were investigated using the chi-square test of independence. A total of 1531 individuals received compensation for occupational injuries over the period. Most (n = 963; 62.9%) were male and almost 30% were aged 30-39 years (n = 455; 29.7%). The lower extremities were most commonly affected (n = 454; 29.7%). Rates of occupational injuries differed across provinces (p < 0.05) and by sex (p = 0.015). To our knowledge, this is the first study that describes occupational injuries in South African parks and nature reserves. Our findings show that there is a need for the Department of Employment and Labour's Compensation Fund and the employers in the South African parks and nature reserve sector to plan and budget for the management of reporting and recording occupational injuries.
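
A sketch of the chi-square test of independence used in the study, applied to a hypothetical injury-by-sex contingency table (the row totals match the abstract's male/female counts, but the cell split across body parts is invented):

    from scipy.stats import chi2_contingency

    #             lower limb  upper limb  head/trunk
    table = [[290, 210, 463],    # male   (sums to 963)
             [164, 150, 254]]    # female (sums to 568)
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")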

15:40-17:00 Session 12A: Risk Management
Chair:
Terje Aven (Universitetet i Stavanger (UiS), Norway)
Location: LG-22
15:40
Terje Aven (Universitetet i Stavanger (UiS), Norway)
Azadeh Seif (Universitetet i Stavanger (UiS), Norway)
Konstantina Karatzoudi (Universitetet i Stavanger (UiS), Norway)
What are the core principles of risk management?
PRESENTER: Azadeh Seif

ABSTRACT. The ISO 31000 standard on risk management refers to eight risk management principles: Integrated; Structured and comprehensive; Customized; Inclusive; Dynamic; Best available information; Human and cultural factors; and Continual improvement. In this paper, the selection of these principles is discussed. The paper argues that other principles are more fundamental and important and should be added to guidelines and standards on how to implement risk management in organizations. First and foremost, a basic principle should be that risk management is based on current risk science knowledge related to concepts, principles, approaches, methods and models. From this overriding principle, the paper points to a set of more specific principles, covering both generic management principles and more specific risk management and risk science knowledge principles.

16:00
Emmanuel Plot (INERIS, France)
Maria Chiara Leva (TU Dublin, Ireland)
Ludovic Moulin (INERIS, France)
Vassishtasaï Ramany (SNOI, France)
Philippe Decamps (SNOI, France)
Frederic Baudequin (Interactive, France)
The Development of a holistic IT platform for major risk management: the MIRA tool
PRESENTER: Emmanuel Plot

ABSTRACT. There are natural human cognitive biases that affect many aspects of our activities, among them risk management. These biases have been studied by Amos Tversky and Daniel Kahneman (1974), Raymond Boudon (2003) and many others, recently including Steven Pinker (2021). Some of them share a common feature: the tendency to selectively search for or interpret information in a way that confirms one's representation of reality. Among these, a reification process can be at work, which can lead to forgetting that our representations are in some way hypotheses and, as such, need to be continuously compared and verified against reality. The moment we forget that the correspondence between our representations and reality (which allows us to act in the world we constitute) is only a conditional truth in need of constant verification is the moment when most human and organizational errors in major risk management occur. This cognitive bias seems to be at work when some stakeholders believe that risk assessments, as they stand, are the right basis on which industrial site managers should build their management systems. We are among those actors, because risk assessment is the best possible basis for risk management. However, although it has taken us a long time to understand this, we believe that these assessments are only right as abstractions, and mostly wrong as they exist today when they claim to exhaust the characteristics of reality to the point of allowing the management of major risks. They are true only in the conditional tense. Forgetting this detail (which is not a detail, of course), and for many other, perhaps more secondary, reasons, many organizations have a strong tendency to treat the risk assessment phase and the day-to-day work of managing facilities and practices sequentially. In these organizations, risk assessment is carried out first, at the design stage, by experts, and then integrated into procedures that field operators must follow, an approach which can only be justified if one considers that the risk assessment represents reality adequately, in a 1:1 ratio. As this is never the case, gaps appear between risk assessments and procedures, and between procedures and field practices, and management systems become siloed, opaque and difficult to update. We believe that the source of this situation lies in this cognitive bias of reification and its organizational extension, and that it explains the lack of reflection on the informal adjustments that tend to compensate for the holes in risk assessments. How do we fight this? This is the question that has preoccupied us for ten years of research, leading us to design a computer platform called MIRA (Monitoring Integrated Risks Actualisation). The right approach seems to be to fully combine risk assessment and management, but not in just any way. A common mistake is to think that it is enough to do risk assessments as they are currently carried out, only better, easier, faster, with a shorter periodicity (down to real time), or at lower cost. Yes, that would be a good thing, but it is not enough; it would miss the point. The basic problem is the reification of a single viewpoint on dangerous processes.
Above all, it is a question of being able to dynamically enrich the intelligibility of risk assessments from different points of view, and therefore to work with the different levels of abstraction of the various field teams and experts, in order to re-interrogate the relationship between the assessments and a reality in motion. The final objective is to change the place of risk assessment in organizations, and the way it is carried out and used. Given the complexity of facilities and operating practices, the complexity of risk issues, and the multiplicity of points of view to be taken into account, this approach can obviously only be managed in practice by an IT solution. But it is a human factors project, both in terms of its objectives and challenges and in terms of the IT development process itself: in the way the data model, workflows and interfaces are designed, supported by methods and models from the human and social sciences, with constant concern for the real organization of risk management. This is the project we started ten years ago, within the framework of the European TOSCA project, and which is still going on today; we are on the right track but there is still work to do. We propose to present the results of this research, focusing on the way the tool was developed and applied in the SNOI (Service National des Oléoducs Interalliés), and describing step by step the different phases of development of this tool in a historical perspective. NB: the SNOI is responsible for the French part of the NATO pipeline network in Central Europe (CEPS), known as the Common Defence Pipeline (ODC). It operates a network of 2,300 km of pipelines and 14 SEVESO depots, 7 of which are classified as upper tier, with a total capacity of 500,000 m3 distributed among more than 80 buried tanks.

References
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
Boudon, R. (2003). Raison, bonnes raisons. Paris: PUF.
Pinker, S. (2021). Rationality: What It Is, Why It Seems Scarce, Why It Matters. Viking Press.

16:20
Maria Grazia Gnoni (University of Salento, Italy)
Fabiana Tornese (University of Salento, Italy)
Diego de Merich (INAIL, Italy)
Armando Guglielmi (INAIL, Italy)
Mauro Pellici (INAIL, Italy)
Guido Micheli (Polytechnic of Milan, Italy)
Gaia Vitrano (Polytechnic of Milan, Italy)
Adoption level of Near-Miss Management Systems in the industrial sector: an exploratory survey

ABSTRACT. Near-miss events are usually identified as adverse events that could have turned into incidents/injuries but by chance resulted in a harmless situation. The importance of near-miss analysis lies in the roots of the events, since they usually share the same causes as accidents occurring at the workplace. Therefore, they can represent a useful source of information for understanding the causes of potential accidents/injuries and applying more effective preventive actions. Despite this, near-miss management systems (NMMSs) are still not widespread across industrial sectors, being mostly adopted in a few specific ones, such as the construction, mining, chemical, and nuclear industries. This study, carried out in collaboration with the Italian National Institute for Insurance against Accidents at Work (INAIL) and involving a sample of Italian companies in the industrial sector, presents the preliminary results of a survey aiming to understand the current level of adoption of NMMSs. After an introductory section, the survey is composed of two sections: the first addresses companies already adopting NMMSs, while the second considers companies not yet analysing near-miss events. The objective is to outline the state of adoption of NMMSs in several industrial sectors and, from this analysis, to identify the drivers and barriers companies face in adopting these systems, as well as possible strategies to support their diffusion in the industrial sector.

16:40
Surbhi Bansal (Proactima, Norway)
Caroline Metcalfe (Proactima, Norway)
Roger Flage (University of Stavanger, Norway)
Henrik Bjelland (Proactima/University of Stavanger, Norway)
Anders Jensen (Proactima/University of Stavanger, Norway)
Willy Røed (Proactima/University of Stavanger, Norway)
Outline of a risk management framework for future transport systems
PRESENTER: Henrik Bjelland

ABSTRACT. Future transport systems, through the application of cooperative intelligent transport systems (C-ITS), promise safety improvements, increased efficiency of the road network and reduced climate impact. These benefits hinge on increased automation and autonomous vehicles. Replacing humans with machines offers superior performance on repetitive and information-intensive tasks. Individual, inter-vehicle and dynamic understanding of traffic patterns will lead to inter-vehicle adaptation of behavior, reduced reaction times, and the prevention of more critical situations. These benefits do not come without challenges, associated with, e.g., interactions between automated and human controllers, algorithms and value judgments, vague system boundaries, vulnerability to malicious acts, regulation and standardization, and liability issues in case of accidents. In this paper we outline the relevant contents of a risk management framework to support studies on the implementation of future transport systems.

15:40-17:00 Session 12B: S.16: Risk and resilience analysis for the low-carbon energy transition
Chair:
Giovanni Sansavini (ETH Zurich, Switzerland)
Location: CQ-006
15:40
Matteo Spada (Zurich University of Applied Sciences (ZHAW), Switzerland)
Gunnar Dickey (Paul Scherrer Institut (PSI), Switzerland)
Peter Burgherr (Paul Scherrer Institut (PSI), Switzerland)
Accident risk assessment for solar photovoltaic manufacturing
PRESENTER: Matteo Spada

ABSTRACT. The energy sector is at a critical transition point considering the Paris Agreement and the necessary reduction in global greenhouse gas emissions to limit the global rise in temperatures to 1.5°C above pre-industrial levels [1]. In this context, the deployment of renewable energy technologies is considered a key path to the decarbonization of the energy sector and emission reductions. Among the available technologies, solar photovoltaic (PV) is expected to be a major contributor among renewables and currently accounts for 63% of new renewable installed capacity [2].

Beyond direct comparisons of cost and emissions between energy technologies, accident risk assessments can be utilized to understand and compare their overall sustainability performance more comprehensively. In fact, risk assessment for energy chains is one of the central social indicators in the context of integrated sustainability assessment [3]. The accident risk of solar PV has been evaluated in the past, but those results are now considered outdated due to the rapid technological improvements of recent years [4]. Therefore, the accident risk assessment needs to be updated in view of the growing importance of solar PV.

This study presents a comparative accident risk assessment for the manufacturing of the most important commercial and future PV technologies, i.e., monocrystalline silicon (mono-Si), multicrystalline silicon (multi-Si), cadmium telluride (CdTe), copper-indium-gallium-diselenide (CIGS), and tandem perovskite silicon (PK/c-Si TSC). Designated hazardous chemicals involved in these PV manufacturing chains are selected from life cycle inventories to characterize the risk of PV production processes. The assessment quantitatively estimates the accident risk of hazardous chemicals with risk indicators, e.g., fatality rate, using global incident data collected in PSI's Energy-related Severe Accident Database (ENSAD) from multiple industrial accident databases [5]. The chemical risk indicators are allocated to the PV technologies to estimate manufacturing accident risk, and to compare the relative contributions of the hazardous chemicals to the overall PV indicators. Results indicate that hydrochloric acid, hydrofluoric acid, and sodium hydroxide are the most significant hazardous chemicals based on their contribution to the estimated fatality and injury rates and their relatively high use in PV panel manufacturing. Hydrogen sulfide is shown to be the most significant hazardous chemical for CIGS. The risk contribution of hydrogen sulfide indicates that CIGS has the worst risk performance compared to the other technologies, while CdTe has the best risk performance due to the limited use of hazardous chemicals in the manufacturing phase.
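
The allocation step can be illustrated schematically: a chemical-specific fatality rate per tonne handled, multiplied by the tonnes of that chemical used per MW of manufactured capacity and summed over chemicals, yields a technology-level indicator. All figures below are invented placeholders, not ENSAD results:

    # Invented placeholder figures; ENSAD-based indicators would replace these.
    fatality_rate = {"HCl": 2e-6, "HF": 5e-6, "NaOH": 1e-6}   # per tonne handled
    use_per_MW = {
        "mono-Si": {"HCl": 3.0, "HF": 1.5, "NaOH": 2.0},      # tonnes per MW
        "CdTe":    {"HCl": 0.4, "HF": 0.1, "NaOH": 0.3},
    }
    for tech, usage in use_per_MW.items():
        risk = sum(fatality_rate[c] * tonnes for c, tonnes in usage.items())
        print(f"{tech}: {risk:.2e} fatalities per MW manufactured")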

References
1. IPCC, 2018. Summary for Policymakers. In: Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels, Masson-Delmotte, V., P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J.B.R. Matthews, Y. Chen, X. Zhou, M.I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, and T. Waterfield (eds.). World Meteorological Organization, Geneva, Switzerland, 32 pp.
2. Masson, G., Kaizuka, I., 2020. Trends in Photovoltaic Applications 2020, International Energy Agency (IEA) PVPS Task 1, T1-38:2020.
3. Volkart, K., Bauer, C., Burgherr, P., Hirschberg, S., Schenler, W. and Spada, M., 2016. Interdisciplinary assessment of renewable, nuclear and fossil power generation with and without carbon capture and storage in view of the new Swiss energy policy. International Journal of Greenhouse Gas Control, Vol. 54, Part 1, pp. 1-14, doi:10.1016/j.ijggc.2016.08.023.
4. Riveros, J., 2010. Accident Risk Evaluation of Photovoltaics (PV) in a Comparative Context. Master Thesis, ETH Zurich and Paul Scherrer Institute.
5. Kim, W., Burgherr, P., Spada, M., Lustenberger, P., Kalinina, A., Hirschberg, S., 2019. Energy-related Severe Accident Database (ENSAD): cloud-based geospatial platform. Big Earth Data, 1-27, doi:10.1080/20964471.2019.1586276.

16:00
Stefan Kazula (German Aerospace Center (DLR), Institute of Electrified Aero Engines, Germany)
Stefanie de Graaf (German Aerospace Center (DLR), Institute of Electrified Aero Engines, Germany)
Lars Enghardt (German Aerospace Center (DLR), Institute of Electrified Aero Engines, Germany)
Preliminary Safety Assessment of Polymer Electrolyte Membrane Fuel Cell Systems for Electrified Propulsion Systems in Commercial Aviation
PRESENTER: Stefan Kazula

ABSTRACT. This paper analyses polymer electrolyte membrane fuel cell systems (PEMFCSs) as main energy provider for electrified aircraft propulsion, identifies potential weaknesses as well as safety challenges and presents potential solutions. The general design, operating principles and main characteristics of hydrogen-fuelled low temperature PEMFCSs are described. The safety assessment process in aviation according to Aerospace Recommended Practices ARP4754A and selected methods according to ARP4761 are introduced. The functions of fuel cell systems in electrified aircraft powertrains are analysed and visualised in functional structure trees on aircraft, powertrain and fuel cell system level. By means of a Functional Hazard Analysis (FHA), potential malfunctions and their effects are investigated and their severity is evaluated. Critical failure modes are identified and requirements for acceptable failure probabilities are stipulated. Within the scope of a Fault Tree Analysis (FTA), the components of a fuel cell system are assigned to the identified functional structure trees and potential causes of critical failure modes are examined. The results of the mentioned analyses reveal design challenges associated with the application of fuel cell systems in electrified aircraft propulsion, for instance concerning functional independence as well as solutions for cold start conditions, heat transfer and lightweight design.
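
A minimal numerical illustration of how a fault tree combines basic events into a critical failure mode; the events, gate structure and probabilities are assumptions for illustration, not results of the paper's FTA:

    # Basic event probabilities per operating hour (assumed, illustrative).
    p_pump = 1e-3          # each of two redundant coolant pumps
    p_controller = 1e-5    # cooling controller

    p_and = p_pump * p_pump                        # AND gate: both pumps fail
    p_top = 1 - (1 - p_and) * (1 - p_controller)   # OR gate with controller failure
    print(f"top event 'loss of stack cooling': {p_top:.2e} per hour")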

16:20
Katherine Emma Lonergan (ETH Zurich, Switzerland)
Giovanni Sansavini (ETH Zurich, Switzerland)
Scheduling municipal carbon abatement projects under uncertainty

ABSTRACT. Around three quarters of the world's final energy use occurs in cities [1]; decarbonizing municipal energy systems is therefore critical to achieving the wider goal of mitigating climate change [2]. However, the ability to realize carbon reduction within municipal energy systems depends heavily on the capacities of the municipalities themselves. Municipal planners are constrained by budgets, staffing resources, and political cycles, and must always weigh multiple policy objectives. Furthermore, planning and executing projects is contingent upon first identifying and characterizing possible projects. Characterizing projects is a labour-intensive, bottom-up task, and cities do not always succeed in characterizing the associated uncertainties, e.g., in project duration, cost, staffing requirements, and carbon abatement potential. It is particularly difficult to characterize the benefits of municipalities' indirect energy decarbonization efforts, such as updating bylaws [1], given the involvement of other actors. Project characterization notwithstanding, prominent methods of emissions reduction planning also struggle to consider uncertainty [3].

We propose stochastic optimisation as a suitable tool to overcome the aforementioned municipal planning challenges. We demonstrate the approach with reference to a representative city aiming to minimize its energy-related emissions considering both long-term uncertainty and shorter-term variability [4]. In particular, the model output suggests an optimal planning schedule given project uncertainties, which are captured by a representative set of scenarios. We generate probabilistic scenarios [5], [6] based on empirical data, but note that scenarios could also be sourced from pre-existing city scenario studies [7]. We highlight the differences between risk-aware stochastic optimization and expectation-based planning approaches. While stochastic optimization is a well-known method for decision-making under uncertainty, our work brings particular focus to managing the risks of potential budget and time overshoot for actors whose emissions reduction is key to achieving the decarbonization targets for the entire energy system.
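
A minimal scenario-based sketch of the planning idea: select projects under a budget so that the expected abatement shortfall against a target is minimised. All project names, costs and scenario values are hypothetical; PuLP is assumed available:

    import pulp

    projects = {"retrofit": 40, "bylaw": 5, "district_heat": 80}   # cost per project
    scenarios = {  # abatement per project under three equiprobable scenarios
        "low":  {"retrofit": 10, "bylaw": 2, "district_heat": 25},
        "mid":  {"retrofit": 15, "bylaw": 4, "district_heat": 35},
        "high": {"retrofit": 20, "bylaw": 8, "district_heat": 45},
    }
    budget, target = 100, 40

    prob = pulp.LpProblem("abatement", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", list(projects), cat="Binary")
    short = pulp.LpVariable.dicts("short", list(scenarios), lowBound=0)

    prob += pulp.lpSum(projects[p] * x[p] for p in projects) <= budget
    for s in scenarios:
        # shortfall in scenario s if the selected projects miss the target
        prob += short[s] >= target - pulp.lpSum(scenarios[s][p] * x[p] for p in projects)
    prob += (1.0 / len(scenarios)) * pulp.lpSum(short[s] for s in scenarios)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({p: int(x[p].value()) for p in projects})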

[1] REN21, 'Renewables in Cities: 2021 Global Status Report', REN21 Secretariat, Paris, France, ISBN 978-3-948393-01-4, 2021. Accessed: Nov. 05, 2021. [Online]. Available: https://www.ren21.net/wp-content/uploads/2019/05/REC_2021_full-report_en.pdf
[2] T. Bruckner et al., 'Energy Systems', in Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, O. Edenhofer, R. Pichs-Madruga, Y. Sokona, E. Farahani, S. Kadner, A. Adler, I. Baum, S. Brunner, P. Eickemeier, B. Kriemann, J. Savolainen, S. Schlömer, C. von Stechow, T. Zwickel, and J. Minx, Eds. Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press, 2014, p. 87.
[3] F. Kesicki and P. Ekins, 'Marginal abatement cost curves: a call for caution', Climate Policy, vol. 12, no. 2, pp. 219-236, Mar. 2012, doi: 10.1080/14693062.2011.582347.
[4] P. Nesbitt et al., 'Underground mine scheduling under uncertainty', European Journal of Operational Research, vol. 294, no. 1, pp. 340-352, Oct. 2021, doi: 10.1016/j.ejor.2021.01.011.
[5] A. J. Conejo, M. Carrion, and J. M. Morales, Decision making under uncertainty in electricity markets, vol. 1. New York: Springer, 2010.
[6] Y.-P. Fang and G. Sansavini, 'Optimum post-disruption restoration under uncertainty for enhancing critical infrastructure resilience', Reliability Engineering & System Safety, vol. 185, pp. 1-11, May 2019, doi: 10.1016/j.ress.2018.12.002.
[7] City of Toronto, 'TransformTO Technical Backgrounder', City of Toronto, Toronto, Technical report, Apr. 2017. Accessed: Feb. 01, 2022. [Online]. Available: https://www.toronto.ca/services-payments/water-environment/environmentally-friendly-city-initiatives/transformto/transformto-climate-action-strategy/transformto-technical-scenario-modelling/

16:40
Andrej Stankovski (ETH Zurich, Switzerland)
Leon Locher (ETH Zurich, Switzerland)
Blazhe Gjorgiev (ETH Zurich, Switzerland)
Giovanni Sansavini (ETH Zurich, Switzerland)
Development of a blackout events database for the European electrical power system

ABSTRACT. Striving to achieve carbon-neutrality goals, electric power systems often operate in near-critical loading conditions, greatly increasing the risk of widespread blackouts. Therefore, learning from operational experience can be invaluable in identifying vulnerabilities, procedural deficiencies, and alarming trends in such complex systems. Unfortunately, most of this knowledge is spread across transmission system operators (TSOs) and regulatory bodies, and existing databases are often scope-specific, lack curation and contain a limited number of events. To overcome these research gaps, we aim to compile a database of safety-relevant blackouts in the European transmission system, with a primary focus on cascading events occurring in the last 20 years. The database contains over 500 events obtained from publicly available sources, including TSO reports, scientific literature, news articles, and other data collection efforts. The result is one of the largest publicly available curated datasets, which can be used by researchers, TSOs, regulators and other interested parties. Each event in the database is analyzed in detail and broken down by defined classification criteria, which include: general information (date, time, location, voltage level, type and duration of the event), indicators of severity and magnitude, primary causes, contributing factors, sequence of component failures, load conditions, cascade progression, and final states of the system/control zone. The granularity of the data makes the database suitable for machine learning techniques to identify critical factors, correlations, and patterns which can affect the security of the power system. Preliminary analyses show that cascading failures are observed in 32% of all data entries, with weather-related incidents (58%) being the most prominent cause of these events, followed by equipment failures (24%). The average total recovery time for cascading failures is 77.9 hours, which increases significantly for weather-related (119.8 h) compared to non-weather-related cascades (23.8 h). This is related to the magnitude of the failure, as weather plays a prominent role in 72% of large (>100,000 affected households) and 52% of all medium-sized cascading events (10,000-100,000 customers).
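
The kind of query such a database supports can be sketched with pandas; the four rows below are invented placeholders for the 500+ curated events:

    import pandas as pd

    # Invented placeholder rows; the real database holds 500+ curated events.
    df = pd.DataFrame({
        "cascading":     [True, True, False, True],
        "primary_cause": ["weather", "equipment", "weather", "weather"],
        "recovery_h":    [120.0, 24.0, 6.0, 80.0],
    })
    print("share of cascading events:", df["cascading"].mean())
    print(df[df["cascading"]].groupby("primary_cause")["recovery_h"].mean())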

15:40-17:00 Session 12C: S.24: Artificial Intelligence, Meta-Modeling and Advanced Simulation for the Analysis of the Computer Models of Nuclear Systems II
Chair:
Edoardo Patelli (University of Strathclyde, UK)
Location: CQ-007
15:40
Seung Gyu Cho (Ulsan National Institute of Science and Technology (UNIST), South Korea)
Seung Jun Lee (Ulsan National Institute of Science and Technology (UNIST), South Korea)
A Deep Support Vector Data Description Model for Abnormality Detection in a Nuclear Power Plant
PRESENTER: Seung Gyu Cho

ABSTRACT. Nuclear power plants have many safety and operating systems for safe and efficient power generation, with hundreds of indicators and tens of thousands of components. Nuclear power plant operators are trained to take action after selecting an appropriate abnormal operating procedure by judging the current state, so as to prevent an abnormal state from developing into an emergency. However, in the case of the Korean Advanced Power Reactor 1400, the abnormal operating procedures consist of 224 sub-procedures. No matter how well trained operators are, the pressure to find the appropriate abnormal operating procedure and restore normal conditions within a limited time creates the possibility of human error. To address these problems, studies on operator support systems for abnormal and emergency state diagnosis using artificial intelligence are being actively conducted. In most studies, normal and abnormal data are obtained through nuclear power plant simulators and then used for training and testing. However, it is difficult to obtain large amounts of abnormal data from actual nuclear power plants, and diagnosis models are vulnerable to abnormal states on which they were not trained. If the operator can quickly recognize the entry into an abnormal state, follow-up measures will be accelerated, human errors reduced, and nuclear safety enhanced. This study introduces an abnormality detection model that discriminates between normal and abnormal states by training on normal data only, together with its application alongside other abnormality diagnosis models. The abnormality detection model uses a deep support vector data description algorithm for semi-supervised anomaly detection from large amounts of normal data. The algorithm aims to find the smallest sphere surrounding the normal data in a feature space obtained with a 2D autoencoder and a convolutional neural network of the same structure. As a result, the abnormality detection model of this study can respond to all kinds of abnormal conditions and shows how accuracy can be increased while reducing the number of labelled examples required by existing abnormality diagnosis models.
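
A minimal sketch of the one-class Deep SVDD objective described above (map normal data into a feature space and shrink a sphere around its centre), with a tiny fully connected network standing in for the paper's autoencoder/CNN and random data standing in for simulator data:

    import torch
    import torch.nn as nn

    phi = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 8))
    x_normal = torch.randn(512, 20)        # stand-in for normal-state plant data

    with torch.no_grad():
        c = phi(x_normal).mean(dim=0)      # fix the sphere centre in feature space

    opt = torch.optim.Adam(phi.parameters(), lr=1e-3)
    for epoch in range(50):
        opt.zero_grad()
        # mean squared distance of embeddings to the centre (the SVDD loss)
        loss = ((phi(x_normal) - c) ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()

    # At test time, distance to the centre serves as the abnormality score:
    score = ((phi(x_normal[:1]) - c) ** 2).sum(dim=1)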

16:00
Leonardo Miqueles (Politecnico di Milano, Italy)
Ibrahim Ahmed (Politecnico di Milano, Italy)
Francesco Di Maio (Politecnico di Milano, Italy)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
A Grey-box Digital Twin-based Approach for Risk Monitoring of Nuclear Power Plants

ABSTRACT. Digital Twins (DTs) can enable real-time monitoring for improved risk assessment and tailored predictive maintenance of Nuclear Power Plants (NPPs). However, typical DTs are based on black-box models and their application is difficult to accept for such safety-critical systems. In this paper, we propose a grey-box DT comprised of: i) a real physical asset whose data/information is gathered from condition monitoring systems; ii) a dynamic white-box model of the NPP; and iii) a feedback loop that retroacts on the real asset. The grey-box DT-based approach is exemplified in a case study concerning a small modular dual fluid reactor (SMDFR) to show its applicability in NPPs.

16:20
Federico Antonello (Massachusetts Institute of Technology, United States)
Jacopo Buongiorno (Massachusetts Institute of Technology, United States)
Enrico Zio (MINES Paris, PSL University and Politecnico di Milano, Italy)
A Methodology for the Dynamic Risk Assessment of Nuclear Batteries

ABSTRACT. Nuclear Batteries (NBs) are a unique class of nuclear micro-reactors, which are gaining attention for their potential to be a transportable, flexible, affordable, and decentralized low-carbon power source. The commercialization and efficiency of NBs require dedicated advanced risk assessments to address potential hazards, threats, and vulnerabilities that may challenge both safety and security. This work performs the advanced safety assessment of the nuclear battery designed at MIT, making use of a novel methodology that integrates i) System-Theoretic Accident Model and Processes (STAMP) principles to guide a qualitative exploration of the threats of the novel design, ii) Best Estimate Plus Uncertainty (BEPU) framework to investigate the behavior of NBs under accidental scenarios, and iii) the Goal-Tree Success-Tree Master Logic Diagram (GTST-MLD) framework to assess risk quantitatively. The integration of STAMP, BEPU and GTST-MLD provides systematic risk insights, giving due account to the NB interactions and dependencies among systems, structures and components.

15:40-17:00 Session 12D: S.33: Collaborative Intelligence in Manufacturing and Safety Critical Systems. The CISC and Teaming-aI EU projects II
Chair:
Hector Diego Estrada Lugo (Technological University Dublin, Ireland)
Location: CQ-106
15:40
Chidera Winifred Amazu (Politecnico di Torino, Italy)
Micaela Demichela (Politecnico di Torino, Italy)
Davide Fissore (Politecnico di Torino, Italy)
Human-in-the-loop configurations in process and energy industries: a systematic review

ABSTRACT. The evaluation of human-in-the-loop performance is an area of growing interest in industries where safety-critical systems are in place, driven by concerns over the increasing complexity of automation and of new technologies for control and safety. Unlike a more traditional approach that evaluates the human and the system they work with separately, the human-in-the-loop perspective gives a holistic view of their interaction (human with automation or artificial intelligence) and its dynamics. It also emphasizes adapting the technology or automation to the human, who remains central, while considering factors such as risk. There is therefore a need to identify the relevant factors, novel measures, and methods, or improvements on existing methods, that can be adapted for this field of research. This paper presents an overview of the human-in-the-loop in the process and energy industries through a literature summary highlighting current factors and measures, methods, gaps, solutions and future work. Thirteen experimental and eleven observational studies were reviewed. It was observed that new factors, measures and techniques are being explored to fill some of the current gaps for the human-in-the-loop; for example, performance assessment has adopted new methods and modalities such as eye tracking and electroencephalography. The results and open questions from the papers reviewed, and possible future research opportunities, are presented and discussed.

16:00
Houda Briwa (Technological University Dublin, Ireland)
Maria Chiara Leva (Technological University Dublin, Ireland)
Rob Turner (Yokogawa, UK)
Alarm Management for human performance. Are we getting better?
PRESENTER: Houda Briwa

ABSTRACT. Industrial alarm systems are crucial for the process safety and operational efficiency of modern industrial plants, including oil & gas, chemical, petrochemical and power plants. With the evolution of control systems, in particular distributed control systems (DCS), the number of alarms in a plant has increased dramatically, leading to high operator workload, poor system performance and, in some cases, fatal accidents. The EEMUA 191 guideline and the ISA-18.2 standard, along with IEC 62682, define the recommended and required practices for effective alarm management. For instance, alarm rationalization is a key stage in the alarm management lifecycle defined in ISA-18.2. It seeks to define the optimal and most effective set of alarms needed to keep the process safe and within normal operating limits. This paper investigates how alarm management practices in the oil & gas industry have improved during the last two decades and how this has affected operator performance. It also provides an initial insight into the current state of alarm management practices, in particular alarm rationalization, in the oil & gas industry, highlighting the gap between documentation and reality through a comprehensive literature review.

16:20
Joseph Mietkiewicz (TU Dublin, Denmark)
Anders Madsen (HUGIN EXPERT A/S, Denmark)
Data-driven Bayesian network to predict critical alarms

ABSTRACT. Modern industrial plants rely on alarm systems to ensure their safe and effective functioning. Alarms give the operator knowledge about the current state of the plant. Trip alarms indicate a trip event, i.e. the shutdown of systems. Trip events in power plants can be costly and critical for the running of the operation. This paper demonstrates how trip events can be reliably predicted using a Bayesian network based on an alarm log from an offshore gas production facility. If a trip event is reliably predicted and its main cause identified, the operator can prevent it. The Bayesian network model developed to predict trip events is purely data-driven and relies only on historic data from the alarm log of the offshore gas production facility. We describe the method used to build the Bayesian network and the approach used to identify the key alarms related to a trip. We then assess the performance of the Bayesian network on the alarm log. The preliminary performance results show significant potential in predicting trips and identifying key alarms. The model is developed to support the decision-making of a human operator and increase the performance of the plant.
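
A schematic of the approach using pgmpy (assumed available): a discrete Bayesian network links two precursor alarms to a trip event and is queried by variable elimination. The structure, names and probability tables are invented; in the paper the model is learned from the alarm log:

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    model = BayesianNetwork([("alarm_A", "trip"), ("alarm_B", "trip")])
    cpd_a = TabularCPD("alarm_A", 2, [[0.95], [0.05]])   # P(alarm_A = 0/1)
    cpd_b = TabularCPD("alarm_B", 2, [[0.90], [0.10]])
    cpd_trip = TabularCPD(
        "trip", 2,
        [[0.999, 0.80, 0.70, 0.20],    # P(trip=0 | alarm_A, alarm_B)
         [0.001, 0.20, 0.30, 0.80]],   # P(trip=1 | alarm_A, alarm_B)
        evidence=["alarm_A", "alarm_B"], evidence_card=[2, 2])
    model.add_cpds(cpd_a, cpd_b, cpd_trip)

    infer = VariableElimination(model)
    # probability of a trip given that both precursor alarms are active
    print(infer.query(["trip"], evidence={"alarm_A": 1, "alarm_B": 1}))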

16:40
Ammar Abbas (Software Competence Center Hagenberg GmbH (SCCH), Austria)
Georgios Chasparis (Software Competence Center Hagenberg GmbH (SCCH), Austria)
John Kelleher (ADAPT Research Center, Technological University Dublin, Ireland)
Deep Residual Policy Reinforcement Learning as a Corrective Term in Process Control for Alarm Reduction: A Preliminary Report
PRESENTER: Ammar Abbas

ABSTRACT. Conventional process controllers (such as proportional-integral-derivative controllers and model predictive controllers) are simple and effective once they have been calibrated for a given system. However, it is difficult and costly to re-tune these controllers if the system deviates from its normal conditions and starts to deteriorate. Recently, reinforcement learning has shown significant promise for learning process control policies through direct interaction with a system, without needing a process model or knowledge of the system characteristics, since it learns the optimal control directly from the environment. However, developing such a black-box controller is challenging when the system is complex, and it may not be possible to capture the complete dynamics of the system with a single reinforcement learning agent. Therefore, in this paper we propose a simple architecture that does not replace the conventional proportional-integral-derivative controllers but instead augments the control input to the system with a reinforcement learning agent. The agent adds a correction factor to the output provided by the conventional controller to maintain optimal process control even when the system is not operating under its normal conditions.
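
A sketch of the proposed architecture: the input to the process is the conventional controller's output plus a corrective term from the agent. The PID gains, toy process model and stub policy below are assumptions for illustration, not the paper's implementation:

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_err = 0.0

        def __call__(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_err) / dt
            self.prev_err = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def residual_policy(observation):
        # stub for the trained RL agent's bounded corrective action
        return 0.0

    pid = PID(kp=1.2, ki=0.1, kd=0.05)
    setpoint, y, dt = 50.0, 42.0, 1.0
    for step in range(10):
        error = setpoint - y
        u = pid(error, dt) + residual_policy((setpoint, y))  # combined control input
        y += 0.1 * (u - 0.5 * y) * dt                        # toy first-order process
        print(step, round(y, 2))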

15:40-17:00 Session 12E: System Reliability II
Chair:
Christian Tanguy (Orange, France)
Location: LG-20
15:40
Jasper Behrensdorf (Leibniz University Hannover, Germany)
Matteo Broggi (Leibniz University Hannover, Germany)
Michael Beer (Leibniz University Hannover, Germany)
Reliability analysis of interdependent networks using sliced-normals

ABSTRACT. The study of dependencies within and between networks is an important step in the reliability analysis of complex infrastructure networks. Neglecting possible dependencies between networks can lead to drastic results, as seen for example in the 2003 blackout in Italy. With the survival signature for the numerical reliability analysis of complex networks, the network structure is separated from the probabilistic information regarding component failures. This separation allows for complex modelling of dependencies.

Copulas have proven to be an effective tool for modelling these dependencies due to their natural separation of the dependence structure from the marginal distributions. Vine copulas, due to their graph-based nature, have been shown to be well suited to modelling dependencies in a network context. However, estimating an adequate vine from data is a complex process involving selection of appropriate families, parameters, and the structure of the vine itself. In 2019, Crespo et al. proposed a class of probability distributions called Sliced-Normals as a more versatile alternative. Sliced-Normals allow complex dependencies to be characterized flexibly with low modelling effort.

This work applies Sliced-Normal distributions to the reliability analysis of complex dependent networks and investigates their advantages compared to the vine copula approach.
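
A sketch of the Sliced-Normal construction (after Crespo et al., 2019): lift the data with polynomial features and fit a Gaussian in the lifted space, which induces a flexible, possibly multimodal dependence structure in the original space. The degree, data and regularisation below are assumptions, and the density is left unnormalised:

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    data = rng.normal(size=(500, 2))        # stand-in for component-state data

    lift = PolynomialFeatures(degree=3, include_bias=False)
    Z = lift.fit_transform(data)            # polynomial lifting of the data
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])  # regularised
    prec = np.linalg.inv(cov)

    def sn_unnormalised_density(x):
        # Gaussian in the lifted space, evaluated at the lifted point
        z = lift.transform(np.atleast_2d(x)) - mu
        return np.exp(-0.5 * np.einsum("ij,jk,ik->i", z, prec, z))

    print(sn_unnormalised_density([0.0, 0.0]))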

16:00
Jacek Malinowski (Systems Research Institute, Polish Academy of Sciences, Poland)
A fast method for enumerating all minimal cut-sets in a network with links and nodes failures

ABSTRACT. The paper presents a newly developed method for enumerating all minimal cut-sets in a network with link and node failures. The network is assumed to be operable if there is a path, composed of operable elements, from the source to the terminal node. If all elements of a cut-set are failed then no such path exists. A minimal cut-set contains no other cut-set. The method uses the tree of loop-free paths connecting the source with the terminal node. This tree is scanned from bottom to top, and a list of sets of network elements is produced at every multi-child node. The sets which would extend to non-minimal cut-sets are removed from the list. Finally, at the root node, the list of all minimal cut-sets is obtained. It should be noted that the sets to be removed are selected without comparing them to other sets. This feature makes the presented method faster than many other algorithms for generating minimal cut-sets, that identify them by comparing the newly obtained candidate sets with the previously obtained ones. Apart from being fast, the new algorithm is universal in the sense that it can be applied to networks modeled by directed, undirected, or mixed graphs (having directed and undirected edges). Furthermore, the cut-sets can be composed of both the links and the nodes. This is an important property in view of the fact that many existing algorithms only find cut-sets composed of links, which strongly affects their applicability. Last but not least, if the tree of acyclic paths is limited to the paths not exceeding a given length, the presented algorithm finds all minimal cut-sets intersecting these paths, but leaves out the cut-sets that only intersect the longer paths. This aspect makes the algorithm suitable for the analysis of computer or transportation networks where a limit is often imposed on the length of a route.
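
For comparison, a brute-force baseline (not the tree-based method of the paper) that enumerates minimal cut-sets over both links and nodes on a small bridge network; it is exponential in the number of elements and only feasible for toy cases:

    from itertools import combinations

    # Toy bridge network: edges e1..e5 between source "s" and terminal "t";
    # internal nodes "a" and "b" may also fail, the source/terminal may not.
    edges = {"e1": ("s", "a"), "e2": ("s", "b"), "e3": ("a", "b"),
             "e4": ("a", "t"), "e5": ("b", "t")}
    elements = list(edges) + ["a", "b"]

    def connected(failed):
        # True if an operable s-t path survives the failed elements
        ok = [(u, v) for e, (u, v) in edges.items()
              if e not in failed and u not in failed and v not in failed]
        seen, stack = {"s"}, ["s"]
        while stack:
            n = stack.pop()
            for u, v in ok:
                for a, b in ((u, v), (v, u)):     # undirected traversal
                    if a == n and b not in seen:
                        seen.add(b)
                        stack.append(b)
        return "t" in seen

    cuts = []
    for size in range(1, len(elements) + 1):
        for cand in map(frozenset, combinations(elements, size)):
            if any(c <= cand for c in cuts):
                continue                          # contains a smaller cut-set
            if not connected(cand):
                cuts.append(cand)                 # minimal, by order of enumeration
    print([sorted(c) for c in cuts])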

16:20
Temitope Ohiani (University of Strathclyde, UK)
Edoardo Patelli (University of Strathclyde, UK)
Information Propagation Method for Reliability Analysis in Complex Systems
PRESENTER: Temitope Ohiani

ABSTRACT. Resilience in large-scale complex engineering systems is often influenced by the system's reliability, among many other factors. Reliability analysis allows the identification of vulnerabilities and critical elements in these systems, as well as verification against safety and regulatory policies. This work adopts the concepts of belief propagation for inference in a network graph, where the message being passed is information about a node. One key advantage of the proposed message passing algorithm is its polynomial computation time with increasing system complexity. Furthermore, the method easily handles systems with multiple sinks, because the marginal of every single node is available by the end of the last propagation step. For the discussed case of reliability, this information is related to failure probabilities. However, applications of this method extend beyond reliability analysis.
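
As a toy illustration of the idea (not the authors' algorithm; the tree-shaped system, independence assumption and names are all assumptions), the sketch below passes a single "supplied" message down a tree of independent components, so that every node's marginal probability of being supplied from the root is available after one pass, in time linear in the number of nodes and for all sinks at once.

```python
def propagate(children, p_work, root):
    # p_work[v]: probability that component v is operable (independent)
    # message along the edge u -> v: probability that u is supplied
    supplied = {root: p_work[root]}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            supplied[v] = supplied[u] * p_work[v]
            stack.append(v)
    return supplied

# Example: root r feeds a and b; a feeds c. Marginals for every node
# (including both "sinks" b and c) come out of the single pass.
marginals = propagate({"r": ["a", "b"], "a": ["c"]},
                      {"r": 0.99, "a": 0.95, "b": 0.90, "c": 0.98}, "r")
```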

16:40
Elena Zaitseva (University of Zilina, Slovakia)
Vitaly Levashenko (University of Zilina, Slovakia)
Jan Rabcan (University of Zilina, Slovakia)
Miroslav Kvassay (University of Zilina, Slovakia)
A new method for Multi-State System reliability analysis based on uncertain data and its application in the medical domain

ABSTRACT. The structure function is a typical mathematical representation of an investigated system in reliability analysis. This function maps every possible combination of component states to a system performance level from the point of view of system reliability. The structure function is constructed based on complete information about the system structure and the possible component states. However, in many practical problems this complete information is not available, because the data from which it could be derived cannot be collected. In this paper, we propose a new method for the construction of the structure function from uncertain or incomplete initial data, based on a Fuzzy Decision Tree.
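
In standard multi-state reliability notation (a generic textbook definition, not notation taken from this paper), the structure function can be written as

```latex
\phi \colon \{0,\dots,m_1-1\} \times \dots \times \{0,\dots,m_n-1\}
\to \{0,\dots,M-1\}, \qquad z = \phi(x_1,\dots,x_n),
```

where x_i is the state of component i (with m_i possible states) and z is the system performance level. The method proposed here estimates this mapping from incomplete data via a Fuzzy Decision Tree rather than from a complete state table.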

15:40-17:00 Session 12F: Human Factors and Human Reliability: HF in novel automation contexts
Chair:
Myrto Konstandinidou (NCSR DEMOKRITOS, Greece)
Location: LG-21
15:40
Andrew Wright (Corporate Risk Associates (CRA), UK)
Andreas Bye (IFE, Institute for Energy Technology, Norway)
Do Modern Control Rooms Pertain New Error Mechanisms?
PRESENTER: Andreas Bye

ABSTRACT. Existing methods for Human Reliability Assessment are primarily based on data collected for either analogue-based control systems or, more commonly, 'hybrid' control systems, where an analogue design has been retrofitted with digital capability. In modernized power stations, and especially in new builds, fully computer-based interfaces and controls are used, commonly without an analogue counterpart. The range of new features in a modern power station may also be accompanied by new operating philosophies, changed human-automation interaction and different types of human-system interfaces. As a result, the nature of human work is expected to change, introducing new types of error modes and failure mechanisms for the operating crews or, more subtly, altering the potential sources of existing modes/mechanisms. With the development of multiple new build projects in the UK, EDF Energy Generation Limited is interested in understanding whether current error mode taxonomies remain applicable for modernized systems, and the implications for Human Reliability Assessment. This paper presents activities to establish a catalogue of new failure mechanisms and human error modes relevant to modern systems, particularly with regard to automation, situational awareness, fully digital Human System Interfaces and computerized, state-based, looping procedures. The implications of these error modes for Human Reliability Assessment are considered.

16:00
Jeffrey Julius (Jensen Hughes, United States)
Mary Presley (Electric Power Research Institute (EPRI), United States)
Andrew Wright (CRA, UK)
Kaydee Gunter (Jensen Hughes, United States)
Erin Collins (Jensen Hughes, United States)
Updating HRA for Digital Environments
PRESENTER: Mary Presley

ABSTRACT. The human reliability analysis (HRA) portion of a probabilistic risk assessment (PRA) plays a key role in understanding the actions important to nuclear power plant safety and ensures the human interactions do not mask or overstate the importance of hardware components in the PRA. While methods exist for modeling human reliability for many actions, some notable gaps or inefficiencies remain. Under this project, EPRI works to systematically fill identified industry gaps in HRA through research and data gathering activities related to HRA in the digital environment, including HRA with digital instrumentation and controls and/or computer-based procedures. A research project in 2021 reviewed HRA data and data sources, then incorporated the data analysis results into guidance and tools for HRA in a Digital Environment. The end result is a technical report with a preliminary HRA method for the Digital Environment adapted from existing HRA methods. This paper provides an overview of the development process, including a review of data sources, taxonomy and approach used to update existing HRA methods.

16:20
Jinkyun Park (Korea Atomic Energy Research Institute, South Korea)
Inseok Jang (Korea Atomic Energy Research Institute, South Korea)
Yochan Kim (Korea Atomic Energy Research Institute, South Korea)
Comparing the effect of task complexities on the occurrence of Error of Omissions and Error of Commissions in an analog and digital environment
PRESENTER: Jinkyun Park

ABSTRACT. According to the operating experience of diverse industries that commercially run complicated systems, it is evident that the contribution of degraded human performance (e.g., human errors) to their safety is significant. This strongly implies that, in terms of their sustainability, one of the utmost goals is to secure detailed countermeasures for preventing the occurrence of human errors. One promising approach to this issue is to understand the specific settings in which the chance of human error occurrence significantly increases. In this regard, an existing study revealed that the occurrence probability of human errors observed in the full-scope training simulators of domestic Korean nuclear power plants (NPPs) seems to be strongly affected by the complexity of proceduralized tasks [1]. In addition, in the case of a task environment equipped with diverse analog HMIs (Human Machine Interfaces), the occurrence probability of EOCs (Errors of Commission) appears to have a significant correlation with the complexity of proceduralized tasks [2]. These results imply that the complexity of proceduralized tasks can be used as a baseline that allows us to understand when and why human errors occur. For this reason, in this study, the correlations between the occurrence probabilities of EOCs and EOOs (Errors of Omission) and the complexity of proceduralized tasks are investigated using human performance data collected from a full-scope training simulator of NPPs. This full-scope simulator is a replica of an MCR (Main Control Room) installed in domestic Korean NPPs that is equipped with diverse digital HMIs (i.e., a digital task environment). If there are significant correlations between the occurrence probabilities (of both EOCs and EOOs) and the complexity of proceduralized tasks, it is expected that we will have a relevant clue explaining when and why human errors occur.

[1] Jang, I., Kim, Y., and Park, J., 2021. Investigating the effect of task complexity on the occurrence of human errors observed in a nuclear power plant full-scope simulator. Reliability Engineering and System Safety, 214, 107704 (https://doi.org/10.1016/j.ress.2021.107704)
[2] Park, J., Kim, H. E., and Jang, I., 2022. Empirical estimation of human error probabilities based on the complexity of proceduralized tasks in an analog environment. Nuclear Engineering and Technology, in press (https://doi.org/10.1016/j.net.2021.12.025)

16:40
Alf Ove Braseth (Institute for Energy Technology, Norway)
Magnhild Kaarstad (Institute for Energy Technology, Norway)
Jon Bernhard Høstmark (KONGSBERG Group, Kongsberg Maritime, Norway)
Gudbrand Strømmen (KONGSBERG Group, Kongsberg Maritime, Norway)
SUPERVISING AUTONOMOUS SHIPS – A SIMULATOR STUDY WITH NAVIGATORS AND VESSEL TRAFFIC SUPERVISORS
PRESENTER: Alf Ove Braseth

ABSTRACT. This paper explores the supervision of partly autonomous ships from a remote operation centre, focusing on safety and efficiency. The purpose is to provide input for the design of operating displays. The research is financed by the Research Council of Norway through “Land-based operation of autonomous ships”. This paper explores the following questions: i) What is the preferred supervision viewpoint for one and for three ships? ii) Which information is needed in the displays?

A simulator study was performed with eight participants, all experienced navigators and vessel traffic operators. The participants were shown five short videos of one or three autonomous cargo ships making a crossing in the Oslo fjord in a typical traffic situation for the area. They viewed the scenarios in three different setups: i) a traditional vessel-centric layout, much like the ship's bridge; ii) a larger map-centred layout, showing the ship and its maritime environment from a distance ("bird's view"); and iii) a flipping setup, rotating the vessel-centric view (i) among three ships. The participants were asked to take the role of expert commentators and report their potential actions and communication, and when and why they would change course or take manual control. The data collected consisted of audio and video recordings, questionnaires, and structured interviews.

The results for single-ship supervision were inconclusive regarding the preferred viewing perspective, but workload was rated highest, and perceived situational understanding lowest, for the vessel-centric viewpoint. For three ships supervised simultaneously, seven of eight participants preferred the larger map-centred layout. The participants rated their workload highest and situation understanding lowest in the flipping setup. Information identified as important for supervision includes ship route plan and tracking; mouse-over extended ship info; map zooming; closest point/time of approach with alarms; adjustable vectors; rate of turn; spotting small vessels/activity; 360-degree camera view; signal horn; detailed harbour view; and wind speed and direction. In sum, the results suggest that the larger map-centred layout is the most suitable, particularly for supervision of more than one ship. Further work should focus on this approach, using a simulator environment updated according to the study findings.

15:40-17:00 Session 12G: Prognostics and System Health Management VI: application to energy systems
Chair:
Christophe Berenguer (Grenoble Institute of Technology, France)
Location: CQ-107
15:40
Jian Zuo (Gipsa-lab, University of Grenoble Alpes, France)
Catherine Cadet (Gipsa-lab, University of Grenoble Alpes, France)
Christophe Berenguer (Gipsa-lab, University of Grenoble Alpes, France)
Zhongliang Li (LIS Laboratory, Aix-Marseille University, France)
Rachid Outbib (LIS Laboratory, Aix-Marseille University, France)
A dynamic load allocation strategy for a stochastically deteriorating multi-stack fuel cell system
PRESENTER: Jian Zuo

ABSTRACT. Fuel cells use hydrogen and oxygen as reactants, with water as the only product. Proton exchange membrane (PEM) fuel cells combine low operating temperature, high power density, and easy scale-up, making them one of the most suitable clean energy devices. In practice, single fuel cells are fabricated into a single stack or put together into several stacks, i.e., a multi-stack fuel cell system, to provide the power. Though promising for various industrial applications, e.g., fuel cell electric vehicles (FCEV), current PEM fuel cells still suffer from limited durability. In particular, the durability of a multi-stack PEM fuel cell system is one of the key problems in current fuel cell technologies. The growing power demand requires fuel cell systems with higher capacity; a multi-stack PEM fuel cell system enables flexible operating modes among the stacks, and the parallel structure greatly improves system reliability compared with a single-stack configuration. Moreover, the possibility of power-sharing among stacks helps improve the overall system efficiency. A solution to the multi-stack durability challenge is to build an energy management strategy that decides the load allocation among stacks. This load allocation is beneficial for decreasing the global system deterioration, thus improving the system lifetime. This work proposes a load allocation strategy based on the deterioration information of a multi-stack PEM fuel cell system under dynamic load demands; the proposed strategy aims at extending the system lifetime through better management of its deterioration. It is assumed that the overall resistance of a fuel cell characterizes its deterioration, as it carries key aging information, and a load-dependent stochastic deterioration model is built for the overall resistance of the considered fuel cell. The objective of the allocation strategy is to maximize the system lifetime while satisfying the external dynamic load demands, which is formulated as a dynamic optimization problem. The decision-making process takes the fuel cell resistance (the deterioration index) of the different stacks as input and decides the optimal load allocation for the PEM fuel cell system. In the proposed setting, the load allocation is updated whenever there is a demand change event, until system failure, characterized by the remaining stacks in the system failing to supply the required load. The behavior and performance of the proposed allocation strategy are assessed by Monte Carlo simulation. The lifetime distribution obtained with the proposed load allocation strategy is compared with that of the classical average load split method; it tends to be narrower, with a higher central value, confirming that the strategy can be used to extend the system lifetime.
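
A minimal sketch of the kind of load-dependent shock deterioration described above (all parameter values and functional forms are illustrative assumptions, not those of the paper; under a fixed load share the non-homogeneous process reduces to a homogeneous one): shocks arrive faster and are larger at higher load, and Monte Carlo replications give the lifetime distribution of a single stack.

```python
import numpy as np

rng = np.random.default_rng(42)

def stack_lifetime(load, r_fail=1.0, lam0=2.0, alpha=1.0, mu0=0.01):
    """Time until the stack resistance (deterioration index) crosses
    r_fail under a compound Poisson shock process whose rate and shock
    size both grow with the applied load (illustrative forms)."""
    t, r = 0.0, 0.0
    rate = lam0 * (1.0 + alpha * load)    # shock arrival intensity
    mean_jump = mu0 * (1.0 + load)        # mean resistance increment
    while r < r_fail:
        t += rng.exponential(1.0 / rate)  # next shock epoch
        r += rng.exponential(mean_jump)   # random positive increment
    return t

lifetimes = [stack_lifetime(load=0.5) for _ in range(10_000)]
print(np.mean(lifetimes), np.std(lifetimes))
```

Repeating such simulations under different allocation rules is the kind of comparison (allocation strategy versus average load split) that the paper carries out at system level.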

16:00
Lea Hannah Guenther (University of Wuppertal, Germany)
Pit Fiur (University of Wuppertal, Germany)
Stefan Bracke (University of Wuppertal, Germany)
Usage data analysis of lithium-ion batteries as a base for the prediction of the product reliability in a specific second life application

ABSTRACT. To slow down climate change and counteract global warming, a general reduction of CO2 emissions is pursued. This goal has a massive impact on the mobility sector, and a transformation to sustainable powertrains is sought. The automobile industry is changing accordingly, and the market shares of electric and hybrid vehicles have increased significantly in recent years. These types of powertrains are based on the use of a traction battery. The end-of-life (EOL) of the battery within the vehicle is defined as a capacity loss of 20 % to 30 %. Consequently, the batteries retain a remaining capacity after their EOL in the vehicle and can be reused in an alternative second life application with a lower load case. The reuse of batteries in a second life application is a more efficient use of resources, reduces the disposal of these products and saves the energy consumed in the production of new batteries. One possible second life application is the reuse of the battery as stationary energy storage in private households, in conjunction with photovoltaic installations, to store renewable energy. Besides the advantages, reuse poses multiple challenges regarding the reliability requirements, for example the necessity to guarantee safety during the second life. The safety and the reliability of the battery are influenced by battery degradation, which depends on the specific usage behavior as well as on the environmental conditions. Therefore, the individual usage behavior has to be considered in the development of a degradation model and a reliability assessment for used traction batteries. For this purpose, field data of the first and second life use cases are of special interest for the estimation of the battery's reliability in a second life application.

This paper presents a usage-based analysis of two data sets and their comparison with regard to the development of a reliability model for a second life application of traction batteries in a stationary household with a photovoltaic installation. Two field data sets have been collected within these research activities. One is a data set of an electric vehicle. The other contains data from a private household equipped with a stationary battery storage system and a photovoltaic installation. Both data sets are based on batteries in their first life application. The data sets are analyzed and the battery usage scenarios are characterized under consideration of the specific circumstances. Furthermore, important similarities and differences between the two applications are outlined. The analysis is focused on the development of a reliability model for the reuse of a traction battery in a stationary second life application.

16:20
Kostas Chairetaikis (NCSR DEMOKRITOS, Greece)
Sarantis Kotsilitis (NCSR DEMOKRITOS, Greece)
Effie Marcoulaki (NCSR DEMOKRITOS, Greece)
Monitoring and analysis of electric motor amperage towards detection of future motor failures
PRESENTER: Effie Marcoulaki

ABSTRACT. Electric Motor-Driven Systems (EMDS) include equipment such as pumps, fans, compressors and material handling & processing equipment in industry, infrastructure and large buildings. New motor designs incorporate real-time monitoring and predictive diagnostics systems to enable advanced motor management, improving reliability, efficiency and safety [1]. This however does not cover existing systems, where the retrofitting costs to include such functionalities may be prohibitive. Electricity monitoring is gaining interest in the industrial world as a means to support energy efficiency as well as Condition Monitoring and Predictive Maintenance (PdM). Electric signal analysis can help identify excessive current loads or signal abnormalities related to bearing, stator, motor or eccentricity faults, and propose targeted mitigating actions such as equipment updates, process modifications to release mechanical stress, or the installation of additional devices to balance the power load. Recent studies have proposed novel ML/AI techniques for signal analysis, with accuracy of over 90% in fault detection [2], usually combining additional data and using wireless sensor systems [3].

The current work considers a real industrial case for which low frequency electric load data are available over a period of one year. The analysis tools involve machine learning algorithms, trained using the company's maintenance logs and expert judgement to create the ground truth. The low frequency current signal (0.1 Hz) is collected from the plant's PLCs. In a next step, we implemented feature engineering to extract useful descriptors from thirty-minute time windows of electric current measurements. After standard preprocessing steps and normalization of the dataset, an ensemble of ML algorithms is trained, including CART algorithms, SVM and k-NN. All algorithms performed at a satisfactory level in classifying the healthy and the faulty instances of the dataset. Decision trees and Gradient Boosted trees obtained results indicating an accuracy of 80-85% in the early detection of deviations from nominal operation and potential faults. These results can be fine-tuned according to the company's priorities, since there is a clear trade-off between early detection for the prevention of unexpected shutdowns and false alarms.
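
The following sketch illustrates the kind of windowing, feature engineering and training pipeline described above, using scikit-learn. The summary statistics, file names and labels are placeholders, not the study's actual descriptors or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def window_features(current, fs=0.1, minutes=30):
    # a 0.1 Hz signal yields 0.1 * 60 * 30 = 180 samples per window
    n = int(fs * 60 * minutes)
    w = current[: len(current) // n * n].reshape(-1, n)
    return np.column_stack([w.mean(1), w.std(1), w.max(1), w.min(1)])

# hypothetical inputs: raw current signal plus one healthy/faulty label
# per window, derived from maintenance logs and expert judgement
signal = np.loadtxt("motor_current.txt")   # placeholder file name
labels = np.loadtxt("window_labels.txt")   # placeholder file name

X = window_features(signal)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels,
                                          random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The trade-off mentioned above between early detection and false alarms corresponds, in such a pipeline, to tuning the classifier's decision threshold rather than its accuracy alone.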

Ongoing work involves the deployment of a high-frequency monitoring system already demonstrated in industrial conditions [4]. This will allow a more detailed analysis that also considers the frequency domain. The monitoring system includes a user-friendly interface to improve the registration of device failures and provides alarms for predicted failures. The registered failures will be used for system training and validation of the results.

REFERENCES

[1] D. B. Durocher, M. R. Hussey, D. E. Belzner and L. Rizzi, "Application of Next-Generation Motor Management Relays Improves System Reliability in Process Industries," in IEEE Transactions on Industry Applications, vol. 55, no. 2, pp. 2121-2129, March-April 2019. doi: 10.1109/TIA.2018.2879468. Available at: https://ieeexplore.ieee.org/abstract/document/8520749

[2] Ince, T., Kiranyaz, S., Eren, L., Askar, M., & Gabbouj, M. (2016). Real-time motor fault detection by 1-D convolutional neural networks. IEEE Transactions on Industrial Electronics, 63(11), 7067-7075. Available at: https://ieeexplore.ieee.org/abstract/document/7501527

[3] Medina-García, J., Sánchez-Rodríguez, T., Galán, J., Delgado, A., Gómez-Bravo, F., & Jiménez, R. (2017). A wireless sensor system for real-time monitoring and fault detection of motor arrays. Sensors, 17(3), 469. Available at: https://www.mdpi.com/1424-8220/17/3/469/htm

[4] Kotsilitis S., Chairetakis K., Katsari A., Marcoulaki E. 2020. The SUPREEMO Experiment for Smart Monitoring for Energy Efficiency and Predictive Maintenance of Electric Motor Systems. In P. Baraldi, F Di Maio & E Zio (eds) “Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference”, Research Publishing, Singapore, pp. 3415-3422

16:40
Yaxin Shen (Université de Technologie de Troyes, France)
Mitra Fouladirad (École Centrale de Marseille, France)
Antoine Grall (Université de Technologie de Troyes, France)
Solar PV Panel Degradation Modeling and Maintenance planning
PRESENTER: Yaxin Shen

ABSTRACT. A solar photovoltaic (PV) panel has an average life expectancy of 25 years, and serious degradation phenomena occur under complex environmental conditions. To avoid failure and the induced losses, the system must be maintained. To propose an efficient maintenance policy, it is essential to first focus on the degradation analysis. In this paper, we consider that the failure of PV panels is caused by dust layers due to the environment. We model the dust accumulation process as the PV panels' degradation, with wind conditions as covariates. The dust accumulation can evolve as an accumulation of shocks, each shock resulting in a random positive degradation increment; hence a non-homogeneous compound Poisson process (NHCPP) is an appropriate model for the PV panels' degradation. To precisely represent the effect of the covariate on the PV panels' degradation, we combine proportional hazards models (PHM) and a time-continuous Markov chain (TCMC) in our degradation model. Finally, based on the degradation model, we propose a maintenance policy that includes both preventive and corrective maintenance. The evaluation of the policy is based on a trade-off between the gains from power generation and the costs of maintenance. The periodic inspection interval τ and the preventive maintenance threshold F_p are chosen as decision parameters to minimize the maintenance cost and maximize the expected profit. We give the numerical implementation and optimization for optimal profits. The proposed model provides an essential foundation for the study of degradation-maintenance issues on a solar farm.
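
A minimal sketch of how such a (τ, F_p) policy can be evaluated by simulation (the shock parameters, cost figures, failure level F_f and the assumed linear output loss are all illustrative assumptions, not the paper's model): dust accumulates as a compound Poisson process, the panel is inspected every τ time units, cleaned preventively above F_p and correctively above F_f, and the profit trade-off is averaged over Monte Carlo runs.

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_profit(tau, Fp, Ff=1.0, horizon=250.0, lam=0.4, mu=0.1,
                gain=1.0, c_insp=0.05, c_prev=0.5, c_corr=2.0, runs=2000):
    profits = []
    for _ in range(runs):
        dust, profit, t = 0.0, 0.0, 0.0
        while t < horizon:
            t += tau
            # compound Poisson increment over one inspection interval
            dust += rng.exponential(mu, rng.poisson(lam * tau)).sum()
            # power output assumed to degrade linearly with dust level
            profit += gain * tau * max(0.0, 1.0 - dust / Ff) - c_insp
            if dust >= Ff:
                profit -= c_corr; dust = 0.0   # corrective maintenance
            elif dust >= Fp:
                profit -= c_prev; dust = 0.0   # preventive cleaning
        profits.append(profit)
    return float(np.mean(profits))

# crude grid search over the two decision parameters
best = max(((tau, fp, mean_profit(tau, fp))
            for tau in (5, 10, 20) for fp in (0.4, 0.6, 0.8)),
           key=lambda r: r[2])
print(best)
```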

15:40-17:00 Session 12H: Hydrogen Technologies
Chair:
Nicola Paltrinieri (Norwegian University of Science and Technology, Norway)
Location: CQ-105
15:40
Shenae Lee (SINTEF Digital, Norway)
Maria Vatshaug Ottermo (SINTEF Digital, Norway)
Knut Vaagsaether (USN, Norway)
Henning Henriksen (Gen2 Energy, Norway)
Odd-Arne Lorentsen (Gen2 Energy, Norway)
Solfrid Håbrekke (SINTEF Digital, Norway)
Stig Johnsen (SINTEF Digital, Norway)
Nicola Paltrinieri (NTNU, Norway)
Hazard identification for gaseous Hydrogen in storage, transportation, and use
PRESENTER: Shenae Lee

ABSTRACT. Hydrogen produced from electrolysis, often referred to as green Hydrogen, is expected to play a key role in achieving an economy with net zero emissions by 2050, especially as an alternative fuel in the transport sector. The use of Hydrogen as an energy carrier calls for the establishment of a Hydrogen value chain, which allows for the safe distribution of Hydrogen from the production site to the end users. However, ensuring safe handling of Hydrogen is challenging because of the potential for major accidents. For this reason, the accident risks associated with various Hydrogen activities should be reduced to an acceptable level by implementing all the necessary safety barriers. General requirements for such barriers are found in regulations, but systematic hazard identification is needed to support decisions about the barriers. The main objectives of the paper are 1) to present regulatory requirements for transport, storage and use of gaseous Hydrogen and 2) to illustrate hazard identification for a Hydrogen facility, using a Hydrogen refuelling station as a case study. This study represents a preliminary analysis that can serve as a basis for a more detailed risk analysis of a specific Hydrogen activity under consideration.

16:00
Alessandro Campari (NTNU, Norway)
Maryam Alikhani Darabi (NTNU, Norway)
Federico Ustolin (NTNU, Norway)
Antonio Alvaro (SINTEF, Norway)
Nicola Paltrinieri (NTNU, Norway)
Applicability of Risk-based Inspection Methodology to Hydrogen Technologies: A Preliminary Review of the Existing Standards

ABSTRACT. Hydrogen as an energy carrier can help mitigate global warming in the forthcoming years. However, equipment exposed to hydrogen must cope with the potentially damaging effect inherent to hydrogen entry into materials, which may be particularly detrimental for metals. Inspection procedures and maintenance activities are required to preserve the integrity of these technologies. In this light, risk-based inspection (RBI) aims to minimize the probability of system failure by prioritizing the inspection of high-risk components. The risk level of each piece of equipment is defined by the Damage Factor and depends on the degradation mechanisms likely to occur. This methodology has never been adopted for equipment operating in a pure H2 environment. Hence, the following research questions arise: is the current RBI methodology applicable to hydrogen technologies in the case of gaseous hydrogen applications? Which modifications are needed to adapt the RBI standards to hydrogen damage? This study investigates these issues by reviewing and comparing the RBI standards API 580, API 581, DNVGL-RP-G101, ASME PCC-3, and EN 16991. The findings show that references to hydrogen embrittlement seem to be missing. In addition, a defined methodology to estimate the Damage Factor for pressurized gaseous hydrogen at ambient temperature is lacking. Therefore, several modifications to the current standards are essential to reduce the uncertainties in the risk assessment of equipment for hydrogen handling and storage.

16:20
Bjørn Axel Gran (Institute for Energy Technology (IFE), Norway)
Stefano Deledda (Institute for Energy Technology (IFE), Norway)
Kjell Løvold (Hydrogene Storage AS, Norway)
Evaluating the risks of using magnesium hydride for zero emission mobility
PRESENTER: Bjørn Axel Gran

ABSTRACT. There is worldwide an increased focus on zero emission mobility solutions for the maritime sector, see e.g., Hydrogen Europe [1] and national research initiatives like the Norwegian Research Center on Zero Emission Energy Systems for Transport MoZEES [2]. For all new initiatives there is also a need to understand the safety and risk aspects of new fuels, storage solutions or operational concepts, as well as the business cases around the initiatives. One such example is the Safety and Risk Information and Guidance provided by Lloyds Register Consulting for the Ocean Hyway Cluster on the use of hydrogen and ammonia infrastructure [3]. Another example is the Hynor Hydrogen Technology Center, a fuel cell and hydrogen technology test centre owned and operated by IFE [4]. Other initiatives focus on storing hydrogen as metal hydrides, such as magnesium hydride, MgH2. CNRS in Grenoble developed the technology to build hydrogen storage tanks based on MgH2, and the French company McPhy demonstrated it in 2010 [5]. If mixed with proper additives, MgH2 can be installed in cylindrical pressure chambers designed for pressures up to 20 bar. At temperatures above 250 °C, MgH2 can be charged directly from an electrolyser or from a hydrogen gas bottle (exothermic process). If the pressure is reduced, hydrogen can be released above 300 °C (endothermic process). Since continuous heating is required to desorb H2, hydrogen storage in MgH2 is considered a safer solution compared to compressed or liquefied H2. Indeed, if by accident there is a rupture of the metal container, the endothermic desorption of hydrogen will cool down the MgH2 and, as soon as heating is stopped, no hydrogen will be released. To evaluate the safety benefits, Hydrogen Storage AS, which promotes the MgH2 initiative in Norway, conducted together with IFE a preliminary risk assessment of MgH2 for zero-emission mobility in the maritime sector. This included facilitating a HAZID meeting with various stakeholders. This paper summarises the findings from this risk assessment and compares them with similar risk assessments on the use of hydrogen and ammonia solutions as maritime fuels [3]. One observation from the risk assessment is that many unwanted incidents ended in the yellow ALARP (as low as reasonably practicable) area. The other observation was that all these unwanted incidents could be mitigated, making the risks acceptable. The identified mitigations were related both to technical aspects of the design and to operational aspects of the use of magnesium hydride. This work was funded by the Regionale Forskningsfond Oslo via the project “Magnesiumhydrid for null-utslipp mobilitet i den maritime sector” (Project no. 321779).

References
1. Hydrogen Europe, https://hydrogeneurope.eu/ (last visited 14.01.2022)
2. Aarskog, F., Hansen, O.R., Strømgren, T., and Ulleberg, Ø. Concept risk assessment of a hydrogen driven high speed passenger ferry. International Journal of Hydrogen Energy, 2019, pp. 1-14.
3. Lloyds Register, Hydrogen and Ammonia Infrastructure, Safety and Risk Information and Guidance, Report no: PRJ11100256122r1 Rev: 00, May 2020
4. IFE Hynor Lab, https://ife.no/en/laboratory/ife-hynor-hydrogen-technology-center-ife-hynor/ (last visited 14.01.2022)
5. McPhy-Energy's proposal for solid state hydrogen storage materials and systems. Journal of Alloys and Compounds, 2013, pp. S343-S348

16:40
Brynhild Stavland (University of Stavanger, Norway)
Ove Njå (University of Stavanger, Norway)
Systems Thinking as Basis for Regulating Hydrogen Safety in Society

ABSTRACT. To reduce carbon emissions, new carbon-neutral energy carriers need to be added to the energy mix. Hydrogen is often highlighted as a promising alternative because it can be produced and used without emitting CO2. However, the properties of hydrogen differ from those of established energy carriers such as petrol and diesel, which might make existing regulations insufficient for handling the safety implications. To ensure safety, it could therefore be necessary to critically assess the regulatory system. The safety regulation of hydrogen needs to be suited to large-scale implementation of new hydrogen-based energy carriers in society.

There are several challenges for the regulatory system in a field that is under development, with constant technological advances. On one hand, the regulatory system should provide sufficient restrictions against unsafe activities and provide safety for third persons. On the other hand, the regulations should not impose unnecessary restrictions that could limit the implementation of hydrogen-based energy carriers. This paper is based on the assumption that a systems thinking approach can be useful when studying regulatory processes. The regulatory system is analyzed using Leveson's approach to systems theory, and the purpose is to identify which constraints are necessary to develop an efficient regulatory system for hydrogen-based energy carriers.

15:40-17:00 Session 12J: Occupational Health and Safety
Chair:
Alessandra Ferraro (Inail (National Institute for Accident Insurance), Italy)
Location: CQ-008
15:40
Isaac Animah (Regional Maritime University, Ghana)
Mahmood Shafiee (School of Engineering, University of Kent, UK)
Status of ISO 45001:2018 implementation in seaports: A case study
PRESENTER: Isaac Animah

ABSTRACT. Seaports are global players in the maritime industry with the responsibility of promoting the well-being of employees and customers in the workplace. However, over the past years the safety performance of seaports operating within the West African sub-region has become a major concern due to the lack of enforcement of national occupational health and safety regulations. This has compelled some seaports to adopt the occupational health and safety management system (OHSMS) standard ISO 45001:2018. The aim of this study is therefore to assess the progress seaports have made in the implementation of the ISO 45001:2018 standard and to discuss the challenges that need to be overcome when implementing it in seaports. Through the use of questionnaires, observations and a review of institutional documents, data were gathered from workers and senior managers of the export, container depot and stevedoring sections of seaports operating in Ghana. The findings of this study reveal that seaports within Ghana have made some progress with regard to workers' awareness of occupational health and safety issues, usage of personal protective equipment (PPE) and safe working procedures. However, it was also established that much more needs to be done to improve the current levels of communication, safety training and resources in relation to health and safety management. The findings of this research are useful to maritime institutions wishing to migrate from OHSAS 18001:2007 certification to ISO 45001:2018 certification or seeking to improve an already existing system.

16:00
Laura Tomassini (Inail (National Institute for Accident Insurance), Italy)
Luciano Di Donato (Inail (National Institute for Accident Insurance), Italy)
Marco Pirozzi (Inail (National Institute for Accident Insurance), Italy)
Cristiano Costa (Unacea (Italian construction equipment association), Italy)
Elisabetta D'Alessandri (Inail (National Institute for Accident Insurance), Italy)
Alessandra Ferraro (Inail (National Institute for Accident Insurance), Italy)
Daniela Freda (Inail (National Institute for Accident Insurance), Italy)
Safety of construction machinery - accident analysis, case studies and possible innovative mitigation tools
PRESENTER: Laura Tomassini

ABSTRACT. Despite the effort spent on construction site safety, the number and severity of accidents related to the use of construction machinery (earth-moving machines, drilling machines, machines for mixing, transporting and projecting mortars and concretes, interchangeable equipment, etc.) are still worrying and unsustainable in social terms. This awareness has led Inail (National Institute for Accident Insurance) to establish a research collaboration agreement with Unacea (Italian construction equipment association) aimed at understanding and characterizing the recurring accident dynamics and identifying possible strategies for procedural and technical intervention, in order to reduce their frequency and severity. The study took place in three phases. The first focused on collecting information from an Italian database on the most recurrent and/or most serious accident dynamics for different types of construction machines, in order to obtain standard "case studies" to work on. The next two phases were dedicated to the search for mitigating solutions. The information collected and the analysis carried out revealed both risk factors deriving from incorrect behavior (in the use of the machine or in the surrounding activities) and limits due to the state of the art of the machines themselves (more evident in the smaller or more obsolete ones). Both procedural/training interventions (aimed at ensuring the correct execution of the assigned tasks) and technological interventions (aimed at improving the intrinsic safety of the machinery) were then evaluated as practicable, both oriented towards the use of innovative technological solutions. For the procedural/training interventions, it was decided to implement interactive checklists developed on an augmented collaboration platform, also equipped with extended training and extended presentation features, which allows the implementation of working procedures. For specific tasks, remote assistance from supervisors or experts is available. The solution has been developed for "case studies" of fatal/serious accidents recurring in the use of concrete pumps. The result obtained is a prototype, developed on the Overit Space1 platform, of an interactive protocol (available through an app on a mobile device) to assist operators in the correct use of the machines in the situations identified in the case studies. Future developments will be aimed at continuing the identification and analysis of further case studies (also for different types of construction machines) to which similar protocols and technological solutions can be usefully applied. Studies are also planned on technological interventions to improve machine safety through the use of advanced solutions (integrated detection systems and others).

16:20
Gunhild B Sætren (Nord University, Norway)
Hege C Stenhammer (Nord University, Norway)
June Borge Doornich (Nord University, Norway)
Jan-Oddvar Sørnes (Nord University, Norway)
Mina Saghafian (NTNU Department of Psychology, Norway)
Psychological Safety in Crisis Preparedness and Management Training
PRESENTER: Mina Saghafian

ABSTRACT. Crisis preparedness and management training is to a large degree about training for managing the unexpected. Psychological safety is a key aspect of creating an environment that upholds the criteria for optimizing mindful organizing. It provides a learning environment in which participants are not afraid of negative feedback, and that is open and trustful – criteria important in creating shared situational awareness. Thus, our research question was: is psychological safety established in simulated crisis preparedness and management training? In this study, we interviewed 10 informants and conducted a one-day observation of an exercise. Thematic analysis was used to analyse the data. We found that students, academic staff, facilitators and mentors reported behaviour and a climate consistent with psychological safety, but that elements such as more guidance and supervision and the evaluation of the mentors' roles were aspects for improvement.

16:40
Martha Chadyiwa (University of Johannesburg, South Africa)
Stanslaus Madende (University of Johannesburg, Zimbabwe)
Shinga Feresu (University of Johannesburg, South Africa)
Assessment of knowledge, attitudes and practices on safe disposal of pharmaceutical waste among pharmacy, medical and dental interns at Windhoek and Katutura Central hospitals in Namibia (2018 to 2019): A quantitative study
PRESENTER: Martha Chadyiwa

ABSTRACT. BACKGROUND: There is always a discrepancy between the demand for pharmaceuticals in public hospitals and their ultimate utilisation, leading to pharmaceutical waste. The purpose of this study was to assess the knowledge, attitudes and practices on safe disposal of pharmaceutical waste among medical, dental and pharmacy interns at Namibia's two teaching hospitals, Windhoek Central hospital and Katutura hospital. METHODS: The research data were collected using a self-administered questionnaire (SAQ) with closed-ended questions. A non-probability convenience sampling strategy was used to select the study participants. RESULTS: The interns were more likely to indicate pathological waste, anatomical waste and pharmaceutical waste as the main types of hospital medical waste, COR 1.47, 95% CI (0.58 – 3.74) and COR 1.09, 95% CI (0.69 – 1.70) respectively; 79% did not know the main type of waste produced within the hospital. The interns were more likely to indicate contaminated products, COR 3.25, 95% CI (1.75 – 6.00), and unusable products, COR 3.27, 95% CI (1.63 – 6.56), as the main reasons why the hospital generates pharmaceutical waste. The interns did not believe that pharmaceutical waste management is team work: COR 10.96, 95% CI (1.35 – 89.22). Interns viewed safe management of pharmaceutical waste as an extra burden of work, COR 3.75, 95% CI (0.77 – 18.36), and did not believe that continued education programs to upgrade existing knowledge about waste management should be installed, COR 1.92, 95% CI (0.17 – 21.40); 48.8% of the respondents were undecided on whether the issue of pharmaceutical waste is of great importance in public health. The interns found it less important to inform patients and their families about how to dispose of unused or expired medicines, COR 0.88, 95% CI (0.61 – 1.27). 53% of the interns acknowledged their role in pharmaceutical waste management within the hospital, though they indicated that they are more prepared to sometimes participate in taking responsibility for the collection of pharmaceutical waste, COR 13.60, 95% CI (7.16 – 25.85). CONCLUSION: Limited knowledge of safe hospital pharmaceutical waste disposal was evident amongst the respondents. There is a need to sensitize health care workers to proper and safe hospital pharmaceutical waste disposal, so as to avoid the associated environmental hazards.
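
For readers unfamiliar with the statistic reported above, the crude odds ratio (COR) for a 2×2 table with cells a, b, c, d and its 95% confidence interval follow the standard formulas (generic definitions, not a reproduction of the study's computation):

```latex
\mathrm{COR} = \frac{a\,d}{b\,c}, \qquad
95\%\ \mathrm{CI} = \exp\!\left( \ln \mathrm{COR} \pm 1.96
\sqrt{\tfrac{1}{a}+\tfrac{1}{b}+\tfrac{1}{c}+\tfrac{1}{d}} \right).
```

An interval containing 1, as in several of the results above, indicates that the association is not statistically significant at the 5% level.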

17:00-18:00 Session 13: Panel A, Room CQ009: Changes in automation and the future of work (Bruno Siciliano, University of Naples & Adrian Kelly, Grid Operations and Planning, EPRI); Panel B, Room CQ006: Crisis management & Disaster Risk Reduction (Tina Comes, TU Delft; t.comes@tudelft.nl)

Split panels:

Session 1 in room CQ009: Changes in automation and the future of work in safety critical tasks: Prof Bruno Siciliano, Engelberger Robotics Award 2022, University of Naples; Adrian Kelly, Principal Project Manager - Grid Operations and Planning, Electric Power Research Institute (EPRI); & Deirdre Merriman, EirGrid

Session 2 in room CQ006: Crisis management & Disaster Risk Reduction: Tina Comes, TU Delft; t.comes@tudelft.nl

Chair:
Luca Podofillini (Paul Scherrer Institute, Switzerland)
Location: CQ-006