ESREL & SRA-E 2025: EUROPEAN SAFETY AND RELIABILITY & SOCIETY FOR RISK ANALYSIS EUROPE CONFERENCE
PROGRAM FOR TUESDAY, JUNE 17TH

10:45-12:00 Session 8A: Risk concept issues
Location: A
10:45
Risk Trajectory ‘theory’ – Temporally Conceptualizing Risk to Aid Decision-Making Processes

ABSTRACT. This paper introduces the theoretical concept of risk trajectories to bridge the gap between the capstone risk concept and practical application for decision makers. A novel theoretical framework, derived from contemporary risk science, conceptualises risk over time as:

‘Sequential and causally linked series of consequences of the actions conducted as part of an activity, where the subsequent consequences are characterized by escalating levels of uncertainty due to the compounding effects of prior consequences, external factors and diminishing control of future events.’

This theoretical framework challenges traditional tools such as risk matrices by integrating elements such as uncertainty and temporal aspects of risk. The framework emphasizes the importance of agency, enabling decision-makers to proactively navigate and manage risk trajectories.

To evaluate the practical utility of the framework, the paper tests its relevance for military operations planning in complex environments where decision-making is challenged by uncertainties and actors with competing interests. Risk trajectory visualisations for plans made as part of a NATO exercise are developed, and their perceived efficacy is then addressed during interviews with staff officers and decision makers.

The initial results demonstrate that visualizing risk as a trajectory can provide a nuanced depiction of risk, capturing the interconnectedness of actions, escalating uncertainties, and the timing of potential exposures to challenges and opportunities. This dynamic approach can also facilitate informed planning and decision-making, allowing practitioners to anticipate potential deviations from desired outcomes and to adjust actions proactively.

While the theoretical concept of risk trajectories may show promise in enhancing risk management in complex environments, further empirical validation through real-world applications is necessary to establish its robustness and reliability.

11:00
The Concept of AI Risk

ABSTRACT. As various types of AI-applications have taken the world by storm, the concept of risk has taken an increasingly central role in discussions on their (potential) negative impacts. However, the notion of risk has a variety of conceptualizations that can be at odds with each other (e.g., theoretical versus empirical, realist versus constructivist). In this paper, I conduct a systematic literature review of the notion of AI risk. The goal is to extract the ways in which AI risk is defined, characterized, categorized, and measured. I also look at how risk is understood in relation to uncertainty, probability, harm, and threat to uncover the particular epistemologies that underlie different approaches.

I find that multiple disciplines operationalize the concept of AI risk, each working from plausible, but markedly distinct, points of departure. Of these proposals, only a few substantiate their approach. Second, I show that these conceptual choices influence what is identified as relevant AI risk and, consequently, which risks will be addressed and how this is to be done. Third, I argue that while realist approaches dominate the debate, the handful of constructivist approaches can inject more critical reflection into the discourse. Fourth, I emphasize the need for further deliberation on what comprises acceptable residual AI risk, as this often remains implicit. Currently, the question of acceptable AI risk continues to be underexplored despite having far-reaching implications. Moreover, these ramifications will likely become increasingly consequential once AI risk management protocols are put in place.

Since assessing and governing the risks of AI is an inherently interdisciplinary endeavor, this paper contributes to the quality of interdisciplinary debates by highlighting that AI risk is not a given term. Instead, more explicit discussion on the implications of different pre-conceptions will help bring the debate forward.

11:15
On the trace of the true probability of failure
PRESENTER: Max Teichgräber

ABSTRACT. The Eurocode defines an annual target reliability index of 4.7, corresponding to an acceptable probability of failure of 10^(-6). This value refers to structural elements, i.e., it should be achieved for each limit state individually. Moreover, it is a nominal value, i.e., it is acknowledged that it depends on assumptions made in its calculation. The safety components of the Eurocode are historically intended to be calibrated such that this nominal value is met on average. However, the assumptions made in the past that led to the target value of 4.7 are opaque. Deviations from these assumptions, e.g. through newly developed and presumably more realistic models, can strongly change the resulting nominal probability of failure. In this context, the question arises: what is the ‘true’ probability of failure? We attempt to answer this question with an empirical approach, but this is challenging for two reasons: first, little data is available, and second, most data from real-life structures refers to system failure and not element failure. For these reasons, we chose a semi-empirical approach, using laboratory data from various databases of different limit states (e.g., tests on the shear resistance of reinforced concrete beams) to represent the resistance side and combining it with a representative portfolio of action combinations to represent the load side. Even though this does not result in a “true” probability of failure, at least the resistance side is based on empirical data and can therefore be considered “true”. This makes it possible to compare different types of resistance and answer questions such as ‘Is shear failure more or less likely than bending failure?’ or ‘Is building material A safer than building material B?’
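
For orientation (not stated in the abstract itself), the reliability index and the failure probability are related through the standard normal CDF, which confirms the quoted order of magnitude:

```latex
P_f = \Phi(-\beta), \qquad \Phi(-4.7) \approx 1.3 \times 10^{-6} \approx 10^{-6}
```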

11:30
The Epistemology of Risk
PRESENTER: Ole A. Lindaas

ABSTRACT. The primary purpose of analytical enquiries is knowledge production. The purpose is to generate knowing-that knowledge, hence knowledge stating that something is the case. After Edmund L. Gettier demonstrated, through a couple of examples, how beliefs can be justified as well as true without constituting knowledge, the traditional tripartite account of knowledge has been widely considered inadequate. A widespread approach for countering the critique has been to add a fourth knowledge criterion to prohibit wayward lines of justification. In the risk field, however, leading scholars have made a case for subtraction rather than amendment. The claim is that in the risk domain there is nothing more to knowledge than justified beliefs. The purpose of this paper is to assess the cogency of this proposal. Given the truth-forbidding features of risk, disconnecting knowledge from truth in the risk domain is clearly an inviting move. All the same, it is also a highly challenging move, and the single most important objection is that it gives rise to a threatening circularity. Consequently, the cogency of a two-partite justified-belief concept of knowledge crucially depends on the prospects of successfully overcoming the problem of circularity. In this analysis, three different strategies for overcoming this problem will be examined. The first strategy is to deny the claim of circularity, the second is to deny the viciousness of the circularity, whereas the third is to deny that the circularity is non-transcendent. As will be shown, the third strategy offers the most promising response. But it is also a response that comes with the caveat of undermining the distinctiveness of the two-partite concept.

10:45-12:00 Session 8B: Roundtable/Panel: The Social Amplification of Risk Framework: Contemporary Significance and Needs for the Future

Presenters: Angela Bearth, Seth Tuler, Lisbet Fjæran, Kenneth Pettersen Gould, Rui Gaspar & Rob Goble

Location: B
10:45-12:00 Session 8C: Bayesian Networks Modelling for Reliability and Risk Assessment
Location: C
10:45
Identification and conceptual modeling for organizational factors affecting operational safety towards extending human reliability analysis methods
PRESENTER: Tingting Cheng

ABSTRACT. Organizational Factors (OFs) can affect the likelihood of accidents as well as the severity of their consequences by influencing the actions of individuals at work. Organizational issues are recognized contributors to accidents in several industries, primarily through their influence on the human behaviors of those who ultimately interact with technical systems. Current studies have developed models to quantify the impact of OFs on organizational performance and explore the organizational mechanisms that focus on the systemic and collective nature of organizational behavior. However, these methods lack focus on the explicit impact of OFs on operating crew behavior. In the field of Human Reliability Analysis (HRA), studies aim to assess operating crew errors through Performance Influencing Factors (PIFs), but they give limited consideration to the impact of OFs and rarely examine the underlying organizational mechanisms. To bridge this gap, this paper aims to: 1) develop a comprehensive list of OFs affecting operational safety through an exhaustive literature review and categorize them, and 2) provide a model incorporating these OFs by exploring their distribution across the dimensions of organizational characteristics and organizational structural units. A Bayesian Belief Network (BBN) is suggested for establishing the model due to its flexibility as a modeling vehicle for “soft” causal relations. The model is built upon three primary dimensions of organizational characteristics: behavioral, structural, and processes. This model is the first step toward incorporating an OF model into the HRA process, i.e., developing an extended HRA model for complex socio-technical systems with clear causal mechanisms among OFs and PIFs. The findings of this paper are expected to have broad applications for the risk assessment of socio-technical systems, with consideration given to organizational factors.

11:00
Comparison of Fuzzy Bayesian Network Methods for maritime applications
PRESENTER: Stefanie Gote

ABSTRACT. Fuzzy Bayesian networks are commonly used to incorporate expert opinions in risk assessments and situational awareness approaches [1],[2]. While various methods exist for aggregating expert opinions, direct comparisons between these methods are often lacking [3]. The goal of this paper is to provide clearer guidance on selecting the most suitable method for a fuzzy Bayesian network. The second goal is to construct a Bayesian network to predict the probability of a ship entering the area of an offshore wind farm. To achieve the first research goal, the methods for opinion aggregation are compared. Three different aggregation methods were selected: the first uses a weighted agreement degree, the second employs a linear opinion pool, and the third utilizes Pythagorean fuzzy numbers with a Pythagorean fuzzy weighted geometric approach. A sensitivity analysis is conducted to compare these methods by varying the expert opinions with very strong or very low weightings. The Bayesian network is designed to provide a situational assessment for a specific ship near the wind farm area. Inputs for the root nodes of the Bayesian network are based on scenarios including weather conditions and ship characteristics, such as wind force and ship type. An expert survey was used to obtain the conditional probability tables for the Bayesian network. [1] Ayyildiz et al. (2024). A comprehensive risk assessment framework for occupational health and safety in pharmaceutical warehouses using pythagorean fuzzy bayesian networks. https://doi.org/10.1016/j.engappai.2024 [2] D’Aniello, G. (2023). Fuzzy logic for situation awareness: A systematic review. https://doi.org/10.1007/s12652-023-04560-6 [3] Zarei et al. (2019). Safety analysis of process systems using Fuzzy Bayesian Network (FBN). https://doi.org/10.1016/j.jlp.2018.10.011
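
As a concrete illustration of the second aggregation scheme mentioned above (the linear opinion pool), the following sketch pools triangular fuzzy expert judgements and defuzzifies the result with the centroid method; all numbers are invented for the example, and the other two methods are not shown.

```python
import numpy as np

# Illustrative only: hypothetical expert judgements of one CPT entry, given as
# triangular fuzzy numbers (low, mode, high); none of these values come from
# the paper.
expert_opinions = np.array([
    [0.05, 0.10, 0.20],   # expert 1
    [0.10, 0.15, 0.25],   # expert 2
    [0.02, 0.05, 0.10],   # expert 3
])
weights = np.array([0.5, 0.3, 0.2])   # assumed expert weights, summing to 1

# Linear opinion pool: element-wise weighted average of the fuzzy numbers.
pooled = weights @ expert_opinions

# Centroid defuzzification of a triangular fuzzy number (a, b, c): (a+b+c)/3.
crisp_probability = pooled.mean()

print("Aggregated fuzzy number:", pooled)
print("Crisp CPT entry:", round(crisp_probability, 4))
```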

11:15
A Bayesian Network Approach to Dynamic Risk Assessment of Hydrogen Refueling Stations
PRESENTER: Subhashis Das

ABSTRACT. Hydrogen is a promising energy vector, especially for hard-to-abate sectors such as heavy-duty transport. However, establishing a robust and safe hydrogen infrastructure, including hydrogen refueling stations (HRS), is crucial to realize this potential. Risk assessment plays a pivotal role in identifying and mitigating potential hazards to ensure the safe and reliable operation of HRS. Traditional risk assessments for new technologies like hydrogen often face challenges due to insufficient data and uncertainties. Bayesian Networks (BNs) offer a flexible framework to address these challenges by incorporating probabilistic reasoning and expert knowledge, enabling decision-making even with incomplete information. In this study, BNs were applied to analyze an HRS, focusing on quantifying uncertainty using the concept of total probability bias. The methodology involved several key steps: First, an FMEA (Failure Modes and Effects Analysis) was employed for hazard identification, while a Bow-Tie (BT) diagram was used to model worst-case scenarios. Second, the BT was transformed into a BN to represent event connections and identify potential failure points visually. Third, relevant reliability data for components and systems were integrated into the BN to provide estimates of failure probabilities. Finally, the BN was dynamically updated with new operational data, allowing for continuous refinement of risk assessments, improved risk mitigation strategies, and more informed decision-making processes. This dynamic risk assessment method, using Bayesian network modeling, enables faster and more accurate risk evaluations, which enhances risk management and decision-making. The approach offers a flexible framework that incorporates uncertainty quantification, supporting the safe integration of hydrogen into the energy landscape.
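
One simple way to realise the dynamic updating step described above is conjugate Bayesian updating of a basic-event failure probability as operating experience accumulates. The sketch below assumes a Beta prior and binomial evidence (a standard conjugate update, not necessarily the exact scheme used in the paper), with all numbers invented:

```python
from scipy import stats

# Assumed prior for a leak-on-demand probability of a dispenser component,
# e.g. derived from generic reliability data (values are illustrative).
alpha_prior, beta_prior = 0.5, 500.0

# Hypothetical new operational data: demands observed and failures recorded.
demands, failures = 2000, 1

# Conjugate Beta-Binomial update of the failure probability.
alpha_post = alpha_prior + failures
beta_post = beta_prior + demands - failures
posterior = stats.beta(alpha_post, beta_post)

print("Posterior mean failure probability:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```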

11:30
Visualize, Understand and Analyze Complex Models: Bayesian Network Application for Energy Systems Analysis
PRESENTER: Luca Podofillini

ABSTRACT. Modern risk, resilience and reliability analyses often involve complex computational models, for example to represent the system dynamics, human-system interactions, and multi-factor and multi-player influences. Dynamic probabilistic safety assessment, scenario-based resilience modelling, and agent-based simulations are some examples. To represent the effect of aleatory (variability) and epistemic (lack of knowledge) uncertainties or, more generally, to explore the model response space, multiple runs are often performed, generating a large number of response trajectories or evolutions. Depending on the case, these responses are typically processed and analyzed via clustering and uncertainty analysis techniques. The present work investigates a complementary approach: training a surrogate model, a Bayesian Belief Network (BBN), for visualization of the relationships among the model variables. In addition, the speed of the surrogate also enables interactive what-if analyses for which running the complex model would be too slow. Specifically, this work addresses energy system models, which aim to provide a detailed representation of the energy system sectors and are typically used to identify energy system pathways that achieve one or more objectives under different policies and constraints. The BBN (both structure and parameters) is trained on runs of a simplified version of the “Swiss TIMES” energy system model. The simplifications reduce the number of model variables and the computational time, so that this preliminary work could focus on the investigation and demonstration of potential BBN uses. Applications of the surrogate BBN model are provided, demonstrating interactive analysis (supported by the visualization of key variables, their relationships and interactions), fast and intuitive uncertainty propagation, and support for goal-driven analysis (backward reasoning from outcomes to the inputs that produce them). Future work will improve the BBN learning process to allow the use of larger and more realistic energy system models and of fewer data records for learning.
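
A minimal sketch of the surrogate-BBN idea, assuming the pgmpy library and a toy set of discretised model runs in place of the Swiss TIMES outputs (the variables and data are invented; class and method names follow pgmpy's documented API and may differ slightly between versions):

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork
from pgmpy.inference import VariableElimination

# Hypothetical, discretised outputs of energy-system model runs; in the paper
# these would come from runs of the simplified Swiss TIMES model.
runs = pd.DataFrame({
    "co2_cap":     ["tight", "tight", "loose", "loose", "tight", "loose"] * 50,
    "gas_price":   ["high",  "low",   "high",  "low",   "high",  "low"]   * 50,
    "pv_share":    ["high",  "high",  "low",   "low",   "high",  "low"]   * 50,
    "system_cost": ["high",  "mid",   "mid",   "low",   "high",  "low"]   * 50,
})

# Learn the structure (hill climbing with a BIC score) and the parameters (MLE).
dag = HillClimbSearch(runs).estimate(scoring_method=BicScore(runs))
bbn = BayesianNetwork(dag.edges())
bbn.fit(runs, estimator=MaximumLikelihoodEstimator)

# Interactive what-if query: cost distribution under a tight CO2 cap.
query = VariableElimination(bbn).query(
    variables=["system_cost"], evidence={"co2_cap": "tight"})
print(query)
```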

11:45
EnhancedBayesianNetworks.jl: A Comprehensive Julia Framework for Probabilistic Modeling and Risk Assessment with Imprecise Probabilities
PRESENTER: Andrea Perin

ABSTRACT. EnhancedBayesianNetworks.jl is a novel Julia framework that implements Enhanced Bayesian Networks (eBNs) with imprecise probabilities. Enhanced Bayesian Networks extend the application breadth of standard Bayesian Networks by allowing the definition of continuous nodes using probability density functions (pdfs) while still employing exact inference algorithms. The framework uses Structural Reliability Methods (SRMs) to evaluate Conditional Probability Tables (CPTs), which allows for the inclusion of low-probability failure scenarios. The integration with imprecise probabilities, using both credal set theory and the propagation of intervals and probability boxes, improves the robustness of probabilistic modeling and risk analysis. EnhancedBayesianNetworks.jl is written in Julia, a general-purpose programming language that combines high-performance computing with multiple dispatch capabilities, ensuring the flexibility and efficiency required for computationally complex workloads. This framework addresses the need for reliable tools in modern engineering, enabling multi-scenario reliability analysis under aleatoric and epistemic uncertainties and offering support for external models and advanced simulations. Ongoing development aims to explore the incorporation of Dynamic Bayesian Networks (dBNs) to handle the modeling of complex multivariate time series in the presented framework, further broadening its applicability in various fields.

** This abstract is for a special session

10:45-12:00 Session 8D: Advances in resilience of energy networks
Location: D
10:45
Modelling Power Disruption Scenarios and Assessing Resilience of a Power System
PRESENTER: Isabel Asensio

ABSTRACT. This study quantifies and analyses the impact of attacks on energy infrastructure within an anonymised European region, focusing on the vulnerability of cross-border electrical interconnections. The analysed region includes several EU Member States, but due to the sensitivity of the results, their geographical location is not disclosed. Utilizing the PyPSA-Eur model, we simulate the effects of selected disruption scenarios on these connections in the year 2025. The scenarios encompass a range of hybrid threats designed to assess the resilience of the energy grid and identify potential risks to energy stability, load balancing, and market response. By estimating the potential disruption duration for each analysed power line, we quantify and assess the resilience of the region's power system to external disruptions. These results will make it possible to understand the criticality of connecting power lines and to rank their importance. The study will serve as an example of resilience assessment of a power system against a specific type of disruption. These results might contribute to the strategic planning efforts of policymakers and energy stakeholders, providing insights on safeguarding cross-border energy interconnections in Europe against the growing risks posed by hybrid threats and attacks.
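
The line-ranking step described above can be illustrated with a toy screening loop; the capacities, repair times, and import need below are invented, and the one-line dispatch function stands in for a full PyPSA-Eur run:

```python
# Illustrative N-1 style screening of interconnector outages, ranking lines by
# energy not served over an assumed repair time. The toy dispatch below stands
# in for a full market model such as PyPSA-Eur; all numbers are invented.

import_capacity_mw = {"line_A": 1000, "line_B": 700, "line_C": 400}  # assumed
repair_time_h = {"line_A": 72, "line_B": 24, "line_C": 168}          # assumed
peak_import_need_mw = 1600                                           # assumed

def unserved_power(outaged_line):
    """Toy dispatch: import need minus remaining interconnector capacity."""
    remaining = sum(cap for name, cap in import_capacity_mw.items()
                    if name != outaged_line)
    return max(0.0, peak_import_need_mw - remaining)

# Rank lines by a simple severity proxy: unserved power x outage duration.
ranking = sorted(
    ((line, unserved_power(line) * repair_time_h[line])
     for line in import_capacity_mw),
    key=lambda item: item[1], reverse=True)

for line, ens_mwh in ranking:
    print(f"{line}: energy not served over the outage = {ens_mwh:.0f} MWh")
```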

11:00
Vulnerability and Robustness Analyses for the Planning of Resilient Hydrogen Networks
PRESENTER: Till Martini

ABSTRACT. Hydrogen is increasingly recognized as a promising fossil-free energy source and transport vector and thus plays a vital part in many energy-transition scenarios. Computer-aided hydraulic modeling, able to predict the physical conditions within hydrogen grids, is crucial for planning the successful transformation of existing natural gas infrastructures to support hydrogen as a new medium. Given hydrogen's potential key role, the urgency for resilient hydrogen grids is amplified by potential threats such as sabotage and political sanctions, underscoring the importance of simulation tools that can anticipate network behavior under non-standard circumstances. Although transient modelling of hydrogen dynamics is crucial for examining the immediate consequences of extreme contingencies, it is imperative to develop robust algorithms to evaluate the resilience of networks when confronted with such off-design events. Hence, we introduce a framework specifically designed to evaluate transient responses to extreme events in hybrid or pure hydrogen networks that include storage solutions. This advanced numerical approach enables predictions of system behavior before, during, and after disturbances, thereby allowing vulnerability, robustness, and recovery analyses that aid the planning of resilient hydrogen grids.

11:15
Vulnerability Assessment of the Offshore Wind Farms by Using the Functional Resonance Analysis Method

ABSTRACT. The German government has set a legal requirement that offshore wind energy is to be expanded to a total capacity of 70 gigawatts by 2040, making it a significant part of the electricity generation. Energy supply is a critical infrastructure sector and of great importance to society. The Europe-wide power outage in 2006, which was caused by a poorly planned disconnection of an extra-high voltage line, showed what consequences can occur when the power supply is interrupted.

The challenge with offshore wind farms is that, in the event of incidents, intervention is difficult and may not be possible in a timely manner because the exposed location is not reachable without a ship or helicopter. Additionally, major components are unique technology with long production times. Attacks or accidents can occur at various points in the infrastructure and might have a wide range of damaging effects. The aim of this study is to present a method that enables the identification of the most vulnerable functions of the infrastructure.

The analysis is carried out in two phases. The first phase applies the Functional Resonance Analysis Method (FRAM). It enables the visualization of the process from wind through power generation and transmission to the onshore substations, thus representing the main control, regulation, and energy-generation processes. The second phase is a vulnerability assessment using Krings' method, whereby additional factors such as failure effects and downtimes are considered in order to define the vulnerabilities.

These results are then compared with the perception of vulnerability among stakeholders in the offshore wind energy sector, as determined in a series of interviews. It is noticeable that the interview participants tend to overrate the vulnerability of the offshore wind turbines, while the FRAM model indicates a higher vulnerability for the offshore platforms.

11:30
Towards Risk-Informed Transmission Grid Outage Planning

ABSTRACT. Power grid outage planning is a class of preventive maintenance (PM) problems whose main objective is the optimal allocation of maintenance activities, such as component repairs, refurbishments, and upgrades. To ensure safe operations during maintenance, N-1 security constraints are commonly enforced, creating preventive maintenance plans that are robust against single-component failures. While uncertainty has been addressed extensively in the PM optimization literature, many grid outage planning approaches are deterministic. Deterministic security-constrained outage planning problems are challenging due to their combinatorial nature, and incorporating uncertainty can further increase the computational burden and challenge numerical tractability. However, omitting sources of uncertainty can compromise the cost-effectiveness and safety of the plan. This work addresses this gap by introducing a risk-informed approach to power grid outage planning that accounts for operational and planning uncertainties. We present an illustrative case study of a planning problem under uncertainty based on our previous work on risk-informed optimisation. We discuss the benefits and drawbacks of the proposed risk-informed approach and speculate on further extensions for power grid outage planning and related problems.

11:45
Households' energy resilience in times of crisis and sustainable transition
PRESENTER: Linda Kvarnlöf

ABSTRACT. In recent years, Sweden and Europe have faced intermittent high electricity prices and an increased risk of power shortages, exacerbated by the war in Ukraine. During the same period, we have seen increasing examples of how climate change is leading to extreme weather events, such as heatwaves and floods, which in turn affect both infrastructure and people. These changes have increased the interest in, and relevance of, Swedish households' crisis preparedness. However, not all ways of preparing for crises, adapting to a changing climate, and managing disruptions are in line with a sustainable energy transition. Households that want to increase their energy resilience, and to cope with both temporary disruptions and a transition to a more sustainable energy system, therefore need to consider many different aspects. A wide range of initiatives, such as education, study circles and information campaigns, are currently in place to support households in their preparedness and transition efforts. In this paper, we present a number of these initiatives and discuss the ways in which they either complement or conflict with each other when it comes to strengthening households' energy resilience.

10:45-12:00 Session 8E: Physics-Informed Machine Learning for RAMS
Location: E
10:45
Intelligent Anomaly Detection for Drivetrain Systems in Wind Turbines
PRESENTER: Zifei Xu

ABSTRACT. The safety and reliability of the drivetrain system in offshore wind turbines are crucial for their effective operation. Detecting anomalous behaviour within the drivetrain and providing reliable prognostic information can significantly reduce the risk of severe failures. Ensuring the reliability and safety of intelligent models is of paramount importance in the AI-driven, data-centric era. To address this challenge, this paper presents an intelligent anomaly detection model capable of issuing alerts prior to abnormal shutdowns, thereby ensuring system safety. A physics-informed probabilistic neural network was developed, integrating physical insights into the neural framework to manage prediction uncertainty and enhance the safety and reliability of failure alarms generated by the intelligent model. Overall, the proposed method offers a more reliable prognostic framework to enhance the safety and stability of wind turbines, including offshore installations, during operation while reducing costs.

11:00
Dynamic Fault Localization via Continuous Interaction Inference in Multi-Body Dynamical Systems
PRESENTER: Vinay Sharma

ABSTRACT. Dynamical systems like gears, bearings, and motors are vital in industrial applications. Interactions between components, such as meshing of gear teeth, are key to performance. However, faults like cracks, spalls, or wear can degrade these interactions, potentially leading to failure if left unaddressed.

State-of-the-art fault detection methods rely on system-level monitoring signals, like vibrations or acoustic responses. While effective in identifying faults, they often fail to pinpoint the exact source. To overcome this, systems can be modeled as graphs, where pairwise interactions between components are inferred, resolving system dynamics at the component level and facilitating fault localization.

Graph-based methods, such as Neural Relational Inference (NRI) [1] and Collective Relational Inference (CRI) [2], infer interactions from trajectory data but typically categorize them based on predefined classes. In contrast, dynamical systems are governed by continuous properties, such as stiffness and damping, requiring a regression-based approach to estimate continuous parameters.

We propose a physics-informed graph inference network that models each edge and its associated nodes as a generalized dynamical system. For each edge, the network infers dynamic parameters and integrates them into a physics-informed graph neural network to predict future system trajectories. This approach enables accurate local system identification without prior knowledge of interaction types.

The method was tested on multi-body spring-mass-damper systems with varying stiffness and damping, successfully identifying these parameters across all configurations and demonstrating potential for fault localization. In a second task, the model was tested on a system with an isolated spring featuring time-varying stiffness to simulate degradation due to crack propagation. The model accurately predicted stiffness evolution, highlighting its ability to track changes in system behavior over time.

References [1] T. Kipf, et al. “Neural relational inference for interacting systems,” ICML 2018 [2] Z. Han, et al. “Collective relational inference for learning heterogeneous interactions,” Nature Communications, 2024.
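
For context, trajectory data of the kind used in the first evaluation task above can be generated by integrating a spring-mass-damper model; the sketch below does this for a single degree of freedom with SciPy, with all parameter values invented (the paper's systems are multi-body, but the principle is the same):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative single degree-of-freedom spring-mass-damper.
m, c, k = 1.0, 0.3, 25.0          # mass, damping, stiffness (assumed values)

def dynamics(t, state):
    x, v = state
    return [v, -(c * v + k * x) / m]

t_eval = np.linspace(0.0, 5.0, 500)
sol = solve_ivp(dynamics, (0.0, 5.0), [0.1, 0.0], t_eval=t_eval)

# sol.y[0] is the displacement trajectory an interaction-inference model would
# consume; repeating this with varied (c, k) yields a training set.
print(sol.y[0][:5])
```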

11:15
Security-Constrained Optimal Power Flow with Physics-Informed Graph Neural Networks
PRESENTER: Anna Varbella

ABSTRACT. Security-Constrained Optimal Power Flow (SC-OPF) is crucial for maintaining power system stability under normal and contingency conditions. However, the increasing integration of intermittent energy sources and the unpredictability of system contingencies have significantly escalated the computational complexity of large-scale SC-OPF. This adds to the growing list of challenges that future power system operations and security face. This work introduces PINCO, an unsupervised Physics-Informed Graph Neural Network (GNN) approach that greatly improves the computational efficiency of SC-OPF, thus contributing to a secure real-time operation of the power system. PINCO leverages the underlying physical laws governing power systems to provide near-optimal solutions without compromising security, even under N-1 contingency scenarios. It solves the AC-OPF problem using a GNN that can generalize across different grid topologies, incorporating a modified version of the traditional Physics-Informed Neural Network (PINN) framework designed to handle problems with hard constraints. Unlike conventional data-driven methods, our approach does not need a labeled dataset, offering more adaptable and robust solutions. We evaluate PINCO on IEEE benchmark test systems, namely IEEE9, IEEE24, IEEE39, and IEEE118 bus systems. The results show a significant reduction in computational time, which enables more frequent and comprehensive security assessments. Furthermore, the results demonstrate its scalability across various grid sizes and topological changes. Moreover, the validation using a complete ACOPF solver, AC-IPOPT, confirms the consistency and accuracy of our results. Overall, by enabling faster and more frequent security assessments, PINCO contributes to the resilience and stability of power grids. Therefore, it presents a valuable tool in the face of increasing renewable energy integration and evolving grid dynamics.

11:30
Degradation Prediction for Hydraulic Piston Pump Based on Physics-informed Recurrent Gaussian Process
PRESENTER: Rentong Chen

ABSTRACT. Accurate degradation analysis and prediction for hydraulic piston pumps is crucial to ensure hydraulic system reliability, reduce unexpected downtime, and optimize maintenance schedules. The hydraulic piston pump's degradation from wear is a typical gradual failure mode. Traditional methods for degradation modelling often rely on physics-of-failure models or machine learning models. However, physics-of-failure models may not fully capture the degradation process of the hydraulic piston pump with its multiple sources of randomness and uncertainty. Machine learning models generally need massive degradation data to learn black-box models that reach high prediction accuracy. In order to combine the benefits of both methods, a novel physics-informed recurrent Gaussian process model is developed to describe the degradation process of the hydraulic piston pump and predict its degradation. Firstly, the wear process model of three friction pairs, including swash plate/slipper, valve plate/cylinder block, and piston/cylinder bore, for a type of hydraulic piston pump is investigated. Secondly, the degradation process of the hydraulic piston pump is constructed by the physics-informed recurrent Gaussian process (PI-RGP) model. Compared with a standard Gaussian process model, the recurrent Gaussian process model can reflect the time-accumulative effect. The mean function of the model is generated by deriving equations from the physics-of-failure model to guide the forecasting process, so that the degradation model is more in line with the actual wear process. In addition, the model can be trained on small data sets and then updated and extrapolated with new measurements. Finally, the experimental results indicate that the proposed PI-RGP model provides foresight into the degradation process and can further improve the degradation prediction accuracy of the hydraulic piston pump.
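
A common way to combine a physics-of-failure mean with a Gaussian process is to let the physical wear law act as the mean function and fit the GP to the residuals. The sketch below shows this generic residual-GP pattern (not the authors' PI-RGP formulation), with invented wear data and an assumed linear, Archard-type wear rate:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical wear depth measurements (micrometres) over operating hours.
t = np.array([0, 200, 400, 600, 800, 1000], dtype=float).reshape(-1, 1)
wear = np.array([0.0, 4.1, 7.8, 12.5, 16.2, 21.0])

# Assumed physics-of-failure mean: linear wear law with an illustrative rate.
wear_rate = 0.02                      # micrometres per hour (assumed)
physics_mean = wear_rate * t.ravel()

# GP on the residuals captures randomness the physical model misses.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(t, wear - physics_mean)

# Predict future degradation: physics mean plus GP correction with uncertainty.
t_new = np.array([[1200.0], [1500.0]])
residual_mean, residual_std = gp.predict(t_new, return_std=True)
prediction = wear_rate * t_new.ravel() + residual_mean
print(prediction, residual_std)
```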

11:45
Rapid prediction of human evacuation from passenger ships based on machine learning methods
PRESENTER: Xinjian Wang

ABSTRACT. Compared to land-based evacuation scenarios, research on human evacuation from passenger ships presents unique challenges due to factors such as the complex geometric layout of ships, passengers' lower familiarity with the environment, and the impact of sea conditions. Rapidly predicting evacuation time is therefore crucial for the safety of passenger ships at sea. This study aims to address the challenge of rapidly and accurately predicting human evacuation time from passenger ships using methods such as simulation modelling and predictive analysis. Firstly, the key risk factors affecting human evacuation from passenger ships are identified through literature reviews and accident report analysis, and a set of evacuation risk factors is established based on different combinations of these risk factors. Secondly, a simulation model for human evacuation from passenger ships is developed, and its reliability is verified by comparing the simulation results with actual evacuation drill outcomes. Based on this model, different evacuation scenarios are simulated using various combinations of risk factors, and the impact of key factors—such as guiding behaviour, personnel attributes and initial distribution, day/night environment, stair availability, and ship inclination—on evacuation efficiency is systematically analysed. Finally, several well-established machine learning models, including Random Forest, Support Vector Regression, and Neural Networks, are used to rapidly predict human evacuation time in different scenarios, and the model with the shortest prediction time and highest accuracy is chosen. The results show that the simulation data closely align with the actual drill data. Among all the predictive models, Support Vector Regression performs best, providing rapid and accurate predictions of human evacuation time from passenger ships. The findings contribute significantly to improving evacuation safety on passenger ships and to crowd management.
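
To illustrate the final prediction step, a support vector regression model of the kind compared in the study can be fitted to scenario features and simulated evacuation times; the features, data-generating formula, and hyperparameters below are all invented:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical scenario features: [passenger count, heel angle (deg),
# night flag, blocked stairs]; targets are simulated evacuation times (s).
X = rng.uniform([200, 0, 0, 0], [2500, 20, 1, 3], size=(300, 4))
y = (300 + 0.4 * X[:, 0] + 15 * X[:, 1] + 120 * X[:, 2] + 90 * X[:, 3]
     + rng.normal(0, 30, size=300))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("Cross-validated R^2:", scores.mean())
```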

10:45-12:00 Session 8F: Inspection models
Location: F
10:45
A HYBRID MAINTENANCE POLICY FOR A BALANCED SYSTEM CONSIDERING TWO-LEVEL INSPECTION TYPES

ABSTRACT. Preventive maintenance of balanced systems is increasingly recognized for its pivotal role in maintaining operational efficiency across various industries. Despite this, the optimization of maintenance strategies for balanced systems with both balanced and non-balanced components remains underexplored. Existing policies often focus on threshold optimization without addressing the optimization of maintenance intervals in such systems. In this work, we propose a hybrid maintenance policy that combines two-level inspections and age-based preventive replacement for balanced systems operating under a degradation-shock environment. Our study is based on a real-world case involving a shredder machine in a scrap-based steel production line, which includes two balanced components, the hammer blocks, and one non-balanced component, the grate. Each hammer block is a k-out-of-n system consisting of n identical hammers arranged in series, with load-sharing dependencies. The system becomes imbalanced when weight differences between the hammer blocks, caused by wear and shock-induced mass loss, exceed a predefined threshold. Unlike traditional balanced systems, the out-of-balance state here does not immediately trigger failure. Instead, the system continues to operate for a sojourn period, during which a penalty cost is incurred, before collapsing. The system is thus subject to three competing failure modes: imbalance-induced failure, hammer block failure, and grate failure. To evaluate the performance of the proposed maintenance policy, the entire scrap-based production line is modeled using discrete-event simulation with Monte Carlo methods, allowing for the exploration of its decision variables. This approach provides a comprehensive framework for optimizing maintenance in balanced systems under real-world conditions, combining theoretical insight with practical applicability. A numerical example is presented to demonstrate the effectiveness of the proposed policy.

11:00
A prescriptive maintenance policy in the presence of imperfect maintenance

ABSTRACT. In this paper, we propose a maintenance policy which integrates prescriptive maintenance and imperfect maintenance actions. The policy consists in performing an inspection at a predetermined time and, based on its outcome, deciding whether to immediately replace the unit (i.e., a perfect replacement) or to postpone its replacement. If the replacement is postponed, an imperfect maintenance action is concurrently performed which reduces the degradation level and modifies (stochastically) the degradation behavior of the unit. After the first inspection, the imperfectly maintained systems are inspected again and, based on the outcome of this second inspection, it is decided whether to immediately replace the unit or to further postpone its replacement to a future time at which no inspection is performed and the unit is replaced systematically. In case of this second postponement, the degradation information gathered at the inspection time is also used to (possibly) adjust the usage rate of the unit. The idea here is to use the latest degradation measurement to quantify the uncertainty brought about by the imperfect action and intervene accordingly through the usage rate. Any change in the usage rate is assumed to affect both the future evolution of the degradation of the system and its operational costs. The driving idea behind the conception of this policy is to investigate if and how prescriptive maintenance actions (i.e., changing the usage rate) can assist in reducing the uncertainty brought about by the imperfect maintenance action. Failure is defined by the first passage time of the degradation process to a predefined threshold. Failure is also assumed not to be self-announcing, and failed units keep operating, albeit with reduced performance and additional costs.

11:15
METHODOLOGICAL FRAMEWORK FOR OPTIMIZING INSPECTION FREQUENCY IN A PREVENTIVE MAINTENANCE POLICY

ABSTRACT. Determining the optimal inspection frequency is a significant challenge because it involves balancing frequent inspections, which can increase operational costs, against sporadic inspections, which can result in unexpected failures [1]. This paper proposes a stochastic simulation model that captures the inherent uncertainty in equipment behavior by defining probabilistic parameters such as maintenance costs and the probability of finding the equipment in a certain deterioration state. The model employs the Weibull distribution to model the time to failure of the equipment [3] and Monte Carlo simulation to evaluate different inspection intervals and their impact on costs. The optimal inspection frequency is determined by minimizing the total expected cost, which includes inspection, maintenance, and failure costs. Additionally, a sensitivity analysis is performed to examine how key variables affect the model, with results visualized using surface plots. Metrics are also calculated to account for the risk and variability of the results. Among these metrics, dominance analysis and other techniques, such as Conditional Value-at-Risk (CVaR), are included to evaluate the outcomes in higher-risk and higher-uncertainty scenarios [4]. This approach is particularly useful in highly uncertain industrial settings, providing an effective tool for preventive maintenance planning.
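
A minimal sketch of this type of simulation, assuming illustrative Weibull parameters, cost figures, and a simplified failure-finding policy (none of which are taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed Weibull time-to-failure parameters and cost figures (illustrative).
shape, scale = 2.5, 1200.0                      # hours
c_inspection, c_failure = 100.0, 10000.0        # cost per inspection / failure
c_downtime_per_hour = 50.0                      # cost of undetected failure

def expected_cost_rate(interval, n_sim=20000):
    """Monte Carlo estimate of cost per hour for a given inspection interval.

    Simplified failure-finding policy: the equipment is inspected every
    `interval` hours; a (hidden) failure is detected at the first inspection
    after it occurs, which ends the renewal cycle.
    """
    failures = scale * rng.weibull(shape, size=n_sim)
    n_inspections = np.floor(failures / interval) + 1   # inspections per cycle
    detection_time = n_inspections * interval           # cycle length
    downtime = detection_time - failures                # undetected-failure time
    cycle_cost = (n_inspections * c_inspection + c_failure
                  + c_downtime_per_hour * downtime)
    return np.mean(cycle_cost / detection_time)

candidate_intervals = [100, 200, 400, 800]               # hours
rates = {tau: expected_cost_rate(tau) for tau in candidate_intervals}
best = min(rates, key=rates.get)
print(rates, "-> lowest expected cost rate at", best, "hours")
```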

11:30
Modified-opportunistic inspection with misclassification errors

ABSTRACT. Inspection of critical systems plays a crucial role in maintenance management, with periodic inspections aimed at preventing costly failures that could compromise system availability. This paper explores a scenario where non-maintenance-related events during system operation present opportunities for unscheduled, opportunistic inspections that can occur before scheduled ones. We develop a maintenance model that balances the costs of scheduled and opportunistic inspections, while accounting for judgment errors in both. Such errors may lead to misclassifying defective equipment as functional (false negative). A key challenge for managers is determining the optimal inspection intervals, considering all these factors. Our model not only identifies the initial time for a potential opportunistic inspection but also the maximum time the system can go without being inspected. An important characteristic of the model is its ability to adapt the frequency of each type of inspection based on their respective levels of false negatives. Consequently, the model serves as a decision-support tool to help define adequate maintenance plans, accounting for potential errors and assessing which opportunities most effectively contribute to observing the system state at a lower cost.

10:45-12:00 Session 8G: Collaborative Intelligence and Safety Critical Systems Applications I
Location: G
10:45
Harnessing the Potential of Collaborative Workstations to Enforce the EU Social Pillar: A Qualitative Jump in Workplace Safety, Resilience & Risk Management

ABSTRACT. The rapid integration of collaborative workstation technologies in the workplace, such as AI-driven decision-making systems and collaborative robots, presents both opportunities and challenges for enforcing EU regulation, particularly in the areas of workers' rights, labour conditions, and occupational health and safety. This paper investigates the potential of these technologies to enhance compliance with existing labour protections and fulfil the targets laid out in the European Pillar of Social Rights Action Plan (European Commission, 2022). Building on the theoretical frameworks of Aloisi (2021) and Ferreras (2017), this paper explores how automation, when aligned with existing EU soft law and hard law, can uphold high standards in labour conditions and strengthen workplace safety protocols. However, it also critiques the gaps in existing EU legislation, for instance the fact that the Working Time Directive and the Transparent and Predictable Working Conditions Directive lag considerably behind technological advancements, as they were drafted with more traditional technologies in mind. The data collection consists of 25 semi-structured interviews conducted to date, focused on three stakeholder groups: a) employees and b) managers of companies developing or using data-driven technologies, and c) professionals who facilitate, study and/or seek to influence them. Many interviewees reported associating the growing automation in their workplace with an increase in unfair, misguided or detrimental automated monitoring practices, more stressful working conditions, and generally, an uneven distribution of the benefits obtained through the introduction of collaborative technologies. This has a double negative effect, as neither do employees experience higher rates of satisfaction, nor is the full potential of collaborative workstations achieved. By seeking to understand which changes are required to increase workers' agency and enfranchisement in the deployment and use of collaborative workstations, this research contributes to the conference themes of advancing resilience, reliability, and safety in both workplace environments and society at large.

11:00
Framework for an Open-Access Dynamic Accident Knowledge Graph Platform for the Critical Infrastructures in Process Industry
PRESENTER: Shuo Yang

ABSTRACT. Critical Infrastructures' (CIs) resilience is fundamental to the common good of the environment, economy, and society. Hazard identification is the starting point of risk and resilience management. However, state-of-the-art techniques like HAZID/HAZOP depend heavily on the expert group's subjective judgment. Business managers complain about the multiple safety certificates required by different regulators, as the hazard identifications from different groups are inconsistent and not mutually recognized.

There are already many established accident report databases: the National Transportation Safety Board (NTSB) Accident Database, the Aviation Safety Network (ASN), the Bureau of Aircraft Accidents Archives (B3A), the International Maritime Organization (IMO) Global Integrated Shipping Information System (GISIS), etc. However, they only provide information and other knowledge about the hazards. Building on the development of large language models, this research proposes a framework to connect these databases, extract evidence-based hazard information from them, and keep the data updated at daily intervals. The extracted data can be structured into a dynamic Knowledge Graph (KG) according to the different functions of CIs and the category of the hazards. Combined with blockchain techniques, experts could also modify and improve the unsupervised KG and thereby contribute to a consensus on the hazards of CIs.
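
To illustrate what a fragment of such a dynamic knowledge graph could look like once hazard information has been extracted, here is a toy example using networkx with invented process-industry entities, relations, and report identifiers:

```python
import networkx as nx

# Toy fragment of a hazard knowledge graph: nodes are CI functions, hazards and
# consequences; edges carry typed relations. All entities are illustrative and
# would, in the proposed framework, be extracted from accident reports by an LLM.
kg = nx.MultiDiGraph()
triples = [
    ("storage tank overpressure", "causes",      "loss of containment"),
    ("loss of containment",       "may_lead_to", "vapour cloud explosion"),
    ("pressure relief valve",     "mitigates",   "storage tank overpressure"),
    ("loss of containment",       "reported_in", "incident_report_2021_084"),
]
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

# Simple hazard query: everything that can follow from a loss of containment.
for _, obj, data in kg.out_edges("loss of containment", data=True):
    print(f"loss of containment --{data['relation']}--> {obj}")
```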

11:15
From Theory to Practice: Achieving Reliable Robot Autonomy through Symbol Grounding
PRESENTER: Aayush Jain

ABSTRACT. In an era where robotic autonomy is becoming increasingly pivotal across various sectors, the symbol grounding problem remains a significant barrier to achieving reliable, context-aware automation. This paper presents novel frameworks to enhance robot autonomy by integrating symbol grounding into autonomous systems, specifically focusing on robot manipulation tasks. We first introduce a method for programming robots using behavior trees (BTs), derived from single demonstrations, which embeds symbolic knowledge into robot operations, enabling adaptability. Second, we discuss a framework that integrates spatial scene graphs with BTs to improve task execution through an enhanced understanding of spatial relationships and object interactions, which is crucial for dynamic and cluttered environments. Lastly, we present a neuro-symbolic approach for failure detection and diagnosis in robotic systems. This approach leverages the synergy between symbolic reasoning and neural network capabilities to detect and diagnose operational failures accurately, addressing the critical need for reliability in automated systems. The preliminary evaluation results demonstrate advancements in the programming, execution, and failure detection of robot manipulation tasks, paving the way for more adaptive and intelligent robotic systems in complex real-world applications.
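
As a minimal, library-free illustration of the behavior-tree control pattern that the first framework builds from demonstrations (the skill names and structure below are invented, not taken from the paper):

```python
# Minimal hand-rolled behavior tree: a Sequence node ticks its children in
# order and fails fast. Actions are illustrative stubs standing in for grounded
# robot skills.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    def __init__(self, name, func):
        self.name, self.func = name, func
    def tick(self):
        return SUCCESS if self.func() else FAILURE

class Sequence:
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

# Hypothetical pick-and-place skill grounded in symbolic preconditions.
tree = Sequence([
    Action("object_detected", lambda: True),
    Action("grasp_object",    lambda: True),
    Action("place_object",    lambda: True),
])
print(tree.tick())   # -> SUCCESS only if every step succeeds
```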

11:30
A Time-series Data Generation Tool for Risk Assessment of Robotic Applications
PRESENTER: Andrey Morozov

ABSTRACT. Robotic systems increasingly rely on artificial intelligence (AI) to enhance their capabilities in performing complex tasks across various domains. However, the development and evaluation of AI systems usually require high-quality datasets. In addition to normal datasets, faulty datasets are critical for enabling anomaly detection and failure prevention, which are essential for ensuring the safety and reliability of safety-critical robotic applications. However, faults are rare in real-world environments. Although fault injection techniques allow for the manual injection of configurable faults, deploying such methods directly in real-world settings is rather risky. As such, it is important to develop a data generation tool which is low-cost, safe, and efficient. To address this, we developed a time-series data generation tool for the risk assessment of robotic applications. This ROS-based simulation tool integrates three key modules: (1) a Gazebo-based scene generator that can configure different working scenarios (e.g., drilling and welding) by adjusting end-effectors, workpieces, and hand positions; (2) an online fault injector that can introduce faults into robotic systems with configurable parameters; and (3) a risk monitor that records faulty data and safety violations in real time by measuring the distance between hands and end-effectors. The proposed tool facilitates the generation of time-series fault data and helps identify faults that may pose risks in human-robot collaboration scenarios. Additionally, the proposed simulation tool enables fast and safe deployment for other robot-related research areas, e.g., deep learning-based anomaly detection, failure prediction, and risk assessment.
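
The online fault-injection idea in module (2) can be sketched, outside ROS/Gazebo, as a small routine that corrupts a healthy time series with configurable fault types; the signal and fault parameters below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "healthy" time series, e.g. a joint angle logged at 100 Hz.
t = np.linspace(0.0, 10.0, 1000)
signal = np.sin(0.5 * t) + rng.normal(0.0, 0.01, t.size)

def inject_fault(x, kind, start, magnitude=0.0):
    """Return a copy of x with a configurable fault injected from `start` on."""
    y = x.copy()
    if kind == "drift":        # slowly growing offset
        y[start:] += magnitude * np.arange(x.size - start)
    elif kind == "stuck_at":   # sensor freezes at its last healthy value
        y[start:] = y[start]
    elif kind == "bias":       # constant offset
        y[start:] += magnitude
    return y

faulty = inject_fault(signal, kind="drift", start=600, magnitude=0.002)
labels = np.zeros(t.size, dtype=int)
labels[600:] = 1               # ground-truth anomaly labels for model training
```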

11:45
Enhancing Resilience in Robotic Systems through Self-Awareness and Adaptive Recovery
PRESENTER: Ruichao Wu

ABSTRACT. Robotic systems are becoming increasingly complex in both structure and behavior, integrating multiple components to provide advanced services. As this complexity grows, so does the likelihood of faults, making resilience essential for maintaining dependable functionality. This paper proposes a novel framework to enhance the resilience of robotic systems by enabling self-awareness and self-recovery in response to errors. To manage this complexity, we introduce the concept of a "Skill Chain", a set of components that collaboratively deliver specific services required for the system to transition through its states and achieve its mission goals. By continuously monitoring its internal state, the system detects errors before they propagate beyond the skill chain level. Upon detection, the system evaluates its current state and available resources. If spare components or other fault tolerance mechanisms exist, it reconfigures itself by forming a new skill chain capable of maintaining service delivery. Recovery strategies, such as backward recovery to return to a previously successful state or forward recovery to adapt service delivery, are dynamically applied to ensure minimal disruption. The paper introduces the concept of the approach and demonstrates its applicability on an exemplar robotic system.

10:45-12:00 Session 8H: Risk governance II
Location: H
10:45
Civil defense in spatial planning: A review of current knowledge in Sweden

ABSTRACT. Sweden is facing a shifting security landscape due to Russia's large-scale invasion of Ukraine and Sweden's recent membership in NATO. This has created an urgent need to strengthen total defense, encompassing both military and civil defense. This development follows a post-Cold War period during which much of the Swedish civil defense planning was dismantled. As a result, the knowledge and capacity once held by Swedish authorities to integrate total defense considerations into spatial planning processes have largely been lost in recent decades, making this a prioritized area to reconstruct and develop. Civil defense is a critical consideration in spatial planning, where decisions have long-term impacts that can span decades. This applies to all levels of planning, from detailed development plans for individual structures to national infrastructure and other essential societal functions. By incorporating civil defense into the spatial planning process, the foundation for a resilient society is created by prioritizing safety, preparedness, and reliable functionality both in everyday life and during crises or attacks. Currently, there are few up-to-date reviews of the state of knowledge in this area in Sweden, since security issues have not generally been included in the concept of resilience in planning. Based on scientific and grey literature supplemented by key informant interviews, this paper presents the findings of a knowledge overview that compiles existing knowledge and experiences and identifies future knowledge development needs for civil defense in Swedish spatial planning.

11:00
Safe and Unsafe Information: Managing Risks in the Era of Generative Artificial Intelligence

ABSTRACT. The transformative impact of digitalization on organizations has significantly increased the availability of organizational information to the public. This shift amplifies the responsibility of organizations to ensure the safety of their digital products and services, as unsafe information can cause harm to society or the environment. Generative artificial intelligence (GenAI) introduces unique risks by enabling the effortless production of ungrounded and potentially harmful content, such as hallucinations, which can propagate misinformation when uncritically used. These challenges necessitate a departure from traditional corporate social responsibility (CSR) frameworks towards more robust risk management strategies. This paper develops a taxonomy of safe versus unsafe information from GenAI, characterized by three dimensions: correct, open, and benignant for safe information; and incorrect, protected, and dangerous for unsafe information. Drawing on empirical data from Italian organizations, we validate and verify the alignment of established risk taxonomies and derive practical recommendations for mitigating these risks. These include implementing rigorous data validation pipelines, restricting inputs to trusted and verified sources, and employing robust processing and oversight mechanisms. By embedding these strategies into governance frameworks, organizations can mitigate the risks of unsafe information while ensuring that GenAI contributes positively to societal and environmental well-being.

11:15
Risk Communication in Trauma Handling

ABSTRACT. Unaccompanied minor refugees are among the most vulnerable population groups, at risk of trauma and the lifelong negative consequences that come with it, as these minors flee to another country by themselves, without parents or guardians who could be their support system. In Norway, the state takes responsibility for their wellbeing and trauma handling, since these minors arrive without parental figures. As the number of asylum applications from unaccompanied minor refugees has recently been increasing, trauma handling has become an important subject that needs attention. This research therefore aims to study how risk communication among refugees, the host community, and other involved stakeholders could be used effectively in trauma handling to improve the wellbeing of unaccompanied minor refugees. To study this, ten employees who work with unaccompanied minor refugees in a Norwegian municipality were interviewed about the situational and practical challenges they face in trauma handling and in risk communication related to trauma. In addition, some of the official guidelines followed by these employees were studied to check their accuracy. Based on these two research methods, three key findings were made. First, to deal with trauma, risk communication should focus on creating a favourable environment, not just on educating the refugees about risks. Second, host communities also go through trauma in this new situation; this has to be communicated and given attention. Finally, trauma handling requires flexible and effective cross-communication of risks among the employees working towards this goal, who come from different fields and backgrounds.

11:30
Advocating for a Contextual Approach: Integrating the Risk Governance Framework into deforestation mitigation strategies in Africa
PRESENTER: Chinwe Oramah

ABSTRACT. Deforestation in Africa poses significant environmental, socio-economic, and political challenges, necessitating a structured approach to risk management. This paper explores the integration of the Risk Governance Framework, developed by the International Risk Governance Council (IRGC), into deforestation mitigation strategies across the continent. The framework's emphasis on adaptive management and inclusive decision-making provides a comprehensive methodology for addressing the complex risks associated with deforestation. Key processes such as pre-assessment, risk appraisal, characterization and evaluation, management, and communication are examined in the context of socio-political factors driving deforestation and mitigation strategies in Benin, Burundi, Zambia, and Sudan. By systematically applying these processes, the study highlights the importance of stakeholder engagement, transparent communication, and context-specific strategies in developing sustainable solutions to deforestation. The findings offer valuable insights for policymakers, environmental organizations, and local communities, contributing to more equitable and resilient deforestation mitigation strategies in Africa.

11:45
Risk Identification and Operational Risk Assessment for the Safety and Availability of the Bjørnafjord Bridge, Norway

ABSTRACT. The Bjørnafjord Bridge project is an ambitious infrastructure undertaking in Norway aimed at constructing a combined bridge consisting of a 760 m long cable-stayed bridge and a 4770 m long floating bridge over the Bjørnafjord, making it one of the longest floating bridges in the world.

The study presents an operational risk assessment (ORA) for the operational safety and availability of the Bjørnafjord Bridge. The multi-disciplinary risk assessment considers, with a focus on human safety and operational downtime, a range of hazards related to, among others, ship collisions, traffic safety and geo-hazards.

The risk assessment is based on a thorough system description including bridge design, geophysical information, traffic forecasts and nearby ship traffic. Based on the system description, a hazard identification (HAZID) was conducted engaging a broad range of relevant stakeholders to ensure a complete spectrum of potential operational risks. The identified hazards are modelled using event tree analysis to determine the frequency of each hazard, which is then combined with the consequences in terms of fatalities and disruption to calculate the risk. Finally, the risk results are compared with the defined risk acceptance criteria, and where the risk is found to be higher than the acceptance criteria, additional risk-reducing measures are identified. The study also includes a cost-benefit analysis of the capabilities of selected risk-reducing measures.

This comprehensive risk assessment of the Bjørnafjord Bridge project, considering both current and future transportation trends (conventional and electrical vehicles), stands out as a significant and forward-thinking contribution to the increasingly sophisticated field of human safety and operational reliability assessments. The project underlines the importance of applying an ALARP (As Low As Reasonably Practicable) mindset and a cost-benefit analytical approach to bridge safety and hazard identification, setting a precedent for future infrastructure projects.
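
As a rough illustration of the frequency-consequence aggregation such an event-tree-based assessment performs, the sketch below uses purely hypothetical hazard frequencies, branch probabilities and an acceptance criterion (none taken from the Bjørnafjord study):

```python
# Minimal sketch of event-tree style risk aggregation (hypothetical numbers).
# Each hazard has an initiating frequency (per year) and branch probabilities
# leading to outcomes with consequences (fatalities, downtime in days).
hazards = {
    "ship_collision": {
        "frequency": 1e-3,  # initiating events per year (illustrative)
        "branches": [
            # (branch probability, fatalities, downtime_days)
            (0.90, 0.0, 2.0),    # minor impact, short closure
            (0.09, 0.5, 30.0),   # structural damage
            (0.01, 5.0, 365.0),  # major failure
        ],
    },
    "vehicle_fire": {
        "frequency": 5e-2,
        "branches": [
            (0.95, 0.0, 0.5),
            (0.05, 1.0, 10.0),
        ],
    },
}

def aggregate_risk(hazards):
    """Return expected fatalities per year (PLL) and expected downtime per year."""
    pll, downtime = 0.0, 0.0
    for h in hazards.values():
        for p_branch, fatalities, days in h["branches"]:
            rate = h["frequency"] * p_branch
            pll += rate * fatalities
            downtime += rate * days
    return pll, downtime

pll, downtime = aggregate_risk(hazards)
ACCEPTANCE_PLL = 1e-3  # hypothetical acceptance criterion (fatalities/year)
print(f"PLL = {pll:.2e} /yr, expected downtime = {downtime:.2f} days/yr")
print("Within acceptance criterion" if pll <= ACCEPTANCE_PLL
      else "Risk-reducing measures required (ALARP evaluation)")
```

In an actual ORA, each branch would be derived from the HAZID and supporting analyses, and downtime would be assessed against a separate availability criterion.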

10:45-12:00 Session 8I: Oil and gas
Location: I
10:45
Risk Level project in Norwegian oil and gas – 25-year anniversary

ABSTRACT. The Risk Level project (RNNP) was launched in 2000 and presented the first results in 2001, with annual reports since then. The reports cover all petroleum activities offshore and onshore Norway, based on voluntary and mandatory submission of data from oil companies and rig owners. The reporting covers personnel injuries, major accident precursors, environmental spills, well barriers, topside fire and explosion barriers, marine system barriers, maintenance, crane and lifting incidents and accidents, working environment, safety climate, risk perception and work related illnesses. The broad reporting over such a long period is unique in a worldwide perspective, and has had a very significant impact in Norway, for authorities, employees and employers.

The results have shown significant improvements over the 25-year period, especially in the first half of the period. The scope of the reporting has increased significantly since the first report, but the data basis has been reduced significantly in some areas, due to the improvements achieved over the years.

The RNNP has been important to achieve consensus between the different parties about levels and trends, as well as focus areas for improvement and motivation for risk reduction.

The paper will review the trends in the overall performance for the various parameters, but the main focus will be on the development of indicators and the modelling of major hazards, and the challenges that a diminishing volume of data implies. Changes in framework conditions will also be addressed, including climate issues and the green transition away from petroleum.

The paper will also discuss the possibility of developing an overall indicator covering all health, environment and safety aspects, and focus on the advantages and disadvantages of such an indicator.

11:00
Driving through the simulated night: A comparison between simulator-based and traditional night driving training
PRESENTER: Martin Skogstad

ABSTRACT. Implementing new technology is a costly endeavor for many companies. This is especially true when the technology is not well perceived and received by the employees, potentially leading to financial costs, reduced worker motivation and reduced safety. This paper examines how two well-known indexes, alongside a self-made scale on workplace technology experience, predict acceptance of future workplace technology. 78 participants from the Norwegian oil and gas sector were shown four demonstration videos of new technology (not yet released) embedded in a questionnaire that included a Technology Acceptance Model scale for each technology, a willingness-to-change scale (Resistance to Change), general readiness towards new technology (Technology Readiness Index), and self-made scales for workplace technology experience and workplace implementation experience. Workplace technology experience was the strongest predictor of believed technology acceptance for all four technologies. The suggested explanation is that technologies previously introduced at work create trust, or a lack thereof, in those developing, choosing and implementing future technology. This trust is a stronger predictor of future acceptance than an individual's general technology readiness or resistance to change. More research is needed to understand the mechanisms by which technology experience influences future acceptance. However, the findings highlight the importance of technology choice, as it influences not only the current implementation but also future technologies.
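
For readers unfamiliar with this kind of analysis, the sketch below shows, on synthetic data only (not the study's 78 responses), how a simple multiple regression can compare the predictive strength of the three scales:

```python
# Illustrative sketch with synthetic data; variable names are assumptions,
# not the study's actual measures or results.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 78
# Standardised predictor scales: resistance to change, technology readiness,
# workplace technology experience.
X = rng.normal(size=(n, 3))
# Synthetic outcome in which experience (column 2) carries most of the signal.
acceptance = 0.1 * X[:, 0] + 0.2 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 0.5, n)

model = LinearRegression().fit(X, acceptance)
for name, coef in zip(["resistance_to_change", "tech_readiness", "workplace_experience"],
                      model.coef_):
    print(f"{name:22s}: beta = {coef:+.2f}")
print(f"R^2 = {model.score(X, acceptance):.2f}")
```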

11:15
Risk assessment of a liquid storage terminal: comparison of Brazilian and Dutch methodologies
PRESENTER: Edmilson Silva

ABSTRACT. Risk assessment is an important tool to ensure the safety of an industrial facility. In Brazil, it is applied in the decision-making process for the licensing of hazardous installations. This paper aims to compare the Quantitative Risk Assessment (QRA) methods used in Brazil (state of São Paulo), France and the Netherlands, with regard to flammable liquid storage terminals. The results calculated using the Brazilian methodology have been compared with the results obtained by RIVM (National Institute of Public Health and Environment) and INERIS (French National Institute for Industrial Environment and Risks) published by Lenoble et al in 2011. The risks of the same flammable liquid storage depot used by RIVM and INERIS have been evaluated based on the guidelines defined by the environmental agency of the state of São Paulo, P.4261 - Risk of accidents of technological origin - Method for decision-making and terms of reference (P4.261/11 2nd ed.).

11:30
Decommissioning Framework for Offshore and Oil and Gas Installations
PRESENTER: Joe Ford

ABSTRACT. Decommissioning an offshore oil and gas installation generates large volumes of waste materials, some of which can be classified as hazardous. This, coupled with changes in operators and workforce and the numerous applicable legislations and regulations, means that the decommissioning process is complex and subject to confusion. This paper outlines a decommissioning framework for offshore oil and gas installations, which aims to provide guidance throughout the decommissioning process as well as bringing clarity, consistency and sustainability to the process. The framework is informed by previous research findings, including analytical hierarchy process results, a Bayesian network and discussions with industry experts, which highlighted key factors in handling hazardous waste. The key factors included understanding the legislation, sharing knowledge effectively, and identifying waste materials. The framework has been applied to a historical case study to demonstrate its relevance and effectiveness.

11:45
Data-Driven Multi-Failure Degradation Modeling for Neutron Generators in Logging-While-Drilling Service

ABSTRACT. Logging-while-drilling tools are essential to the oil and gas industry for formation evaluation and the acquisition of real-time formation data. These tools operate in extreme environments, facing high temperatures, vibrations, and pressures that can accelerate wear and lead to failures. Malfunctioning tools risk producing inaccurate data, causing delays or even operational shutdowns, which can result in nonproductive time and financial losses. Field engineers traditionally analyze sensor data after each run to assess tool health, but the large data volume makes manual analysis time-consuming and prone to error. A more efficient approach involves domain experts identifying key subsystems and channels with relevant degradation data. Statistical features extracted from these channels provide insight into system degradation over time, which can be used to build machine learning models that estimate tool condition from the recorded data.

We present a data-driven approach for estimating the health state of neutron generators in drilling tools, starting by identifying two primary failure modes. The challenge lies in modeling and monitoring their interactions and progressions. Building on prior work in fault detection, diagnostics, and remaining useful life estimation for one failure mode, we expand to model an additional failure mode. A health indicator derived from sensor data of the neutron generator is developed, serving as a statistical representation of degradation over time. This indicator is used to train a machine learning model that predicts the health state, using real-world drilling data collected from various global operations.

The models for both failure modes are integrated into a decision support system that automates health assessments, enabling more precise monitoring and effective maintenance. This system helps field and maintenance engineers quickly assess neutron generator condition based on run data, improving reliability and reducing downtime. This work is part of a larger initiative to develop a digital fleet management system for drilling tools.
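
A minimal sketch of the feature-extraction and health-indicator idea described above, using synthetic run data and a plain linear model as a stand-in for the authors' machine learning pipeline:

```python
# Illustrative sketch (not the authors' actual pipeline): derive a simple
# health indicator from per-run sensor statistics and fit a model that maps
# features to a degradation score between 1.0 (healthy) and 0.0 (failed).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def extract_features(run_signal):
    """Statistical features summarising one run of a monitored channel."""
    return np.array([run_signal.mean(), run_signal.std(),
                     np.percentile(run_signal, 95), np.ptp(run_signal)])

# Synthetic training data: later runs drift upward, mimicking degradation.
n_runs = 40
X = np.vstack([extract_features(rng.normal(loc=1.0 + 0.02 * i, scale=0.1, size=500))
               for i in range(n_runs)])
health = np.linspace(1.0, 0.2, n_runs)  # assumed degradation labels

model = LinearRegression().fit(X, health)

new_run = rng.normal(loc=1.5, scale=0.1, size=500)
hi = float(np.clip(model.predict(extract_features(new_run).reshape(1, -1))[0], 0.0, 1.0))
print(f"Estimated health indicator: {hi:.2f}")
```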

10:45-12:00 Session 8J: Exploring the opportunities and challenges of AI in managing Major Accident Hazards in high-risk environments (Presight)
Location: J
10:45
AI as a copilot for conducting a bowtie analysis in the modelling phase of Risk Management.
PRESENTER: Petter Johnsen

ABSTRACT. Artificial Intelligence (AI) presents opportunities in the modelling phase of the ISO 31000 Risk Management framework, particularly in creating bowtie models to prevent Major Accident Hazards. Bowtie models are used to visualise risks in a simple, accessible way. Our focus is on exploring how AI can act as a collaborative tool, or in other words as a "copilot", working together with human expertise to identify and assess potential risks, consequences, and threats. This integration of AI with a human in the loop seeks to expand and deepen the insight into a company's risk picture. AI's capacity to process and analyse vast quantities of data and identify trends offers significant advantages in generating predictive insights into potential hazards and influencing factors. By using AI to augment traditional risk analysis, we aim to explore whether this approach can improve the quality and relevance of identified threats. AI is not intended to replace existing methods, but to add value by using its capabilities to process large volumes of data and provide insight on trends. While we recognise the risks, our focus is on using innovation and technology to improve efficiency and quality. We aim to explore AI's potential to enhance decision-making and foster innovation. Our focus will be on how AI as a copilot can contribute to minimising risk for clients while simultaneously fostering innovation. We want to explore how AI can improve the analysis of a bowtie model to uncover factors that might otherwise be overlooked. This exploration seeks to balance cutting-edge technology with practical benefits to maintain robust, forward-looking risk management strategies.
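
A minimal sketch of a bowtie captured as a data structure that such a copilot could populate for human review; the top event, threats and barriers shown are illustrative placeholders, not taken from any client case:

```python
# Minimal sketch of a bowtie model as a data structure an AI "copilot" could
# help populate; all entries below are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Barrier:
    name: str
    owner: str = "unassigned"

@dataclass
class Bowtie:
    top_event: str
    threats: dict = field(default_factory=dict)        # threat -> [preventive barriers]
    consequences: dict = field(default_factory=dict)   # consequence -> [mitigating barriers]

bowtie = Bowtie(top_event="Loss of containment of hydrocarbons")
bowtie.threats["Corrosion of piping"] = [Barrier("Inspection programme"),
                                         Barrier("Corrosion-resistant material")]
bowtie.consequences["Jet fire"] = [Barrier("Gas detection and ESD"),
                                   Barrier("Passive fire protection")]

# An AI copilot could suggest candidate threats for human review:
suggested = ["Erosion at bends", "Gasket failure"]
for threat in suggested:
    bowtie.threats.setdefault(threat, [])   # added as unreviewed placeholders

print(f"{len(bowtie.threats)} threats linked to top event '{bowtie.top_event}'")
```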

11:00
Increasing confidence in AI models by explaining uncertainty in predictions
PRESENTER: Andreas Hafver

ABSTRACT. A major challenge in AI is that models are sometimes confidently wrong. This can have severe consequences if the AI models are used for critical decisions. One way to address this issue is to use interpretable models or explainability methods so that reasons can be provided for predictions. These reasons can then be scrutinized by humans to determine whether they trust the model; however, an explanation can be convincing and yet wrong. Another approach is to use uncertainty quantification methods to provide a measure of confidence about predictions. However, a measure of uncertainty is of limited value unless we understand what the uncertainty is based on. In this paper, we recognize that explanations of predictions and measures of confidence about predictions are useful for decision-makers. However, we hypothesize that decision-makers could benefit even more from explanations of the uncertainty in predictions. This paper introduces an approach based on the Tsetlin Machine that provides predictions together with a measure of uncertainty about the predictions, explanations for the predictions, and explanations for the uncertainty in the predictions, in order to explore how the explanation of confidence would add value to a decision-maker. Additionally, we propose to use uncertainty explanations with "human-in-the-loop" feedback in a continuous cycle to improve the model. This approach enhances both the technical and practical aspects of AI, making it more reliable and trustworthy in high-stakes applications such as healthcare, energy, transport and finance. Using real-world data, we explore the importance of local interpretability—ensuring decision-makers gain relevant insights into individual predictions and their uncertainty—and global interpretability, which provides a comprehensive understanding of the model's decision process. This global understanding, enriched by expert feedback, enables further refinement of the model.

11:15
Artificial Intelligence and Major Accident Risk: Towards a Framework for Systemic Safety Risk Assessment

ABSTRACT. Preventing major accidents is fundamental in all safety-critical operations. History has shown that the introduction of new technology may create novel safety risks. Today, fatalities associated with artificial intelligence (AI) are reported in manufacturing, healthcare, and transportation. In the energy sector, AI is increasingly used in an operational context where the potential for major accidents is considerable. Although the introduction of AI is typically done with a strong emphasis on information security and civil rights risks, there is limited systematic focus on major accident scenarios. This is partly because our current methods and approaches are not designed to do so and partly because the risk focus in the AI domain is on other potential outcomes. Additionally, AI is a diverse field with a range of industrial applications, from preventive maintenance and diagnosis to autonomous systems like robots and drones, and operator support systems. This paper describes the results of a project that developed a framework for the classification of major AI (and AI-related) technologies as a basis for major accident hazard identification and safety impact assessment for each technology category. The result is a risk-based framework that can be used in the introduction and application of AI in different contexts, increasing awareness and understanding of AI as a factor in major accident risk assessments. The framework has a systemic focus that emphasizes the interplay between technical, operational, and organizational factors.

11:30
Risk Management and Uncertainty of Artificial Intelligence in a High Hazard Industry
PRESENTER: Elisabeth Lootz

ABSTRACT. The Norwegian Ocean Safety Authority (HAVTIL) has observed rapid changes involving Artificial Intelligence (AI). Currently we have limited knowledge of the possible consequences, and more uncertainty related to the development, deployment and maintenance of AI compared to other technologies. Deployed in high-hazard contexts such as the petroleum industry, AI can introduce new safety risks that must be mitigated to ensure prudent operations. Meticulous risk management throughout the AI lifecycle, including an understanding of new uncertainties, will be essential for both providers and deployers. The possibilities AI represents are vast, and optimism among the players is tremendous. However, the speed of development of AI solutions, prediction risk, unexpected or malicious use, and new discipline experts unfamiliar with the potential for major hazards offshore can make it challenging to develop a sufficient understanding of risk and of the need for reliable AI predictions in a high-hazard industry. We assert that the concept of uncertainty in our regulations will become more essential in the risk management of safety, providing knowledge-based decisions to ensure safe operations when new technologies such as Artificial Intelligence (AI) are introduced.

10:45-12:00 Session 8K: Fire risks I
Location: K
10:45
THE NOVEL MULTI-LEVEL METHODOLOGY FOR FIRE RISK ASSESSMENT OF RESIDENTIAL BUILDINGS
PRESENTER: Mirjana Laban

ABSTRACT. The frequent occurrence of devastating residential fires worldwide highlights the urgent need to improve the fire safety of residential buildings. The proposed multi-level methodology includes: 1) Spatio-temporal analysis of fire distribution in residential buildings, using the Fire Hazard Map; 2) Mapping and analysis of the residential buildings' fire risk, using the Fire Risk Map; 3) Quality control of the building's fire safety performance, using the checklist method; 4) Risk assessment in terms of the need for installation of fire protection systems, using the Euroalarm method; 5) Quantitative fire risk assessment, using the event tree method. This approach plays a crucial role in managing fire risk in buildings and mitigating risks at the urban level, as it enables resource planning, the implementation of fire protection measures, and the organization of firefighting interventions in the event of a fire, based on a global spatial representation of the building and descriptive information. The newly developed methodology was validated on a selected multi-storey residential building located in Novi Sad, Serbia. The fire risk assessment revealed basic indicators of frequency, time, and place of fire occurrence in residential buildings and indicated a high fire risk for the subject buildings. The analyzed building has a very low level of fire safety. The existing fire protection measures are insufficient to reduce the fire risk of the building to an acceptable level. The residents who have the highest probability of survival in the event of a fire are those who remain in their apartments and wait for the fire brigade to rescue them. In contrast, residents who start evacuating late, using a stairwell filled with combustion products, have the lowest probability of survival.

11:00
Understanding and preventing fires close to the body
PRESENTER: Edvard Aamodt

ABSTRACT. Background: Several fatal fires start in the clothes worn by the deceased or in the furniture the deceased was lying or sitting in, and such fires are known to cause fatalities even in homes with automated sprinkler systems installed. Studies of fatal fires have contributed to knowledge on how these fires happen, but there are fewer studies on near-misses and successful preventive measures. We have therefore interviewed people working with fire prevention in Norway to document what fire preventive measures they use, their thoughts on the effectiveness of the measures, and what they see as key solutions to prevent fatal fires where someone is in immediate closeness to the object of first ignition.

Objectives: - Investigate existing measures to prevent and mitigate the consequences of fires where a person is in immediate closeness to the object of first ignition. - Investigate what measures the persons working with fire prevention for people at risk find practical to use and have good experience with. - Suggest measures to prevent and mitigate the consequences of fires where a person is in immediate closeness to the object of first ignition.

Conclusion: Home visits are an important tool for giving advice and for discovering that a person needs additional fire preventive measures. Measures can be as simple as making sure that someone with reduced mobility who smokes indoors has a steady glass of water nearby, serving as an ashtray and extinguishing medium. The most expensive measure suggested, which can also be tricky to implement, is personal protection water mist systems. Financial, practical and organisational aspects are all important for successful implementation of measures, as is good co-operation between the team implementing the measures and the person receiving them.

Acknowledgement: This work was financed by the Norwegian Directorate for Civil Protection and the Norwegian Building Authority.

11:15
A Standardised Way of Municipal Learning From Fires? Opportunities and Challenges From a Development Project
PRESENTER: Edvard Aamodt

ABSTRACT. This paper addresses a standardization process for learning from fires in Norwegian fire and rescue services. Fires kill on average around 40 persons each year in Norway. To prevent future loss of lives, organisations should learn from past incidents, which is a legal requirement. Today, fire and rescue services are rather autonomous organizations within municipalities, and the terminology, practice and procedures for learning from fires vary. A need has been identified to standardize the "evaluation" of fires, as well as a need for a research-based approach to the topic. In this project, we use a participatory approach including workshops and interviews to explore the standardisation of learning from fires, and the paper highlights opportunities, challenges and concrete learning artefacts from this development project. Theories of organisational learning and learning from accidents, together with guidelines from human and organisational performance (HOP), are applied as frameworks for the work.

11:30
Amplifying Wildfire Risk Preparedness in Southeast Australia: A Creative, Community-based Approach
PRESENTER: Olivia Jensen

ABSTRACT. The risks associated with bushfires in Victoria are compounding as demographics change, community trust shifts, and anthropogenically induced climate crises intensify and lengthen fire seasons. In particular, as the number of residents in the wildland-urban interface (WUI) increases, more people are exposed to wildfire risk, but do not undertake the preparedness actions recommended by public authorities. As trusted risk messengers retire, and new forms of risk messaging emerge, it is important for risk communication to build trust to help residents internalise risk and influence them to engage in protective action. Trusted scientists, practitioners, and volunteers through the Country Fire Authority and community depots are often overworked and underfunded. In this context, this paper reports on a collaborative intervention by researchers and practitioners to design creative forms of risk communication to support informed decision-making under uncertainty. Can engaging bushfire survivors to share their testimonies and insight impact affective and emotional responses in a way that operationalises uncertainty to improve message effectiveness? Can aligning with celebrity community members such as AFL players who have large social followings to disseminate a seasonal message of responsibility incentivise hazard preparedness? Does a catchy jingle as part of risk messaging raise and sustain public engagement on seasonal risks that lose salience over time? With an emphasis on learning alongside people facing these risks and frontline risk managers, this research will shed light on how people respond to different types of risk messaging in relation to wildfire preparedness. Ultimately this will enable fire organisations and local governments to incorporate evidence-based, concrete recommendations into messages, increasing reach and efficacy.

11:45
Exploring the Relationship Between Human Mobility and Wildfire Risk in Portugal’s Changing Rural Landscapes

ABSTRACT. Wildfires are an escalating threat in many regions, with 2024 seeing major incidents in Brazil, Chile, California, Portugal, and Greece, among others. While climate change is often cited as a key driver of wildfire frequency and intensity, human activity—particularly land management practices—plays an equally significant role. Rural depopulation and the abandonment of agricultural and forestry activities have transformed many European landscapes, leading to an accumulation of combustible vegetation and, consequently, greater wildfire risk. Although the link between rural abandonment and wildfire proliferation is well-documented, less attention has been paid to the experiences of those who remain in fire-prone areas or newcomers who settle there. This study applies an environmental mobility lens—a field of research that explores how environmental conditions influence human im/mobility and how human movements, in turn, affect ecosystems—to examine the relationship between human mobility and wildfire risk in Portugal. Focusing on the Pinhal Interior Norte region, devastated by the 2017 wildfires, the research employs ethnographic methods, including interviews with long-term residents and recent migrants, to investigate why and how people choose to stay, leave, or relocate to fire-vulnerable areas. The findings confirm that rural abandonment, common in Portugal and other parts of Europe, contributes to landscape changes that heighten wildfire risk. However, new patterns of human im/mobility may offer opportunities for landscape restoration and wildfire risk mitigation. This study ultimately contributes to understanding how human factors—particularly mobility and cultural connections to the land—influence both vulnerability and adaptive responses to wildfire risk.

12:00
Design of Optimal Wireless Sensor Networks for enhanced wildfire risk mitigation at wildland–human interfaces
PRESENTER: Effie Marcoulaki

ABSTRACT. Critical infrastructure and fire-vulnerable facilities are often located in close proximity to wildland domains, including urban settlements, human activity areas, and industrial zones. Vulnerability is frequently assessed through physical analyses of the fire's direct effects on specific types of facilities (e.g. storage tanks). However, wildfire dynamics and behaviour in the extended wildland domain are often neglected, overlooking scenarios where distant ignitions from wildfires can trigger Natech events and the release of hazardous substances, e.g., from Seveso sites, potentially leading to domino effects. Proper consideration of wildfire dynamics is essential to determine the response times required to prevent disasters. This study considers Wireless Sensor Networks (WSNs) as early detection systems, and uses wildfire simulation datasets to obtain statistical insights into response times and to optimize the sensor locations accordingly. The approach is demonstrated on a case study of a wildland-industrial interface in Spain, including time-to-failure data for storage tanks in immediate fire proximity. Results demonstrate that early wildfire detection systems significantly enhance risk awareness, underscoring the potential of optimized WSNs for mitigating wildfire risks at wildland-human interfaces.
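
As a simple illustration of the placement idea, the sketch below greedily selects sensor sites to minimise the mean time-to-detection over an ensemble of simulated fire arrival times; all numbers are randomly generated placeholders rather than outputs of the wildfire simulations used in the study:

```python
# Hypothetical sketch of sensor placement: choose k sensor sites that minimise
# the mean time-to-detection over simulated wildfire scenarios.
# arrival_time[s][c] = time (minutes) at which scenario s reaches candidate site c.
import numpy as np

rng = np.random.default_rng(1)
n_scenarios, n_candidates, k = 200, 30, 5
arrival_time = rng.uniform(5, 120, size=(n_scenarios, n_candidates))

def mean_detection_time(selected):
    # A scenario is detected when the fire first reaches any selected sensor.
    return arrival_time[:, selected].min(axis=1).mean()

selected = []
for _ in range(k):
    best = min((c for c in range(n_candidates) if c not in selected),
               key=lambda c: mean_detection_time(selected + [c]))
    selected.append(best)

print("Selected sensor sites:", selected)
print(f"Mean time-to-detection: {mean_detection_time(selected):.1f} min")
```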

10:45-12:00 Session 8L: Safety, Reliability, and Security (SRS) of Autonomous Systems I
Location: L
10:45
Evolving Perspective of Safety, Reliability, and Security of Autonomous Systems – Findings from IWASS 2024
PRESENTER: Andrey Morozov

ABSTRACT. This paper summarizes the key insights and discussions from the International Workshop on Autonomous System Safety (IWASS) 2024, held in Krakow, Poland. As the fifth iteration of the IWASS series, the workshop brought together experts from academia, industry, and regulatory bodies to address critical challenges in the safety, reliability, and security (SRS) of autonomous systems. The event highlighted the interdisciplinary nature of SRS, focusing on diverse domains such as automotive, maritime, robotics, and industrial automation. Key themes included human-autonomous system interaction, risk assessment methodologies, regulatory challenges, and advancements in sensor technologies. Discussions underscored the need for robust frameworks to ensure safe and reliable system operations, emphasizing the integration of real-time monitoring, explainable AI, and continuous safety assessments. The findings from IWASS 2024 offer a roadmap for future research and industry collaboration, aiming to overcome existing barriers and foster the safe and widespread adoption of autonomous technologies.

11:00
The Challenges of Building Trust in Autonomous Navigation Systems: A Perspective on the Tester

ABSTRACT. This study seeks to enhance understanding of the tester's role in building trust in autonomous maritime systems. As ongoing maritime developments point towards the implementation of advanced navigation systems with the potential to enable autonomous and remotely operated ships, the industry aims to enhance safety and efficiency and to reduce its environmental footprint. However, assessing the reliability and robustness of intelligent and complex systems, envisioned to operate in complex and dynamic environments, poses significant challenges to existing regulatory frameworks and assessment practices. Hence, to build trust in autonomous systems and ensure that safety requirements are met, comprehensive and systematic testing is necessary. In this context, automated testing through simulation is an important method. This approach involves automatically and iteratively probing the system under test with a range of scenarios. However, considering the potential size of the scenario space, an automatic approach to evaluating system outputs is needed. Still, while automatic evaluation can greatly improve the efficiency and coverage of the test process, human testers provide significant added value through their domain expertise, knowledge, and experience. This includes evaluating the overall results of the automatic process, performing spot-checks on individual cases, and investigating specific results. Together, the aggregate of these evaluations supports the tester in building confidence in the system's test results, and trust in the system's overall performance. Considering the novelty of autonomous navigation systems, this paper reviews relevant regulations, challenges related to requiring safety equivalence, and the need for testing. Furthermore, this paper explores the anticipated role change of human testers and reflects on how to support them in their new tasks.

11:15
Safety Argumentation for ML-Enabled Perception Systems for Autonomous Trains: State of the Discussion and Perspectives
PRESENTER: Mirko Conrad

ABSTRACT. Railway is constantly gaining importance as a sustainable means of transportation. Highly automated train operation is a means to increase the utilization of existing networks. Technical solutions for driverless operation (Grade of Automation GoA3 and higher) typically rely on Machine Learning (ML) components to evaluate sensor data and establish situational awareness. Due to the inherent complexity and black-box nature of ML components, traditional approaches for safety argumentation are not directly applicable to ML-enabled perception systems.

In our paper we present the state of the discussion on this subject and sketch out potential approaches for technical solutions and the associated safety argumentation. First, we discuss safety goals and objectives of perception systems for automated train operation. We then highlight problem areas that prevent the use of traditional methods for arguing the safety of ML components, and propose possible technical solutions and methods at the ML component level and at the level of ML-enabled perception systems as a whole, taking into account the status of work in ongoing major funded projects. Finally, we discuss strategies for safety argumentation and the role of safety argumentation as a driver for development decisions.

11:30
Assuring Safety of AI-based Systems: Lessons Learned for a Driverless Regional Train Case Study
PRESENTER: Marc Zeller

ABSTRACT. Artificial Intelligence (AI) offers great potential to enable the fully automated operation of trains. Mandatory novel functions to replace the tasks of a human train driver, such as obstacle detection on the tracks, can be realized using state-of-the-art Machine Learning (ML) approaches. However, the use of AI/ML to implement perception tasks in the railway context poses a new challenge: how to link AI/ML techniques with the requirements and approval processes applied in the railway domain in a practical way? Within the safe.trAIn project we laid the foundation for the safe use of AI/ML to achieve the driverless operation of a regional train. Based on the requirements of the certification process in the railway domain, safe.trAIn investigated methods to develop trustworthy AI-based functions, taking data quality, robustness, uncertainty, and explainability aspects of the ML model into account. In addition, the project developed a safety argumentation strategy for an AI-based obstacle detection function of a driverless regional train. In this paper, we describe the challenges of assessing an AI-based obstacle detection function according to the given regulation in the railway domain. Moreover, we describe our safety assurance strategy as applied to our case study in the safe.trAIn project.

11:45
From Nobel Prize(s) to Safety Risk Management: Lessons learnt from 2018 Uber collision for their application to autonomous train systems

ABSTRACT. Lessons learnt from the case study analysis of the 2018 Uber automated vehicle collision are presented in this paper. The System for Investigation of Railway Interfaces (SIRI) Cybernetic Risk Model is used for modelling and analysis of NTSB Report NTSB/HAR-19/03. The elements of an effective safety risk management system compose the SIRI risk model.

The research and development of autonomous driving systems in the automotive sector is driving the trend towards autonomous train systems as well. However, railway safety risk management has been facing a crisis since the promulgation of the 2004 EU Safety Directive. This directive demands a cultural shift away from current engineering safety management practices despite the good statistical record of railways as a safe transport mode. Techniques such as Failure Mode, Effects (and Criticality) Analysis and Bowtie analysis do not support the identification of systematic errors at the higher levels of the socio-technical system involved in monitoring and certifying autonomous train systems. Further, the traditional risk assessment process used in the rail sector does not address the decision mistakes noted in the literature on decision-making under uncertainty. Past research on accident case studies indicates that organisational and management factors that contribute to fallible decisions are not included in the risk assessment process. Thus, we propose the hypothesis that significant hazards and risks, such as the autonomous train system collision hazard, are underestimated, while failure of the human safety supervisor of an autonomous train system is over-estimated.

One benefit of the paper is to contribute to reflection on the part of systems engineers, helping them plan, design, develop and operate safe autonomous train systems and related signalling systems.

10:45-12:00 Session 8M: Nuclear safety (IFE) Safety, automation and awareness
Location: M
10:45
Systemic Automation Failures and Nuclear Safety

ABSTRACT. Systemic Automation Failures and Nuclear Safety

11:00
EMBRACING A RISK-INFORMED SAFETY CASE FOR NEXT GENERATION NUCLEAR REACTORS: BENEFITS, CHALLENGES AND LESSON LEARNED
PRESENTER: Cesare Frepoli

ABSTRACT. Several advanced nuclear reactor designs are now under active development in the U.S. and across the globe, promising sustainable solutions to the growing world energy needs and security. The designs currently considered are quite diverse and different from the more established light water reactor technology that has dominated the operating commercial nuclear landscape. In response, the U.S. Nuclear Regulatory Commission (NRC) staff is moving forward with development of the 10 CFR Part 53 rulemaking, which is intended to establish a more risk-informed, technology-agnostic framework for licensing and regulating such new designs.

A full risk-informed analysis requires integrating multiple disciplines, starting from the development of comprehensive Probabilistic Risk Analysis (PRA) models for the Structures, Systems and Components (SSC) involved, a realistic modeling of accident progression for a variety of internal and external events, and a realistic modeling of the radiological consequences of those events, all combined with a credible treatment of uncertainties in each part of the analysis.

Risk-informed design and regulation has been embraced by most engineering disciplines, such as the aerospace industry. Despite its benefits and regulatory initiatives to push toward more risk-informed safety assessments, the reality is that, when developing the safety case, the nuclear industry is only gradually adopting that approach, while conservative, bounding, deterministic demonstrations of safety remain prevalent in most licensing applications.

This paper outlines what a full risk-informed safety analysis would entail. The paper provides insights on the historical reasons for these industry trends and lessons learned from initial case studies and applications of the risk-informed approach. The review also illustrates methods that will greatly facilitate adoption by leveraging modern software and data management techniques.

11:15
A Taxonomy of Human Tasks for Human Reliability Analysis of Small Modular Reactors
PRESENTER: Claire Blackett

ABSTRACT. Small Modular Reactors (SMRs) represent a radical shift in how nuclear power plants (NPPs) will be operated. SMR designs propose advanced characteristics such as smaller, simpler plants, increased use of automation, and increased reliance on inherent safety properties and passive safety systems. This allows for new operating paradigms such as remote, autonomous and/or multi-unit operation, reduced staffing plans, and alternative applications such as hydrogen production and district heating. As a result, the role of the human in the operation of SMRs is anticipated to change significantly by comparison to conventional large NPPs. For Human Reliability Analysis (HRA), this means that our current knowledge and assumptions about human performance may no longer be valid, as these are based on analysis of operator actions at conventional plants. Further, existing HRA approaches may not adequately capture the extent, or impact, of the potential changes to operational tasks and scenarios. At the same time, the role of the operator as a safety barrier in an SMR is expected to be at least as important as at a conventional NPP, as it is highly likely that human operators will still form part of the defence-in-depth strategy if automated, inherent, or passive safety systems fail or do not work as required. The Swedish Radiation Authority has recently awarded funding to Risk Pilot for a project to investigate the types of tasks that operators will be expected to perform in an SMR plant, with emphasis on the identification of new types of tasks that differ from those at conventional NPPs. In this paper we will present some key findings from a literature review and how these inform our initial attempts at the development of a task taxonomy to support HRA for SMRs.

11:30
Automation criterion for Small Modular Reactors

ABSTRACT. As happened with basic software algorithms in the 80s, Artificial Intelligence (AI) powered algorithms are slowly but surely becoming an integral part of control systems in safety-critical industries, including the risk-averse nuclear field. The level of control will surely vary across different domains, and given the conservatism of the nuclear industry, we should not expect fully autonomous control systems in a Nuclear Power Plant (NPP) any time soon. However, the introduction of higher levels of automation is likely, where AI-based automation is given more responsibility, with the human still being able to take control when necessary. We argue that the essential criterion for high-level automation is the assurance that the control transition between AI and human (in either direction) is achieved with acceptable risk. The paper attempts to explain what this risk is and how to make it acceptable in nuclear settings, and beyond.

11:45
Safety and Security Concept of Similar ASEAN Working Culture in Experimental Facility of Nuclear Research Reactors

ABSTRACT. According to a previous Thai case study on the cultural impact of operators and users sharing the working environment of a Nuclear Research Reactor (NRR), the working culture of operators (insiders) was recognized to influence users (outsiders) who work in an NRR experimental facility. Taking users' perspectives into account thus becomes a significant approach for minimizing human error at work. In 2017, the ASEAN Large Nuclear and Synchrotron Facilities Network (ASEAN LNSN) was organized to share large nuclear and synchrotron facilities among ASEAN countries, including NRR experimental facilities. Given the impact of human factors, it is important to understand whether or not the sharing of NRR facilities involves different working cultures that affect safety and security issues. Thus, this study intends to show how to apply the homogeneous working culture concept as evidence to allay unnecessary concerns and minimize stress among ASEAN users in terms of safety and security in facility sharing, through Hofstede's cultural indices: (1) power distance index (PDI), (2) individualism index (IDV), (3) masculinity index (MAS), (4) uncertainty avoidance index (UAI), and (5) long-term orientation index (LTO). A homogeneous working culture survey is expected to help provide a foundation for improving safety standards and operational performance across ASEAN's NRR experimental facilities, facilitating cross-border knowledge sharing and collaborative safety and security practices.

10:45-12:00 Session 8N: WORKSHOP: International Workshop on Energy Transition to Net Zero: Reliability, Risk, and Resilience (ETZE R3) I: Hydrogen safety
Location: N
10:45
Towards a reliability database for hydrogen equipment
PRESENTER: Tony Kråkenes

ABSTRACT. The availability of reliability databases for hydrogen equipment is essential for risk and reliability assessments of both established and new hydrogen facilities. Existing hydrogen event databases have significant shortcomings both in terms of quantity and quality of data, hence providing limited value for rigorous assessments. There are three main challenges with hydrogen event data collection and sharing. First, there is a lack of data due to short operational time, missing registration and lack of sharing between stakeholders. Second, there is a lack of standardisation in the industry when it comes to data collection, including a lack of definitions related to e.g. equipment types and failure modes. Third, there is also currently a lack of stakeholder incentives to perform data collection and sharing due to market considerations, limited financing and the absence of regulations. In order to obtain high-quality reliability data for hydrogen equipment, we argue that each of these challenges must be addressed in a structured and holistic manner. In particular, a systematic approach for collection and analysis of failure data should be investigated. Looking at the more mature petroleum industry for inspiration (the OREDA project), we propose a similar setup for hydrogen equipment. Hence, we present an adaption of the OREDA approach to the hydrogen domain. As an example and test case, we explore the equipment type pressure swing adsorber (PSA) used for (blue) hydrogen purification. The PSA example is detailed in terms of equipment structure (i.e. subunits, components and boundary) as well as failure modes and their criticality.
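
To illustrate what an OREDA-style estimate drawn from such a database could look like, the sketch below computes failure rates with chi-square confidence bounds from hypothetical failure counts and aggregated operating hours (the PSA figures shown are invented for illustration, not field data):

```python
# Illustrative sketch of an OREDA-style failure rate estimate for a hydrogen
# equipment class (failure counts and hours are hypothetical placeholders).
from scipy.stats import chi2

records = [
    # (failure_mode, number_of_failures, aggregated operating hours)
    ("fail to regenerate", 3, 1.2e5),
    ("external leakage",   1, 1.2e5),
]

def rate_with_ci(n_failures, hours, confidence=0.90):
    """Point estimate and two-sided chi-square confidence bounds (per hour)."""
    alpha = 1.0 - confidence
    point = n_failures / hours
    lower = chi2.ppf(alpha / 2, 2 * n_failures) / (2 * hours) if n_failures > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (n_failures + 1)) / (2 * hours)
    return point, lower, upper

for mode, n, t in records:
    point, lo, up = rate_with_ci(n, t)
    print(f"{mode:20s}: {point:.2e}/h  (90% CI: {lo:.2e} - {up:.2e})")
```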

11:00
HAZOP study of a water electrolysis plant used in green hydrogen production

ABSTRACT. To advance the green hydrogen economy, ensuring high reliability and safety of the water electrolysis plant is crucial. This study presents a HAZard and OPerability (HAZOP) analysis to identify the probable deviations of a water electrolysis plant with a proton exchange membrane (PEM) electrolyzer from its intended operation, along with their causes and consequences. HAZOP is a risk and reliability analysis technique that identifies operational failures by analyzing logical sequences of cause-deviation-consequence for various process parameters. To conduct the HAZOP, the water electrolysis plant is divided into sub-systems and deviations for each sub-system are identified using guidewords and process parameters. A literature review is performed to identify the causes and consequences of each deviation, which are recorded in a dedicated HAZOP table. The HAZOP analysis shows that deviations in a PEM electrolyzer are interconnected, with one deviation potentially triggering another. The performance of the entire plant is heavily influenced by its sub-systems, as faults in auxiliary components can impact the electrolyzer's efficiency, degradation, and safety. Key consequences of these deviations include reduced efficiency, degradation of the PEM electrolyzer, and the formation of a flammable mixture. This work provides valuable input for forecasting component failures and performing maintenance actions to prevent failures and accidents, or to restore the desired hydrogen production rate.
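
A minimal sketch of how HAZOP records of this kind can be generated and stored; the node, causes, consequences and safeguards below are illustrative examples rather than entries from the paper's HAZOP table:

```python
# Minimal sketch of a HAZOP record built from a guideword and a process
# parameter; all entries are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class HazopEntry:
    node: str
    parameter: str
    guideword: str
    causes: list = field(default_factory=list)
    consequences: list = field(default_factory=list)
    safeguards: list = field(default_factory=list)

    @property
    def deviation(self):
        return f"{self.guideword} {self.parameter}"

entry = HazopEntry(
    node="PEM electrolyzer stack",
    parameter="water flow",
    guideword="LESS",
    causes=["circulation pump degradation", "blocked filter"],
    consequences=["membrane dehydration and accelerated degradation",
                  "local hot spots and reduced efficiency"],
    safeguards=["low-flow alarm and trip", "redundant pump"],
)
print(entry.deviation, "->", "; ".join(entry.consequences))
```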

11:15
A Model of Hydrogen Ignition Probability based on Leak Data from Hydrogen Filling Stations
PRESENTER: John Spouge

ABSTRACT. The probability of ignition of hydrogen leaks is one of the main uncertainties when analysing the risks of hydrogen infrastructure. Several ignition probability models are available, but they are mainly based on judgement and give widely differing results. To develop an improved model, this paper collects available data on 168 leaks from hydrogen filling stations (HFS) and tube trailer transfer at similar facilities, using incident reports collected from public domain sources. The dataset is dominated by leaks in Japan and the USA, and has only one leak in China and none at all in Germany and South Korea, despite these countries all having large HFS populations. Although there are differences between the data from different countries, the study maximises the dataset by combining all available data. A new ignition probability model (HFS-2024) is based on this dataset, expressing ignition probability as a function of the hydrogen release rate. The confidence ranges of the data validate the new model, as well as validating some previous models in specific release rate ranges, but only the HFS-2024 model matches the data over the whole release rate range. The paper discusses the limitations in the work, which illustrate the need for better data on both ignited and unignited hydrogen leaks. The new approach provides a pathway for updating the model as experience with hydrogen leaks is accumulated.
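
As an illustration of the general form such a model can take, the sketch below evaluates a release-rate-dependent ignition probability and compares it with binomial (Wilson) confidence intervals for grouped leak data; the coefficients and counts are placeholders, not the HFS-2024 values or the 168-leak dataset:

```python
# Hypothetical sketch of an ignition-probability curve of the form
# P_ign(Q) = min(1, a * Q**b), where Q is the hydrogen release rate (kg/s).
# Coefficients and leak counts below are placeholders, NOT the HFS-2024 model.
import math

def ignition_probability(release_rate_kg_s, a=0.05, b=0.5):
    return min(1.0, a * release_rate_kg_s ** b)

def wilson_interval(k, n, z=1.96):
    """95% Wilson confidence interval for k ignitions out of n leaks."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical grouped data: (ignitions, leaks) per release-rate band.
for rate, (k, n) in {0.01: (1, 60), 0.1: (3, 40), 1.0: (4, 20)}.items():
    lo, up = wilson_interval(k, n)
    print(f"Q={rate:5.2f} kg/s: model={ignition_probability(rate):.3f}, "
          f"data 95% CI=({lo:.3f}, {up:.3f})")
```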

14:45-16:00 Session 10A: Knowledge-related topics
Location: A
14:45
Propagating knowledge strength through assurance arguments using three-valued logic to assess confidence in claims
PRESENTER: Andreas Hafver

ABSTRACT. Assurance refers to the substantiation and scrutiny of claims about a system's capabilities and the risks associated with it. The assurance process involves formulating claims that capture stakeholders’ interests in the system and building structured arguments to validate and verify the claims. The end goal is to determine if there are sufficient grounds for confidence in the claims, which requires a measure of confidence and a method for propagating it through the assurance argument.

One common approach for propagating uncertainty through arguments is the use of probability theory and Bayesian Networks. However, the probability numbers used in such models do not capture uncertainties in the knowledge used to assign them. Many authors have therefore suggested alternative approaches based on extensions of probability theory, including Dempster-Shafer theory and subjective logic. However, such quantitative methods have been criticized for ambiguity in interpretation and for examples of seemingly inconsistent results. Another framework, called Assurance 2.0, moves away from the focus on quantifying confidence and aims instead for "indefeasible justification", meaning qualitative confidence that there are no overlooked or unresolved doubts that could change conclusions.

In this paper, we propose to use the concept of knowledge strength as a practical way to assess confidence in claims. Specifically, a claim is considered true or false only if there is strong knowledge to substantiate it; otherwise, it is treated as uncertain. We then propagate confidence through the assurance arguments using three-valued logic. Inspired by Assurance 2.0, we emphasize the need for addressing doubts that could topple an argument and the need for incorporating counter evidence in the form of defeaters. Our proposed approach is demonstrated on an example of a machine-learning-based crack detection tool.
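
A minimal sketch of the propagation idea, using Kleene's strong three-valued logic over an illustrative argument structure (not the paper's crack-detection case):

```python
# Sketch of propagating knowledge strength through an argument tree using
# Kleene's strong three-valued logic (TRUE / FALSE / UNCERTAIN).
from enum import Enum

class V(Enum):
    FALSE = 0
    UNCERTAIN = 1
    TRUE = 2

def AND(values):   # conjunctive argument: all sub-claims must hold
    return V(min(v.value for v in values))

def OR(values):    # alternative lines of evidence: one suffices
    return V(max(v.value for v in values))

# Leaf claims are TRUE/FALSE only when backed by strong knowledge;
# otherwise they are treated as UNCERTAIN.
data_representative = OR([V.UNCERTAIN,   # dataset coverage review (weak knowledge)
                          V.UNCERTAIN])  # limited field trial
accuracy_verified   = V.TRUE             # hold-out evaluation with strong knowledge
monitoring_in_place = V.TRUE

top_claim = AND([data_representative, accuracy_verified, monitoring_in_place])
print("Top-level claim:", top_claim.name)   # -> UNCERTAIN
```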

15:00
Managing assumptions in climate risk assessments using assumption-based planning

ABSTRACT. Making assumptions is inevitable when performing any risk assessment, especially when it comes to climate risk. Assumptions can be conceptualised as assertions that are taken as given, and in climate risk assessments these may relate to models, data, scenarios and policy analysis. One implication is that the risk assessment results are only valid to the extent that the assumptions hold. In general, there is uncertainty about whether an assumption will be valid for a future activity, and if it is not, this may have consequences. This time dimension is fundamental for assessing climate risks. Making assumptions thus introduces a type of risk that has been labelled assumption deviation risk. One option for avoiding this type of risk is to conduct an uncertainty assessment with respect to the future state or condition that the assumption concerns. In practice, this quickly becomes infeasible for the large number of assumptions that are typically made in climate risk assessments, and in fact in any risk assessment. It also tends to introduce new assumptions as part of the uncertainty assessment. We propose an alternative approach: to manage the assumption deviation risk by monitoring the assumptions as the assessment proceeds and implementing various strategies to control existing and emergent uncertainties. The approach is based on the assumption-based planning framework, which has been adapted to handle assumption deviation risk in a risk assessment setting. In the present paper, we apply this previously proposed scheme for establishing such strategies to climate risk, with a focus on just one instance: flooding risks resulting from a changing climate. Our preliminary findings indicate that assumptions are not adequately treated in flood risk assessments, making it difficult to know how assessments are affected when changes happen or new knowledge becomes available.

15:15
The Use of Expert Knowledge in the Danish National Risk Profile

ABSTRACT. Expert opinion is an ingrained part of risk analysis, but there needs to be more debate on the efficacy of Strength of Knowledge (SoK) in aggregated strategic risk reports like the Danish National Risk Profile (NRP). This paper discusses the utility of expert opinions by government agencies and the uncertainties associated with this practice. The NRP is a publication issued by the Danish Emergency Management Agency (DEMA) that describes threats to Danish society in the short to medium term. The ambition is for government agencies, companies, and other organisations to use the profile as a strategic document that identifies which threats they are to include in their strategic risk assessments. The risks are determined by experts from 15 government agencies who have provided their opinions on 14 different threats, which are then analysed and ranked by DEMA. The use of experts and a qualitative methodology leads to the question of how the SoK could be a factor that influences the national risk strategy and, thereby, the mitigation initiatives taken by private, public and non-governmental organisations. The paper utilises an analytical approach based on the sensemaking of experts, resulting in proposals for risk strategies in the NRP. The paper concludes that SoK has a significant but implicit role and that uncertainties can result in over- or under-representation of threats by DEMA, which in turn can lead to faulty strategic risk governance decisions by the readers of the report.

15:30
Application of Monte Carlo Simulation in Modeling the Lifetime of Industrial Components

ABSTRACT. This paper explores the application of Monte Carlo Simulation (MCS) to model the lifetime of industrial components, specifically focusing on a contactor commonly used in automation systems. The study emphasizes the challenges posed by the variability in system performance and the need to incorporate expert knowledge for accurate modeling. Using expert data, MCS is employed to simulate different scenarios and determine a probabilistic distribution that reflects the uncertainties in component lifespan. The analysis reveals that factors such as temperature, environmental conditions, and switching frequency have significant impacts on the failure rate of the component, thereby influencing its reliability. The results demonstrate the effectiveness of MCS in providing a more precise estimation of component lifetime, offering valuable insights for maintenance planning and operational decision-making. The study concludes that incorporating Monte Carlo methods into reliability assessments enhances the ability to manage risk and optimize system performance, ensuring safer and more efficient operation of industrial systems.
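
A minimal Monte Carlo sketch in the spirit of the approach described above, with hypothetical expert-style inputs standing in for the elicited contactor data:

```python
# Minimal Monte Carlo sketch of contactor lifetime (all parameters are
# hypothetical placeholders for expert-elicited values): a Weibull lifetime
# whose scale shrinks under temperature and switching-frequency stress.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed baseline characteristic life (cycles) and Weibull shape parameter.
eta_base = 1.0e6
beta = 1.8

# Uncertain operating conditions sampled per trial.
temperature = rng.uniform(30, 70, N)    # deg C
switch_rate = rng.uniform(0.5, 2.0, N)  # relative switching frequency

# Simple stress model (illustrative functional forms): life shrinks with
# temperature above 40 C and with higher switching frequency.
temp_factor = np.exp(-0.02 * np.clip(temperature - 40, 0, None))
eta = eta_base * temp_factor / switch_rate

lifetimes = eta * rng.weibull(beta, N)  # sampled lifetimes in cycles

b10 = np.percentile(lifetimes, 10)      # life by which 10% of units have failed
print(f"Mean life: {lifetimes.mean():.3e} cycles, B10 life: {b10:.3e} cycles")
```

The resulting lifetime distribution (here summarised by the mean and B10 life) is the kind of output that can feed maintenance planning and replacement-interval decisions.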

15:45
A framework for evolving assumptions in risk analysis

ABSTRACT. Risk assessment can be used to evaluate the risks of complex systems and emerging technologies, such as the human–climate nexus and automation technologies, and to inform pathways and policies. Due to the interconnected and evolutionary features of such topics, risk analysts must navigate the dynamics of changing assumptions and probabilities in the risk assessment. However, the current risk analysis approach neglects to a large extent an explicit consideration of these dynamics, either oversimplifying complex systems or neglecting the likely human response to emerging technologies. In this article, we outline why the evolutionary dynamics of assumptions and probabilities in a risk assessment must receive close attention, and then we provide a possible framework through which to consider the dynamics. Ultimately, we propose a formal approach to conceptualizing and implementing the risk description with respect to feedback loops and complex adaptive systems.

14:45-16:00 Session 10B: Risk communication issues I
Location: B
14:45
Risk Communication in Adaptive Flood Risk Governance: A Literature Analysis of the 2021 Flood in the Netherlands
PRESENTER: Lijie Dong

ABSTRACT. The 2021 Limburg flood exposed significant challenges in managing extreme floods, particularly in flood risk communication in the Netherlands. Many residents, especially those living along the tributaries of the Meuse, reported receiving insufficient or delayed warnings, leaving them unprepared for the flood's impact. This communication failure revealed weaknesses in the dissemination of information and coordination among key stakeholders across various governance levels. As a result, the public's dissatisfaction with the communication mechanisms has, in turn, deepened their distrust of the government's capacity to manage future floods.

Applying theories of risk communication and multilevel governance, this study develops a structured framework of risk communication within the context of flood risk governance across local, regional, national, and EU levels. Based on this framework, we performed a document analysis focusing on the Limburg flood, including relevant policy documents, government flood assessments, third-party reports, and key academic literature. Through this analysis, the research examines how risk communication is conducted among key stakeholders across multiple layers of governance at each stage of flood risk management (phases of preparedness, response, recovery, and prevention). The study also identifies the barriers to effective risk communication, such as information inefficiency, unclear responsibilities, and misalignment between government and public risk perceptions. Finally, the findings will provide actionable recommendations to enhance flood risk communication, thereby improving the flood preparedness and adaptive capacity of Dutch society. Furthermore, this research provides theoretical and practical insights on flood risk communication for future studies.

15:00
Analysing Inter-Professional Communication in Climate Adaptation
PRESENTER: Nohah Forde

ABSTRACT. The inter-professional communication between climate adaptation practitioners may play a crucial role in how adaptation strategies are developed and implemented. This study addresses a significant gap in understanding how communication between diverse professionals affects adaptation strategies across various disciplines. This research explores how communication flows and power dynamics among professionals influence climate adaptation responsibility and action. We aim to explore the communication ecosystem of climate adaptation professionals, with a particular focus on understanding how responsibility for adaptation is distributed, communicated, and enacted. It contributes to inter-professional communication scholarship, usually discussed in healthcare settings, by applying it to a new but equally complex setting: climate adaptation. Our approach employs semi-structured interviews with a diverse range of professionals working on climate adaptation in New Zealand, including scientists, policymakers, engineers, and communicators. While data collection is ongoing, we anticipate uncovering how different forms of expertise are valued, how power is distributed, and how these factors influence sense-making and decision-making processes in climate adaptation efforts. Findings might have implications for climate policy development and professional practice, potentially reshaping how climate risk and adaptation are communicated and managed across sectors. This study aims to foster more effective, equitable, and responsive climate adaptation strategies through an examination of inter-professional communication in relation to climate adaptation, and ultimately, risk.

15:15
KnowYourStripes? Assessing the extent to which ‘warming stripes’ are an effective format for communicating environmental risks
PRESENTER: Ian Dawson

ABSTRACT. In 2018, Professor Ed Hawkins published an image consisting of blue and red vertical stripes in various hues that represented the change of the average global annual temperature between 1850 and 2018. Subsequently, the image (a.k.a., warming stripes or climate stripes) ‘went viral’ on social media (#showyourstripes) and became an iconic symbol of the threat posed by climate change. Furthermore, these stripe graph formats are now increasingly being adapted by the scientific community (e.g., the IPCC) and used to communicate other environmental risks (e.g., biodiversity loss, sea-level change). However, no studies have empirically assessed the extent to which stripe graphs influence knowledge, perceptions, and behaviours concerning environmental issues. To address this knowledge gap, we conducted a study in which participants were divided into three groups. Group 1 saw Hawkins’ original blue-red stripe graph. Group 2 saw the same graph, but with the blue-red hues changed to yellow-purple hues. Group 3 (control condition) did not see a graph. Participants then completed a series of measures, including climate change knowledge, perceived risk, behavioural intentions, and subjective graph evaluations. Our analysis identified no between-group differences for knowledge, risk perceptions, and behavioural intentions. However, we found that participants evaluated the blue-red graph as significantly (ps < .0001) more likeable, trustworthy, helpful, and accurate than the yellow-purple graph, even though the two graphs depicted exactly the same data. We also found that stripe graphs influenced participants to make inaccurately high estimates of future global temperatures. Hence, our results suggest that, while the blue-red stripe graph is extremely popular, it may not be effective at enhancing knowledge or motivating mitigation behaviours. Considering the popularity of stripe graphs among laypeople and, increasingly, the scientific community, it appears further research is needed to identify how the format can be enhanced to better achieve important environmental risk communication goals.

15:30
Exploring Stakeholder Perspectives: Uncertainty communication challenges of Titanium Dioxide

ABSTRACT. Titanium dioxide (TiO2) is commonly used in food, medicines, and other products. In 2021, the European Food Safety Authority (EFSA) banned its use as a food additive (E171) due to being unable to rule out genotoxicity concerns. However, the European Medicines Agency (EMA) continues to approve its use in pharmaceuticals to prevent supply disruptions. These divergent regulatory approaches also reflect the challenges for the European Union (EU) One Substance – One Assessment reform, which aims to consolidate scientific and technical work on chemicals across EU agencies. This study investigates the challenges in communicating uncertainty regarding TiO2 by gathering stakeholder insights. Through 15 interviews, we systematically examined the difficulty of communicating scientific uncertainty and balancing risk-risk trade-offs in the EU regulatory decision-making context. In addition to the in-depth analysis of the regulatory challenges, the paper also provides recommendations to improve science-informed communication of uncertainty.

14:45-16:00 Session 10C: Maritime I
Location: C
14:45
Towards strategic risk assessment method based on first principles
PRESENTER: Jakub Montewka

ABSTRACT. There are diverse strategic collision risk assessment methodologies and tools for the waterways. Some rely solely on physical ship dimensions and approach angles (collision diameter), others derive the risk from violations of the free space around the ships (ship domain), and still others calculate the risk from the intersection of routes (velocity obstacles).

However, from an operational standpoint, navigators on board ships have a distinct perspective. Their decision-making and risk perception rely on specific parameters describing traffic situations as perceived on board, such as the distance between the ships, their angle of approach, density of surrounding traffic, obligations of navigational rules, and meaningful encounter-related indices.

Therefore, to properly reflect the risk of a maritime accident at a strategic level, a proper understanding of the risk at both operational and tactical levels is required. Thus, this study aims to combine the operational and strategic viewpoints by reflecting and incorporating the operational decision-making variables into strategic planning. The risk level of ship collision is determined by assessing the navigator's stress level using first-principles techniques.

A tailored Bayesian Belief Network (BBN) uses the listed parameters with a geometric complexity to quantify the risk of ship collision. The geometric complexity is the covariance of the encountering vessels' relative velocity, which quantifies the complexity of the encounter. The set of explanatory variables, as well as their thresholds used in the BBN, is gathered through expert elicitation that aims to mimic real encounter scenarios. The output of the BBN provides the risk level of the encounter and, using first-principles thinking, can be used to create a hotspot map. As a result, it combines the navigator's point of view with the geometric conditions and helps stakeholders to deploy efficient risk-mitigation strategies for safer waterways.

15:00
Hierarchical Bayesian Model to Assess Maritime Object Detection Reliability in Different Weather Conditions

ABSTRACT. The fast-paced improvement of computer vision models has enabled the use of cameras to monitor a variety of objects in a scene. The use of these object detection models for maritime surveillance is gaining momentum. Obtaining high-resolution images of the vessels supports the identification of the ships, improving maritime traffic monitoring and safety. However, there are limitations with the visual data and the detection capabilities of these models. The object detection might fail depending on the weather conditions, or if the observed vessels are small and too distant. Therefore, having a model that can predict the object detection reliability under these various conditions is desirable. In this work, we focus on maritime traffic observations made in the archipelago of Helsinki, in Finland. Over 9 hours of video footage was recorded over the course of 19 days, capturing the ship traffic in different weather and light conditions. A dedicated object detection and segmentation algorithm, based on the YOLOv11 model, was trained to detect the ships in this data. Ground truth annotations for bounding boxes and segmentation masks of the ships in the video frames were made semi-automatically with the TrackAnything tool. Moreover, the geographical positions of the ships and their distances to the observation station were retrieved from the Automatic Identification System (AIS) data broadcast by the vessels. We examine how the reliability of the object detection is impacted by different weather conditions (characterized by visibility, cloud cover, rain, and luminosity measurements) along with the ship's size and distance from the observation point. A hierarchical Bayesian Multinomial Processing Tree model is proposed to predict the ship detection false positive and false negative rates. Our model mitigates the uncertainty in the intelligence provided by object detection models, with improved understanding of the boundary environmental conditions for reliable performance.
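
As a rough illustration of how such a reliability model might be set up (the paper's multinomial processing tree structure is not reproduced here), the sketch below fits a hierarchical logistic model in PyMC in which each weather condition gets its own intercept for the miss probability, partially pooled across conditions, with an assumed distance effect. All data, variable names, and priors are placeholders.

```python
import numpy as np
import pymc as pm

# Placeholder observations: weather-condition index, target distance (km),
# and whether the detector missed the ship (1 = false negative).
condition = np.array([0, 0, 1, 1, 2, 2, 2, 1])
distance_km = np.array([0.5, 2.0, 1.0, 3.5, 0.8, 2.5, 4.0, 1.5])
missed = np.array([0, 0, 0, 1, 0, 1, 1, 0])
n_conditions = 3

with pm.Model() as model:
    # Population-level prior shared across weather conditions (partial pooling).
    mu = pm.Normal("mu", 0.0, 1.5)
    sigma = pm.HalfNormal("sigma", 1.0)
    # Condition-specific intercepts drawn from the population distribution.
    alpha = pm.Normal("alpha", mu=mu, sigma=sigma, shape=n_conditions)
    # Assumed effect: misses become more likely as distance grows.
    beta = pm.Normal("beta", 0.0, 1.0)

    p_miss = pm.math.invlogit(alpha[condition] + beta * distance_km)
    pm.Bernoulli("obs", p=p_miss, observed=missed)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)
```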

15:15
Risk assessment methods for carbon capture and storage in the maritime value chain
PRESENTER: Marta Bucelli

ABSTRACT. Carbon capture and storage (CCS) is of paramount importance to the energy transition. According to BloombergNEF, CCS could account for around 30% of global industrial direct emissions abatement by 2050. Nevertheless, technical, financial and regulatory challenges need to be overcome to improve the efficiency and scalability of the CCS value chain as well as reduce costs. The safety of carbon dioxide (CO2) transport and storage is among the key aspects for large-scale CCS deployment. This paper presents a streamlined quantitative risk assessment (QRA) method tailored for the CCS maritime value chain. The method is demonstrated and tested on a specific case study focusing on the transport of liquefied CO2 from a port in Norway to an offshore storage location. A set of reference accident scenarios is defined for the transfer and transport of liquefied pressurized CO2. The suitability of specific existing data and models for QRA applied to the CCS maritime value chain is also investigated and presented. The goal is to assess and define a set of qualified data sources and best available consequence models to perform QRA for the maritime CCS value chain. An assessment of the limitations and challenges of conventional risk assessment methods is finally performed.

15:30
Using Process Modeling Approach and Qualitative Data to Build a Unified Understanding of Icebreaker Operations
PRESENTER: Mahsa Khorasani

ABSTRACT. Icebreaker operations are critical to maritime activities in ice-covered waters, as they contribute to navigation safety, fuel consumption reduction for merchant vessels, and optimal route planning. However, the complexity of icebreaker operations, due to the challenging operational environment and the involvement of multiple components, remains largely undocumented. This lack of documentation has led to a limited understanding of the processes and missed opportunities for analysis and optimization of the operations. To enhance understanding of these operations, this paper presents a model representing the process of icebreaker operations. Through a 10-day expedition aboard an icebreaker, an author observed daily activities and conducted interviews with crew members to identify key activities, events, data resources used in decision-making, and workflows. The icebreaker operation is modeled using Business Process Model and Notation (BPMN), which details the events, activities, sub-activities, their sequences, triggers for activation, gateways, responsible operators, as well as the factors and data resources that influence decision-making. The model was subsequently refined with input from maritime and icebreaker experts to ensure accuracy and practical relevance. The formalized process model provides a structured representation of icebreaker operations, serving as a foundation for optimizing workflows, designing simulation models, automating decision-making, enhancing crew training, and supporting further analysis and research. This model can help to improve safety, reliability, and efficiency in ice-covered waters, while also supporting real-world applications and research that depend on precise process documentation.

15:45
A New AI Solution to Maritime Cybersecurity Risk Prediction
PRESENTER: Zaili Yang

ABSTRACT. The digitalisation of maritime systems, including ships, ports, and operational networks, has significantly increased their exposure to cyber threats and risks. These risks can disrupt critical infrastructure and cause global repercussions, requiring new solutions to improve maritime cybersecurity risk prediction. This study aims to develop a new AI solution with limited data to enable cybersecurity risk prediction. It utilises Large Language Models (LLMs) for prompt-based zero-shot learning, enabling accurate classification of text and extraction of key cyber risk factors. A comprehensive dataset spanning 2001 to 2020 was developed, introducing new risk factors critical for assessing cyber threats, yet not appearing in any state-of-the-art studies in the field. This extracted dataset was integrated into a Bayesian Network (BN) model to identify probabilistic relationships and predict potential cybersecurity risks. The hybrid approach is among the pioneers in using new AI technologies for text mining to enrich risk data and realising multiple-source data fusion for improved risk prediction, hence making significant theoretical contributions to safety sciences. By leveraging the advanced capabilities of LLMs alongside probabilistic modelling, the study has shown its methodological novelty through a scalable, adaptive methodology that can enhance risk prediction accuracy and strengthen systems in general, and maritime systems in particular, against evolving cyber risks. From an applied research perspective, it provides an in-depth analysis of maritime cybersecurity within the context of the fast growth of maritime digitalisation and brings significant managerial insights into practice. Such insights are invaluable for stakeholders, enabling them to identify vulnerabilities, anticipate threats, and prioritise resources effectively. This integrated framework equips policymakers with the tools needed for proactive decision-making, supporting the development of targeted cybersecurity strategies to minimise operational disruptions.

14:45-16:00 Session 10D: Impact Assessment in Resilience Analysis of Energy Systems
Location: D
14:45
Toward an Integrated Framework for Smart City Economic Resilience Assessment (IF-SCERA)
PRESENTER: Matteo Spada

ABSTRACT. In this work, we present the initial steps toward developing a framework to assess the economic resilience of smart cities, particularly in response to cyber threats targeting the electrical grid. With increasing digitalization in critical infrastructures, there is a growing need for efficient and comprehensive resilience evaluation. To address this need, the Integrated Framework for Smart City Economic Resilience Assessment (IF-SCERA) is proposed. This framework offers a Geographic Information System (GIS)-based platform for analyzing city-level resilience, focusing specifically on economic resilience. IF-SCERA integrates multi-criteria impact analysis, enabling a more comprehensive evaluation than existing methods, which often rely on aggregated macroeconomic indicators that overlook interactions between different infrastructures and between infrastructures and the economy. The present work will introduce the Economic Resilience Criticality (ERC) index, assessed via a multidimensional tool that identifies where a city's economy and business sectors are most or least resilient. Unlike traditional methods that focus on economic values under normal business conditions, the ERC index addresses the specific needs of emergency situations, when the restoration of economic and societal wellbeing is paramount. The index combines conventional economic indicators, such as financial losses and recovery costs, while incorporating broader aspects related to societal wellbeing, providing a more comprehensive assessment of resilience. This integrated approach not only advances the state of the art in economic impact analysis but also ensures that cities are better equipped to respond to disruptions, enhancing their overall resilience to cyber threats.

15:00
Influence of Decision-Maker Risk Preferences on Interdependent Infrastructure Resilience Pathways

ABSTRACT. While critical infrastructure systems are essential for cities, their increasing interdependencies can worsen risks from external shocks, leading to disproportionate impacts on the regional economy and communities. In this study, we propose a methodological framework to identify and evaluate cost-effective pathways for enhancing resilience in large-scale interdependent infrastructure systems, considering decision-makers' risk preferences. We focus on understanding how decision-makers with varying risk preferences perceive the benefits from infrastructure resilience investments and compare them with upfront costs in the context of high-impact low-probability (HILP) events. First, we compute the costs of interventions as the sum of their capital costs and maintenance costs. On the other hand, the benefits of the interventions include the reduction in physical damage costs and business disruption losses resulting from the improved resilience of the network. In the final stage, we develop statistical models to predict the perceived net benefits of different network resilience configurations in power, water, and transport networks. These models are employed in an optimization framework to identify optimal resilience investment pathways for the interdependent infrastructure network. By incorporating Cumulative Prospect Theory (CPT) in the optimization framework, we show that decision-makers who assign higher weights to low probability events unintentionally allocate more resources towards strategies mitigating high-impact low-probability (HILP) events, like severe earthquakes and hurricanes. We illustrate the methodology using a case study of the interdependent infrastructure network in Shelby County, Tennessee, and its vulnerability to seismic hazards. The proposed simulation framework can be adopted by decision-makers and stakeholders to develop integrated resilience investment road maps for interdependent infrastructure networks and allocate resources accordingly.
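
The probability-weighting component of Cumulative Prospect Theory referred to in the abstract can be illustrated with the standard Tversky-Kahneman weighting function, under which rare events receive a decision weight larger than their objective probability; the gamma value below is the commonly cited estimate for gains and is used purely for illustration.

```python
def cpt_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman probability weighting: small probabilities are overweighted."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

# A rare, high-impact event (e.g., annual probability 0.01) receives a much
# larger decision weight than its objective probability.
for p in (0.001, 0.01, 0.1, 0.5):
    print(f"p = {p:>5}: decision weight = {cpt_weight(p):.3f}")
```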

15:15
Risk-Appropriate Critical Infrastructure Protection: An Approach to Balancing Security and Resilience under Uncertainty
PRESENTER: Daniel Lichte

ABSTRACT. The protection of critical infrastructure has become increasingly important in today's dynamic and uncertain security landscape. Critical infrastructure is the backbone of modern society as it provides essential services and goods. However, due to its distributed and networked structure, it is also vulnerable to deliberate disruptions, possibly leading to significant interruptions in population supply. We argue that an integrated approach that combines security and resilience is needed to protect critical infrastructure and related services in a holistic manner, in particular because security and resilience have complementary goals: security aims to minimize the probability of successful attacks, while resilience concepts focus on enabling systems to cope effectively with the resulting impact. As a result, countermeasures in the two areas are generally very different. At the same time, they are typically associated with high costs, which means that an efficient trade-off must be found within budgetary constraints. Therefore, we propose a two-step framework to identify risk-appropriate countermeasure configurations for the comprehensive protection of critical infrastructure. In a first step, we combine models of vulnerability to physical attacks and critical infrastructure performance, which is used as an indicator of potential intra- and inter-sectoral cascading impacts. In this way, the effectiveness of countermeasures can be assessed, reflecting the inherent uncertainties through the use of random variables. In a second step, we associate countermeasures with necessary investments and enable cost-benefit analysis using Pareto optimality. This allows the identification of efficient measure combinations under existing uncertainties. Applying this approach to a simple case study shows that equivalent combinations of countermeasures can be found in both areas, although the contributions to risk mitigation are scenario dependent. The cost-benefit analysis shows that equivalent combinations can be differentiated in terms of their efficiency. Finally, we show how this analysis can be used to derive risk-appropriate protection solutions.

15:30
Resilience enhancement of microgrids with distributionally robust optimal sizing and location of distributed energy resources under supply-demand uncertainty and random contingencies
PRESENTER: Pascal Quach

ABSTRACT. Microgrids equipped with local distributed energy resources (DER) and islanding capabilities have been shown to enhance the resilience of modern power systems by mitigating disturbances. However, the size and location of distributed energy resources are critical factors in determining their economic and technical viability. In this study, we model random power line contingencies alongside commonly studied generation and demand uncertainties to guide investment decisions, improving system defenses against unexpected outage events whilst maintaining economic and technical optimality. We develop distributionally robust optimization (DRO) models for the two-stage stochastic programming optimal design and operations problem under simultaneous continuous supply-demand uncertainty and discrete random contingencies. The solution methods rely on known tractable reformulations of DRO problems that allow us to solve the problem using off-the-shelf commercial solvers. The models' performance is evaluated on a numerical case study with representative supply-demand days and failure uncertainties in order to explore the tradeoffs between investment costs, operating costs, and resilience. Furthermore, we conduct a systematic analysis of the impact of varying contingency uncertainty levels and assess out-of-sample performance in our experiments.

15:45
Evaluative Methodologies for Resilience and Reliability in Multi-Terminal HVDC Transmission Systems

ABSTRACT. Modern energy systems, typified by intricate configurations and dependencies, necessitate advanced analytical methodologies for resilience and reliability assessment. This research delves into two distinct case studies that scrutinize these parameters within varied contexts. The first study methodically ranks the critical components of a transmission system, employing Dynamic Fault Tree (DFT) analysis. This approach elucidates components' significance based on multiple importance measures, thus facilitating pre-emptive maintenance and risk management strategies. The second study focuses on the resilience of multiterminal HVDC-VSC transmission frameworks, especially tailored for expansive offshore wind farms. Utilizing Markov Automata, the study simulates various operational states, from full functionality to detachment scenarios, rendering insights into system behaviours over infinite durations and specific time-bound intervals. These probabilistic and mean time evaluations are pivotal for strategic planning and resource allocation, especially in the face of disruptions. Collectively, the two case studies underscore the importance and versatility of employing advanced analytical tools to address the multifaceted challenges of modern transmission systems, fostering improved reliability and resilience in our energy infrastructure.

14:45-16:00 Session 10E: Explainable Artificial Intelligence (XAI) for Reliability, Availability, Maintainability and Safety (RAMS) Applications
Location: E
14:45
Towards Explainable Deep Learning for Ship Trajectory Prediction in Inland Waterways
PRESENTER: Kathrin Donandt

ABSTRACT. Accurate predictions of ship trajectories in crowded environments are essential to ensure safety in inland waterway traffic. Recent advances in deep learning promise increased accuracy even for complex scenarios. While the challenge of ship-to-ship awareness is being addressed with growing success, the explainability of these models is often overlooked, potentially obscuring an inaccurate logic and undermining confidence in their reliability. This study examines an LSTM-based vessel trajectory prediction model by incorporating trained ship domain parameters that provide insight into the attention-based fusion of the interacting vessels' hidden states. This approach has previously been explored in the field of maritime shipping, yet the variety and complexity of encounters in inland waterways allow for a more profound analysis of the model's interpretability. The prediction performance of the proposed model variants is evaluated using standard displacement error statistics. Additionally, the plausibility of the generated ship domain values is analyzed. With a final displacement error of around 40 meters over a 5-minute prediction horizon, the model performs comparably to similar studies. Though the ship-to-ship attention architecture enhances prediction accuracy, the weights assigned to vessels in encounters using the learnt ship domain values deviate from expectation. The observed accuracy improvements are thus not entirely driven by a causal relationship between a predicted trajectory and the trajectories of nearby ships. This finding underscores the model's explanatory capabilities through its intrinsically interpretable design. Future work will focus on utilizing the architecture for counterfactual analysis and on the incorporation of more sophisticated attention mechanisms.

15:00
Cyber resilience as an organizational outcome in the age of AI – explainability as a means to foster adaptive capacity

ABSTRACT. Explainable AI (XAI) has received growing attention in recent years due to the growth of more sophisticated machine learning models. These modern models are often complex, and their internal workings are often challenging to explain, fuelling interest in terms like explainability, interpretability and transparency. Although XAI is considered to be a highly multidisciplinary topic, and the explainability dimension of the concept implicitly points to the human recipient of the explanations, both the human and the organizational perspective currently seem to be neglected in XAI research. This is highly problematic within the cyber security domain, where cyber resilience is dependent on sociotechnical dimensions relating to both human and organizational aspects. While AI is suggested to have big impacts on the cyber security domain, it remains unclear what role the explainability of AI will play. This paper builds on a sociotechnical understanding of the XAI concept and draws on the theoretical perspective of adaptive capacity as fundamental to the understanding of cyber resilience. We combine these perspectives when we approach the current literature on XAI in the context of cyber resilience, illustrating that XAI is mainly treated as a technical artefact, neglecting the human and organizational dimensions that are crucial to develop and foster adaptive capacity. We discuss the implications of this narrow view on XAI in the context of cyber resilience, suggesting future research avenues to be followed to fill the current gaps.

15:15
Failure Causality Diagnostic in Industrial Systems through Automated Machine Learning
PRESENTER: Bahman Askari

ABSTRACT. In the context of modern industrial systems, efficient failure management is crucial for maintaining operational integrity, minimizing downtime, and optimizing maintenance. This paper explores the application of Automated Machine Learning (AutoML) to enhance both failure causality diagnostics and failure causality prognostics in industrial systems. Different failure causes are detected by failure causality diagnostics, while upcoming failures can be prevented by failure causality prognostics, since they can be avoided by addressing their causes in advance.

Traditional machine learning approaches require significant manual intervention for model selection, hyperparameter tuning, and feature engineering, which can be time-consuming and costly. AutoML, on the other hand, automates these processes, enabling quicker and more accurate predictions while reducing the need for extensive domain expertise.

Our approach integrates AutoML into real-time failure diagnostics, identifying the root causes of system malfunctions using historical and sensor data. Simultaneously, it applies AutoML for prognostics, predicting Remaining Useful Life (RUL) of components and foreseeing future failures. By leveraging both data-driven models and physics-based insights, our approach improves the reliability of diagnostics and prognostics in various industries, including manufacturing, aerospace, and energy.

Competing risks can be considered an application of this approach: the most probable competing causes are detected from historical data in the diagnostic phase, and by handling them in the prognostic phase, the anticipated failures can be prevented.

Finally, the experiments are conducted using real-world industrial datasets, demonstrating the superior performance of AutoML compared to traditional machine learning methods in both diagnostic accuracy and prognostic precision. This study shows that AutoML can significantly enhance decision-making processes in maintenance planning and risk mitigation, ultimately reducing operational costs and improving system reliability.

15:30
Predictive Maintenance for Safety-Critical Systems: The Autonomous Predictive Beam Interlock System at the European Spallation Source

ABSTRACT. The Autonomous Predictive Beam Interlock System (APBIS) is a machine learning driven extension of the existing Fast Beam Interlock System (FBIS) at the European Spallation Source (ESS) in Lund, Sweden. The project integrates predictive maintenance and anomaly detection into the FBIS framework to improve operational safety, efficiency and reliability. As a predictive layer, APBIS provides real-time insights by analysing multivariate time series data to detect anomalies and predict system failures before they occur. The system operates within a distributed architecture using Field Programmable Gate Arrays (FPGAs) at the Signal Conversion Unit (SCU) level, ensuring low-latency processing. The adaptability and responsiveness of APBIS are facilitated by machine learning techniques, such as Echo State Networks (ESNs) and online learning. These techniques enable APBIS to effectively adapt to changing conditions, ensuring its continuous and reliable performance. Given the scarcity of operational data, APBIS uses simulated data and generative models for pre-training its algorithms.

APBIS addresses key challenges such as implementing machine learning models on FPGAs for real-time anomaly detection, using distributed learning for system-wide insights, and integrating explainable AI methods to improve models and enhance operator understanding. Deep learning models provide preemptive warnings, while a decentralised approach allows multiple models on distributed hardware to share information, creating a global view of system health. Continuous learning ensures that APBIS evolves without catastrophic forgetting, while XAI techniques provide operators with interpretable insights, increasing transparency and trust. APBIS demonstrates the feasibility of integrating advanced predictive maintenance into the FBIS framework. Distributed learning and online training reduce downtime, minimise manual intervention and improve system safety.

15:45
SHAP Analysis for Diagnosing Anomalies in Semiconductor Manufacturing
PRESENTER: Joaquin Figueroa

ABSTRACT. We consider the problem of predicting the quality of semiconductor devices and, in case of low quality, diagnosing the anomaly that occurred during production. A multi-branch neural network is developed for quality prediction based on multimodal data. Specifically, a dedicated autoencoder is trained for each data modality; then, the latent representations provided by the encoders are concatenated and a regression layer is added for quality prediction. Shapley Additive exPlanation (SHAP) is used to quantify the contribution of each data modality to the quality outcome. Since different data modalities contain information about different production stages, the causes of the production anomaly can be identified. The developed method is demonstrated using a synthetic case study, which mimics the complexity of semiconductor manufacturing. Wafer maps (images) and signal measurements (time series) from a production machine are the two considered data modalities. The method is shown to be able to effectively predict the quality of semiconductor devices and to diagnose anomalies that occurred at different stages of production.
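
A minimal sketch of the attribution step is given below, assuming a generic regressor standing in for the multi-branch network and random placeholders for the concatenated latent features; per-feature SHAP values are summed within each modality to indicate which production stage drives a low-quality prediction. Function and variable names are illustrative only.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder latent features: columns 0-3 from the wafer-map encoder,
# columns 4-7 from the time-series encoder (the real model would provide these).
X = rng.normal(size=(200, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 5] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# KernelExplainer treats the predictor as a black box; a small background set
# keeps the computation cheap.
explainer = shap.KernelExplainer(model.predict, X[:20])
shap_values = explainer.shap_values(X[:5])

# Aggregate per-feature attributions into per-modality contributions.
image_contrib = np.abs(shap_values[:, :4]).sum(axis=1)
signal_contrib = np.abs(shap_values[:, 4:]).sum(axis=1)
print("Wafer-map modality contribution per sample:", image_contrib.round(2))
print("Signal modality contribution per sample:", signal_contrib.round(2))
```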

14:45-16:00 Session 10F: Maintenance models
Location: F
14:45
Balancing downtime and maintenance costs for multi-component systems with economic and structural dependencies
PRESENTER: Jussi Leppinen

ABSTRACT. In the maintenance of multi-component systems, failures can cause downtime costs because immediate maintenance may not be possible. This leads to the need to balance these downtime costs with periodic maintenance costs while ensuring that the system operates with the requisite reliability, given its structural and economic dependencies. Specifically, we extend our Markov Decision Process model (Leppinen et al., 2025) in two ways. First, we allow the system to operate even if some of its components have failed. This significantly increases the range of practical maintenance problems that can be addressed. Second, we derive the expected cost of system downtime from the system's reliability function, obtained from a binary decision diagram. A modified policy-iteration algorithm is used to determine the optimal policy that minimizes the discounted total costs, combining maintenance and downtime expenses. By adjusting the relative sizes of these costs, we derive a range of Pareto-optimal policies. In addition, we generate recommendations for holistically improving the system by identifying structural changes that are warranted by reduced downtime or lower maintenance costs.
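
For orientation, the sketch below runs policy iteration on a toy single-component maintenance MDP with discounted costs; the states, actions, costs, and transition probabilities are illustrative and bear no relation to the multi-component model or data of the paper.

```python
import numpy as np

# Toy MDP: states 0=good, 1=degraded, 2=failed; actions 0=do nothing, 1=maintain.
# P[a][s, s'] are transition probabilities, C[a][s] immediate costs (maintenance
# plus downtime), gamma the discount factor. All numbers are illustrative.
P = {
    0: np.array([[0.8, 0.2, 0.0], [0.0, 0.7, 0.3], [0.0, 0.0, 1.0]]),
    1: np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.9, 0.1, 0.0]]),
}
C = {0: np.array([0.0, 1.0, 10.0]), 1: np.array([2.0, 3.0, 8.0])}
gamma = 0.95
n_states = 3

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) V = C_pi exactly.
    P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
    C_pi = np.array([C[policy[s]][s] for s in range(n_states)])
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, C_pi)

    # Policy improvement: choose the cost-minimizing action in each state.
    Q = np.array([[C[a][s] + gamma * P[a][s] @ V for a in (0, 1)] for s in range(n_states)])
    new_policy = Q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("Optimal policy (0=do nothing, 1=maintain):", policy)
print("Expected discounted cost per state:", V.round(2))
```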

15:00
Parameter estimation in a bivariate Wiener model subject to imperfect maintenance actions considering unbalanced observations
PRESENTER: Inma T. Castro

ABSTRACT. Industrial systems normally present interrelated parts that influence the system performance. For instance, lighting systems are composed of many LED lamps, which are likely dependent because of their common usage. Maintenance actions are performed on these systems to mitigate the degradation effects and extend their lifetime. In this work, imperfect maintenance actions are implemented by using the so-called ARD (Arithmetic Reduction of Degradation) model. With it, the reduction is performed on the overall degradation accumulated in the system since the beginning. In this framework, the inference problem in a two-component degrading system is analysed. The system degradation follows a bivariate Wiener process, whose dependence is modelled using the trivariate reduction method. The maximum likelihood method is employed for parameter estimation. Parameters of a degradation model are usually estimated from degradation observations or from failure observations. The novelty of this work is that the model parameters are estimated from maintenance information. It is assumed that maintenance data are collected in an unbalanced design, which means that the data from both degradation processes are not necessarily measured at the same time. Different observation strategies are considered, so that degradation levels can be observed between maintenance actions, as well as just before or just after maintenance times.

15:15
Polishing Policies: Renewal Theory for Improving Maintenance Heuristics
PRESENTER: Daniel Koutas

ABSTRACT. Prognostic Health Management (PHM) combines prognostics, i.e., the prediction of the Remaining Useful Life (RUL) of a degrading engineering system, with subsequent health management, i.e., optimal maintenance planning informed by the prognostics model [1]. A popular choice for maintenance decisions is parametric heuristics, such as probability thresholds [2]. In real-life applications, heuristics might be preferred to black-box optimization models (e.g., from reinforcement learning) due to their interpretability. This is especially crucial in safety-critical systems. If replacements of a system are modeled "as good as new" and the associated failure times follow some distribution, then the maintenance of a collection of these systems can be modeled as a renewal-reward process [3]. Consequently, this process should be considered when designing heuristic strategies. In this contribution, we build upon the work of [4]. We further develop heuristic maintenance policies, taking into account the costs associated with the underlying assumption of a renewal-reward process. These policies are combined with a data-driven optimization of the heuristic's parameters to ensure good real-life performance. We apply and test our proposed policies on a virtual RUL simulator as well as a public dataset. For maximum comparability, the evaluation is performed with respect to the metric proposed in [4].
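
For reference, the long-run expected cost rate that such renewal-reward-based heuristics aim to minimize equals the ratio of the expected cost to the expected length of one renewal cycle; in the generic notation used here (not the paper's), \theta denotes the heuristic's parameters, C(\theta) the cost of one replacement cycle (preventive or corrective), and L(\theta) its duration:

g(\theta) \;=\; \lim_{t\to\infty}\frac{\mathbb{E}[\text{cost in } [0,t]]}{t} \;=\; \frac{\mathbb{E}[C(\theta)]}{\mathbb{E}[L(\theta)]}.

Tuning the heuristic then amounts to choosing \theta to minimize this ratio.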

References

[1] Kim, N. H., An, D., & Choi, J. H. (2017). Prognostics and health management of engineering systems. Switzerland: Springer International Publishing.

[2] Tian, Z., & Liao, H. (2011). Condition based maintenance optimization for multi-component systems using proportional hazards model. Reliability Engineering & System Safety, 96(5), 581-589.

[3] Hoyland, A., & Rausand, M. (2009). System reliability theory: models and statistical methods. John Wiley & Sons.

[4] Kamariotis, A., Tatsis, K., Chatzi, E., Goebel, K., & Straub, D. (2024). A metric for assessing and optimizing data-driven prognostic algorithms for predictive maintenance. Reliability Engineering & System Safety, 242, 109723.

15:30
A Data-Driven Framework for Optimized System Maintenance
PRESENTER: Rim Kaddah

ABSTRACT. System maintenance is crucial to ensure the safety, reliability, and performance of modern systems. Effective maintenance reduces downtime, prevents unexpected failures, and extends the lifespan of equipment. With continuous monitoring through the deployment of digital twins, we can develop new data-driven approaches that are more efficient than traditional maintenance policies. Digital twin technology provides a real-time virtual model of a physical system, enabling continuous monitoring, diagnosis, and advanced analytics. In this study, we propose a framework combining predictive and prescriptive pipelines, utilizing machine learning techniques to tackle optimization problems. Specifically, we explore methods such as Smart Predict-then-Optimize (SPO) and compare them with the traditional Predict-then-Optimize (PTO) approach for system maintenance. Different maintenance policies, including Condition-Based Maintenance (CBM), periodic CBM, and predictive maintenance, are applied within this framework. An illustrative example of battery maintenance demonstrates the practical implementation of these methodologies. By leveraging data-driven approaches, this framework enhances decision-making and helps prevent costly disruptions in critical systems.

15:45
A computer-aided evaluation method of maintenance operation space based on knowledge reuse
PRESENTER: Qidi Zhou

ABSTRACT. Industry 5.0 requires the manufacturing industry to realize human-centric maintenance for complex industrial tasks. This forces designers to reserve sufficient maintenance operation space during the early digital prototype design phase of the lifecycle, which is difficult to balance in the digital design of complex systems with small space and compact layouts. At present, designers conduct virtual simulation and analysis of maintenance operation space based on subjective experience. This digital method relies heavily on subjective expertise and manual iterations, leading to inefficiencies and inconsistent outcomes, caused by the lack of visual model representation of subjective experience and the absence of intelligent computer-aided technologies. Hence, a study on the computer-aided evaluation of maintenance operation space based on knowledge reuse is proposed. First, an Applied Ontology for Maintenance Operation Space Evaluation (AOMOSE) for common structural components and activities is constructed. Second, an AOMOSE application which combines computer-aided and natural semantic processing technology is proposed to achieve a four-step rapid analysis, comprising value matching, position obtaining, model import and positioning, and model interference detection. A case study on an aircraft APU starter-generator demonstrates the effectiveness and feasibility of this knowledge reuse method. This method is a fusion of knowledge engineering, maintenance design, natural semantic processing, and computer-aided technologies. It explores the feasibility of achieving advanced maintenance in Industry 5.0.

14:45-16:00 Session 10G: Collaborative Intelligence and Safety Critical Systems Applications II
Location: G
14:45
Physiological Impact Assessment of Decision Support Systems on Control Room Operators: An ANCOVA Analysis

ABSTRACT. This study investigates the physiological effects of AI-based Decision Support Systems (DSS) on control room operators through a comprehensive analysis of multiple physiological indicators. Using data from 42 participants divided into control and experimental groups, we analyzed heart rate, temperature, electrodermal activity (EDA), and pupil diameter across three scenarios of increasing complexity. Analysis of Covariance (ANCOVA) was employed to control for baseline differences, revealing significant reductions in pupil diameter (p = 0.0029) for the DSS group, indicating lower cognitive load. While other physiological measures showed consistent trends suggesting reduced stress with DSS use, these differences were not statistically significant. The findings provide empirical evidence for DSS's positive impact on operator cognitive load, particularly during complex scenarios, while highlighting the need for comprehensive physiological monitoring in assessing human-system interaction.
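
The ANCOVA step can be reproduced with standard statistical tooling; the sketch below uses Python's statsmodels with placeholder data, assuming a task-phase pupil-diameter measurement adjusted for a baseline covariate (column names and values are invented, not the study's dataset).

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder data: group (control vs. DSS), baseline and task-phase pupil diameter (mm).
df = pd.DataFrame({
    "group": ["control"] * 4 + ["dss"] * 4,
    "baseline": [3.1, 3.4, 3.0, 3.3, 3.2, 3.5, 3.1, 3.4],
    "task": [3.9, 4.2, 3.8, 4.1, 3.5, 3.8, 3.4, 3.7],
})

# ANCOVA: task-phase pupil diameter modeled by group, controlling for baseline.
model = smf.ols("task ~ C(group) + baseline", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```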

15:00
Enhancing Software Safety Through Programming Languages: A Study of Rust
PRESENTER: Thor Myklebust

ABSTRACT. Ensuring software safety has become a paramount concern in modern software development, with the choice of programming language playing a crucial role. This paper investigates the role of Rust, a systems programming language, in enhancing software safety. Through a combination of literature review, evaluation of safety standards, interviews with software engineers, and practical experimentation, we explore the unique features of Rust that contribute to safer software development practices. The study includes an analysis of Rust, its ownership model, and its concurrency mechanisms, comparing them with traditional languages like C and C++. Furthermore, we conduct interviews with software engineers to gather insights into their experiences with Rust, including its adoption challenges, benefits, and implications for transitioning from C++ and Python to Rust. Additionally, we present a practical experiment involving code development in Rust to demonstrate its effectiveness in ensuring safety and reliability. The findings of this study provide valuable insights into the role of programming languages, particularly Rust, in advancing software safety and offer practical guidance for software developers aiming to leverage safer alternatives in their projects.

15:15
Human adaptability and resilience in safety-critical tasks: Insights from a Formaldehyde Production Plant Simulation

ABSTRACT. Traditional safety approaches assume failures result from unidentified risks during design, leading to incomplete rules and procedures. However, not all risks can be anticipated. In safety-critical operations, human performance evolves through continuous feedback, where decisions, errors, and corrections are interdependent rather than isolated events. This complexity is evident in process industries, where operators must interpret dynamic conditions, adjust responses, and recover from deviations under time constraints.

Resilience engineering shifts the focus from rigid procedures to adaptive responses, emphasizing real-time adjustments and recovery. Integrating these principles into risk assessments can enhance system flexibility and reliability, better accounting for uncertainty and unexpected disruptions.

This study explores human adaptability in safety-critical operations by analyzing task adherence, error correction, and performance dynamics in a simulated formaldehyde production plant. It examines how participants engage with safety procedures, manage errors, and adjust to evolving conditions. By identifying cognitive mechanisms underlying decision-making and recovery under uncertainty, the study provides insights into operators' resilience in complex work environments.

Findings may inform adaptive safety protocols, integrating real-time monitoring and predictive analytics to support operators in high-risk industries.

15:30
The Hidden Gem of IEC 61508: Unveiling the Advantages of the 1oo2D Structure in Embedded Systems
PRESENTER: David Schepers

ABSTRACT. Embedded systems are used in a wide range of applications, many of which are safety-critical. A failure in such systems can cause significant issues related to safety, functionality, and the overall availability of the application. To meet safety requirements, it is often necessary to develop safety-critical embedded systems in compliance with the IEC 61508 functional safety standard. This standard outlines various architectures for safe hardware design, with common safety structures including 1oo1, 1oo2, and 2oo3, which are widely implemented for safety functions. The optimal solution depends on several factors, such as the desired Safety Integrity Level (SIL), cost constraints, and application availability. This paper emphasizes the rarely applied 1oo2D structure as an excellent compromise between cost, assembly space, safety, and availability. The 1oo2D architecture consists of two redundant channels continuously monitoring themselves for hardware failures. With intelligent testing mechanisms, hardware failures can be isolated to the relevant channel, allowing the faulty channel to be deactivated, while the system continues to operate in a reduced 1oo1 configuration. This approach helps prevent spurious trips of the safety function without the need for the more costly 2oo3 structure. To demonstrate the advantages of the 1oo2D structure, a simple prototype of an optical smoke detector is introduced, currently being developed according to IEC 61508. A Failure Modes and Effects Analysis (FMEA) shows that all potential hardware failures can be safely detected and assigned to the corresponding channel, thereby avoiding false fire alarms while ensuring the availability of the safety function. Digital embedded systems are particularly well-suited for implementing the 1oo2D structure, as hardware failures can typically be detected and isolated to the relevant channel. This reduces spurious trips of the safety function while ensuring high reliability, low costs, compact assembly, and availability.

14:45-16:00 Session 10H: Societal safety/security
Location: H
14:45
Norwegian Police Officers Experiences from Armed Confrontations

ABSTRACT. Even though Norwegian police officers have legal rights to use force, including in armed confrontations, they seldom make use of their firearms. The use of firearms lies at the extreme edge of police work, also in Norway. However, police officers need to prepare for armed confrontations, both to maintain their own safety during confrontations and to ensure the safety of ordinary citizens. Armed confrontations may often be the most challenging, dynamic and stressful incidents police officers face in the line of duty. This study undertakes to discuss the relevance of police training in armed confrontations as experienced by police officers who have been in armed confrontations and decided not to shoot the perpetrator. Data stem from 30 semi-structured interviews with Norwegian police emergency response officers who have experienced an armed confrontation with a subject in which the officers perceived themselves to be within the regulations and weapons laws permitting them to discharge their firearms against the subject, but for some reason chose not to do so. Findings indicate that Norwegian police officers mostly receive their experience of armed confrontations through their training, and not through practical street-level experience of armed confrontations. Thus, both the relevance of armed response training and the use of experiences from armed confrontations as learning opportunities are of the utmost importance for police officers' capacities for handling armed confrontations.

15:00
On Pareto Optimality of Physical Security Concepts Balancing Multiple Protection Objectives
PRESENTER: Dustin Witte

ABSTRACT. Operators of critical infrastructures are confronted with safeguarding their assets against a wide range of potential threats. Conventional risk analyses include the assessment of scenario likelihoods, but for security risks these are difficult to estimate. This poses a significant challenge for decision making. In order to configure a physical security system, an approach for decision making that considers these uncertainties must be found. As a solution, we propose to specify multiple protection objectives and base the formulation of the decision problem on the fulfillment of these protection objectives as well as security system cost. In our approach, we initially develop relevant scenarios via morphological analysis. Based on these scenarios, we define site-specific protection objectives, including minimum requirements, and analyze their fulfillment by qualitative or quantitative models, depending on the level of available information. With these models, we search for Pareto-optimal security system configurations regarding protection objectives and costs. We demonstrate our approach using a notional case study on a generic critical infrastructure site. There, the conducted optimization yields a manageable number of optimal configurations. Additionally, the resulting optimal configurations show varying trade-offs between protection objectives, as well as costs. Out of this Pareto-optimal set, a selection may be narrowed by the operator's assessment of appropriateness. In this way, the proposed approach enables operators to identify an optimal security system configuration that is tailored to their specific requirements, thus supporting decision making.

15:15
Towards more mature method development processes in societal safety and resilience - the case of risk and vulnerability assessment in Sweden
PRESENTER: Henrik Hassel

ABSTRACT. In developing societal safety and resilience, countries like Sweden face increasing challenges due to trends such as escalating climate change, a changing security climate, increased societal complexity, and downsizing of public sectors. This stresses the importance of basing the methods used to improve safety and resilience on strong evidence for their positive impacts. However, previous research shows that these methods typically do not build on each other and are not evaluated in systematic ways. Hence, the underlying evidence is weak, the degree of knowledge evolution in the field can be questioned, and selecting which method to use in a particular case becomes problematic. To spur more systematic and comprehensive method development processes, this paper proposes a maturity model of method development and evaluation that can be used to support longitudinal processes of going from a promising, innovative idea to a fully developed method with positive impacts supported by strong evidence. The model draws on fields with a longer tradition of iterative, up-scaling processes, such as software development, medicine development and public health interventions, while also accounting for idiosyncrasies of societal safety, such as the practical difficulties of testing and collecting data on the effects of methods. In the paper, the maturity model is demonstrated by applying it to evaluate a method for municipal Risk and Vulnerability Assessment that the authors have been involved in developing and implementing over the past five years. We believe that the proposed maturity model can support knowledge evolution in the field as well as the establishment of systematic, evidence-based method development processes.

15:30
On achieving an appropriate level of security for national security purposes
PRESENTER: Tore Askeland

ABSTRACT. The purpose of the Norwegian act relating to national security (Security Act) is to protect Norway's sovereignty, territorial integrity and democratic system of government, and other national security interests, to prevent, detect and counter activities which present a threat to security, and to ensure that security measures are implemented in accordance with the fundamental legal principles and values of a democratic society. Pursuant to the act, all ministries shall identify so-called fundamental national functions (FNFs) within their areas of responsibility, which are "services, production and other forms of activity that are of such importance that a complete or partial loss of function will have consequences for the state's ability to safeguard national security interests". These FNFs constitute the superstructure for identifying organizations responsible for critical national objects, infrastructures and information systems to be protected. The act mandates that these organizations establish an "appropriate level of security" for these critical assets, without specifying how this term should be understood and used for risk acceptance. The Norwegian National Security Authority has issued several guidelines to help the ministries and organizations in their protective security work. We have explored the challenges that the defence sector experiences in adhering to the Security Act, regulations and guidelines when seeking to achieve an appropriate level of security for national security purposes. Our recommendations are to: (i) establish a model for detailing the FNFs into more granular functions, (ii) develop national scenarios from which it is possible to link an organization's functions and assets to the FNFs and to current national planning, including in crises and war, together with metrics to support criticality assessments, and (iii) develop common guidelines for organization-specific threat scenarios to support a more consistent and verifiable definition of appropriate security.

15:45
Preventing violent extremism in Norwegian municipalities. Between political visions of security and local expectations of social welfare

ABSTRACT. This paper presents data from the Norwegian grant scheme for preventing radicalization and violent extremism in local communities. The local projects we have analyzed frame their preventive needs as coalescing with the functions and practices of local welfare work. Another recurring finding is that local preventive projects prioritize competence development among first-line workers. There is a general dearth of effect evaluation in local preventive work. However, communities prone to extremist milieus tend to be more specific in describing how violent extremism is being prevented. On the other hand, several communities apply for funding through the grant scheme without being impacted by violent extremism. Thus, the grant scheme can be considered a particular avenue both for ensuring basic welfare services and for preventing extremism in local Norwegian communities.

14:45-16:00 Session 10I: Health issues
Location: I
14:45
A systematic risk-benefit assessment of sunscreen use in the Norwegian population in reply to mixed messages in risk communication
PRESENTER: Ellen Bruzell

ABSTRACT. Even though Norway is among the countries with the highest incidence and mortality of skin cancer worldwide, consumer organisations pay less attention to cancer protection and focus on possible and unverified health and environmental hazards. This unnuanced communication may lead to uncertainty among consumers about sunscreen protection. Therefore, VKM performed an assessment1 of 1) the hazard of dermal exposure to the six most frequently used UV filters in sunscreens in Norway: bis-ethyl-hexyloxyphenol methoxyphenyl triazine; butyl methoxydibenzoyl methane; 2-ethylhexyl salicylate; ethylhexyl triazone; octocrylene; and titanium dioxide in nanoform, 2) the hazard of sunscreen use, and 3) the protection of sunscreen use against UV-induced adverse effects. Scientific publications and reports up to 2022 were retrieved in two searches to assess adverse and protective effects of sunscreen and adverse effects of UV filters. Specific searches were made for data on concentrations and dermal absorption of UV filters and individual user amounts of sunscreens. We developed a method for the systematic evaluation of the quality of studies analysing UV filter concentrations in sunscreen. Probabilistic methods were used to estimate exposure to each UV filter. Health outcome studies were assessed for risk of bias and level of evidence2. The concentrations of UV filters were below the limits set in the EU cosmetics regulation. Thus, the risks related to the evaluated UV filters were considered negligible. The determination of the hazards (and thereby risks) associated with sunscreen use was precluded by insufficient evidence. Since VKM found confidence in the evidence that sunscreens protect against certain skin cancers, we concluded that for the general Norwegian population, sunscreen use is beneficial. For communication consistency, this message may be communicated to the public by all relevant authorities. 1 VKM et al. (2022). Risk-benefit assessment of sunscreen. VKM Report 2022:10, Oslo, Norway.

2 OHAT Handbook (2019). OHAT, NTP, NIEHS, USA.

15:00
Parameter Estimation in the SIR Model Using Collocation Method
PRESENTER: Lukas Pospisil

ABSTRACT. In this paper, we present a methodology for estimating the parameters of a system of ordinary differential equations (ODEs) for the SIR model, a critical tool for understanding the dynamics of infectious diseases. The SIR model is essential for predicting outbreak patterns and informing public health interventions, playing a pivotal role in safety analysis. The parameters of the model are estimated from measured data while simultaneously solving the corresponding system of ODEs numerically. Our approach is based on the collocation method, where the solution is expressed as a linear combination of B-spline basis functions and fitted to the data through regression. The squared Euclidean norm is used both for the regression fit and for minimizing the ODE residual. The problem is formulated as a multicriteria optimization task, balancing the error in the model fit against the error in the numerical solution of the ODE system. The entire methodology is implemented in the MATLAB environment. We present numerical results and demonstrate the effectiveness of the approach for parameter estimation in epidemiological models using artificial benchmark datasets.
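As an illustration of the kind of collocation-based fitting described above, the following minimal sketch (written in Python rather than the authors' MATLAB implementation, with illustrative values for the transmission rate, recovery rate, knot placement and weighting) jointly fits B-spline coefficients and SIR parameters by penalizing both the data misfit and the ODE residual at collocation points; convergence depends on the initial guess and the weight chosen.

```python
# Hedged sketch only, not the authors' code; all data and weights are synthetic.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def sir_rhs(y, beta, gamma):
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

# Synthetic "measured" prevalence data from a known SIR trajectory (beta=0.4, gamma=0.1)
t_data = np.linspace(0.0, 30.0, 31)
dt = t_data[1] - t_data[0]
y, traj = np.array([0.99, 0.01, 0.0]), []
for _ in t_data:
    traj.append(y)
    y = y + dt * sir_rhs(y, 0.4, 0.1)          # forward Euler, adequate for a toy example
I_obs = np.array(traj)[:, 1] + 0.005 * rng.standard_normal(len(t_data))

# Cubic B-spline basis and collocation points
k = 3
knots = np.concatenate(([0.0] * k, np.linspace(0.0, 30.0, 12), [30.0] * k))
n_coef = len(knots) - k - 1
t_col = np.linspace(0.0, 30.0, 60)

def spline(c, t, der=0):
    return BSpline(knots, c, k)(t, nu=der)

def objective(params, w_ode=1.0):
    beta, gamma = params[:2]
    cS, cI, cR = np.split(params[2:], 3)
    data_err = np.sum((spline(cI, t_data) - I_obs) ** 2)       # regression misfit
    S, I, R = (spline(c, t_col) for c in (cS, cI, cR))
    dS, dI, dR = (spline(c, t_col, 1) for c in (cS, cI, cR))
    ode_err = np.sum((dS + beta * S * I) ** 2                  # ODE residuals
                     + (dI - beta * S * I + gamma * I) ** 2
                     + (dR - gamma * I) ** 2)
    return data_err + w_ode * ode_err                          # multicriteria trade-off

x0 = np.concatenate(([0.3, 0.2], np.tile(np.linspace(0.5, 0.1, n_coef), 3)))
res = minimize(objective, x0, method="L-BFGS-B")
print("estimated (beta, gamma):", res.x[:2])
```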

15:15
Preliminary risk analysis of a Superconducting Gantry for Hadron Therapy

ABSTRACT. Hadron therapy offers one of the most advanced options for cancer treatment. Its advantage with respect to conventional radiotherapy (e.g. ionizing photons) resides in the ability of hadrons (protons and ions) to reach deep-seated tumors with precise dose deposition, while sparing the surrounding healthy tissues. An effective implementation of hadron therapy is the use of a rotating beamline around the patient, namely the Gantry. A Gantry makes it possible to irradiate the tumor from different angles, with advantages with respect to a fixed beamline. The use of the Gantry in combination with carbon ions allows for even superior treatment capabilities. At present, this solution is a considerable engineering challenge because of the size of the magnets, their weight and, last but not least, the costs. In the context of the European project HITRIplus, several research institutes and clinical centers are studying and developing a compact and affordable Gantry for carbon ion hadron therapy. To meet the project goals, the Gantry beamline employs superconducting magnets. Superconducting magnet technology allows for a significant reduction in size and weight with respect to normal-conducting magnets. Nonetheless, it suffers from the risk of quenching, i.e. the loss of the superconductive state, with potential damage to accelerator components and consequent treatment disruption. This paper presents the results of a preliminary risk analysis of a superconducting Gantry, with multiple objectives. The first objective is to define the design requirements for mitigating the risks and thus limiting their consequences through the adoption of protective systems. The second objective is to identify and address the design trade-offs that concern patient safety, the integrity of the superconducting magnets and the operability of the particle therapy accelerator. Finally, this preliminary risk analysis compares the results with those previously obtained for a conventional normal-conducting Gantry, with the intent of summarizing benefits and risks.

15:30
Safety and security co-design with application to medical device industry

ABSTRACT. Engineering is experiencing rapid changes in response to new needs, from embedding “intelligence on board” for communication, control and decisions, up to large-scale architectures such as systems of systems (SoS) and the Internet of Things. These systems rely on a high degree of autonomy together with complex functional and structural dependencies, and are exposed to endogenous and exogenous risks which cannot be understood and mitigated as the sum of their parts. Among these risks, those related to security are becoming a serious concern for safety and operability in many industrial sectors, particularly within Operational Technologies. In terms of solutions, there is unanimous agreement to start integrating safety and (cyber)security processes at some point by considering the undesired effects of security on safety and vice versa. A more effective solution would be to merge the safety and security processes in the early product development stages, making the system safe and secure by design. This approach is defined as codesign and implies a more radical change of the state of practice. The healthcare sector recognizes the importance of integrating security and safety in the life cycle of a medical device. This is reflected in the medical device regulation MDR 2017/745, which requires manufacturers to address security from the early design stages. Summarizing paragraph 17.2 in Annex I of the MDR, security shall be included in the life cycle of medical devices that incorporate software, or of software that is a medical device in itself. This is ultimately the “game changer” in favor of codesign. This paper defines the principles of safety and security codesign for medical devices through a use case, starting from the design principles and analysis methods, and converging on a new state of the art. Design challenges and expected benefits are discussed based on expertise in the IT-OT security domain and on risk management by a medical device manufacturer.

15:45
Social isolation and integration: Risks for multiple myeloma patients?
PRESENTER: Nadine Müller

ABSTRACT. The COVID-19 pandemic had a severe impact on social contacts, especially for patients with a high risk of negative outcomes, such as cancer patients (Bergerot et al. 2022). To make matters worse, social support can usually help patients cope with treatment cycles and thus increase their overall well-being (Ruiz-Rodriguez et al. 2022). Therefore, the lack of social support posed an additional risk for cancer patients, affecting not only their emotional and physical condition but also medical outcomes such as survival (Corovic et al. 2023). One explanation for this could be that treatment decisions are based not only on risk assessment tools but also on soft factors like the social integration of a patient, as physicians are aware of its benefits for treatment outcomes. During the treatment decision process, social integration could, on the other hand, also be a strain for patients, when multiple supporters who cannot fully assess the medical information about treatment risks are included in decision making, thus overestimating risks and neglecting benefits. In this presentation we address the question of whether social integration or isolation can be seen as risks for cancer patients. The analysis is based on a large representative real-world evidence register of over 9,000 multiple myeloma patients treated between 2016 and 2024 in Germany. This allows us to compare the impact of social integration on treatment eligibility assessments and outcomes before and after the COVID-19 pandemic and to explore interactions with mediating factors like risk assessment, symptoms, concomitant diseases, stage or treating institution.

16:00
Independent safety investigation in Norway – the work of UKOM from a risk management perspective.

ABSTRACT. The Norwegian Healthcare Investigation Board (UKOM) is a relatively new government body tasked with conducting independent safety investigations of serious incidents in the healthcare services. This study aims to explore how UKOM's work has evolved from a risk management perspective, and how UKOM employees experience the processes of investigation and analysis that result in investigation reports. The study also aims to explore how investigators perceive investigations to influence the healthcare system and contribute to system change and learning. The study employs an explorative design with a qualitative approach, including semi-structured individual interviews with 11 participants from UKOM, conducted between January and April 2023. The study revealed that the learning potential from serious incidents is a key factor in selecting issues and events for investigation. Results showed that the composition of interdisciplinary investigator teams is an important resource, bringing diverse perspectives to the process and facilitating mutual learning within the team. The study also demonstrated that the legally mandated focus on learning, not punishment, fosters transparency and a safe space for sharing stories, thoughts, and opinions for both healthcare professionals and next of kin.

14:45-16:00 Session 10J: The NASA and DNV Challenge on Optimization Under Uncertainty I
Location: J
14:45
The NASA and DNV Challenge on Optimization Under Uncertainty
PRESENTER: Vegard Flovik

ABSTRACT. This paper presents an Uncertainty Quantification (UQ) challenge focusing on key aspects of uncertainty quantification and optimization-based design in the presence of aleatory and epistemic uncertainty.

15:00
Uncertainty Quantification and design optimization using multi-objective optimization techniques

ABSTRACT. The NASA-DNV challenge problem aims to develop methodologies for Uncertainty Quantification (UQ) in safety-critical and high-consequence systems with sparse or expensive data. The challenge is designed to be discipline-independent while capturing the complexities of real-world engineered systems. It consists of two key problems: (1) quantifying both aleatory and epistemic uncertainties by integrating computational models with real system data, and (2) optimizing control variables to balance performance and risk. Given the presence of conflicting objectives in both problems, multi-objective optimization techniques provide a promising approach for simultaneously addressing these trade-offs. This paper explores the role of multi-objective optimization in UQ and control optimization within the challenge framework.
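By way of illustration only (the actual challenge objectives and constraints are not reproduced here), the toy sketch below scalarizes a hypothetical performance objective and a hypothetical risk objective with a sweep of weights to trace a simple trade-off front over a single control variable, which is the basic idea behind using multi-objective techniques to balance performance and risk.

```python
# Illustrative toy objectives; not the NASA-DNV challenge functions.
import numpy as np
from scipy.optimize import minimize_scalar

def performance_loss(u):
    return (u - 0.7) ** 2            # lower is better performance

def risk(u):
    return np.exp(2.0 * u) / 10.0    # pushing the control up drives up risk

front = []
for w in np.linspace(0.05, 0.95, 10):                 # weight on risk vs performance
    res = minimize_scalar(lambda u: (1 - w) * performance_loss(u) + w * risk(u),
                          bounds=(0.0, 1.0), method="bounded")
    front.append((res.x, performance_loss(res.x), risk(res.x)))

for u, perf, r in front:
    print(f"u*={u:.2f}  performance loss={perf:.3f}  risk={r:.3f}")
```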

15:15
Adaptive Gaussian process-based strategies for solving the NASA-DNV UQ challenge 2025

ABSTRACT. This paper describes a dedicated approach to solving the 2025 NASA-DNV UQ challenge problem using adaptive Gaussian process strategies. The uncertainty model is determined through a calibration problem, using an optimization approach to identify the aleatory variable joint distribution and the epistemic variable uncertainties. The estimation of the prediction interval for the model output components is cast as a quantile estimation problem addressed with an adaptive Gaussian process strategy. Finally, the design optimization problems are solved using Bayesian optimization, controlling the noise level involved in the estimation of the objective and constraint functions.
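A minimal sketch of one ingredient of such a strategy, under purely illustrative assumptions (a one-dimensional stand-in model, an RBF kernel, and a simple uncertainty-driven enrichment rule, none of them from the paper), is shown below: a Gaussian process surrogate is refit as new points are added and used to estimate an output quantile by sampling the aleatory input.

```python
# Hedged illustration of adaptive GP-based quantile estimation; not the authors' method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x                     # stand-in for the true model

X = rng.uniform(0, 2, size=(8, 1))                     # initial design
y = expensive_model(X).ravel()

for _ in range(5):                                     # adaptive refinement loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)
    x_mc = rng.normal(1.0, 0.3, size=(5000, 1))        # samples of the aleatory input
    mu, sd = gp.predict(x_mc, return_std=True)
    q95 = np.quantile(mu, 0.95)                        # surrogate-based 95% quantile
    # enrich the design where the GP is most uncertain near the quantile level
    mask = np.abs(mu - q95) < np.quantile(np.abs(mu - q95), 0.1)
    x_new = x_mc[mask][np.argmax(sd[mask])].reshape(1, 1)
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new).ravel())

print("estimated 95% quantile of the model output:", round(float(q95), 3))
```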

15:30
Taking on the NASA and DNV Challenge 2025: Bayesian Calibration and Optimization under Hybrid Uncertainty
PRESENTER: Jan Grashorn

ABSTRACT. This paper addresses the NASA and DNV challenge on optimization under uncertainty, where participants were tasked with calibrating the uncertainty models of aleatory and epistemic parameters of an unknown system using a computational model and synthetic data, and identifying control parameters for different objectives. We present two approaches for model calibration, namely Bayesian optimization and sequential Bayesian updating. Additionally, a reliability-based optimization scheme based on a Bayesian approach and subset simulation is used to tackle a design optimization problem.

15:45
Uncertainty-Aware Optimization in Engineered Systems via Gradient Boosting and Differential Evolution

ABSTRACT. In engineered systems where safety and performance objectives are critical, accounting for the various uncertainties, aleatoric and epistemic, is essential. Data sets offering useful information for decision-making can be expensive to collect and, hence, sparse. Distinguishing between the underlying sources of stochasticity through efficient data-driven approaches must therefore be informed by selective strategies. In this work, we document our efforts in addressing a set of challenge problems posed by DNV and NASA personnel that require identifying the character and structure of the underlying stochasticity and then addressing reliability, performance, and optimization under uncertainty (Agrell et al., 2025). We employ LightGBM for regression tasks, FLAML for hyperparameter tuning, and Differential Evolution for robust optimization using limited test data and simulation results on key response time series.
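A highly simplified sketch of that workflow is given below with synthetic data; the dataset, objective, time budget and bounds are all illustrative assumptions, not the challenge problem itself. A FLAML-tuned LightGBM surrogate is fit to observed responses, and differential evolution then searches the control space on the surrogate.

```python
# Illustrative sketch of the assumed workflow; not the authors' code or data.
import numpy as np
from flaml import AutoML
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
# synthetic data: response depends on two control variables plus noise
X = rng.uniform(-1, 1, size=(300, 2))
y = (X[:, 0] - 0.3) ** 2 + 0.5 * np.abs(X[:, 1]) + 0.05 * rng.standard_normal(300)

# FLAML tunes a LightGBM regressor within a small time budget
automl = AutoML()
automl.fit(X_train=X, y_train=y, task="regression",
           estimator_list=["lgbm"], time_budget=10, verbose=0)

def risk_proxy(x):
    # surrogate prediction used as the objective under optimization
    return float(automl.predict(np.asarray(x).reshape(1, -1))[0])

result = differential_evolution(risk_proxy, bounds=[(-1, 1), (-1, 1)], seed=2)
print("suggested control settings:", result.x, " predicted response:", result.fun)
```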

14:45-16:00 Session 10K: Roundtable/Panel: Risk and responsibility: What can we learn from managing COVID-19?

Panelists: Olof Oscarsson, Svante Aasbjerg Thygesen and Betina Slagnes

Location: K
14:45-16:00 Session 10M: Nuclear safety II (IFE)
Location: M
14:45
A computer-based procedure tool for SMR control room operations
PRESENTER: Hanne Rødsethol

ABSTRACT. There is a growing interest in advanced technologies like small modular reactors (SMRs) to address global energy demands. Computer-based procedures (CBPs) are anticipated to play a crucial role in ensuring safe and efficient control room operations for SMRs. While CBPs have been explored in conventional nuclear power plants, limited guidance exists on their design and implementation for SMRs, especially considering the unique challenges of managing multiple reactors from a single control room. This study presents the conceptual design of the CBP tool HELP and shares initial insights from its application in a human factors simulator study with licensed operators. A human-centered design approach was adopted, emphasizing iterative development and rapid feedback from domain experts and an interdisciplinary team. The CBP tool HELP was tested in a simulator study where licensed operators managed six SMRs from a single control room. Participants evaluated the tool’s usability, and a researcher provided observations on how the operators interacted with it. The findings suggest that the chosen design direction is promising. Participants navigated procedures effectively, although transitions between SMR units require further investigation. Future work will focus on refining the design, integrating additional procedures, and testing the tool across various plant types and operator configurations. Furthermore, integrating real-time process data into the tool is planned as an initial step toward automated procedure execution.

15:00
Preliminary hazard analysis for hydrogen production by coupled High Temperature Electrolysis Facilities and Nuclear Power Plants

ABSTRACT. Coupling High Temperature Electrolysis Facilities (HTEFs) and Nuclear Power Plants (NPPs) is a promising solution for large-scale clean hydrogen production. A preliminary hazard analysis of the system of systems made up of an HTEF and an NPP is presented, with reference to a preliminary design in which steam and electricity are supplied by the NPP to the HTEF. The outcomes of the analysis indicate that hydrogen leakage, steam leakage and overcurrent events on the HTEF side may contribute to an increase in the risk on the NPP side, in terms of a higher probability of Loss Of Offsite Power (LOOP), Main Steam Line Break (MSLB) and Loss of Heat Sink (LHS).

15:15
Application of STPA in Probabilistic Risk Assessment of the Loviisa Nuclear Power Plant

ABSTRACT. Probabilistic risk assessment (PRA) of nuclear power plants requires the use of hazard analysis methods. System-theoretic process analysis (STPA) is a relatively new hazard analysis method based on the systems-theoretic accident model and processes (STAMP) causality model. This paper studies the use of STPA in the context of probabilistic risk assessment. This is achieved by conducting a case study of the refueling pool backup cooling system of the Loviisa Nuclear Power Plant in Finland. The results of the case study are compared with an existing risk model of the system. The results indicate that STPA is a promising method for hazard analysis. It can identify all hazard scenarios that were previously identified using multiple techniques, such as failure mode and effect analysis (FMEA) and human reliability analysis (HRA). In addition, the method has been found to be useful in communicating results within a multidisciplinary team of subject-matter experts. However, the incorporation of a new method into the well-established PRA methodology requires further research.

15:30
Analysis of I&C architectures using the GRS analysis and test system (AnTeS)

ABSTRACT. As modern nuclear power facilities often rely on a variety of different instrumentation and control (I&C) systems, analyzing the dynamic behavior of the overall I&C architecture is crucial. GRS has therefore developed the analysis and test system (AnTeS), which serves as a flexible research, analysis, and evaluation platform. It is a modular platform comprising various tools and methods for investigating I&C technology. It consists of four main modules:

1. AnTeS-SIC: Focuses on safety I&C systems, including real systems based on Teleperm XS hardware/software from Framatome and simulated systems using MATLAB/Simulink.
2. AnTeS-OIC: Pertains to operational I&C systems, featuring real systems based on Siemens Simatic S7 hardware/software and simulated systems via MATLAB/Simulink.
3. AnTeS-PRIO: Deals with priority modules, offering real modules like AV42, SPLM1, and a GRS-developed generic module, along with simulated modules based on MATLAB/Simulink.
4. AnTeS-FIELD: Encompasses process engineering systems, both real (e.g., vessels, sensors, valves, pumps) and simulated (using SimGen, a software developed by GRS itself).

These modules support the following analysis methods, among others:

1. Automatic Impact Analysis: An automated extension of FMEA (Failure Mode and Effects Analysis), simulating failures in components to analyze their effects. It replaces traditional FMEA, providing comprehensive support for fault tree analysis (FTA) in AnTeS.
2. Fault Tree Analysis: FTA visually maps failure paths in systems. Within AnTeS, it complements automatic impact analysis by providing both qualitative and quantitative risk assessments for system reliability.
3. Monte Carlo Simulation: This simulation method models statistical failures to estimate risks. It verifies, replaces, or supplements the fault tree analysis for quantitative results in AnTeS.

This presentation introduces AnTeS and looks at the results of a project in which AnTeS was applied to a number of different model systems, in particular to investigate the effects of redundancy, diversity and functional diversity (qualitatively and quantitatively).
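As a purely numerical illustration of how the quantitative methods listed above complement each other (the system and failure probability are invented for this example, not taken from AnTeS), the sketch below evaluates a two-out-of-three voting function both with its analytic fault-tree expression and with a Monte Carlo simulation.

```python
# Toy example with assumed numbers; not GRS/AnTeS code.
import numpy as np

p = 1e-2                                     # assumed failure probability per channel
p_fault_tree = 3 * p**2 * (1 - p) + p**3     # analytic: at least 2 of 3 channels fail

rng = np.random.default_rng(4)
n = 1_000_000
fails = rng.random((n, 3)) < p               # simulate three redundant channels
p_monte_carlo = np.mean(fails.sum(axis=1) >= 2)

print(f"fault tree: {p_fault_tree:.3e}   Monte Carlo: {p_monte_carlo:.3e}")
```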

15:45
Dynamic reliability and safety modeling of a molten salt reactor using SysML v2
PRESENTER: Luigui Salazar

ABSTRACT. Molten Salt Reactors (MSRs) are promising next generation nuclear power plants, with their success hinging on rigorous assessments of dependability and safety. This study presents an innovative approach using Systems Modelling Language v2 (SysML v2), a Model-Based Systems Engineering (MBSE) language, to model and analyze MSRs with a focus on comprehensive safety and reliability assessments. The SysML v2 framework facilitates detailed modeling of the MSR's structural and behavioral components—including the reactor core, heat exchangers, coolant loops, and containment systems—while effectively capturing their interdependencies and interactions under both normal and fault conditions. By incorporating failure data, such as failure rates and fault tolerance mechanisms, directly into the SysML v2 model, a holistic representation of the MSR system is achieved.

To enhance the safety analysis, this study integrates the SAFEST toolchain, a state-of-the-art probabilistic risk assessment tool, with the SysML v2 model. SAFEST enables the automatic generation and evaluation of dynamic fault trees (DFTs) based on the SysML model, covering various operational and failure scenarios. This approach facilitates the iterative refinement of safety artifacts, allowing for a seamless transition between model-based systems engineering and model-based safety analysis. Each iteration yields precise dependability metrics, such as system failure probabilities and component importance measures, which streamline the identification of critical components and potential failure pathways.

The outcomes of this integrated methodology provide quantitative insights into the MSR’s safety profile by evaluating the effectiveness of redundancy strategies, pinpointing vulnerable subsystems, and recommending design improvements to enhance reactor reliability. Moreover, the dynamic scenario analysis enabled by this approach supports the evaluation of complex failure sequences and emergency protocols, ensuring a robust assessment of MSR safety under diverse conditions. Ultimately, this method contributes to the development of safer and more reliable MSR designs, thus supporting their deployment as a sustainable and secure energy solution.

14:45-16:00 Session 10N: WORKSHOP: International Workshop on Energy Transition to Net Zero: Reliability, Risk, and Resilience (ETZE R3) II Regulatory framework for energy transition
Location: N
14:45
Assessing the potential of risk-based regulations for emerging maritime technologies
PRESENTER: Kristin Kerem

ABSTRACT. The maritime regulatory landscape has traditionally been reactive, with regulations often shaped in response to past incidents. However, the rise of advanced technologies, particularly unmanned vessels, and the growing imperative for decarbonization demand a shift toward a proactive, risk-based framework – one that defines the safety levels that must be upheld and compares new technologies to existing ones on the basis of risk analysis. This paper explores the benefits and challenges of adopting such risk-based legislation within the evolving field of maritime technology, with a focus on the operation of unmanned vessels in the Gulf of Finland. Through a comparative analysis of current regulatory regimes and technological advancements, this study demonstrates how risk-based laws can improve safety, foster innovation and contribute to decarbonization goals. The role of various stakeholders in the implementation and enforcement of these regulations is also examined, highlighting the need for a balanced approach that supports technological integration without stifling innovation. The results of the study show that a risk-based regulatory approach could improve the safety, sustainability and efficiency of emerging maritime technologies while fostering innovation.

15:00
Past Regulation Experiences vs. Contemporary Challenges in Resource-Based Industries and Technologies?
PRESENTER: Janne Hagen

ABSTRACT. Regulation of waterfalls, water resources and hydro-electric power production has evolved over 100 years to reduce negative externalities and secure future generations’ control of these important resources. The heritage of this tradition was also considered when the regulations on petroleum exploration were established over five decades later. As a result, a holistic regulatory system was established for the offshore petroleum industry concerning ownership, control and management of development, along with comprehensive safety regulations. This framework has continuously developed based on experiences from industry developments and accidents. In contrast, industries like fish farming and wind farms today evolve and expand without a comparable holistic regulatory scheme for control, security and safety. The lack of proper regulation has created security and safety challenges such as lack of control, loss of jurisdiction, and negative externalities on the workforce and the environment. How did our predecessors regulate resource-based industries, and to what extent have we achieved similar regulatory control over emerging industries today? What insights do past regulatory experiences offer for addressing today’s challenges? How do global threats to power supply and infrastructure, evolving security concerns, and digitalization-related vulnerabilities impact regulatory frameworks? This paper is inspired by four expert seminars that examined these issues. It introduces a timeline of key regulatory developments, explores the regulatory development process, and discusses potential barriers to effective regulation. Given that regulatory compliance is enforced through auditing and sanctions, this paper also examines the evolution of regulation and governmental supervision, particularly the role of internal control auditing. Additionally, it highlights the growing demand for advancements in auditing methodologies. To navigate an increasingly complex future, we propose a research program focused on safety and security regulation for resource-based industries. This initiative aims to serve as a foundational knowledge base for a long-awaited White Paper on regulation and governmental supervision.

15:15
Intersecting Geopolitics, Energy Security, and Climate Change Adaptation Policies: Norway’s Oil and Gas Dilemma
PRESENTER: Claudia Morsut

ABSTRACT. This paper explores the intersection between geopolitics and national climate change adaptation policies by addressing to what extent the securitisation of capitalist global enterprises due to geopolitical factors impacts adaptation policies. The paper considers geopolitical shifts from 2015 to 2024, including the war in Ukraine, the sabotage of the Nord Stream pipeline, and other significant events that have reshaped the geopolitical landscape in Europe. Drawing on Karen Lund Petersen’s recent discussion of the securitisation of capitalist global enterprises (Lund Petersen 2023), this paper examines the oil and gas industry in Norway and its influence on Norwegian climate change adaptation policies. The paper delves into the meaning of securitisation within the context of Norway’s oil and gas industry, which has become a matter of societal safety and security, a process that can skew policy priorities and potentially slow down the implementation of adaptation measures. For instance, financial resources may be diverted towards increasing the security of the oil and gas sector, while political decisions may underprioritize adaptation. Norway faces a significant dilemma: how to reconcile its ambitious commitment to becoming a low-emission society by 2050 with its substantially increased role as an oil and gas producer and supplier since the war in Ukraine started. By examining these dynamics, the paper aims to provide a nuanced understanding of the challenges and implications for climate change adaptation in this geopolitical context. In particular, the paper seeks to shed light on how Norwegian policymakers and industrial agents integrate climate change adaptation into national and international security frameworks.

16:30-17:45 Session 11A: Risk analysis and quality
Location: A
16:30
Defining and Assessing Risk Analysis Quality: Insights from Applications of the SRA Risk Analysis Quality Test
PRESENTER: Irem Dikmen

ABSTRACT. The Applied Risk Management Specialty Group of the Society for Risk Analysis (SRA) identified a need to define, characterize, and improve risk analysis quality, specifically its quality in supporting risk management. To address that need, they developed the Risk Analysis Quality Test (RAQT), a list of 76 questions, each asking whether a risk analysis satisfies an aspect of risk analysis quality. The RAQT is both a definition of risk analysis quality and a means of spotting shortfalls, providing a language with which to describe and then address possible shortfalls. In this presentation, we will demonstrate how the RAQT can improve risk analysis quality on several levels. We will describe applications of the RAQT at three levels: 1) to evaluate risk analysis reporting within a large project; 2) to critique and suggest improvements for describing risk; and 3) as a basis for orienting an organizational culture around awareness and management of risk. Finally, insights arising from the three applications will be listed and their practical implications discussed.

16:45
Does the 2024 version of the NORSOK Z-013 standard bridge the gap between risk assessment practice and contemporary risk science knowledge?

ABSTRACT. The relationship between theory and practice is thoroughly discussed in the scientific literature, including the risk field. Previous research has revealed that there currently exists a considerable gap between applied risk assessment and management and contemporary risk science knowledge. In this paper, we take a closer look at the NORSOK Z-013 (2024), which is a recently updated standard aimed primarily at the Norwegian petroleum industry to meet the regulatory requirements for risk- and emergency preparedness assessments. Since the previous version of the standard, there has been a considerable shift in how the Norwegian petroleum authorities define and understand the risk concept, from a traditional probability-based perspective to a broader perspective that highlights uncertainty as a main component of risk. It is thus interesting to scrutinize and discuss some of the new elements related to risk assessment in the revised NORSOK Z-013 standard compared to its previous version. The aim of the analysis is to study how the uncertainty-based perspective of risk is reflected in the changes made and to assess the alignment of the new version with fundamental risk science knowledge. Finally, we offer some suggestions on how developing risk assessment standards and frameworks can further contribute to bridging the gap between theory and practice.

17:00
Towards practical definitions of quality of maritime risk analyses during procurement processes

ABSTRACT. In the maritime context, national authorities and other actors regularly procure risk analyses from external providers. In the public sector this requires drafting and publishing formal calls for tenders and evaluating their outcomes. In such procurement processes, the quality of the received proposals is typically highlighted as a key criterion when deciding on the winning bid, alongside other features such as price and the availability of sufficient personnel and other resources. This implies estimating the quality of a risk analysis before it is carried out. As this is naturally a challenging task, the quality criteria of risk analyses are commonly simplified one way or another, often involving the perceived quality of previously produced studies or simply relying on the provider’s overall reputation. This might be convenient in practical situations where a provider must be selected under time pressure. However, it may present a missed opportunity to ensure best value for money and, in the bigger picture, to raise the standard of commissioned risk studies and the field at large. Our contribution builds on the SRA Risk Analysis Quality Test, with a specific focus on which tests could be relevant for the risk analysis tendering stage. Based on an initial review by the authors, we propose two lists of key criteria for this purpose: one for drafting calls for proposals and another for evaluating them. Aimed primarily at initiating a focus on this aspect of risk management, the initial lists will be further developed in future work through interviews and workshops with potential end-users.

17:15
Assessment of the Trustworthiness of Grey-Box Models for Condition Monitoring of Industrial Components and Systems

ABSTRACT. In this paper we propose a method to assess the trustworthiness of Grey-Box (GB) models for Condition Monitoring (CM) of industrial components and systems. We consider a GB architecture that leverages the strengths of physics-based White Box (WB) and data-driven Black Box (BB) models by using the BB to correct the WB prediction. Specifically, the BB receives in input the measured signals and the output of the WB. We define trustworthiness in relation to model accuracy and consistency with the laws of physics. The latter is evaluated using Shapley Additive exPlanations (SHAP) to quantify the extent of reliance of the BB on the WB output. The rationale is that if the BB is sensitive to the WB output, the GB is trustworthy because it indirectly embeds the laws of physics. The effectiveness of the proposed method is demonstrated on a synthetic case study which mimics the condition monitoring of an industrial system.
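To make the idea concrete, the following sketch uses invented synthetic signals and a generic tree-based regressor standing in for the black box (the paper's actual models and data are not reproduced); the share of mean absolute SHAP attribution assigned to the white-box prediction is taken as an indicator of how strongly the black box relies on it.

```python
# Hedged sketch under stated assumptions; not the paper's implementation.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 500
signals = rng.normal(size=(n, 3))                      # measured signals
wb_pred = signals[:, 0] + 0.2 * signals[:, 1]          # white-box (physics-based) estimate
truth = wb_pred + 0.3 * np.tanh(signals[:, 2])         # true condition indicator

X_bb = np.column_stack([signals, wb_pred])             # BB input: signals + WB output
bb = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_bb, truth)

explainer = shap.TreeExplainer(bb)
shap_values = explainer.shap_values(X_bb)
reliance_on_wb = np.abs(shap_values[:, -1]).mean()     # mean |SHAP| of the WB feature
total = np.abs(shap_values).mean(axis=0).sum()
print("share of attribution on the white-box output:", round(reliance_on_wb / total, 3))
```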

16:30-17:45 Session 11B: Risk communication issues II
Location: B
16:30
Risk communication for natural hazards: constructs and evaluation of effectiveness in scholarship and practice
PRESENTER: Olivia Jensen

ABSTRACT. This paper responds to calls to strengthen the link between risk communication scholarship and risk communication practice (Balog-Way et al, 2020; Boholm 2019) through a study of risk communication evaluation indicators in the domain of natural hazards and climate change-related risks. While risk communication has many goals, this study focuses on the objective of enabling individuals to take risk-informed decisions, a capability which we refer to as having ‘risk know-how’ (Brown et al 2021). Risk-informed decisions involve making judgements based on factual evidence and contributing to public discussion about policies and regulation in domains in which scientific and technical information is characterised by uncertainty (EFSA 2017). Through a review of scholarly literature, this study identifies risk communication effectiveness constructs (audience attention, understanding, capacity to take a decision etc.); their operationalization as specific indicators in the domain of natural hazards and climate change-related risks; and their application through evaluation metrics. We compare the approaches used in the academic literature with evaluation approaches and indicators used by international, regional or national organizations responsible for communication of natural hazard risks to the public, drawing on examples from North America, Europe and Australia. We find a gap between the set of indicators which would be appropriate and adequate to measure risk-informed decision-making, and the indicators and metrics actually used by organisations to monitor the effectiveness of their risk communication activities. This suggests that despite good intentions, organisations are missing opportunities to evaluate and improve the design of risk communications.

16:45
Flood Risk Perception and Preparedness Intention: The Critical Role of Trust and Risk Communication in Flood Risk Governance – Evidence from Wuhan, China
PRESENTER: Lijie Dong

ABSTRACT. Flood risk perception and preparedness intention are critical intrinsic drivers of private flood protection behaviors. These individual-level perceptions are often influenced by external environmental factors. However, there is limited research on how these influences operate within the context of flood risk governance. This study aims to bridge this gap by exploring the factors and pathways that shape individuals' flood risk perception and their intention to prepare for floods. Specifically, the study examines key variables including flood risk perception, flood preparedness intention, trust in public flood protection capacity, and risk communication. The study proposes hypotheses and constructs a theoretical model of the interaction mechanisms among these variables, which is empirically tested using survey data from 1,254 residents in Wuhan, China. Path analysis is employed to analyze the interactive mechanisms among the variables. Results reveal that: 1) flood risk perception has a significant positive influence on the individual’s preparedness intention; 2) contrary to expectations, trust has a positive effect on flood risk perception, which partly mediates the effect of trust on preparedness intention; and 3) risk communication has a direct positive effect on the individual’s preparedness intention rather than on risk perception, and a strong positive effect on trust. In sum, this study reveals the nuanced role of risk communication and trust in shaping citizens’ understanding of floods, providing valuable evidence for decision-making in flood risk governance.

17:00
A Position on Rethinking HCI Practices in Dynamic Consent: Balancing Privacy, Trust, Safety, and Risk Communication for Enhanced System Reliability

ABSTRACT. As systems in Human-Computer Interaction (HCI) and the Internet of Things (IoT) evolve, traditional consent models are increasingly inadequate to address the challenges of privacy, trust, safety, and risk management. Static, one-time consent mechanisms fail to keep pace with the dynamic nature of data-driven interactions and autonomous devices. This paper explores the potential of dynamic consent as a conceptual framework for enhancing user control, system reliability, and risk management in HCI. Dynamic consent provides a flexible, adaptive approach to consent that allows users to adjust their preferences in real time, promoting transparency, privacy, and trust while reducing consent fatigue. This is crucial in IoT contexts where systems operate autonomously, collecting data passively with minimal user interaction. Furthermore, the cross-border flow of data presents complexities in consent management, as consent across digital borders must respect different jurisdictional regulations while protecting individual rights. This paper explores the interplay between dynamic consent and key concepts such as digital sovereignty, where individuals maintain control over their digital identity, and consent fatigue, which erodes user engagement and trust. As data becomes a valuable commodity within data markets and digital value chains, data reuse requires flexible consent models that ensure transparency. The paper argues that while static consent models undermine privacy and trust, dynamic consent offers a more flexible and adaptive approach that allows users to adjust their consent preferences in real time, reducing consent fatigue and improving system transparency. Instead of proposing specific solutions, this paper advocates for rethinking HCI practices within the context of dynamic consent, particularly in the area of risk communication. To that end, we address the following Research Question (RQ): How can dynamic consent be framed to balance privacy, trust, safety, and risk management in Human-Computer Interaction?

17:15
Expanding regulatory scope to address climate action
PRESENTER: Robyn Wilson

ABSTRACT. Persistent risk-mitigating behaviors must be understood and encouraged to promote a more sustainable and resilient future. Using climate action as an exemplar, even if individuals believe that climate change is anthropogenic and must be addressed, there is typically a gap between their risk perception and behavior. Much research focused on overcoming those gaps has tried to bring future impacts and outcomes closer to the individual, supposedly reducing psychological distance. These approaches yield mixed results and are often not easily extrapolated to new situations. Instead of trying to bring distant outcomes closer to individuals in the moment, we focus on expanding regulatory scope: widening the range of considerations that people account for in their decisions and behaviors to include further-off outcomes and possibilities. Through this framework, we assess the relationship between regulatory scope and existing risk mitigation behavior and identify how regulatory scope manipulations influence energization and future intentions. In one series of studies, we test how expanded scope for vegetable-based diets increases energization and intention to reduce meat consumption. In another series of studies, we assess the relationship between expansive regulatory scope and a range of pro-environmental behaviors. We seek to understand whether individuals who have broader scope have engaged in, and are more likely to engage in, sustainable behaviors, especially behaviors that address more expansive concerns. Through a combination of cross-sectional surveys and experiments, we expect the results to provide guidance for a new class of mindset-based interventions. These interventions would encourage individuals to transcend the “me in the here and now” in decision-making processes, a common barrier to engaging in behaviors with short-term costs but long-term gains.

16:30-17:45 Session 11C: Reliability, Risk and Resilience of Cyber-Physical Systems
Location: C
16:30
From Classical to Advanced Risk Methods: Demonstrator for Industrial Cyber-Physical Systems
PRESENTER: Andrey Morozov

ABSTRACT. Modern smart factories, structured as industrial Cyber-Physical Systems, exhibit high levels of reconfigurability and heterogeneity. However, assessing risks in these dynamic environments poses significant challenges. This paper presents a demonstrator designed to simulate modern production lines, illustrating how variations in system configurations impact the balance between production costs and system safety. The demonstrator dynamically showcases both traditional and modern risk assessment methods, including Fault Trees, Stochastic Petri Nets, and Probabilistic Model Checking. It highlights the limitations of classical methods in capturing dynamic risks and the strengths of advanced techniques in addressing complex error chains. Based on these insights, we propose enhancements to current risk models, advocating for a hybrid approach that integrates both traditional and advanced techniques to meet the demands of next-generation industrial systems.

16:45
Cyber-physical studies for Smart Grid resilience
PRESENTER: Irina Oleinikova

ABSTRACT. As renewables are integrated, power system infrastructure is becoming more digitally connected to ensure a safer, more efficient, and decarbonized future. The challenge is that the infrastructure becomes increasingly vulnerable the more connected it becomes. Geopolitical tensions with increased risks of cyber-attacks on critical infrastructure, and the importance of security of energy supply, are shaping the power system this decade. From the smart grid perspective, energy professionals are ready to offer different solutions to keep the lights on towards a reliable and resilient future for society. The role of grid critical infrastructure, the required data exchange and flexibility utilization, including TSO-DSO coordination, will be discussed from the power system operation and reliability point of view. There is a need for sufficient flexibility for system balancing and congestion management, and a need for resilience in facing emergency events, while keeping costs affordable. Traditional sources of flexibility are being reduced with the shift away from fossil fuels. Meanwhile, in some countries balancing comes primarily from hydropower, but this might not be sufficient in the future because of increased demand as well as climate impacts on hydropower assets. Flexibility has traditionally been utilized in the operation stage, to balance power flows, solve congestion and maintain stability; now, the next level of flexibility can be defined as its full deployment and utilization from the planning stage of the power system onward, integrated into procedures starting from long-term planning, with corresponding market mechanisms for procuring and adequately rewarding flexibility providers and end-users. Examples of cyber-physical mechanisms for flexible, reliable and resilient smart grid utilization will be demonstrated in the paper. Key factors and barriers for flexibility utilization will also be summarized and discussed.

17:00
Systems Engineering Approach to DfR
PRESENTER: Ryan Aalund

ABSTRACT. Reliability engineering predominantly approaches product development by separating a system into individual components and working bottom-up, emphasizing hardware reliability. As systems become more complex and interconnected, especially with increasing software integration, these methods fail to capture interdependencies and integration points critical to system reliability. A Design for Reliability (DfR) framework solely focused on hardware neglects the intricate dependencies and risks arising from interactions across components.

A systems engineering approach to reliability emphasizes the entirety of the system, ensuring a comprehensive understanding of how different components—hardware, firmware, and software—function together. By examining the system holistically, this approach uncovers hidden vulnerabilities such as cross-system dependencies, cascading failures, and integration point weaknesses that compartmentalized methods overlook. In conventional reliability models, failures are often treated differently across hardware, software, and firmware without recognizing the importance of their interactions. This lack of unified analysis frequently results in missed failure modes that arise from combining components and therefore do not appear in component-level assessments. Moreover, by focusing only on individual components, organizations may fail to analyze how these components contribute to the overall system function and whether they meet the customer's operational needs. A systems approach ensures that the customer-facing outputs and functional requirements are prioritized so the final product performs reliably under real-world conditions.

This paper explores case studies in critical and emerging industries, such as aerospace, automotive automation, and IoT, to highlight the limitations of current reliability practices. It proposes a systems-oriented DfR methodology that shifts focus from isolated hardware approaches to one that accounts for system interdependencies and integration points. This framework enhances system-wide reliability by incorporating both hardware and software, alongside modeling, simulation, and cross-disciplinary collaboration, ensuring resilience and addressing customer needs in increasingly complex technological environments.

17:15
Towards Causality Graph Expansions For Local And Global Causal Assessment of Flow Network Models For Analytical System Resilience Explainability
PRESENTER: Ivo Häring

ABSTRACT. Network models of modern systems such as critical infrastructures, systems of systems, or human cyber-physical systems are key to their modelling, understanding, design, and analysis. Examples include electrical, communication, supply and transport networks, smart homes, or physical access systems. Graph, flow, or engineering-physical models by now allow the influence of disruptions of single or multiple elements at different system levels to be assessed with an increasing level of accuracy, transiency, and real-time capability. A plethora of metrics is also available to assess overall system risk, e.g., system loss or resilience metrics. The present approach employs the concept of causal graphs and their quantification to reveal the levels of dependency between nodes, which can be extended to also cover edges. This is first conducted at the level of two nodes, starting with direct causal dependency chains of first order, and is then proposed to be extended to elementary causal models for three elements: chain, fork, and immorality. To assess the degree to which two arbitrary nodes of the network are linked by a causal chain of first order, a linear dependency model between the nodes is assumed for simplicity, and its parameters are determined by assessing the effect of critical, risk- and resilience-weighted disruptions. In this way, the relevance of each elementary causal graph for the overall causal network can be ranked. If this is available for all causal building blocks, a procedure can be given for constructing the overall causal graph bottom-up while avoiding cyclic and undirected structures. The proposed approach is described stepwise, and equations are given up to causal chains. The scaling of the approach is assessed. Best local causal models as well as an overall causal model can be constructed. For an example, the causal graph is constructed and discussed using first-order causal chains.

16:30-17:45 Session 11D: Natural hazards
Location: D
16:30
Beware of Return Period Maps in Natural Hazard Loss Estimation

ABSTRACT. Two important parts of understanding the risk that natural hazards pose to communities are estimating losses from portfolios of buildings and assessing the probability of loss of critical infrastructure services at the scale of individual homes, businesses, and other facilities. The standard approach is to simulate over the possible hazard event space and then, for each simulated event, simulate the performance of buildings and infrastructure assets. From there, the infrastructure network performance is modeled, and then building-level infrastructure performance and portfolio-level losses are calculated. However, this is computationally expensive in many cases. Can event-based simulation be replaced with the use of return-period hazard maps to reduce the computational burden? This is increasingly being suggested in practice. This paper evaluates the question first from first principles and then through a simple example demonstrating the limits of replacing event-based simulation with return-period-map-based analysis. The paper shows that return-period-map-based analysis can significantly misrepresent risk and losses and should be used only with great caution, and only in situations where the radius of influence of the network is small.
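A two-site toy calculation, with invented probabilities and losses, illustrates the kind of distortion the paper warns about: adding up losses read off a 100-year return-period map at each site can differ substantially from the 100-year portfolio loss obtained by event-based simulation when events at the sites are not perfectly correlated.

```python
# Toy numeric sketch with assumed numbers; not the paper's example.
import numpy as np

rng = np.random.default_rng(5)
years = 200_000
loss_per_site = 1.0

# event-based simulation: two sites, each with a 1% annual chance of a damaging event,
# occurring independently of each other
hits = rng.random((years, 2)) < 0.01
annual_loss = hits.sum(axis=1) * loss_per_site
loss_100yr_event_based = np.quantile(annual_loss, 0.99)   # 100-year portfolio loss

# return-period-map shortcut: take the 100-year hazard at each site and add the losses
loss_100yr_map_based = 2 * loss_per_site

print("event-based 100-year loss:", loss_100yr_event_based)
print("return-period-map loss:   ", loss_100yr_map_based)
```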

16:45
Preparedness of SEVESO establishment for NATECH accident - lessons learned from floods in 2024

ABSTRACT. Natural hazards can cause Natech accidents by triggering dangerous events such as fires, explosions, and toxic releases in industry. Natech accidents can result in significant losses of life, damage to the environment, and damage to property. A Natech accident occurring in industry can be classified as a major accident according to the European Seveso Directive (European Commission, 2013). Therefore, it is extremely important to pay attention to the risk management of Natech accidents. With the expected increase in the intensity and frequency of natural events due to climate change, Natech risks present an increasing concern in risk prevention and risk management at the local, national and international levels (European Commission, 2020). This paper focuses on a selected natural hazard, floods, which directly endangered Seveso chemical plants in the Czech Republic in 2024. The floods began on Friday, September 13, as a result of the collision of two frontal systems over the Central European area, which brought persistent rain. They affected most of the territory of the Czech Republic, its neighbouring countries, as well as Croatia and Romania. A total of 262 watercourses reached one of the flood degrees (levels 1 to 3). More than 55 measuring points recorded a 100-year water level. The experiences and lessons learned from the floods that occurred in 1997 and 2002 on the territory of the Czech Republic contributed significantly to keeping the losses to the surroundings from uncontrolled releases of chemical substances minor. This paper discusses the occurrence of Natech risk from floods in the Czech Republic from the perspective of Seveso establishments, gives examples from the past and proposes an approach to prevention, preparedness and mitigation. It concludes with recommendations on how to prepare for floods that may recur in the future.

17:00
Multi-hazard vulnerability dynamics to floods and droughts. An enhanced Impact Chain and Vulnerability Matrix approach

ABSTRACT. In recent years, Europe has experienced extreme hydro-meteorological hazards in quick succession. The powerful floods of 2020-2021 resulted in human casualties and widespread damage across several European countries, while the 2022 drought event marked a milestone in drought risk management at the continental level. Considering the increasing frequency and severity of hydro-meteorological hazards, along with their growing tendency to occur back-to-back, new conceptual frameworks and tools are needed to study the interplay between vulnerability, impacts, and mitigation measures within multi-hazard contexts. This study aims to explore the dynamics of vulnerability in the multi-hazard context of the flood events and drought that impacted Romania in 2020-2022. Vulnerability dynamics is analysed relying on the conceptual framework of augmented vulnerability and through the operational frameworks of enhanced Impact Chains and Vulnerability Matrices, all developed by the authors. The Impact Chain draws on over 220 sources, such as scientific papers, legislative documents, communications from the European Commission, official reports, hydro-meteorological datasets, news reports, and websites. Vulnerability Matrices are developed for the flood hazards in 2020-2022 and the drought hazard in 2022, allowing for comparison in terms of vulnerability type, scale, augmentation by impacts and/or misguided adaptation options, and tackling by mitigation measures. Our findings support the enhancement of multi-risk mitigation strategies, providing key actionable insights for reducing the multi-hazard impacts in countries located at the spatial and temporal intersections of extreme hydro-meteorological hazards. This research puts forth a fundamental thesis: understanding and addressing vulnerability dynamics is crucial in determining whether future compounded impacts remain manageable or contribute to disasters.

16:30-17:45 Session 11E: Quantum Methods in Risk and Reliability
Location: E
16:30
Evaluating Quantum Algorithms: Closing the Gap between Theory and Practice

ABSTRACT. Motivated by its unique capabilities, quantum computation has gained significant attention over the last decade, with numerous models and algorithms proposed for dealing with engineering challenges. The field of risk and reliability has also seen a growing interest in this area, with studies exploring Quantum Machine Learning for remaining useful life prediction, Quantum Optimization for condition monitoring of civil structures, and Quantum Inference for enhancing Bayesian network models, to name a few.

However, a common limitation across these works is the lack of thorough comparisons between the proposed quantum algorithms and their state-of-the-art classical counterparts. This is a critical gap that must be addressed not only to reliably evaluate the current state of the field, but also to guide its development toward the most promising paths for achieving a quantum advantage.

There are two key challenges in addressing this gap. First, quantum computation operates on fundamentally different principles than traditional computing, making direct comparisons—such as using the number of iterations—often infeasible. Second, large-scale, error-corrected quantum computers are not yet available, so machine-to-machine comparisons are also not yet possible.

In this paper, we propose a novel methodology to evaluate quantum algorithms against their classical counterparts. Our technique is based on a simple observation: quantum computers do not extend the set of operations that a classical computer can perform; instead, they have the potential to make them more efficient. As such, when large problems are considered, a quantum algorithm ought to exhibit a shorter runtime than its classical counterpart in order to surpass it in any meaningful sense.

We validate the proposed methodology by applying it to two of the most relevant quantum algorithms within the field of risk and reliability: Grover's algorithm, for quantum inference, and the Quantum Approximate Optimization Algorithm (QAOA), for combinatorial optimization.
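
The runtime-based criterion can be illustrated for Grover-type search (this is our restatement for orientation, not the authors' formulation): with N the search-space size, c_q the wall-clock cost of one quantum query (including any error-correction overhead) and c_c the cost of one classical evaluation, a quantum advantage in the runtime sense requires

\[
T_Q(N) \approx c_q\,\frac{\pi}{4}\sqrt{N} \;<\; T_C(N) \approx c_c\,\frac{N}{2}
\quad\Longleftrightarrow\quad
N \;>\; \left(\frac{\pi\,c_q}{2\,c_c}\right)^{2},
\]

so the quadratic query advantage only translates into a practical advantage once problem sizes exceed a hardware-dependent crossover point.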

16:45
Incorporating Continuous Distributions in Quantum Bayesian Networks for Reliability Assessment

ABSTRACT. Operational demands in industries, such as the energy sector, underscore the critical need for reliable equipment capable of withstanding long-term planning and unpredictable factors. Reliability assessment is important for maintaining productivity and optimizing maintenance strategies, especially in scenarios where data limitations challenge traditional assessment methods. In this context, Bayesian inference has emerged as a dynamic tool to update reliability estimates using data from various hierarchical levels. However, conventional simulation techniques may lack computational efficiency when dealing with the reliability estimation of complex systems, creating opportunities to explore alternative approaches such as quantum computation techniques. Quantum Computing leverages principles of quantum mechanics, such as superposition and entanglement, to try to address these computational challenges more effectively. Previous works have applied quantum Bayesian networks using amplitude amplification methods to the context of risk and reliability, focusing on nodes representing discrete probability distributions. This research aims to enhance this approach by incorporating continuous marginal and conditional probabilities into the analysis, which is particularly relevant for systems that rely on these distributions to model events. We explore the encoding of continuous probability distributions within the amplitude amplification framework, aiming to improve the efficiency and precision of probabilistic inference. Additionally, we apply this methodology to Bayesian networks, benchmarking the performance of quantum methods against classical simulation techniques like Monte Carlo to identify scenarios where quantum techniques demonstrate clear advantages.
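
As a minimal sketch of the encoding step described above (the grid size, choice of distribution, and function names are our own illustrative assumptions, not the authors' implementation), a continuous marginal can be discretized onto 2^n grid points and mapped onto the amplitudes of an n-qubit state, a_i = sqrt(p_i), which amplitude amplification then operates on:

```python
import numpy as np
from scipy.stats import weibull_min  # hypothetical choice of continuous lifetime distribution

def amplitude_encode(pdf, lo, hi, n_qubits):
    """Discretize a continuous pdf on [lo, hi] into 2**n_qubits bins and
    return the normalized amplitude vector a_i = sqrt(p_i)."""
    n_bins = 2 ** n_qubits
    edges = np.linspace(lo, hi, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    probs = pdf(centers) * np.diff(edges)   # bin probabilities (midpoint rule)
    probs /= probs.sum()                    # renormalize after truncating the support
    return centers, np.sqrt(probs)          # amplitudes of the n-qubit state

# Example: encode a Weibull time-to-failure marginal on 5 qubits (32 grid points)
centers, amps = amplitude_encode(weibull_min(c=2.0, scale=1000.0).pdf, 0.0, 4000.0, 5)
assert np.isclose(np.sum(amps**2), 1.0)     # the state vector is properly normalized
```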

17:00
Assessing Steel Embrittlement in Hydrogen Systems with Quantum Machine Learning

ABSTRACT. Hydrogen, as a carbon-free fuel, is an attractive and promising energy source with diverse industrial applications. Its efficient storage and transportation capabilities, enabling the use of pipelines, reinforce its role in the transition to cleaner energy. However, a critical challenge in hydrogen infrastructure is hydrogen embrittlement (HE) — a phenomenon in which hydrogen interacts with a steel's microstructure, reducing its mechanical strength and ductility. Over time, this degradation can lead to crack formation and structural failures, posing significant risks to hydrogen-based systems. Experimental factors such as pressure and strain rate play a crucial role in HE, forming the foundation for identifying steels better suited for hydrogen supply systems. Predicting and mitigating HE is essential to ensuring the safety and reliability of hydrogen infrastructure. In this context, Machine Learning (ML) offers powerful tools for analyzing tensile test data, incorporating chemical composition, environmental factors, and mechanical properties. ML algorithms can classify steels based on their susceptibility to HE, aiding in the development of more resistant materials and enhancing structural integrity in hydrogen applications. Recent advances have also explored Quantum Computing to further improve ML capabilities. Quantum Machine Learning (QML) exploits quantum properties with the potential to improve data processing, particularly in classification tasks. This study applies Quantum Neural Networks to classify steels based on their susceptibility to embrittlement, using a curated dataset of 158 tensile test data points from public reports of tests conducted in various hydrogen environments. Additionally, it explores the potential of parameterized quantum circuits and preprocessing techniques, such as data augmentation, to enhance classification performance. The findings aim to provide insights into the effectiveness of QML for material classification, contributing to the development of more accurate predictive models for HE and improving the safety of hydrogen infrastructure.

17:15
A quantum-physical approach to modelling the failure rate of a two-state component
PRESENTER: Elena Bylski

ABSTRACT. The objective of this approach is to explore the potential of quantum physics and quantum computing in terms of reliability modelling. A first step is to develop a methodology to predict the failure rate of a component by a quantum-physical approach. Two well-known states are assigned to the component: functioning and faulty.

Initially, a distinction is made from the non-Hermitian quantum approach of Lin, Zhu and Chen (2019), which describes the dynamics of state development without reference to a “certain system”. Our approach differs in its objective of developing a methodology that enables the prediction of the transition rate into a failed state – or, more simply, the prediction of a failure rate. The application here refers to a dedicated component, a silicon carbide diode.

The following section provides a brief summary of the quantum physics fundamentals required to calculate the quantum-physical failure rate. The section closes with an explanation of the free electron gas model.

The methodology for predicting the transition rate includes the following steps: (1) The quantum numbers of a valence electron bound in the semiconductor material are determined. (2) Based on the free electron gas model, the wave function of the valence electron, the density of states, and the energy of the final state are described. (3) A disturbance is defined through which the valence electron reaches the final state. (4) Fermi's golden rule is applied to the obtained results to calculate the transition rate per time period. (5) The formation of the reciprocal transition rate yields the mean lifetime. (6) If the value of the perturbation is in the permissible range, a statement about the probability of a state change and the mean lifetime of the state can be made based on quantum physics.
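
Steps (4) and (5) can be stated compactly using the standard form of Fermi's golden rule (a textbook relation, restated here for orientation rather than quoted from the paper): with \(\hat{H}'\) the perturbation defined in step (3) and \(\rho(E_f)\) the density of final states from step (2),

\[
\lambda \;=\; \Gamma_{i \to f} \;=\; \frac{2\pi}{\hbar}\,\bigl|\langle f \,|\, \hat{H}' \,|\, i \rangle\bigr|^{2}\,\rho(E_f),
\qquad
\tau \;=\; \frac{1}{\lambda},
\]

where \(\lambda\) is interpreted as the quantum-physically predicted failure (transition) rate and \(\tau\) as the mean lifetime of the functioning state.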

The contribution closes with an outlook on further potential developments.

17:30
Integrating Quantum Computing and Machine Learning for Abnormal Events Detection in Offshore Oil and Gas Operations

ABSTRACT. In the oil and gas industry, Abnormal Event Management (AEM) plays a crucial role in ensuring operational safety and efficiency by detecting abnormal events, diagnosing their causes, and taking corrective actions to return systems to a stable state. Given the high risks and complexities associated with offshore oil production, particularly in harsh environments, AEM becomes indispensable. This study focuses on a Brazilian company's offshore oil platforms using the dataset entitled “3W”, which provides realistic operational data from key monitoring points, such as pressure and temperature at various locations. These variables are critical for identifying early signs of failure or malfunction. Different types of undesirable events can be analyzed, including abrupt increases in BSW (Basic Sediment and Water), spurious closure of the DHSV (Downhole Safety Valve), severe slugging, flow instability, rapid productivity loss, restrictions in the PCK, scaling, and hydrate formation in the production line. These events can have severe implications for production efficiency and safety, making their early detection and management vital. In addition to traditional AEM strategies, this work explores the potential of quantum computing (QC) combined with machine learning (ML) techniques to monitor and predict failures in offshore wells. QC has the potential to advance data processing capabilities, which is especially valuable given the large volumes of data in this sector. By leveraging quantum machine learning (QML), specifically Quantum Convolutional Neural Networks (QCNN), the study aims to classify operational data from the oil and gas industry and compare the performance of QCNN with other QML methods, such as Quantum Neural Networks, offering new insights into improving predictive maintenance and failure detection.

17:45
Optimization of Green Hydrogen Distribution Network Using Quantum Algorithms

ABSTRACT. The growing demand for clean and efficient energy sources has driven the use of green hydrogen (H₂V) as a viable energy alternative, requiring highly reliable distribution networks. However, due to the various factors involved in the distribution of H₂V, new optimization methods have been explored, including quantum approaches. Quantum optimization methods utilize principles of quantum mechanics to solve complex combinatorial optimization problems. This work explores the optimization of the location of H₂V production facilities within a distribution network using quantum algorithms, specifically the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE). The approach includes simulations that aim not only to optimize the location of the facilities but also to test scenarios in which some of these facilities are removed, in order to assess the impact and determine the new optimal network configuration. The results demonstrate the potential of these techniques to solve combinatorial optimization problems, highlighting the promising role of quantum computing in addressing the challenges faced by energy companies in the distribution of green hydrogen.
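
To make the formulation concrete, the sketch below (a toy instance with made-up distances, penalty weights and variable names; it is not the study's model) casts a small facility-location problem as a QUBO matrix, the input format that QAOA and VQE solvers consume. Removing a candidate site then corresponds to deleting its variables and re-optimizing.

```python
import numpy as np

# Toy instance: 4 demand nodes, 3 candidate H2V production sites, open exactly p = 2
d = np.array([[1.0, 3.0, 4.0],
              [2.0, 1.0, 3.0],
              [4.0, 2.0, 1.0],
              [3.0, 4.0, 2.0]])           # d[i, j]: transport cost from demand node i to site j
n_c, n_f, p = d.shape[0], d.shape[1], 2
A, B, C = 10.0, 10.0, 10.0                # penalty weights; must dominate the transport costs

n_vars = n_c * n_f + n_f                  # assignment variables x_ij first, then opening variables y_j

def xv(i, j):                             # index of x_ij (demand node i served by site j)
    return i * n_f + j

def yv(j):                                # index of y_j (site j is opened)
    return n_c * n_f + j

Q = np.zeros((n_vars, n_vars))

# Transport cost of each assignment
for i in range(n_c):
    for j in range(n_f):
        Q[xv(i, j), xv(i, j)] += d[i, j]

# A * (1 - sum_j x_ij)^2: every demand node is assigned to exactly one site
for i in range(n_c):
    for j in range(n_f):
        Q[xv(i, j), xv(i, j)] += -A       # linear part (-2A + A) * x_ij; constants dropped
        for k in range(j + 1, n_f):
            Q[xv(i, j), xv(i, k)] += 2 * A

# B * x_ij * (1 - y_j): only opened sites may serve demand
for i in range(n_c):
    for j in range(n_f):
        Q[xv(i, j), xv(i, j)] += B
        Q[xv(i, j), yv(j)] += -B

# C * (sum_j y_j - p)^2: open exactly p production sites
for j in range(n_f):
    Q[yv(j), yv(j)] += C * (1 - 2 * p)
    for k in range(j + 1, n_f):
        Q[yv(j), yv(k)] += 2 * C

# Q now defines min over z in {0,1}^n_vars of z^T Q z, the form handed to QAOA/VQE (or a classical solver).
```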

16:30-17:45 Session 11F: Predictive maintenance
Location: F
16:30
A prescriptive maintenance policy for degrading units in a civil aircraft context
PRESENTER: Nicola Esposito

ABSTRACT. The goal of this paper is to conceive a prescriptive maintenance policy for a deteriorating system which is submitted to an aeronautics-like exploitation cycle. We focus on an application where on-line information about the state of the system is relatively easy to obtain, but performing maintenance interventions is significantly harder. This is reflective of the aeronautics world, where sensors can provide cheap diagnostics of the health state of the system, but on-line repairs/replacements are impossible or very costly. Conversely, at some prearranged times (for example, at night in a hub airport) maintenance is easy to perform. Therefore, we propose a maintenance policy where it is assumed that the system under study is submitted to a fixed working horizon, at the end of which it is systematically replaced, regardless of its state, for a negligible cost. Within this time horizon, real-time condition monitoring is available and, if needed, replacements can be performed at any time, albeit incurring a much higher cost. Moreover, we assume that, at predetermined points in the aforementioned time horizon, dedicated inspections can be performed that reveal the true degradation state of the system (in contrast to the on-line condition monitoring, which only assesses whether the system has failed) and, based on the obtained measurements, the usage rate of the system can be changed. The rationale of this idea is that a “prescriptive action” is much less invasive than a replacement and can be performed on-line even outside of the prearranged maintenance epochs. This action will then influence the future evolution of the degradation process with the aim of optimizing the overall exploitation of the system while mitigating failure risk. The optimal maintenance policy is defined by optimizing the long-run average reward rate. The lifetime of the unit is defined by using a failure threshold model.
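
The optimization criterion mentioned above can be written in the usual long-run average form (our generic restatement; the paper's exact reward structure is not reproduced here): denoting by \(R_\pi(t)\) the cumulative net reward earned up to time \(t\) under policy \(\pi\),

\[
g(\pi) \;=\; \lim_{t \to \infty}\frac{\mathbb{E}\bigl[R_\pi(t)\bigr]}{t}
\;=\;
\frac{\mathbb{E}\bigl[\text{net reward per exploitation cycle}\bigr]}{\mathbb{E}\bigl[\text{cycle length}\bigr]},
\]

where, by the renewal-reward argument, a cycle can be taken as one fixed working horizon ending with the systematic replacement, and the sought policy is the \(\pi\) maximizing \(g(\pi)\).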

16:45
Practical application of predictive maintenance - solving the challenges

ABSTRACT. Industry is eager to harvest the potential of predictive maintenance (PdM). At the same time, only a fraction of the methods proposed in the past decade are actually applied in practice. This paper will identify the most prominent barriers to the practical application of PdM, which are mainly related to the quality, relevance and availability of data. After that, three solution directions will be presented. The first solution aims to properly match the ambition level with the available data and knowledge. The second set of solutions circumvents the lack of data by using physical models in addition to data. The third set of solutions addresses alternative ways of collecting data, including experimental test benches, experiments on fielded systems, and the use of numerical (simulation) models to generate data. Finally, the required standardized registration of (failure) data will be addressed.

17:00
Data-Driven Predictive Maintenance of Spare Parts for Smart Manufacturing
PRESENTER: Parisa Niloofar

ABSTRACT. The rise of artificial intelligence (AI) and Industry 4.0 has led to a growing interest in predictive maintenance strategies, which offer benefits like reduced downtime, increased availability, and improved efficiency. This paper explores data-driven predictive maintenance of spare parts at a smart manufacturing company, based on AI methodologies to enhance efficiency and reduce downtime. The success of a smart manufacturing company is partly attributed to its advanced production facilities, particularly the precision injection moulds used for producing detailed and consistent parts. Injection moulding involves melting plastic and injecting it into a mould under high pressure. These moulds consist of many critical spare parts, such as gate bushes and inserts, which are prone to wear and tear due to intense pressures and temperatures. Failures in these small parts can halt production and affect efficiency. This study highlights the limitations of deep learning models due to insufficient data and the need for explainability and interpretability of models due to interaction with non-technical personnel. Also, results show that tree-based classification models, particularly Random Forest and XGBoost, perform best, with test accuracies of 69.59% for gate bushes and 69.23% for centre units. This investigation advances the manufacturing company’s predictive maintenance capabilities, offering insights for future AI-driven maintenance optimization, leading to reduced costs, enhanced efficiency, and improved health and safety standards.
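
As a hedged, self-contained illustration of the tree-based classification step (the features and labels below are synthetic stand-ins, not the company's data, and "part needs replacement" is a hypothetical label), a Random Forest can be trained and its feature importances inspected, which also serves the interpretability requirement mentioned above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic feature table: one row per mould operating window (e.g. shot counts,
# pressure and temperature statistics); binary label = spare part needs replacement
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=300, max_depth=6, random_state=0)
model.fit(X_tr, y_tr)

print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("feature importances:", model.feature_importances_)  # interpretable ranking of inputs
```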

17:15
Data-Driven Maintenance Optimization for Unit with a Bivariate Deterioration Process

ABSTRACT. We consider a single-unit system with two condition indicators, i.e., the system deteriorates according to a bivariate deterioration process. We assume that the deterioration process, including its parametric form, is unknown. Instead, we assume that a condition-based maintenance policy has to be specified only based on limited condition data. This makes the approach fully data-driven.

More specifically, we assume that condition data from K runs to failure are available. For each run to failure, the two conditions are measured periodically until the system fails. We use logistic regression to estimate the failure probability for each system state (x1, x2), and based on this we determine in which system states to carry out preventive maintenance. We compare the resulting policy to the oracle policy under the assumption that the exact deterioration process is known, and analyze how the performance of our approach depends on the amount of data available.
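
A minimal sketch of this data-driven rule, assuming synthetic run-to-failure data and an illustrative risk threshold p_max (both our assumptions, not values from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for run-to-failure data: each row is an observed state (x1, x2),
# label = 1 if the system failed before the next inspection, else 0.
rng = np.random.default_rng(1)
states = rng.uniform(0, 10, size=(400, 2))
p_true = 1 / (1 + np.exp(-(0.6 * states[:, 0] + 0.4 * states[:, 1] - 6)))
labels = (rng.uniform(size=400) < p_true).astype(int)

clf = LogisticRegression().fit(states, labels)

def maintain(x1, x2, p_max=0.1):
    """Data-driven CBM rule: trigger preventive maintenance when the estimated
    probability of failing before the next inspection exceeds p_max."""
    p_fail = clf.predict_proba([[x1, x2]])[0, 1]
    return p_fail > p_max

print(maintain(2.0, 3.0), maintain(8.0, 9.0))  # low vs. high deterioration state
```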

16:30-17:45 Session 11G: Anticipatory Behavior for Safety in Human-Autonomous Agent Interactions
Location: G
16:30
Capturing variations of successes from failure events to entrench resilient performance identification

ABSTRACT. Resilient individual and organizational performance is akin to the boxing rope-a-dope technique. The purpose of adopting this technique is to lean on the ropes, increasing the distance from the attacking opponent and thereby reducing the impact of the punches. Similarly, for individual and organizational resilient performance, success is sometimes not only about avoiding a problem (anticipation), improving a current state, or minimizing loss and risk (coping), but about restoring a performance potential (adaptation). This distinction is often buried when we analyze events and consider outcomes as either failures or successes. What counts as success is contingent on the situation, and there is a need to broaden the perspective by which we assess individual and organizational outcomes as successes or failures. We propose and highlight how redefining organizational outcomes, from the dominant either-failure-or-success approach to an approach that highlights various forms of human agency in coping and adaptation, will enrich our understanding of organizational issues. We raise this discussion and illustrate our approach using incidents drawn from nuclear industry event reports, revealing successes embedded in evident failure events. This speaks to the need to recognize resilient capabilities when they are displayed during organizational events.

16:45
Assisting anomaly detection in maritime shore-based operator work
PRESENTER: Gesa Praetorius

ABSTRACT. This paper will report findings from a study focused on an approach to enhance risk identification and mitigation in maritime traffic information services. A human-centered design approach was used to develop and evaluate a concept for how to support shore-based operators in the task of providing timely and necessary information to ensure safe movements and mitigate the risk of a collision or grounding. After an initial analysis of incident reports from 2009-2019, two online workshops with Vessel Traffic Service (VTS) operators were held to explore how experts identify, classify and act upon information available within the current system. The workshops also explored what information operators would seek from an automation, as well as how and when information should be presented to them. In the concept development phase, two specific scenarios (grounding, collision) were used to discuss what an automation could potentially act upon and how the information should be communicated to the shore-based operator. In a final step, the concept was evaluated within a demonstrator workstation setup. Four shore-based operators participated in the evaluation. After the simulation the operators were first interviewed separately and then in a group. The results show that the participants were overall positive towards the automation being able to support their work. They felt that it generated meaningful suggestions for risk mitigation, which could be used to generate a wider timeframe for an operator to act. However, the operators also raised concerns about responsibility, communication needs, and the ability to understand what an automation bases its assessment on. The importance of closed-loop communication, context and expertise was also emphasized. Overall, while the support in anomaly detection and the identification of risk-mitigating measures is appreciated, a refinement of the current concept is needed for it to become meaningful in the eyes of the operator.

17:00
Balancing Automation and Human Oversight: Design Implications for Safety-Critical Systems
PRESENTER: Mina Saghafian

ABSTRACT. As advanced autonomous technologies and artificial intelligence (AI) proliferate across safety-critical sectors, they bring both unprecedented opportunities and significant challenges, often described as automation’s double-edged sword. Recent literature highlights the shift from a technocentric to a human-centric focus in designing human-automation interactive systems, aligning with the EU AI regulation's emphasis on Human Oversight. However, as Levels of Automation (LoA) and system complexity increase, maintaining human involvement, control, and the ability to intervene becomes increasingly difficult. Ensuring observability, predictability, and directability of autonomous agents is crucial to achieving transparency in design as a step towards meaningful human oversight. This paper examines the concept of human oversight, its implications for design, and its role in balancing automation’s advancements with the need for human control. Drawing from the MAS (Meaningful Human Control) project, we reviewed twelve articles that explicitly reference oversight, analyzing their contributions to human oversight design principles. Our findings reveal gaps and underscore the need for stronger integration of human oversight to ensure the safety and sustainability of advanced autonomous systems.

16:30-17:45 Session 11H: Disaster risk management
Location: H
16:30
Business Continuity Management in public sector organizations – from component, to system, to society

ABSTRACT. Business Continuity Management (BCM) is gaining increasing attention as an approach used to strengthen an organization’s capability to maintain critical functions in the face of disruptions or crises and facilitate recovery following such events. While BCM originates from the private sector, it has gained more attention also among public sector organizations. This paper presents findings from an interview study, aiming to explore the approaches used to adopt BCM in different types of Swedish public sector organizations and the challenges entailed. More specifically, the study firstly sheds light on factors affecting how BCM-related information is shared between sub-units of public sector organizations and how the units’ individual BCM practices are aggregated from sub-unit level to the level of the organization as a whole to provide an overall risk picture, influencing the possibilities of maintaining critical societal functions. In this way, the paper explores how BCM operates at multiple levels—from the individual components within an organization, to the overall system, and ultimately to its impact on society: from component, to system, to society. The results confirm that BCM is an approach that only recently has gained increased attention and use among Swedish public sector organizations. Several respondents highlight the need to institutionalize BCM within their organizations as a way of obtaining increased effectiveness. The approach to adopting BCM appears to be influenced more by the size of the organization than by the specific type of public sector organization. Challenges related to aggregating BCM-related information are primarily framed as a governance issue rather than a technical concern about data consistency. Finally, the findings of the paper show that respondents describe their current approaches to aggregation as unstructured, which suggests a need for further research aiming at exploring and testing ways to enhance their BCM practices.

16:45
Strategic foresight in disaster risk management: Practices across Europe
PRESENTER: Ingeliis Siimsen

ABSTRACT. Strategic foresight as a structured tool for exploring plausible futures can help to better anticipate and prepare for change as well as to make sense of uncertainties (OECD, n.d.; Cutter, 2013). However, strategic foresight has not been implemented to its full potential in the field of disaster risk management as the conceptualisation of disaster risk is often probabilistic and past-oriented (Riddell et al., 2020).

Our study aims to create a more comprehensive understanding of the uses of strategic foresight by exploring the existing practices and lessons learned across different European disaster risk management systems. We have done this by conducting semi-structured expert interviews with representatives from different EU member states, holding an international webinar with risk assessment and foresight experts in January 2024, and carrying out desk research on official reports and documents concerning strategic foresight and disaster risk. The interviewees included representatives from Finland, Ireland, Denmark, Luxembourg and Romania who work for either ministries or government agencies responsible for disaster risk management and civil protection.

Our results indicate that the existing practices of using strategic foresight in disaster risk management in European countries often lack a clear methodological foundation as foresight and risk assessment methods are often blended. Amongst the most commonly used methods are scenario-building and horizon scanning. One of the main gaps highlighted by participants is the lack of resources to carry out extensive foresight processes. Interdisciplinarity and cooperation between different governmental as well as non-governmental agencies are seen as factors that support the implementation of foresight. Our study maps the existing foresight practices and provides recommendations for disaster managers going forward. The lessons learned from this study could be beneficial to various institutions responsible for disaster management in Europe and beyond that are planning to implement foresight methods in their work.

17:00
Factors Influencing Perceived Disaster Management Efficacy in Saudi Public Hospitals
PRESENTER: Shahad Alshehri

ABSTRACT. This research critically examines disaster risk management (DRM) in Saudi Arabia’s healthcare sector, highlighting the essential role healthcare workers (HCWs) play in disaster preparedness, response, mitigation, and recovery. Saudi Arabia’s unique challenges, including diverse geography, regional conflicts, and the annual Hajj pilgrimage, make it crucial to understand HCWs' perceptions of DRM effectiveness. Gaining deeper insights into these perceptions can identify areas for improvement and enhance healthcare resilience during disasters. This study addresses a significant research gap by exploring HCWs’ views on DRM in public hospitals. Semi-structured interviews were conducted with 24 HCWs across four major regions—Central, Eastern, Southern, and Western—to explore how organizational culture, leadership, personal experiences, and geographical factors shape DRM perceptions. Thematic analysis revealed that effective leadership, team coordination, resource management, and continuous training are vital for strengthening DRM practices. However, gaps in infrastructure, resource shortages, and communication breakdowns remain significant challenges. HCWs' personal experiences, including their exposure to crises such as the COVID-19 pandemic, further shaped their perceptions, influencing their readiness to engage in disaster responses. Additionally, geographical disparities played a critical role, with rural areas often lacking advanced resources and urban centers, despite better resources, facing challenges from high population densities and diverse patient demographics. The study concludes that addressing these gaps—through leadership improvements, better resource allocation, infrastructure upgrades, and ongoing training—can significantly enhance DRM in Saudi Arabia’s public hospitals. These findings also hold broader implications for healthcare systems globally, especially in regions facing similar challenges. Policymakers are encouraged to implement targeted interventions and promote a culture of proactive preparedness to strengthen healthcare resilience during disasters.

17:15
Understanding Place and Peripheralization in Rural Disaster Management: Insights from Northern Sweden
PRESENTER: Sophie Kolmodin

ABSTRACT. This presentation highlights the importance of relational concepts for understanding disaster management (DM) work in rural areas. Specifically, we turn to the theoretical concept of "relational place" to illustrate how places derive meaning from their relationships with other places, advocating for a deeper understanding of how these dynamics influence practitioners’ experiences in addressing DM-related challenges. Despite extensive research on DM, place is often treated as a neutral backdrop rather than an active agent that shapes professionals’ work. To analytically explore the role of place in DM, we draw on interviews with DM professionals from four municipalities in northern Sweden, characterized by their expansive geographic areas, declining populations, and histories of economic austerity. By employing the concept of relational place, we demonstrate how DM professionals perceive laws and regulations as not adapted for their place and their work as diverging from societal norms of DM organization. We also examine how feelings of peripheralization—as a process rather than a static condition—are linked to the social, financial, and political aspects of DM efforts. Through this presentation, we aim to enrich the discourse on DM and provide insights into the unique challenges faced by practitioners in rural contexts, emphasizing how these challenges intersect with their understandings of place.

17:30
Balancing risk and protection: Exploring the interactions between the preventive and reactive systems in Swedish land use planning

ABSTRACT. As society evolves, so do the risks we face. Emerging challenges like sustainable development and climate change adaptation intensify these risks within an interconnected, complex environment with conflicting objectives. Effective risk management in this context requires balancing risk and protection by integrating preventive land use planning with reactive emergency response systems. Integrating systems requires mutual feedback among the actors to enhance understanding and broaden perspectives in risk assessments, thereby contributing to a more holistic approach. This study explores the prevalence of risk-related feedback among Swedish land use planners, rescue services, and crisis management professionals through a questionnaire survey. The findings reveal imbalances in feedback, risk-related knowledge, and perceived gaps between the preventive and reactive systems. Despite these challenges, incentives exist for more uniform collaboration. Enhanced coordination, communication, and awareness of the benefits of a holistic approach could strengthen the practical utility of risk analyses and improve preparedness for societal changes.

16:30-17:45 Session 11I: Miscellaneous II
Location: I
16:30
Machining of the Fan Abradable Seal and its Impact on Thrust, Performance and Reliability of Aero Engines
PRESENTER: Jose Pereira

ABSTRACT. Aviation embodies a dynamic sector marked by a continual pursuit of progress, where the integration of innovative technologies and the refinement of more efficient techniques directly contribute to advancing the aeronautical industry. Companies operating within this realm are unwavering in their dedication to delivering increasingly reliable engines, emphasizing critical factors such as safety, quality, and mechanical efficiency. Strict adherence to performance parameters in the test cell during the engine approval process is imperative, aligning with the standards set by discerning customers and regulatory bodies. Maintenance, Repair, and Overhaul (MRO) companies sometimes face problems related to engine performance during tests in the overhaul process. This highlights the need for a study on improvements in the process related to thrust parameters. This study analyzes the match grinding process between the abradable seal and the fan blades and shows how to ensure the engines meet the required thrust limits for approval. The dimensional relationship between these components serves as a machining reference, intending to achieve minimal clearance. This strategic approach optimizes the utilization of the airflow responsible for thrust, thereby enhancing engine performance, efficiency, and reliability. The methodological approach involves a case study of the match grinding process, including a quantitative analysis of results collected before and after applying the process. The study indicates that the reduction in surface roughness and the control of minimum clearance result in a more efficient utilization of airflow. This leads to reduced turbulence and parasitic airflow, culminating in a significant 71% improvement in thrust for the evaluated engines compared to pre-process results. The proposed process allows repair stations to deliver engines of the required quality, reducing the risk of rejection in testing and of rework. The results obtained in this study validate the efficacy of the match grinding process as a strategic initiative for improving the thrust performance of aeronautical engines.

16:45
Practical Risk Management for Aeroengine Maintenance: An Industry 5.0 Approach

ABSTRACT. This study presents a comprehensive framework for risk identification and management in the aeronautical engine overhaul process. By integrating a robust risk assessment methodology with the Operational Safety Management System (SMS), this approach enhances quality, ensures regulatory compliance, and prioritizes risk mitigation efforts.

The proposed methodology leverages Bayesian Belief Networks (BBN) and Fuzzy Logic to conduct a thorough probabilistic risk analysis. This enables the identification of critical risk factors and the prioritization of preventive and corrective measures. Additionally, the study explores the potential applications of Industry 5.0 concepts to further enhance risk response and decision-making.

The study's findings demonstrate the effectiveness of the proposed framework in optimizing risk management for aeronautical engine maintenance. By integrating risk assessment with SMS, organizations can significantly reduce the likelihood of engine failures and improve overall operational safety. The approach aligns with industry standards and regulations, making it a valuable tool for aviation professionals.

In conclusion, this research contributes to the advancement of risk management practices in the aeronautical industry. The proposed framework offers a practical and effective solution for identifying, assessing, and mitigating risks associated with engine overhaul processes.

17:00
An inverse Gaussian process with bathtub-shaped degradation rate function in the presence of random effect and measurement error
PRESENTER: Nicola Esposito

ABSTRACT. The degradation rate function of real-world degrading technological systems (here intended as the derivative of the mean function) often exhibits a three-phase, bathtub-shaped evolution, characterized by a first accommodation phase where it decreases, a second phase where it is almost constant, and a third catastrophic phase where it increases. The vast majority of the literature on monotonically increasing stochastic degradation processes has proposed models that can accurately describe only one or two of the three aforementioned phases. In this paper, we propose a new inverse Gaussian process with a bathtub-shaped degradation rate that can model all three phases simultaneously, accounting for the presence of measurement error and unit-to-unit variability in the form of a random effect. Along with the model, we also formulate and adopt expectation-maximization (EM) and particle filter algorithms, which allow maximum likelihood estimates of the model parameters to be retrieved quickly and efficiently from a given dataset of noisy measurements. The particle filter is also adopted to perform remaining useful lifetime predictions. The key feature of the proposed model is its ability to perform accurate lifetime predictions even when the available degradation measurements have been collected over the course of the early phase alone. The performance of the model is demonstrated by applying it to a set of real degradation data of MOS field-effect transistors.
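
For orientation, one common parameterization of such a process (a generic textbook form in our own notation, not necessarily the authors' exact specification) uses independent increments

\[
X(t) - X(s) \;\sim\; \mathrm{IG}\!\bigl(m(t) - m(s),\; \eta\,[m(t) - m(s)]^{2}\bigr),
\qquad
r(t) \;=\; \frac{\mathrm{d}\,m(t)}{\mathrm{d}t}\ \text{bathtub-shaped},
\]

with unit-to-unit variability introduced by treating \(\eta\) (or a scale parameter of \(m\)) as a random effect across units, and measurement error modeled through noisy observations \(Y(t_k) = X(t_k) + \varepsilon_k\), \(\varepsilon_k \sim \mathcal{N}(0, \sigma_\varepsilon^2)\); the EM and particle filter machinery then targets the resulting incomplete-data likelihood.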

17:15
Clustering for learning from safety-related undesired events: Application to the iron and steel industry

ABSTRACT. Safety-related undesired events can cause different kinds of workers' injuries and fatalities. Learning from incidents is a key step in safety risk management, guiding the exploitation of information to implement effective safety-related decision-making processes. To accelerate the overall process and mitigate the impact of potential human biases, Machine Learning (ML) techniques may be adopted. However, available sources of safety incident reports frequently collect brief unstructured narratives with significant missing data, phrased in a non-standardised structure and language. In such a context, relying only on the outcomes provided by ML techniques is risky, highlighting the need for human intervention to ensure meaningful results. For this reason, this paper proposes a multi-step approach integrating hierarchical clustering and subject matter expert evaluations for learning from incidents. The proposed approach has been applied to examine undesired events that happened in the iron and steel industry, i.e., one of the most hazardous industries in the world, where a multitude of risks can potentially give rise to a wide range of accidental scenarios. A set of 24 clusters was identified, providing insights into relationships among consequences, number of events, and operating conditions.
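
A minimal sketch of the text-to-clusters pipeline (the four narratives below are invented stand-ins, and the vectorization choices and number of clusters are illustrative assumptions; in the paper the clusters are reviewed with subject matter experts rather than taken at face value):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Invented incident narratives standing in for the short unstructured reports
reports = [
    "operator struck by moving ladle during tapping",
    "hot metal splash caused burn injury at the converter",
    "slip on oily floor near rolling mill, ankle sprain",
    "fall from height while inspecting the blast furnace platform",
]

# Bag-of-words representation of the free-text narratives
X = TfidfVectorizer(stop_words="english").fit_transform(reports).toarray()

# Hierarchical (agglomerative) clustering; 2 clusters chosen for illustration only
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # cluster id per report, to be reviewed together with domain experts
```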

16:30-17:45 Session 11J: The NASA and DNV Challenge on Optimization Under Uncertainty II
Location: J
16:30
An Integrated Uncertainty Quantification and Optimization for solving the 2025 NASA-DNV Challenge

ABSTRACT. This paper presents a methodological framework for tackling the NASA and DNV Challenge on Optimization Under Uncertainty. The challenge requires designing and calibrating an uncertainty model using limited empirical data and optimizing design variables under uncertainty. We propose an integrated approach based on Bayesian experimental design, emulators, efficient computational tools, and advanced calibration techniques. Parametric and non-parametric uncertainty models are compared and calibrated using strategies incorporating likelihood-free KNN and discrepancy-based filtering methods, imprecise probability, and likelihood-based ABC inference using Transitional Markov Chain Monte Carlo. Uncertainty-based optimization is also performed using different approaches, including grid search, genetic algorithms, and two-level stochastic optimization using Bayesian techniques supported by surrogate models. The framework refines the uncertainty model by systematically updating the distributions and selecting optimal experimental conditions to enhance learning efficiency. Our results highlight the efficacy of the approach in balancing performance, reliability, and risk-constrained objectives that are generally applicable in UQ-driven decision-making problems.

16:45
Latent space-based Bayesian approach to the NASA and DNV challenge 2025
PRESENTER: Sangwon Lee

ABSTRACT. This paper presents a latent space-based Bayesian methodology to tackle the NASA and DNV 2025 challenge on uncertainty quantification and design optimization. First, a Variational Autoencoder (VAE) is trained to investigate the distribution of the aleatory variables. From its latent-space representation, we conclude that a two-dimensional Gaussian distribution is suitable for modeling these uncertainties, thus enabling a data-driven calibration approach. Posterior estimates are obtained through a Bayesian updating procedure based on a multi-variational autoencoder (MVAE), effectively aligning simulation outcomes with observed data. Subsequently, tight prediction intervals are computed by extensive Monte Carlo simulations, demonstrating coverage of the calibration results. For the design optimization problem, a Bayesian optimization framework is employed to solve three separate tasks. In the performance-based design, the control variables are optimized to maximize an expected performance measure under the calibrated uncertainty model. In the reliability-based design, the worst-case failure probability across epistemic parameter variations is minimized using an adaptive Gaussian process modeling strategy. Finally, an ϵ-constrained design is performed using dual-GP surrogates for the objective and constraint functions, thereby balancing performance enhancement and failure probability control.

17:00
Sampling-Based Possibility Theory for Engineering Analysis Under Uncertainty: Inference, Prediction and Optimization
PRESENTER: Tom Könecke

ABSTRACT. This contribution is a response to the 2025 NASA and DNV challenge on optimization under uncertainty. Three typical engineering problems in the form of parameter identification, forward propagation of uncertainty, and optimization are addressed. The framework of possibility theory is outlined and applied to the problems of the challenge. Given the nature of possibility theory, the results provide a rigorous and deliberately cautious perspective on the challenge problems. Accordingly, this approach is expected to be among the more conservative responses, offering a robust and well-substantiated analysis of uncertainty. The analysis is implemented through a sampling-based approach, returning statistically valid confidence distributions with all prior information explicitly stated.

17:15
Data-driven Model Updating Solution for the NASA and DNV Challenge 2025 on Optimisation under Uncertainty with Flow-based Neural Networks
PRESENTER: Tairan Wang

ABSTRACT. Uncertainty quantification (UQ) remains critical in addressing complex engineering challenges, especially in safety-critical systems where scarce data and mixed uncertainties hinder robust decision making. This paper presents a data-driven model updating framework to address the NASA and DNV Challenge 2025 on Optimisation under Uncertainty using invertible normalising flow-based neural networks, with emphasis on high-dimensional systems with limited observational data and hybrid aleatory-epistemic uncertainties. Our methodology explicitly treats the aleatory and epistemic uncertainties separately through a two-step model updating framework based on a preliminary sensitivity analysis: the aleatory variables are first calibrated globally, and the epistemic variables are then calibrated locally. To process the time-series response data, a multi-head transformer is adopted as the conditional network in the normalising flow-based model updating framework, summarising the complex data into a fixed-length vector. The subsequent design optimisation problems are tackled by Particle Swarm Optimisation (PSO) with a Fully Connected Neural Network (FCNN)-based surrogate model. This work bridges machine learning with classical UQ methodologies, offering a practical pathway for safety-critical system design under aleatory-epistemic uncertainties.

17:30
Bayesian Uncertainty Modeling and Risk-Aware Optimization for Unknown Systems
PRESENTER: Karan Baker

ABSTRACT. This study explores uncertainty classification and modeling, differentiating between aleatory and epistemic uncertainties. Aleatory uncertainty arises from inherent randomness and is commonly represented using random variables, while epistemic uncertainty stems from a lack of precise knowledge about a parameter’s true value. Addressing both types is crucial for constructing accurate uncertainty models, which must account for the physical nature of parameters and the available data. The research is motivated by the NASA and DNV 2025 challenge on optimization under uncertainty. To estimate probability densities for both uncertainty types, the study employs Bayesian Inference, which provides a structured approach to updating beliefs about uncertain parameters as new data becomes available. In the design optimization phase, the study utilizes the Shapley value concept to systematically address the subproblems. By fairly evaluating the contribution of each variable before the optimization process, this method enhances resource allocation and decision-making. The derived control inputs are optimized to meet various task-specific objectives, ensuring robust performance.
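
The attribution step relies on the standard Shapley value (restated here in its generic form; the specific value function used for the challenge subproblems is not reproduced): for a set \(\mathcal{N}\) of variables and a value function \(v(S)\) measuring what a coalition \(S \subseteq \mathcal{N}\) of variables contributes (for instance, to the achievable objective or to explained output variance, both illustrative choices),

\[
\phi_i(v) \;=\; \sum_{S \subseteq \mathcal{N}\setminus\{i\}}
\frac{|S|!\,\bigl(|\mathcal{N}| - |S| - 1\bigr)!}{|\mathcal{N}|!}\,
\bigl[v(S \cup \{i\}) - v(S)\bigr],
\]

so that each variable \(i\) receives a fair share \(\phi_i\) of the total value, which can then be used to prioritize effort before the optimization itself.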

17:45
Tackling the NASA and DNV 2025 UQ challenge : an Approximate Bayesian Computation framework for surrogate-based optimization under uncertainty
PRESENTER: Gatien Chopard

ABSTRACT. Performing an optimization task on a complex system can be challenging when input variables are not completely known and when there is inherent randomness in the system response. The first step is therefore to gather information to reduce these uncertainties. In this paper, we use a rejection algorithm based on approximate Bayesian computation to infer input distributions from a limited number of output observations. We define informative metrics to estimate the likelihood of each input variable using a computer model of the real system and variance-based sensitivity analysis. Further, we identify optimal control parameters accounting for both performance and probabilistic constraints. We utilize neural network surrogates to efficiently approximate key relationships, evaluate failure probability, and enable gradient-based optimization approaches.
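
A compact sketch of the rejection step (the toy simulator, priors, summary statistics, and tolerance below are illustrative assumptions, not the challenge model or the authors' metrics):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulator(theta, n_obs):
    """Toy stand-in for the computer model of the real system."""
    return theta[0] + theta[1] * rng.normal(size=n_obs)

def distance(y_sim, y_obs):
    """Summary-statistic discrepancy (here: mismatch in mean and standard deviation)."""
    return np.hypot(y_sim.mean() - y_obs.mean(), y_sim.std() - y_obs.std())

def abc_rejection(y_obs, n_draws=20000, eps=0.1):
    """Rejection ABC: keep prior draws whose simulated output is close to the observations."""
    accepted = []
    for _ in range(n_draws):
        theta = np.array([rng.uniform(-2, 2), rng.uniform(0.1, 3)])  # prior over input variables
        if distance(simulator(theta, y_obs.size), y_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)  # approximate posterior sample of the input variables

y_obs = 0.5 + 1.2 * rng.normal(size=100)   # limited set of output observations
posterior = abc_rejection(y_obs)
print(posterior.shape, posterior.mean(axis=0))
```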

16:30-17:45 Session 11K: Social and ethical concerns
Location: K
16:30
Characterising resilience to radiation emergencies in armed conflicts: societal and ethical issues
PRESENTER: Catrinel Turcanu

ABSTRACT. The current geopolitical situation has triggered a strong societal call to reconsider nuclear emergency preparedness, response and recovery (EPR&R), particularly in the context of armed conflict situations. Existing research has examined the technical, organisational, legal, economic, social, psychological and ethical dimensions of nuclear EPR&R, along with protection principles to limit the negative impacts on humans and the environment. It has shown that nuclear EPR&R is challenging even in peacetime, due to potentially large-scale and long-term social, environmental and economic impacts. The complex and hostile environments characterising armed conflicts introduce additional challenges and vulnerabilities. A key question is what lessons can be drawn from past and current experiences - in both nuclear and non-nuclear fields - to improve the protection of people and the environment, and to limit, to the extent possible, the detrimental effects of a radiological emergency during wartime. Building or enhancing the capacities to withstand, respond to, adapt and recover from such events aligns with the concept of resilience. The European RRADEW project (Resilience to radiological events in wartime, 2024-2026) aims at enhancing nuclear EPR&R systems by developing methodological and technological approaches to characterise, assess and strengthen resilience to radiation emergencies in war or armed conflict situations. This contribution examines societal and ethical considerations for applying the concept of resilience in nuclear EPR&R at individual, community and national level. It is based on: (i) a review of scholarly literature addressing two strands of empirical studies, namely disaster resilience in war or armed conflict situations and resilience to nuclear emergencies; (ii) lived experience from the war in Ukraine; and (iii) feedback from three expert workshops. Acknowledgments: RRADEW is funded as part of PIANOFORTE, the European Partnership for Radiation Protection Research, which received funding from the European Union's EURATOM R&I programme under grant agreement 101061037.

16:45
The Future of Tourism for People with Disabilities: Scenario Planning to Examine Critical Uncertainties

ABSTRACT. The future of tourism for people with disabilities (PWD) is fraught with uncertainty. On the one hand, at $58 billion worldwide, PWD who travel constitute a lucrative market; many Western countries also have legal protections to ensure PWD can be accommodated when traveling. On the other hand, PWD, one in six worldwide, face significant physical, information, attitudinal and systemic barriers. There are concerns over what constitutes appropriate supports, inconsistent definitions of disability and lower income levels among those with disability that limit their ability to travel. As the population among those most likely to travel ages, the disabilities community will grow, and these challenges will become more pronounced. The IRGC framework identifies scenario planning as a process to address such complex variables with such uncertain outcomes. In autumn 2023 in Halifax, Canada and again in winter 2025 in Glasgow, Scotland we used the Intuitive Logics method to structure scenario planning sessions with accessible tourism stakeholders, including people with disabilities, members of the tourism industry, academics and government representatives. The scenario exercises identified factors that drive the sectors and different plausible futures to which the organizations must react. The scenario sessions explored critical uncertainties and the underlying causes of social and organizational vulnerabilities. We identified criteria by which to evaluate new programs and policies in light of these uncertain futures. Solutions lie in recognizing the rights of people with disabilities, developing a culture of respect and committing to continuous improvement. While individual businesses—mostly SMEs in tourism with limited resources—have an important role to play, solutions also lie in community-wide changes that engage business, government, the not-for-profit sector and the citizenry as a whole, including those with disabilities, and incorporating the learning from people with disabilities into the sector.

17:00
IR5.0 Technologies Assessment on Supporting the Inclusion of People with Disabilities in the Workplace
PRESENTER: Carlo Caiazzo

ABSTRACT. People With Disabilities (PWDs) continue to endure daily prejudice and limited accessibility in both physical and virtual environments, despite great social efforts to reduce the impact of physical, mental, intellectual, or sensory impairments. New approaches to the inclusion of PWDs can be highly successful in this regard, particularly when viewed from an occupational perspective. With a new production paradigm that prioritises human-centricity and resilience over productivity and efficiency, Industry 5.0 supports the redesign of manufacturing environments for PWDs. In the literature, there is a lack of guidelines for the design of inclusive workstations for workers with disabilities using innovative assistive manufacturing technologies. These can be implemented and adapted to suit the needs and diversity of people by adopting an inclusive approach, opening new opportunities for both people with disabilities and companies. This research analyzes specific KPIs for PWDs in manufacturing to evaluate (i) the capability to successfully include workers with physical/cognitive disabilities in complex manufacturing processes by using emerging assistance systems, (ii) the variation in safety, ergonomics and wellbeing of disabled workers working in inclusive workplaces, and (iii) how the main Industry 5.0 technologies can support closing the gap between operators with and without disabilities in terms of production performance.

17:15
Burning Inequities: Comparative Analysis of Socio-economic Drivers for Post-wildfire Resource Allocation in the Southwestern US States

ABSTRACT. Wildfires significantly threaten the southwestern US states because of climate change, topography, vegetation, and anthropogenic interferences. While wildfire-prone regions in the US are more likely to be populated by higher-income groups, this fact overshadows the existence of thousands of low-income underrepresented individuals residing in the same regions, lacking resources to prepare for and recover from wildfire damages. Given that rural and disadvantaged communities are often the most susceptible to climate disasters, equitable resource allocation and efficient wildfire management policies play a pivotal role in enhancing the resilience of these wildland-urban interface (WUI) communities. However, state-level and local policies for wildfire management differ significantly across the US states, driven by wildfire exposures, demographics, budgetary priorities, and political agendas. Although there is a growing literature on wildfire management, there are limited studies that analyze state-level similarities and differences related to equitable wildfire resource allocation. To address this gap, this study aims to investigate the key socio-demographic factors, such as educational attainment, income, race, and ethnicity, associated with post-wildfire resource allocation for the six southwestern US states (California, Nevada, Utah, Arizona, Colorado, New Mexico), and to compare and contrast the related underlying inequities across these states. Data on wildfire incidents and socio-demographic information is collected from multiple publicly available sources covering 2015-2022. A library of data-driven interpretable machine learning models is implemented to evaluate the county-level social inequities in post-wildfire resource allocation across the states. Our preliminary results highlight that underrepresented and disadvantaged WUI communities (with higher proportions of lower-income, less-educated, disabled, elderly, Black and Hispanic populations) are disproportionately impacted by wildfires compared with their wealthier counterparts. Their suffering is further worsened by inadequate and inefficient post-wildfire resource allocation. The outcomes of this study will better inform strategic decisions and policymaking for equitable wildfire management, thereby enhancing the marginalized WUI communities' resilience.

17:30
‘Doing’ household preparedness: utopian imaginaries as preparedness

ABSTRACT. Household preparedness has, against a backdrop of increasing political turbulence in Europe, come to gain political significance. Governmental information campaigns stress the importance of individual preparedness, while policy documents emphasize the importance of enhancing household preparedness for the sake of a credible total defense. Preparedness campaigns and total defense policies are infused with responsibilisation, making preparedness a personal matter and leading to subjectification processes shaped by different power dynamics. This actualizes the importance of knowledge about how individuals understand and enact preparedness. Drawing on intersectional risk theory, we argue that household preparedness practices are infused with gendered understandings and that the ‘doing’ of preparedness is simultaneously a way of ‘doing’ gender. However, the ‘doing’ of preparedness is also infused with utopian imaginaries of society and nature. In this paper, we discuss the performativity of preparedness and the significance of utopian imaginaries for household preparedness.

16:30-17:45 Session 11L: Safety, Reliability, and Security (SRS) of Autonomous Systems II
Location: L
16:30
Real-Time Controlled Safety Metric for Use in Autonomous Systems of Safety Relevance Using the Example of the Operation of an Autonomous Inland Waterway Vessel
PRESENTER: Dirk Söffker

ABSTRACT. The idea of understanding safety- or reliability-related variables as output variables of a continuously operating control loop, and thus regulating them, has been known and formulated for about two to three decades and can nowadays also be found in automation systems, e.g., to extend service life in the case of unpredictable future performance requirements. Up to now, classical, deterministically determined, fixed relationships have been used to exploit the connection between, for example, realized stress and actual service life in order to achieve specific goals such as a desired remaining service life. In this article, statistically known correlations are used, e.g., for the data-based determination of relevant safety-related variables, to adjust operating variables online, i.e., continuously in real time, in such a way that minimum safety and/or reliability requirements for those variables are maintained. As a relevant, important, and practice-relevant application example, the detection of objects in the path of vessels is considered; its reliability is essentially determined both by sensor quality and by the data-based estimation methods used, and both are significantly influenced in their capabilities by environmental conditions. Based on a Probability of Detection relation derived from statistical measurement data for the overall behavior, the driving speed, and thus the resulting braking distance, are automatically adjusted so that the safety requirements remain under control. The methodology provides the basis for establishing classic risk regulations as proof of the maintenance of safety in automated vehicles.
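
The abstract does not spell out the control law; the following is a minimal, illustrative Python sketch, under assumed models (a hypothetical exponential Probability-of-Detection curve scaled by visibility and a constant-deceleration braking model), of how a speed set-point could be lowered online so that the braking distance stays within the range at which the required detection reliability is still achieved. All values and function names are assumptions for illustration, not the authors' formulation.

    import math

    def probability_of_detection(distance_m, visibility_m):
        """Hypothetical PoD model: detection reliability decays with range,
        scaled by environmental visibility (illustrative only)."""
        return math.exp(-distance_m / max(visibility_m, 1.0))

    def braking_distance(speed_mps, decel_mps2=0.3):
        """Simple constant-deceleration braking distance for a vessel."""
        return speed_mps ** 2 / (2.0 * decel_mps2)

    def max_safe_speed(visibility_m, pod_required=0.95, decel_mps2=0.3):
        """Largest speed whose braking distance still lies inside the range
        at which the required PoD is achieved under current visibility."""
        # Range at which PoD drops to the required level (invert the PoD model).
        detection_range = -max(visibility_m, 1.0) * math.log(pod_required)
        return math.sqrt(2.0 * decel_mps2 * detection_range)

    # Example: the speed set-point is lowered automatically as visibility degrades.
    for vis in (2000.0, 800.0, 200.0):
        v = max_safe_speed(vis)
        print(f"visibility {vis:6.0f} m -> speed {v:4.1f} m/s, "
              f"braking distance {braking_distance(v):5.1f} m")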

16:45
Meaningful human control in digitalization, automation/AI, and remote oversight
PRESENTER: Stig Ole Johnsen

ABSTRACT. Meaningful human control (MHC) of safety-critical systems is an important goal as digitalization, automation/artificial intelligence (AI), and remote oversight are implemented. The EU AI regulation introduces the concept of human oversight, especially for safety-critical operations. MHC and human oversight are challenging because they depend on human strengths and weaknesses, system design, knowledge and training, and organizational factors such as responsibilities, staffing, and work processes. MHC is more useful than human oversight because it ensures that systems, technology, and organizational structures are designed to keep humans in control of safety-critical operations, thereby preventing disasters. However, to be useful, MHC needs to be defined and specified. This paper aims to define MHC by addressing three key areas: design, operations, and learning. Key design issues for MHC include adopting a system approach, using human-centred design best practices, conducting task analysis to manage cognitive workload, creating consistent interfaces for quick situational understanding, designing alarms to support situational awareness (SA), and establishing work processes that promote shared SA across teams. Key operational issues include ensuring safety, managing change (MoC), addressing error traps and training, and maintaining physical and mental conditions that enable MHC in all situations. In a critical situation, we observe that it can take ten minutes to perceive, understand, and act correctly. Key issues in learning from accidents are to identify root causes, including poor concepts and design, and to understand the reasons for human SA and actions; we have used “Human Error” as a starting point for analysis. Learning and understanding should drive change and improvement in governing values, prioritizing learning over blame.

17:00
A systematic review of risk metrics for AI and autonomous systems

ABSTRACT. Supervisory risk control (SRC) is a concept that enables risk-aware decision-making, enhancing the safety and intelligence of autonomous systems. Autonomous systems enable operations that support and exceed human performance, but new types of risks are introduced, e.g., due to mission complexities and challenges with situation awareness. There is a wide variety of autonomous systems, both crewed and uncrewed, operating at low to high degrees of autonomy, and systems may switch between these. The foundation of SRC is constituted by risk assessment and control system design, as well as artificial intelligence (AI). One or more risk models are integrated with the mission planning and/or guidance layer of the autonomous control system. A challenge is, however, how to measure risk in a way that represents both safe systems and safe operations, and that can be utilized by the control system. Furthermore, the human supervisor also needs information about the risks to support situation awareness. Hence, risk metrics are needed that sufficiently integrate spatial and temporal information, evaluate "instantaneous" and "long-term" risk, and consider the effect of uncertainty. Therefore, the objective of this paper is to provide an overview of existing metrics for measuring risk and evaluate their usefulness for autonomous systems and operations. The paper also suggests potential directions for further research and development in the area.

17:15
Towards a framework for evaluation of spatial uncertainty for risk-based robotic decision-making
PRESENTER: Klaus Ening

ABSTRACT. Subsea Inspection, Maintenance and Repair (IMR) interactions on underwater Oil & Gas infrastructure can have severe consequences in case of failure. Currently, these interactions are mainly carried out using Remotely Operated Vehicles (ROVs) with attached robotic arms where operators assess the situation and make decisions. To allow for increased autonomy in operations on routine objects (valves, wires, hoses, tools), the ROV has to detect the objects and their pose before manipulation tasks can be performed. These tasks typically involve risks, and therefore it is desirable to estimate the probability of operation failure in order to provide decision-support to human operators during the mission.

In this paper, we propose a framework using machine learning with a Gaussian Naive Bayes classifier to estimate the failure probability of robotic tasks based on the objects’ spatial uncertainties. As the uncertainty input feature, we use the 6-DOF standard deviation of the object’s pose estimate.

We show how prediction accuracy improves over time and how well the predictions match actual failure rates. We run 1000 simulated pick-and-place operations with different uncertainties and discuss how our method can improve decision support during operation. We also include a small dataset collected by a 3D camera from real-world objects, test the transferability of the simulation results to these data and to a pose-estimation algorithm, and examine the impact of data quality.
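
As a concrete illustration of the kind of estimator described, the following minimal Python sketch trains a Gaussian Naive Bayes classifier on synthetic data, using the 6-DOF pose-estimate standard deviations as input features and returning a failure probability for a new pose estimate. The failure mechanism, thresholds, and data sizes are illustrative assumptions, not the paper's simulation setup.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    # Synthetic stand-in for simulated pick-and-place runs: features are the
    # 6-DOF standard deviations of the pose estimate, labels mark whether the
    # task failed (illustrative data only).
    n = 1000
    sigma = rng.uniform(0.0, 0.05, size=(n, 6))          # x, y, z, roll, pitch, yaw std
    fail_prob = np.clip(sigma.sum(axis=1) / 0.3, 0, 1)   # hypothetical failure mechanism
    y = rng.random(n) < fail_prob                        # True = task failed

    clf = GaussianNB()
    clf.fit(sigma, y)

    # Predicted failure probability for a new pose estimate, usable as decision
    # support ("abort / ask the operator" above some threshold).
    new_pose_sigma = np.array([[0.01, 0.01, 0.02, 0.03, 0.02, 0.01]])
    p_fail = clf.predict_proba(new_pose_sigma)[0, clf.classes_.tolist().index(True)]
    print(f"estimated failure probability: {p_fail:.2f}")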

16:30-17:45 Session 11M: Advanced methods for the risk assessment and management of nuclear power plants
Location: M
16:30
Prediction of Critical Heat Flux in Vertical Tubes by Physics-informed Neural Networks
PRESENTER: Ibrahim Ahmed

ABSTRACT. The safety of thermohydraulic systems with two-phase flow is directly related to the Critical Heat Flux (CHF), which characterizes the transition from nucleate boiling to film boiling with a significant reduction in heat transfer efficiency. CHF prediction is crucial in nuclear power plants (NPPs), where thermohydraulic margins are critical for safe operation. Recent efforts to improve CHF prediction in vertical tubes have increasingly relied on data-driven approaches using Artificial Intelligence (AI) and Machine Learning (ML) techniques. Nevertheless, purely data-driven models often lack intrinsic physical information, limiting their broader acceptance for practical applications in safety-critical systems like NPPs. In this work, we explore the use of physics-informed neural networks (PINNs) for CHF prediction in vertical tubes. The Westinghouse (W-3) correlation, an empirical CHF correlation developed by Westinghouse Electric Company for water-cooled reactors, is employed as the physical model integrated into the learning process of the PINN. Specifically, two different forms of physical loss function for the PINN are formulated. The first form is based on simple differences (SD) between the CHF predicted by the model and the CHF calculated with the W-3 correlation; the second form is based on partial derivatives (PD) of the W-3 correlation computed with respect to the input parameters. The developed PINN models are validated using experimental CHF data from the US Nuclear Regulatory Commission (NRC), provided by the Working Party on Scientific Issues and Uncertainty Analysis of Reactor Systems (WPRS) Expert Group on Reactor Systems Multi-Physics (EGMUP) task force on AI/ML for Scientific Computing in Nuclear Engineering projects, promoted by the OECD/NEA. The results indicate that the predictive performance of the proposed PINN models exceeds that of the Look-Up Table (LUT) and of purely data-driven deep neural networks, confirming the benefit of integrating physical knowledge into the learning process for enhancing the accuracy and reliability of the prediction model.
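
A minimal PyTorch sketch of the "simple difference" (SD) form of a physics-informed loss might look as follows. The W-3 correlation itself is not reproduced here, so a placeholder function stands in for it, and the network architecture, input dimensions, weighting factor, and data are illustrative assumptions only, not the authors' implementation.

    import torch
    import torch.nn as nn

    def w3_correlation(x):
        # Placeholder for the W-3 correlation (illustrative expression only);
        # x columns might be pressure, mass flux, quality, etc.
        return 1.0e3 * (1.0 + 0.1 * x[:, :1] - 0.2 * x[:, 2:3])

    class CHFNet(nn.Module):
        def __init__(self, n_inputs=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_inputs, 32), nn.Tanh(),
                nn.Linear(32, 32), nn.Tanh(),
                nn.Linear(32, 1))
        def forward(self, x):
            return self.net(x)

    def sd_loss(model, x, chf_measured, lam=0.1):
        """SD physics-informed loss: data misfit plus a penalty for departing
        from the empirical correlation, weighted by lam."""
        pred = model(x)
        data_loss = torch.mean((pred - chf_measured) ** 2)
        physics_loss = torch.mean((pred - w3_correlation(x)) ** 2)
        return data_loss + lam * physics_loss

    # One training step on synthetic data (shapes only; not real CHF measurements).
    model = CHFNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 3)
    chf = w3_correlation(x) + 10.0 * torch.randn(64, 1)
    opt.zero_grad()
    loss = sd_loss(model, x, chf)
    loss.backward()
    opt.step()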

16:45
Risk Modeling and Optimization Using Machine Learning Algorithms for PSA Applications

ABSTRACT. According to the latest reports from the International Atomic Energy Agency [1], the global nuclear power plant (NPP) fleet is aging, requiring enhanced risk estimation and safety management tools. Additionally, international organizations are promoting the use of artificial intelligence to improve NPP operation and safety, as highlighted by the U.S. Nuclear Regulatory Commission in its Strategic Plan for 2023-2027 [2]. This context presents an opportunity to incorporate artificial intelligence and machine learning techniques into risk-informed applications. This work presents a methodology that integrates a metamodel for predicting Core Damage Frequency (CDF) with an optimization method using genetic algorithms to reduce CDF, thereby enhancing plant safety. The proposed metamodel predicts CDF using reliability parameters related to component failure modes and test intervals (TI) of standby equipment as explanatory variables. Traditionally, TIs are set as fixed values grouped by periodicity (e.g., weekly, monthly, annually). This study introduces additional levels, higher and lower than the standard values, to assess their impact on NPP risk, using a Fractional Factorial Design to efficiently generate a representative dataset. Several metamodels were trained and evaluated, with the best-performing one selected to replace conventional Probabilistic Safety Assessment (PSA) models. The genetic algorithm was then implemented to find the optimal combination of TI values, minimizing the mean CDF. The use of the metamodel drastically reduces computational effort compared to solving large event and fault trees, which substantially speeds up the analysis. The results demonstrate that this methodology enhances NPP safety by providing a powerful tool for risk management. Future developments could include analyzing the impact of maintenance on reliability parameters and, consequently, on the CDF.

[1] International Atomic Energy Agency (2023). Nuclear Safety Review. Vienna, Austria.
[2] Dennis, M., Lalain, T., Betancourt, L., Hathaway, A., & Anzalone, R. (2023). Artificial Intelligence Strategic Plan: Fiscal Years 2023-2027.
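
The following minimal Python sketch illustrates the overall idea of coupling a surrogate CDF model with a genetic algorithm over discrete test-interval levels. The surrogate function, interval levels, and GA settings are illustrative placeholders, not the trained metamodel or the authors' implementation.

    import random

    # Candidate test-interval levels (hours) for, say, four standby systems;
    # the "lower / standard / higher" levels are illustrative.
    TI_LEVELS = [168, 336, 720, 2190, 4380, 8760]
    N_SYSTEMS = 4

    def surrogate_cdf(ti_vector):
        """Placeholder for the trained metamodel: returns a mock core damage
        frequency for a vector of test intervals (not a real PSA result)."""
        return sum(1e-9 * ti + 1e-3 / ti for ti in ti_vector)

    def random_individual():
        return [random.choice(TI_LEVELS) for _ in range(N_SYSTEMS)]

    def crossover(a, b):
        cut = random.randrange(1, N_SYSTEMS)
        return a[:cut] + b[cut:]

    def mutate(ind, rate=0.2):
        return [random.choice(TI_LEVELS) if random.random() < rate else g
                for g in ind]

    def genetic_search(pop_size=40, generations=100):
        pop = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=surrogate_cdf)          # lower CDF = fitter
            survivors = pop[:pop_size // 2]
            children = [mutate(crossover(random.choice(survivors),
                                         random.choice(survivors)))
                        for _ in range(pop_size - len(survivors))]
            pop = survivors + children
        return min(pop, key=surrogate_cdf)

    best = genetic_search()
    print("best test intervals:", best, "-> mean CDF", surrogate_cdf(best))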

17:00
Numerical Solution of the Fokker-Planck Equation for the Overflow Probability of a Radioactive Near Surface Repository by the Crank-Nicolson Method: Preliminary Results
PRESENTER: Antonio Alvim

ABSTRACT. In an earlier paper we discussed the analytical solution of the Fokker-Planck (FP) equation for evaluating the overflow probability of the near-surface repository of Abadia de Goiás, Brazil (for 137Cs storage). Some preliminary considerations were addressed in that work; however, a formal approach for validating the results still needed to be formulated. In this paper we therefore address the problem by computing the overflow probability with numerical methods. This numerical solution is compared to the one published earlier, which was obtained by an analytical approach based on Trotter’s method. An implicit numerical method is used, namely the Crank-Nicolson method, which is known to be numerically stable and is widely used to solve partial differential equations such as the heat equation. The FP equation numerically solved here has an advective term and a diffusive term and is known as the forward Kolmogorov equation. The initial and boundary conditions for solving the FP equation are discussed in order to obtain the probability density needed for calculating the overflow probability (in this sense, contrary to the analytical solution, the numerical solution does not need to be truncated because it starts at t = 0 with the defined initial condition). The latter depends on the repository institutional control period and, as in earlier published work on this subject, the institutional control period is varied from 5 years to 50 years. This wide variation interval is justified by the fact that the repository design initially considers an institutional control period of 50 years. The numerical results agree, in terms of orders of magnitude, with the analytical ones published earlier.
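
For illustration, a minimal Python sketch of a Crank-Nicolson scheme for a one-dimensional forward Kolmogorov (Fokker-Planck) equation with constant drift and diffusion is given below. The coefficients, domain, initial and boundary conditions, and the overflow threshold are illustrative assumptions and do not represent the repository model.

    import numpy as np

    # Crank-Nicolson sketch for dp/dt = -d(a p)/dx + 0.5 d^2(b p)/dx^2
    # with constant drift a and diffusion b (illustrative values only).
    nx, nt = 200, 500
    L, T = 10.0, 5.0
    dx, dt = L / (nx - 1), T / nt
    a, b = 1.0, 0.05                      # drift, diffusion coefficients
    x = np.linspace(0.0, L, nx)

    # Spatial operator (central differences); boundary rows are left zero,
    # which keeps the (essentially zero) boundary values fixed.
    A = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        A[i, i - 1] = a / (2 * dx) + 0.5 * b / dx**2
        A[i, i]     = -b / dx**2
        A[i, i + 1] = -a / (2 * dx) + 0.5 * b / dx**2

    I = np.eye(nx)
    lhs = I - 0.5 * dt * A                # Crank-Nicolson: implicit half-step
    rhs = I + 0.5 * dt * A                #                  explicit half-step

    # Initial condition: probability mass concentrated near x = 2.
    p = np.exp(-((x - 2.0) ** 2) / 0.1)
    p /= p.sum() * dx

    for _ in range(nt):
        p = np.linalg.solve(lhs, rhs @ p)

    # "Overflow" probability proxy: mass beyond a threshold level x_c.
    x_c = 8.0
    print("P(X > x_c) ~", p[x >= x_c].sum() * dx)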

17:15
Innovative solutions for the management and safety of radioactive waste

ABSTRACT. Radioactive waste is generated by all activities associated with nuclear energy production, including nuclear power plants and the fuel cycle, as well as research and development activities. In addition, smaller but significant amounts of radioactive waste come from other areas, such as medical diagnosis and treatment, production quality control and scientific research. In Italy, radioactive waste is classified according to its physical state and radioactivity levels, as defined by the Decree of 7 August 2015. Each category of waste requires specific treatment and conditioning processes to minimize its volume and make it suitable for short-, medium- or long-term storage, or for final disposal in accordance with legal standards. The management of this waste requires careful attention to protect workers, the public and the environment from radiological risks. At the national level, waste management is regulated by Legislative Decree no. 101/2020 and its amendments, together with the ISIN Technical Guide no. 33. Furthermore, the Italian standard UNI 11918:2023 outlines the technical specifications for the safe management of radioactive waste from medical, industrial and research activities. This work aims to explore the key technological, regulatory and safety aspects of the management of radioactive waste generated in hospitals, industries and nuclear plants. It will cover current practices for waste classification, technologies for treatment and volume reduction, as well as requirements for safe storage or disposal.

17:30
Optimization of Maintenance Planning in Nuclear Power Plant Security Systems Using Cuckoo Optimization Algorithm

ABSTRACT. Optimizing performance is one of the main goals in science and technology. However, it is a significant challenge, as it is usually constrained by safety requirements and budget limitations. As a result, optimization algorithms utilizing artificial intelligence to enhance the performance of industrial systems have been developed over the last few decades. Due to advances in computing and the proven effectiveness of artificial intelligence methods, there has been a growing application of these methods in the nuclear sector. This has encouraged the publication of studies that cover everything from the design phase to maintenance policies and life extension strategies. A nuclear power plant is typically designed for a 30-year service life, during which the reliability of system components, including main and auxiliary safety systems, is ensured by high safety standards and management policies such as maintenance, inspection, and testing. As the end of this period approaches, plants can choose between decommissioning or extending their service life, often opting for life extension to operate for 40 to 60 years, based on feasibility studies that assess component performance and the need for replacements. Maintenance is crucial but can increase system downtime. Therefore, planning for redundant trains, which allow maintenance while the system continues operating, is fundamental. Backup systems must also maintain high levels of reliability to avoid failures when needed. This work developed an optimization tool using Probabilistic Safety Assessment (PSA) and the Cuckoo Optimization Algorithm (COA). Applied to a simplified model of the High-Pressure Injection System of a Pressurized Water Reactor (PWR), the tool aims to increase component reliability and reduce the costs and downtime associated with maintenance. COA was chosen for its optimization efficiency, and the fitness function focused on minimizing unavailability and maintenance costs. The results of this work demonstrate an improvement of up to 60% compared to results reported in the literature.
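
As a rough illustration of the kind of fitness function involved, the Python sketch below trades off a textbook standby-unavailability approximation against maintenance cost and searches over the maintenance interval with a generic cuckoo-search-style random walk. It is not the authors' COA implementation, and all parameter values and names are assumed for illustration.

    import random

    # Generic unavailability/cost trade-off for a periodically maintained
    # standby train (lambda*T/2 plus downtime per action); textbook
    # assumptions, not plant data.
    FAILURE_RATE = 1e-4      # per hour
    REPAIR_TIME = 8.0        # hours per maintenance action
    COST_PER_ACTION = 1.0e3  # arbitrary currency units
    MISSION_TIME = 8760.0    # one year

    def fitness(interval_h, w_cost=1e-6):
        n_actions = MISSION_TIME / interval_h
        unavailability = FAILURE_RATE * interval_h / 2 \
                         + n_actions * REPAIR_TIME / MISSION_TIME
        cost = n_actions * COST_PER_ACTION
        return unavailability + w_cost * cost     # lower is better

    # Cuckoo-search-style loop: each "nest" holds a candidate maintenance
    # interval; poorer nests are replaced by perturbed copies of candidates.
    nests = [random.uniform(100.0, 5000.0) for _ in range(15)]
    for _ in range(200):
        i = random.randrange(len(nests))
        step = random.gauss(0.0, 200.0)           # simple random-walk step
        candidate = min(max(nests[i] + step, 100.0), 5000.0)
        j = random.randrange(len(nests))
        if fitness(candidate) < fitness(nests[j]):
            nests[j] = candidate

    best = min(nests, key=fitness)
    print(f"best maintenance interval ~ {best:.0f} h, fitness = {fitness(best):.5f}")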

17:45
Legitimizing small modular reactors in Sweden
PRESENTER: Henrik Rahm

ABSTRACT. The potential of Small Modular Reactors (SMRs) is currently being explored worldwide. SMR technology is still under development but is at the same time being introduced to the public. At this stage, when much is still uncertain and unknown, media are important. The benefits and risks related to this new technology articulated in media potentially shape different actors' expectations of and attitudes towards the technology. The focus of this study is the introduction of SMRs in news media in Sweden. The Swedish government has changed its nuclear energy policy from decommissioning to revival, with more nuclear power, including SMRs, until 2034. This study focuses on the risks and benefits reported in media, the actors participating in shaping the image of SMRs, and how they legitimize SMRs in a Swedish context. To this end, a corpus analysis of the reporting on SMRs in the Swedish national press from 2020 to 2024 is conducted, together with a qualitative text analysis of selected news articles. Legitimation analysis is used for problematizing the positions of the actors, explicit and implicit perspectives, and embeddedness, using the concepts of rationalization, authorization, moral evaluation and mythopoesis (narratives used for legitimation purposes) (van Leeuwen 2008). The analysis shows increased reporting focusing on benefits rather than risks of SMRs. The reporting is characterized by fuzziness, both in varying terminology and in embedding SMRs in various contexts, which makes it difficult to understand what SMR is and will become. The qualitative analysis contributes an understanding of different actors' perspectives on SMRs, such as which characteristics are emphasized, how risks and benefits are discussed, how promises are made, which values are considered important, and how the actors relate to and create their positions in relation to other actors. Conflicting expectations and values are identified.

16:30-17:45 Session 11N: WORKSHOP: International Workshop on Energy Transition to Net Zero: Reliability, Risk, and Resilience (ETZE R3) III: System level technology assessment for energy transition to net zero
Location: N
16:30
Ensuring the reliability of the net-zero energy system pathways: a Swiss case study

ABSTRACT. The goal of achieving net-zero greenhouse gas emissions by 2050 requires several key trends in the energy transition, including the integration of decentralized renewable electricity production, the electrification of demand, and the coupling of energy sectors. Despite the clear goals, it remains uncertain how these will be achieved in the future. Therefore, much attention has been dedicated to the development of energy pathways. Regardless of the pathway, the future power system will be characterized by a rise in peak demand and an increase in net-load variability. It is therefore important to study whether the future net-zero power system will meet the reliability (adequacy and security) requirements, and how it needs to adapt to meet those requirements.

In this work we study power system adequacy and security for net-zero energy system pathways in Switzerland. For this purpose, we utilize a set of energy transition pathways as outlined in Sanvito et al., 2024, and analyze them with a focus on system adequacy and security. We quantify system adequacy using the insufficient ramping resource expectation metric (Lannoye et al., 2012). This metric computes the likelihood of having insufficient flexibility in the power system (due to insufficient power, energy, or ramping capacity) to accommodate variations in net load under normal operations. Additionally, we study system security with the cascading failure model used in Stankovski et al., 2022. This algorithm simulates the effect of grid outages and computes the demand not served following each simulated contingency. We compare the values of both metrics across the different pathways and with the reference values of today's power system. We also investigate whether the system requires additional flexible resources and where the optimal locations for those units would be. With this work, we contribute to the planning of a reliable net-zero power system.
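
A simplified Python sketch of a ramping-adequacy check in the spirit of this metric is given below; the net-load series and ramping capabilities are synthetic, and the actual insufficient ramping resource expectation metric (Lannoye et al., 2012) is probabilistic and horizon-specific rather than this simple count.

    import numpy as np

    # Count how often hourly net-load ramps exceed the flexible ramping
    # capability of the system (synthetic, illustrative data only).
    rng = np.random.default_rng(1)
    hours = 8760
    net_load = 6000 + 2000 * np.sin(np.arange(hours) * 2 * np.pi / 24) \
               + rng.normal(0, 300, hours)            # MW, synthetic
    ramp_up_capability = 900.0                         # MW/h of flexible resources
    ramp_down_capability = 900.0

    ramps = np.diff(net_load)
    up_shortfalls = np.sum(ramps > ramp_up_capability)
    down_shortfalls = np.sum(-ramps > ramp_down_capability)
    print(f"hours with insufficient upward ramping:   {up_shortfalls}")
    print(f"hours with insufficient downward ramping: {down_shortfalls}")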

16:45
Towards Net-Zero: How Taiwanese Manufacturers Implement Low-Carbon Innovation
PRESENTER: Ya-Ting Kuo

ABSTRACT. Manufacturing is the key industry for Taiwan to reach net-zero carbon emissions, and how it transforms through low-carbon innovation is therefore a central question. Low-carbon innovation in all its forms is critical, including product, process, marketing and organizational innovation. Product innovation focuses on the use of environmentally friendly materials, while process innovation improves energy efficiency and reduces emissions. Marketing innovation enhances a company's green image by communicating sustainability to consumers, and organizational innovation involves structural changes to integrate low-carbon goals at a strategic level. By embracing low-carbon innovation, manufacturing companies can respond to domestic and international regulatory pressures and gain a competitive advantage in sustainable markets.

This study obtained data from 377 Taiwanese manufacturing companies through probabilistic sampling in 2022. It shows that high-carbon industries such as steel and chemicals prioritize organizational and process innovation to manage carbon emissions. In contrast, non-high-carbon companies such as electronics and consumer goods focus on marketing innovations to meet consumer demand for environmentally friendly products. These findings highlight the need for tailor-made low-carbon strategies based on industry characteristics. For high-carbon industries, policy incentives can drive organizational change, while for non-high-carbon industries, the focus should be on consumer engagement and marketing.

The findings of this study can provide companies with specific low-carbon innovation paths in the face of global net-zero policies and market pressures, helping them enhance their competitive advantages in the sustainable development market. In addition, policymakers can design more targeted policy tools based on industry characteristics, such as providing financial subsidies to high-carbon industries or encouraging non-high-carbon industries to promote green marketing.

17:00
Optimizing Green Infrastructure for Flood Mitigation and Enhancing Disaster Resilience - Hoboken NJ Case study

ABSTRACT. As climate change intensifies the frequency and severity of storm events, urban areas like Hoboken, New Jersey, face heightened risks of flooding. With traditional infrastructure increasingly insufficient for managing stormwater, there is an urgent need for sustainable alternatives that can mitigate flooding while enhancing disaster resilience. Green Infrastructure (GI) offers a promising solution by leveraging natural systems to reduce stormwater runoff and provide long-term environmental benefits. This study aims to evaluate the effectiveness of three GI strategies—Stormwater Infiltration Planters and Street Trees (ROWs), Rain Gardens, and Permeable Interlocking Concrete Pavers—in mitigating flood risks in Hoboken. Using a Storm Water Management Model (SWMM) to simulate various storm scenarios, the study assesses these GI options based on key metrics including stormwater runoff reduction, cost-effectiveness, water storage potential, and useful life. In addition to the technical analysis, public support and feasibility are crucial factors in ensuring the successful implementation of these strategies. By optimizing combinations of GI methods, this research seeks to provide a comprehensive framework for improving urban flood resilience and minimizing the economic and environmental impacts of flooding. The findings will contribute to informed decision-making for urban planners and policymakers, offering a pathway toward more resilient and sustainable urban environments.

17:15
Strengthening Supply Security in Future Economic Crises: Towards ICT-based Solutions for Distribution Problems

ABSTRACT. The multiple, complex geopolitical and economic crises of recent times (the COVID-19 pandemic, gas shortages due to the Russian invasion of Ukraine, etc.) highlight the need for national governments to increasingly address shortages and the resulting scarcity of critical goods and services, as well as their allocation to specific population groups. Traditional ration coupon systems (e.g., ‘food stamps’) developed after World War II proved to be inflexible and incompatible with a modern information society.

The Austrian research project ‘e-Panini’, involving public administration, federal states, interest groups, academia, and corporate partners, is addressing the necessary technical, legislative, societal, and organisational prerequisites for a ration coupon system based on information and communication technology (ICT).

Our design allows for continuous supply-side coordination of food, hygiene products, medicines, and other critical everyday goods. Additionally, our architecture is able to integrate the inventory management systems of major distributors such as grocery retailers, drugstores, and pharmacies to obtain precise data on the spatial availability of critical goods. Moreover, a differentiated and inclusive overview of the requirements within the population is possible, e.g., based on age groups, gender, occupation, health intolerances, or personal and cultural dietary habits, taking into account daily calorie requirements and specific needs.

On the end-user side, we envision a mobile application-based system to simplify rollout and increase acceptance. Nevertheless, alternative analogue participation options in case of blackouts or for less tech-savvy citizens are foreseen. While the use of ICT significantly increases flexibility, it also brings additional legal and technical challenges, such as data protection, the risk of blackouts, pre-configuration for high-capacity demands, and offline isolated operation with subsequent data synchronisation.

In this article, we propose a conceptual architecture model for an ICT-based ration coupon system, discuss the substantive and technical challenges, and present possible solutions to address them.