10:45 | Risk Quantification for Disruption and Adaptation of Systemic Orders ABSTRACT. This paper provides theoretical and practical insights for addressing systemic risk in dynamic and uncertain environments across a range of application domains, with a focus on systemic orders. The paper offers a mathematical framework to characterize systems in terms of their ordered components—such as locations, assets, policies, and projects—and to analyze how these systems are disrupted and restored over time. Disruptions from environmental factors, market shifts, technological obsolescence, and geopolitical changes disturb the established orders, and the framework captures the distinct phases of orders as systems evolve following disruption and through trajectories of response and recovery. Analysis of these phases offers a structured view of how systems evolve and adapt under stress. An underlying construct of this approach is interduality theory, which supports competing interpretations of the foundational terms of risk analysis. The paper describes several applications including water resource management in arid regions, the effects of hurricanes on hydrological basins, and logistics challenges at maritime container ports. The paper uses expert elicitations, sensitivity analyses, mixed-integer programming, and geospatial modeling to quantify disruptions and transitions across phases. By modeling system states in their progression through phases of reordering, this paper extends the theory and foundations of risk analysis, complementing existing probabilistic risk analyses, scenario impact analysis, interval uncertainties and other familiar techniques. |
11:00 | A new approach to systemic risks from the systemic vulnerability perspective PRESENTER: Iuliana Armaș ABSTRACT. Systemic risk has recently emerged as a hot topic in Disaster Risk Reduction, attracting attention from several major research initiatives. As vulnerability represents the most influential and predictable component in the risk equation, systemic vulnerability also stands at the core of systemic risk. Nevertheless, the meanings of both systemic risk and systemic vulnerability remain unclear and are subjects of debate that require further investigation. In this study, we develop a Systemic Vulnerability Model to enhance our understanding of systemic risk, taking a potential earthquake exceeding moment magnitude (Mw) 7 in present-day Bucharest as a case study. Such seismic events struck the Romanian capital in 1940 and 1977, and Bucharest is the most earthquake-prone and vulnerable capital in the EU. The model draws on our previous conceptual framework for analysing the augmentation of vulnerability due to hazard impacts and misfiring adaptation options, and on our enhanced version of Impact Chains. It relies on in-depth structural equations and multiple regressions, and it is validated through a robust procedure. Key results show that vulnerability acts as both a passive (subject to change) and active (driving change) agent. It can initially contribute to seismic impacts, get augmented by them, and continue to reinforce these impacts and others afterwards. Vulnerabilities can also slow down or hinder the implementation of adaptation measures. These vulnerability-impact and vulnerability-adaptation interactions, captured by the enhanced Impact Chain, shape the systemic risk associated with earthquakes. Considering the findings from the model, we propose a new definition of systemic vulnerability: the persistent core of vulnerability that perpetuates across time and space, regardless of mitigation efforts and societal advancements. This definition highlights the centrality of systemic vulnerability for systemic risk and represents a starting point for developing more nuanced definitions and models of systemic risk that factor in vulnerability dynamics. |
11:15 | Reviewing Game Theory and Risk and Reliability in Infrastructures and Networks PRESENTER: Kjell Hausken ABSTRACT. Game theory and risk and reliability analysis are reviewed, confined to multiple targets, infrastructures, and networks. Players maximize utilities in static or repeated games with simultaneous or sequential moves, assuming complete or incomplete information, and determining equilibria or minmax solutions. The terminology considers targets or assets, which, when interdependent, constitute networks with nodes and links. Risk analysis is incorporated by applying risk attitudes, incomplete information, stochastic analysis, and probability theory related to failure or inoperability of targets and moves by nature. Defense and attack in reliability systems specify how, for example, series and parallel systems may operate or fail. System survivability is considered as an alternative to reliability. Multiple-target attacker-defender games at one or multiple levels are analyzed. The defender may protect targets individually or collectively in an overarching manner, while the attacker needs to break through all the protection layers to succeed. Interdependence between targets accounts for how failure of one target may impact other targets, which affects the players' resource allocation and their strategies for how to protect and attack. More specific research on electric power grids and transportation concludes the review. Reflections are provided on strengths, weaknesses, opportunities, and future research. Sixteen earlier reviews have focused on games related to risk analysis, reliability, security, and related topics, but focus insufficiently on risk analysis of infrastructures and networks. Seven of the 16 reviews focus on defender-attacker games with various focus areas, namely Bier and Tas (2012), Hausken and Levitin (2012), Hausken (2024), Hunt and Zhuang (2024), Seaberg et al. (2017), Guikema and Aven (2010), and Bier (2020). Nine reviews on cyber security are Amin and Johansson (2019), Etesami and Basar (2019), Hausken et al. (2024), Roy et al. (2010), Pala and Zhuang (2019), Do et al. (2017), Sedjelmaci et al. (2019), Hausken (2020), and Kott et al. (2014). |
11:30 | A Proposal for an Effective Fault Tree Diagram Layout PRESENTER: Marielle Stoelinga ABSTRACT. Fault Trees (FTs) have proven particularly successful in engineering disciplines for identifying and quantifying risks for assessment. Because they contain only short textual descriptions, FTs are mainly a visual language and transcend national and language boundaries. To further reduce the risk of misunderstandings with FTs, we have tightened the layout guidelines and adjusted the FT elements accordingly. During our research, we kept the three aspects of efficiency, applicability, and functionality in mind, allowing FTs to communicate as correctly, quickly, and easily as possible to make our societies even safer. |
11:45 | Using Simplified Metrics for Cost-Benefit Analysis (CBA) and Pareto Optimality in Physical Security Concepts PRESENTER: Thomas Termin ABSTRACT. Critical infrastructures (CRITIS), as the backbone of our society, must be safeguarded against attacks through effective security measures. Since implementing such measures often entails significant costs, it is essential to provide tools that enable operators to make well-informed decisions based on objective analyses. A sound decision, from the operator's perspective, balances the costs of investing in security measures with benefits such as risk reduction. Quantitative metrics are a widely used tool in CRITIS risk assessment, valued for their ability to deliver objective, comparable, and reproducible results. However, these metrics can be challenging for users and decision-makers to manage, especially when quantitative data is unavailable or in instances where only a rudimentary assessment is requested. A simpler alternative is scoring, which categorizes security contributions using expert knowledge. Yet, due to the inherent uncertainty of scoring, it becomes crucial to determine the conditions under which cost-benefit analyses (CBA) can yield results comparable to those of quantitative assessments. This paper builds on prior work by Termin et al. (2024a) and Witte et al. (2024), demonstrating how scoring-based assessments of physical vulnerability can be adapted to assess potential attack paths within an exemplary series-connected barrier topology. This approach aims to identify Pareto-optimal configurations of security measures. Ultimately, it is expected that this straightforward scoring-based methodology will assist users in optimizing physical security concepts more effectively. |
Participants: Jan Hayes, Jean-Christophe Le Coze, Nick Pidgeon and Teemu Reiman
10:45 | An optimised recovery approach for interdependent infrastructure PRESENTER: Tom Logan ABSTRACT. Our infrastructure is increasingly interconnected, which offers many benefits to society yet also brings complexity and vulnerabilities. Natural hazards regularly threaten these connected infrastructure systems, causing widespread outages in dependent utilities. After severe events, infrastructure operators often must instinctively prioritize repairs based on initial impacts or the volume of complaints. In the intense post-disaster timeframe, it is also unclear what other utilities are non-operational due to a dependence on upstream services, such as electricity or water. Building on an existing interdependent infrastructure network model, we have developed an optimization model that addresses the complexities of our interdependent infrastructure and provides a recovery strategy based on realistic constraints experienced by infrastructure operators. We apply the model to a real-world test case of severe flooding on electrical power, water supply, and wastewater networks. For this scenario, the model maximizes the number of households that have service for all three utilities, including constraints such as the number of repair crews for each utility per timestep. We initially map the direct and cascading outages, followed by the recommended optimized strategy for repairing the interconnected systems over time. This model provides insight into the objective prioritization of repairing assets for emergency management and utility operators. Its flexible constraints and objective function allow for broader application across different scenarios, aiding in emergency planning and guiding investment decisions to address infrastructure weaknesses. |
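The repair-scheduling idea above can be made concrete as a small integer program. The sketch below is not the authors' model: the utilities, crew counts, horizon, and household dependencies are toy values, and the objective simply counts household-timesteps in which every dependency has been repaired.

```python
# Toy sketch of optimized recovery scheduling under crew constraints (PuLP).
import pulp

T = range(5)                                          # planning horizon (timesteps)
utilities = {"power": ["p1", "p2"], "water": ["w1"]}  # damaged assets per utility
crews = {"power": 1, "water": 1}                      # repair crews per utility per step
households = {"h1": {"power": "p1", "water": "w1"},   # asset each household needs
              "h2": {"power": "p2", "water": "w1"}}

assets = [a for alist in utilities.values() for a in alist]
prob = pulp.LpProblem("recovery", pulp.LpMaximize)
repair = pulp.LpVariable.dicts("repair", (assets, T), cat="Binary")
served = pulp.LpVariable.dicts("served", (list(households), T), cat="Binary")

# objective: maximize household-timesteps with all utilities restored
prob += pulp.lpSum(served[h][t] for h in households for t in T)

for a in assets:                                      # each asset repaired at most once
    prob += pulp.lpSum(repair[a][t] for t in T) <= 1
for u, alist in utilities.items():                    # crew capacity per utility, per step
    for t in T:
        prob += pulp.lpSum(repair[a][t] for a in alist) <= crews[u]
for h, deps in households.items():                    # served only once all deps repaired
    for t in T:
        for a in deps.values():
            prob += served[h][t] <= pulp.lpSum(repair[a][s] for s in T if s <= t)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in assets:
    print(a, "repaired at t =", [t for t in T if repair[a][t].value() == 1])
```

A real application would add the cascading-dependency structure between the networks; this skeleton only shows where crew limits and service constraints enter.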
11:00 | Impacts of Climate Change on Interdependent Critical Energy Infrastructure: Spotlight on the 2021 Texas Winter Storm and Energy Crisis PRESENTER: Ricardo Tavares da Costa ABSTRACT. In this presentation, we explore the impacts of climate change on critical energy infrastructure, focusing on direct and cascading effects across multiple dimensions of the energy sector. The 2021 US winter storm and Texas energy crisis demonstrates how a particularly extreme weather event had severe consequences, exacerbated by systems' interdependencies, in a region unaccustomed and unprepared to respond to such effects. As we examine this case, we will argue that the non-stationarity introduced by climate change, in the form of both meteorological variability and extreme weather, poses a significant challenge to the resilience of energy systems, in particular as we transition towards more weather-dependent energy generation. This challenge must be mitigated if we are to harness the full potential of renewable energy. |
11:15 | Optimizing resilient net-zero energy systems under climate uncertainties: a new methodological perspective PRESENTER: Enrico Ampellio ABSTRACT. The ongoing transition pathway towards sustainable energy leads to growing penetration of renewable resources, solar and wind in particular. Therefore, energy security will be more exposed to the risks deriving from uncontrollable natural phenomena, such as fluctuating and extreme weather events fueled by climate change. In this context, designing energy systems that are adequate and resilient to climate-related uncertainties becomes paramount. Sector-coupled and distributed energy systems are commonly modeled as Mixed-Integer Linear Programming (MILP) problems with granular temporal and spatial resolution, solved for cost-effective planning and operations under constraints. Due to the computational burden, uncertainty quantification is usually scenario-based and limited to robust approaches. Statistical metamodels like Polynomial Chaos Expansion (PCE) remain affordable only for a few parameters and restricted decision options. To avoid oversimplifications while maintaining tractability and insightfulness, fast-scaling yet accurate methods are needed. A new perspective enables trustworthy decision-making in the presence of complex uncertainties: system-informed high-dimensional mapping of variability as a function of both design options and parameters. To fulfill this ambitious task, a multi-level method based on Kriging adaptive surrogates has been developed. It is applied to optimize the net-zero transition pathway of the European energy system under any climate and weather conditions. Irradiation, precipitation, and wind effects are combined on all scales to encompass the full range of possible realizations, according to historical data and future projections. Thanks to the metamodel, results are drawn considering millions of combinations among climate-induced weather events over a multi-year timeframe. |
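As a rough illustration of the surrogate idea only (the authors' method is a multi-level, adaptive Kriging scheme; this is neither), the sketch below fits a Gaussian-process metamodel to a few expensive model evaluations and then queries it cheaply over many climate realizations. The objective function and dimensions are placeholders.

```python
# Kriging (Gaussian process) surrogate over design and climate parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expensive_system_cost(x):
    # stand-in for one MILP solve: x = (storage size, mean wind factor)
    return x[0] ** 2 + np.sin(3 * x[1]) + 0.1 * x[0] * x[1]

X_train = rng.uniform(0, 1, size=(40, 2))       # sampled design/climate points
y_train = np.apply_along_axis(expensive_system_cost, 1, X_train)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# cheap evaluation over many climate realizations for one fixed design (0.5)
X_query = np.column_stack([np.full(10_000, 0.5), rng.uniform(0, 1, 10_000)])
mean, std = gp.predict(X_query, return_std=True)
print(f"surrogate cost over climate draws: {mean.mean():.3f} +/- {std.mean():.3f}")
```

Adaptivity (adding training points where predictive variance is largest) and the multi-level coupling to the MILP are the parts this toy omits.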
11:30 | Assessing Natural Gas Pipeline Vulnerabilities to Floods: Exploring Hydrological Behavior and NaTech Risk PRESENTER: Francisco Filipe C. L. Viana ABSTRACT. Climate change is intensifying precipitation levels, leading to more frequent and severe hydrological events that pose significant risks to critical infrastructure, particularly natural gas pipelines. These pipelines are highly vulnerable to flooding, which can trigger Natural Hazard-Induced Technological Disasters (NaTech) by causing material releases and operational disruptions. While some studies have addressed pipeline vulnerabilities, this research focuses on the hydrological behavior of floods to better assess the hazards and risks they pose to pipeline systems. This study proposes a systematic framework that combines hydrological analysis with technical assessments of pipeline vulnerability, aiming to capture the complex interactions between flood dynamics and pipeline infrastructure. By incorporating probabilistic and stochastic modeling, the framework will provide a deeper understanding of flood risks, specifically how factors like flood intensity and duration impact pipeline integrity. This approach will enhance risk assessment strategies by considering both the direct and indirect consequences of flood events. A pilot project in Brazil will apply the proposed model to real-world flood scenarios, validating its effectiveness in assessing vulnerabilities and guiding risk-based decision-making. The study aims to offer practical insights for improving the integrity of natural gas pipelines in the face of increasingly frequent climate-related threats. This research highlights the importance of addressing the specific hydrological behaviors of floods to enhance the preparedness and risk management of critical infrastructure systems. |
11:45 | Resilience to Drought in the Aegean Sea insular districts: Factors that affect its management PRESENTER: Zoe Nivolianitou ABSTRACT. Drought and climate change are closely interconnected: climate change shifts weather patterns, increasing the frequency and severity of droughts in many regions. Rising temperatures enhance evaporation rates, reduce soil moisture, and affect water supplies, altering precipitation patterns in ways that result in prolonged dry spells, impacting agriculture, water resources, and ecosystems. This is particularly true in the small and arid Aegean insular districts, which have always had to manage drought risk. As droughts become more intense, the vulnerabilities of insular communities are amplified, threatening their sustainability and that of the hydrological systems with which they interact. The resulting water scarcity can lead to food insecurity, economic challenges, and social unrest. On this ground, the authors of this paper argue that resilience-building strategies at the local level should be based on systemic thinking, taking into account impact feedbacks. Social-Hydrological Systems (SHSs) (Sapountzaki and Daskalakis, 2016), a special category of SESs, provide a fit-for-purpose spatiotemporal unit and building block for local resilience plans/strategies against drought. SHSs consist of social, institutional, water-works, energy, and hydrological subsystems that mutually interact and coevolve. Based on their recent research into drought resilience plans for insular SHSs, the authors emphasize specific interconnected elements (e.g. Risk-Informed Interventions, or Data Accessibility) to address their existing resilience potential. This is achieved through a specially developed questionnaire gathering feedback from local authorities, water supply companies, and/or water consumers' associations. The collected responses point to a list of crucial factors for creating comprehensive and effective drought resilience strategies and for deciding on the most appropriate interventions/actions on SHSs' components, those that eliminate negative systemic repercussions. REFERENCES Sapountzaki, K. & Daskalakis, I. (2016) "Transboundary resilience: The case of social-hydrological systems facing water scarcity or drought", Journal of Risk Research 19(7), 829-846. |
10:45 | Resilience Engineering - Theoretical and Practical Reflections on a 20-Year Journey PRESENTER: Ivonne Herrera ABSTRACT. In 2004, within Resilience Engineering, resilience was understood as the ability of a system to sustain required system function prior to, during, and in the aftermath of an adverse event. By 2024, this understanding had evolved to view resilience as a "verb" rather than a property, related to the ability to perform under varying conditions and to respond to both disturbances and opportunities. Perceptions of what resilience is, what it does, what it applies to, and how it can be fostered have diversified across disciplines, application domains, and communities of practice. The definition has widened and is nowadays the subject of practitioners' work in different industries, academic studies, and policy initiatives. For example, the EU strategic plan for research and innovation 2025-2027 stresses the importance of ensuring resilience in the face of various risks and crises, and the European directive for critical infrastructures addresses the resilience of critical entities covering 10 sectors. This paper critically discusses the challenges and opportunities that arise from the wide application of the term "resilience", particularly within the resilience engineering community. Then, through a focused literature review, experience from research projects, and other initiatives, the evolution and contribution of resilience engineering to safety science are mapped. The paper further investigates fundamentals, concepts, methods, and practical applications. The main objective is to provide a critical overview of progress achieved and challenges in terms of impact on both theory and practice. Based on the discussion, general recommendations for future research and practice will be provided. Here is where we see resilience engineering's contribution: we cannot be certain about what the future brings, but we can be certain of more uncertainty, turbulence, new opportunities, events happening at diverse scales and affecting each other, continuous change, and limited resources. We see an urgent need to revise, reframe, synchronise, and invest in adaptive capacities. |
11:00 | Co-producing knowledge for uncertain futures: Bayesian Networks in participatory environmental risk management and resilience-building PRESENTER: Annukka Lehikoinen ABSTRACT. Effective strategic environmental risk management must address uncertainty and complexity, necessitating advanced methods to support scenario development with incomplete knowledge. This talk introduces a participatory modeling approach using Bayesian Networks (BNs) to build understanding of the connections between acute socio-environmental disruption events and their long-term strategic consequences, ultimately enhancing the resilience of municipalities. The approach is demonstrated through a case study building on a chemical transportation accident scenario in an urban environment. In collaboration with city representatives, we co-designed a locally relevant starting-point scenario and provided evidence-based inputs as a BN model. Participants were then guided to extend the causal prognosis from acute impacts to the city’s long-term strategic goals, allowing them to create as rich a scenario space as they desired. Their task, facilitated by the researchers, was to formulate causal pathways connecting acute harm variables to the indicator metrics representing the city's long-term strategic goals. An algorithm was developed to translate the participants’ mental model into a computational BN, which provided them with various possible situational pictures as well as diagnostic insights into potential leverage points for effective strategic risk management. Through collective deliberation of the system's analytical outputs provided by the BN, participants identified realistic resilience-building actions, including anticipatory pre-accident measures, as well as acute-phase and post-incident interventions. Our findings highlight the potential of participatory BN modeling as a valuable tool for improving risk management strategies. We argue that a prognostically co-produced Bayesian network (moving from causes to effects) and its diagnostic use (inferring from effects to causes) can bring to light what we call 'reflexive unknowns': observations that could only be articulated due to the emergent systemic mechanisms, which would have remained unknown without the BN model and the collective work involved in constructing it. |
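A minimal sketch of the prognostic and diagnostic queries described, using the pgmpy library. The three variables and all probabilities below are illustrative stand-ins, not values from the case study.

```python
# Prognostic (cause -> effect) and diagnostic (effect -> cause) BN queries.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Spill", "AcuteHarm"), ("AcuteHarm", "StrategicGoal")])
model.add_cpds(
    TabularCPD("Spill", 2, [[0.95], [0.05]]),                 # 0 = no, 1 = yes
    TabularCPD("AcuteHarm", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["Spill"], evidence_card=[2]),
    TabularCPD("StrategicGoal", 2, [[0.1, 0.6], [0.9, 0.4]],  # 1 = goal met
               evidence=["AcuteHarm"], evidence_card=[2]),
)
model.check_model()

infer = VariableElimination(model)
# prognostic: given the accident happened, what happens to the long-term goal?
print(infer.query(["StrategicGoal"], evidence={"Spill": 1}))
# diagnostic: the goal was missed -- how likely is acute harm the reason?
print(infer.query(["AcuteHarm"], evidence={"StrategicGoal": 0}))
```

In the workshop setting described, the participants' mental model supplies the graph structure, and the translation algorithm turns it into such a computational network.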
11:15 | Systemic Risk Analysis and Resilience Enhancement: A Network-Based Model for HILP Events in Venice PRESENTER: Margherita Maraschini ABSTRACT. While global interdependence has improved living standards, it has also increased vulnerability to systemic failures: High-Impact Low-Probability (HILP) events are becoming more likely as interconnected societal-ecological systems enable cascading risk dynamics that amplify the effects of initial triggers. Traditional risk-based approaches struggle to account for the complex interactions of such phenomena, necessitating a holistic methodology to identify cross-sectoral vulnerabilities and enhance resilience. The AGILE project addresses this by developing a systemic, risk-agnostic framework to understand and manage HILP events. The proposed methodology is structured into three progressively detailed tiers. The first tier provides a general description of the systems, mapping the critical functions and their interconnections, as well as creating a guideline framework for table-top exercises. The second tier focuses on identifying interdependencies and feedback loops between critical functions to assess single points of failure. Lastly, the third tier creates a network model able to analyse systemic performance and identify cascading dynamics. This paper focuses on the application of the methodology to the metropolitan city of Venice. Venice is modelled as a network where the nodes are transportation and communication infrastructure, ecosystems, households, and economic activities, and the links represent the system dependencies. Quantifying the links presents significant challenges, necessitating the integration of available data, expert knowledge, and, where feasible, machine learning and artificial intelligence techniques. By correlating graph metrics with risk variables, the network analysis highlights critical elements that could amplify system failure (e.g., the authority and closeness of a node correspond to the exposure and vulnerability of its associated critical function). Simulations are employed to model how risks might spread across the network, providing valuable insights into potential impacts and strategies for strengthening resilience. Ongoing efforts focus on defining and refining Venice's systemic network, aiming to better understand and mitigate vulnerabilities to bolster urban resilience against future HILP events. |
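For readers unfamiliar with the graph metrics mentioned (closeness as a proxy for exposure, authority for how depended-upon a function is), a toy illustration with networkx follows; the nodes and links are invented and do not reflect the Venice model.

```python
# Toy dependency network: closeness and HITS authority as criticality proxies.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("power", "water"), ("power", "telecom"), ("telecom", "port_logistics"),
    ("water", "households"), ("power", "households"),
    ("transport", "port_logistics"),
])

closeness = nx.closeness_centrality(G)     # proxy for a function's exposure
hubs, authorities = nx.hits(G)             # authority: heavily depended-upon nodes
for node in G:
    print(f"{node:15s} closeness={closeness[node]:.2f} "
          f"authority={authorities[node]:.2f}")
```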
11:30 | The role of interdependent contexts in accident progression PRESENTER: Yifei Lin ABSTRACT. Understanding the level of independence of risk controls in a system is essential when conducting risk assessments (RAs). RAs are often influenced by multiple external and internal contexts. On 25 May 2021, a catastrophic failure occurred at Callide C power station in Queensland. The technical and organizationally focused investigation reports indicated that the failure resulted from a top-down flow of decisions made at the stakeholder level, including altered operational strategies, asset management practices, and cost cutting. These decisions affected the corporate and organizational levels, ultimately impacting how risk management was conducted, how risk assessments were performed, and how risk data was collected, stored, and monitored. The need to explicitly consider the multiple stakeholder, organizational, and informational contexts highlights the challenge of recognizing and integrating system interdependencies in risk management decision-making. We applied a network analysis and graph theory-based approach to the Callide Unit C4 accident reports to 1) visualize the accident by segmenting the reports into different events and linking them together to form a directed graph; 2) perform a constrained robustness analysis on this network to identify system vulnerabilities; and 3) illustrate the cyclical relationships between system components. We demonstrate the use of network analyses to better understand how context behaves as an influencing factor affecting interdependence among components and their controls, ultimately benefiting hazardous industries. |
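The constrained robustness analysis can be pictured as systematic node removal on the event graph: which events, if prevented, break every causal path to the failure? A toy sketch (event labels are illustrative, not taken from the Callide reports):

```python
# Node-removal robustness check on a directed accident-event graph.
import networkx as nx

G = nx.DiGraph([("cost_cutting", "deferred_maintenance"),
                ("deferred_maintenance", "degraded_asset"),
                ("altered_ops", "degraded_asset"),
                ("degraded_asset", "catastrophic_failure")])

for node in ["deferred_maintenance", "altered_ops", "degraded_asset"]:
    H = G.copy()
    H.remove_node(node)                    # "prevent" this event
    reachable = nx.has_path(H, "cost_cutting", "catastrophic_failure")
    print(f"prevent {node:21s} -> failure still reachable: {reachable}")
```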
11:45 | Resilience Assessment of Transportation Infrastructure in the Northwest Atlantic Corridor PRESENTER: José C. Matos ABSTRACT. This study conducts a resilience assessment of the northwest Atlantic corridor, with a particular focus on the road routes between Portugal and Spain. The transportation infrastructure in this region is frequently challenged by natural disruptions, including forest fires and slope collapses, which can lead to significant traffic disruptions. Utilizing mesoscopic simulation techniques, we model the impacts of these disruptions on road networks. Our simulation framework evaluates the robustness and adaptive capacity of the transportation routes under various scenarios of disruption. The results highlight key vulnerabilities and potential mitigation strategies, providing crucial insights for enhancing the resilience of the transport infrastructure. This research advances the understanding of disaster management and infrastructure resilience in transnational transportation networks, emphasizing the importance of robust infrastructure planning and risk assessment in mitigating the impacts of natural disruptions. |
10:45 | Advanced data augmentation method for SOH prediction in lithium-ion batteries within hybrid systems PRESENTER: Soufian Echabarri ABSTRACT. In recent years, data-driven models have emerged as a powerful tool for predicting the State of Health (SOH) of lithium-ion batteries, offering high accuracy and significantly reduced development times. However, in hybrid systems where the battery is often inactive while the fuel cell provides the majority of the power, the availability of battery data becomes severely limited. This data scarcity presents a critical obstacle to achieving reliable SOH predictions. To address this challenge, we propose a novel data augmentation approach that integrates a Time-series Generative Adversarial Network with a Gated Recurrent Unit to enhance data availability and improve prediction accuracy. The proposed approach is tested and validated on several real industrial datasets. A comparative study against conventional data augmentation methods is also conducted. The results consistently show that the proposed approach outperforms all competing methods, showcasing its superior capability in augmenting data for lithium-ion batteries. These findings highlight the effectiveness of our approach in enhancing predictive accuracy and robustness, making it highly suitable for real-world battery applications. |
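As a shape-level sketch only: a GRU-based sequence generator of the kind used inside TimeGAN-style augmenters. Dimensions and feature names are assumptions, and the full method additionally trains embedding, recovery, supervisor, and discriminator networks adversarially.

```python
# GRU generator producing synthetic battery time series from noise (PyTorch).
import torch
import torch.nn as nn

class GRUGenerator(nn.Module):
    def __init__(self, noise_dim=8, hidden_dim=32, feat_dim=3):
        super().__init__()
        self.gru = nn.GRU(noise_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, z):                   # z: (batch, seq_len, noise_dim)
        h, _ = self.gru(z)
        return torch.sigmoid(self.out(h))   # e.g. scaled voltage/current/temp

gen = GRUGenerator()
z = torch.randn(16, 100, 8)                 # 16 synthetic cycles, 100 steps each
synthetic = gen(z)                          # (16, 100, 3) augmented sequences
print(synthetic.shape)
```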
11:00 | Safety Analysis of Human Machine Interactions in Remotely Operated Maritime Autonomous Surface Ships PRESENTER: Muhammad Irsyad Hasbullah ABSTRACT. Commercial deployment of Maritime Autonomous Surface Ships (MASS) is on the verge of becoming reality. Remote control centres (RCCs) are being developed, which involve human operators monitoring and controlling MASS operations. Human-Machine Interactions (HMI) impact MASS safety and robust decision-making, while technological advancement, along with emergencies such as connectivity loss, human errors, and algorithm failures, introduces additional risks. The study aims at developing a systematic method for HMI mapping and appraisal, combining System-Theoretic Process Analysis (STPA) with Model-Based Systems Engineering (MBSE), to minimise human error and risks in remote operations. A reference system that consists of human operators, interfaces, autonomous sub-systems, and a navigation system is analysed considering the case study of a remotely controlled vessel supervised by the RCC operator. The results reveal potential safety issues due to faults or errors and their causes. For instance, human error caused by distraction and inadequate training leads to ineffective decision-making. This study establishes the foundation for developing human-informed design solutions and robust remote operating systems for future RCCs for autonomous ships, ensuring seamless and efficient interactions between humans and machines. |
11:15 | A Domino Effect-Driven Knowledge Graph for Large Language Model-Based Risk Identification in Natural Gas Pipeline Operations PRESENTER: Mingyuan Wu ABSTRACT. In light of the hallucination issue frequently encountered by large language models (LLMs) in risk identification, a domino effect-based approach is introduced for constructing knowledge graphs that represent risk events, contributing factors, and corresponding mitigation strategies. These knowledge graphs serve as external knowledge bases for LLMs, supported by carefully designed prompts to enhance retrieval and reasoning capabilities. A System-Theoretic Process Analysis (STPA) of natural gas pipeline operations was employed as a case study to evaluate the effectiveness of this method in improving the risk identification performance of LLMs. The findings indicate that the knowledge graph-based retrieval-augmented generation (RAG) approach significantly reduces the occurrence of hallucinations in LLM outputs, thereby increasing the precision of STPA. This approach presents a novel avenue for utilizing LLMs in risk identification tasks for complex industrial systems. |
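The retrieval step of such a knowledge-graph RAG pipeline can be sketched with a toy triple store; the triples, matching rule, and prompt template below are illustrative, not the paper's implementation.

```python
# Toy knowledge-graph retrieval grounding an LLM prompt (domino-chain triples).
triples = [
    ("third-party excavation", "causes", "pipeline puncture"),
    ("pipeline puncture", "escalates_to", "gas release"),
    ("gas release", "mitigated_by", "emergency shutdown valve"),
]

def retrieve(terms, kg):
    # return every triple mentioning any query term (exact-match toy matcher)
    return [t for t in kg if any(term in t for term in terms)]

context = retrieve(["gas release", "pipeline puncture"], triples)
prompt = ("Using only the facts below, list unsafe control actions:\n"
          + "\n".join(f"- {s} {p} {o}" for s, p, o in context)
          + "\nScenario: excavation work near a natural gas pipeline.")
print(prompt)   # this grounded prompt would then be sent to the LLM
```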
11:30 | Condition Monitoring on High-Voltage Circuit Breakers with Explainable AI Guided Fault Diagnostics PRESENTER: Chi-Ching Hsu ABSTRACT. High-voltage circuit breakers (CBs) are key assets for ensuring safety and reliability in power transmission systems. Therefore, monitoring their condition is essential to verify their functionality. Previous research has explored fault detection and diagnostics of CBs based on vibration and acoustic signals, framing the problem as a supervised learning task, typically relying on artificially introduced faults with known ground-truth labels. In real-world scenarios, however, fault types are typically unknown, making supervised learning impractical. To overcome this challenge, we propose a novel unsupervised CB fault detection and segmentation framework based on vibration signals, requiring only healthy data for algorithm development. This framework detects deviations from the healthy data distributions and segments faulty samples. Subsequently, faulty samples are further diagnosed with an explainable artificial intelligence (XAI) approach. We introduce diagnostics matrices derived from max pooling operations on attribution maps generated using Integrated Gradients. These matrices enhance the interpretability of fault segmentation results from unsupervised clustering, providing valuable insights into the distinguishing features of different fault conditions. The key contributions of this work include: first, proposing an unsupervised fault detection and segmentation framework with only healthy data required during training; and second, developing an unsupervised XAI-guided fault diagnostics approach to assist domain experts in identifying potential fault types or faulty components without the need of ground-truth labels. A case study using an experimental dataset from a high-voltage CB demonstrates the feasibility of the proposed approach. Vibration and acoustic data were collected in a controlled laboratory environment under both healthy and various faulty conditions including different damper kinematic viscosity and spring tension conditions, showcasing the method’s effectiveness in fault detection, segmentation, and diagnostics, where fault types can be inferred from the diagnostics matrices using XAI approach. |
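A compact sketch of the attribution-to-diagnostics step using Captum's Integrated Gradients; the model, signal length, and pooling size are placeholder assumptions, not the authors' architecture.

```python
# Integrated Gradients attribution condensed into a coarse diagnostics vector.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(32, 3))               # 3 fault clusters from segmentation
    def forward(self, x):
        return self.net(x)

model = TinyCNN().eval()
signal = torch.randn(1, 1, 1024)            # one vibration segment

ig = IntegratedGradients(model)
attr = ig.attribute(signal, target=1)       # attribution toward cluster 1
# max pooling condenses the attribution map into a per-region diagnostics matrix
diagnostics = nn.functional.max_pool1d(attr.abs(), kernel_size=128)
print(diagnostics.squeeze())                # 8 values: most salient signal regions
```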
11:45 | The Rescue Terrain Exposure Scale PRESENTER: Håvard Mattingsdal ABSTRACT. Rescue missions, whether conducted on the ground, at sea, or by air, often involve patients located in hazardous terrain, each presenting unique challenges that can endanger rescue personnel. There is limited understanding of the frequency and extent of the risks faced by rescue personnel during such missions. Establishing a standardized approach for assessing and registering on-site risk exposure could aid in better risk mitigation and enhance safety for rescuers. To address this, we propose the Rescue Terrain Exposure Scale (RTES), a system that categorises rescue terrain into three categories - simple, challenging or complex - drawing on methods from the established Avalanche Terrain Exposure Scale. RTES evaluates technical terrain characteristics, required safety measures on-site, and risk severity as markers for assigning a single rating. The severity of risk is determined by the potential consequences of an uncontrolled or unwanted event occurring on-site. Simple terrain is characterised by low-risk severity, where it is safe to operate unaided (e.g. mountain terrain with a steepness less than 30°, a low avalanche risk or permissive water conditions). Challenging terrain presents a medium-risk severity, requiring active use of soft defenses as additional on-site safety measures (e.g. use of basic mountaineering skills in terrain with steepness greater than 30°, rescue swimming in semi permissive or swift water). Complex terrain entails high-risk severity and necessitates the use of both soft and hard defenses as additional safety measures (e.g. movement in mountain terrain with a steepness greater than 40° and attachment to an anchor or fixed line, or rescue swimming in hostile water). Standardized methods for registering rescue terrain exposure might contribute to improved safety for both professional and volunteer rescuers through 1) enhanced strength of knowledge in risk assessments of procedures and equipment, 2) support to decision makers when dimensioning rescue services and training requirements. |
10:45 | Condition-Based Maintenance for Large-Scale Fleets under Multiple Constraints: A Constrained MDP Model with Primal-Dual Solution PRESENTER: Zehui Xuan ABSTRACT. Managing maintenance activities for large-scale fleets, such as wind farms with numerous wind turbines, presents a significant challenge in condition-based maintenance. In addition to the curse of dimensionality inherent to optimizing dynamic decisions for large systems, prior research has primarily concentrated on individual modeling challenges, such as limited maintenance resources or overall system performance requirements, without fully addressing the need for a comprehensive solution that accounts for both dimensions. In this article, we propose a novel approach in the context of condition-based maintenance planning that integrates all three critical factors: system scale, resource limitations, and performance constraints. Specifically, we develop a constrained multi-agent Markov Decision Process (MDP) model to tackle the maintenance planning problem for a multi-component system, and we solve it using a Primal-Dual algorithm. The system includes more than 50 components with known transition dynamics. At each time step, the planner must decide whether to replace each component, balancing limited maintenance resources with stringent availability requirements. The goal is to find an optimal policy that minimizes the expected discounted maintenance cost while adhering to these constraints. Finally, we compare our method's performance against baseline approaches, demonstrating its ability to achieve superior trade-offs between cost and constraint satisfaction. |
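In standard notation (assumed here, not taken from the paper), the constrained MDP and its primal-dual treatment read:

```latex
\min_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty}\gamma^{t}\,c(s_t,a_t)\Big]
\quad\text{s.t.}\quad
\mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty}\gamma^{t}\,d_j(s_t,a_t)\Big]\le D_j,
\quad j=1,\dots,m,
```

```latex
L(\pi,\lambda)
= \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty}\gamma^{t}\big(c(s_t,a_t)
+ \textstyle\sum_{j}\lambda_j\,d_j(s_t,a_t)\big)\Big]
- \sum_{j}\lambda_j D_j,
\qquad
\lambda_j \leftarrow \max\!\big(0,\;\lambda_j+\eta\,(\hat{D}_j^{\pi}-D_j)\big),
```

where c is the maintenance cost, the d_j encode resource and availability constraints, and \hat{D}_j^{\pi} estimates the discounted constraint value under the current policy. A Primal-Dual algorithm alternates a policy step on the Lagrangian with this projected dual ascent.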
11:00 | UAV Swarm Coordination for Flood Area Coverage in Populated Regions using Reinforcement Learning PRESENTER: Sophia Schorer ABSTRACT. Flooding is one of the most prevalent natural disasters worldwide and is increasingly recognized as a consequence of climate change. Floods cause substantial economic damage and, moreover, endanger human lives. We present a Deep Reinforcement Learning-based approach, using a centralized Proximal Policy Optimization (PPO)-based agent to coordinate a UAV swarm for the systematic identification of locations with a high likelihood of human endangerment. The agent acts adaptively based on the real-time coverage state, which is crucial for effective inspections of affected areas under a time constraint. We incorporate flood locations and areas of interest—defined by damaged infrastructure—into the decision-making process. We also present a method for extracting relevant data from satellite imagery, based on a previous flood event in the Ahr Valley in Germany in 2021. Our results demonstrate increasing effectiveness in the coverage of diverse flood scenarios. Further advancements are needed before real-world deployment, but the collected data could ultimately be crucial for planning rescue operations and mitigating human risks, especially during the initial disaster response phase. In addition to optimizing coverage efficiency, we highlight key operational risk factors in UAV swarms, such as unpredictable environmental conditions, communication disruptions, and energy constraints, which are essential considerations for ensuring reliable UAV swarm performance in real-world flood scenarios. |
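A skeleton of the training setup (Stable-Baselines3 PPO on a Gymnasium environment). The coverage environment itself is the authors' contribution, so a built-in task stands in for it here; only the loop shape is meaningful.

```python
# Centralized PPO agent training skeleton (stand-in environment).
import gymnasium as gym
from stable_baselines3 import PPO

# a real implementation would register a custom environment whose observation
# is the swarm's real-time coverage state and whose reward favors covering
# high-endangerment cells before the time limit expires
env = gym.make("CartPole-v1")

model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)                  # centralized agent training

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)   # one coordination decision
```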
11:15 | Joint optimization of short-term scheduling and maintenance for microgrids via deep reinforcement learning PRESENTER: Jian Zhou ABSTRACT. Microgrids have attracted increasing attention, largely owing to their ability to operate independently and provide backup power to customers when main power grid outages occur. In practice, power generation and storage units within the microgrid are subjected to random shocks from the environment and to inherent degradation during operation. These can significantly damage the performance and reliability of the microgrid, increasing the risk that customers' electricity demand cannot be satisfied. Therefore, a framework that jointly optimizes microgrid short-term scheduling and maintenance measures is proposed. In this framework, proactive maintenance and short-term scheduling of units in the microgrid are leveraged, which facilitates the prevention of unit failures and improves power supply resilience. It is assumed that a microgrid remains operational as long as k out of a total of N units in the microgrid are functional. By optimizing unit maintenance tasks and adaptively adjusting unit operation according to real-time system conditions, the proposed framework helps to minimize power supply interruptions and the total costs associated with microgrid operation. This paper uses a deep reinforcement learning algorithm to solve this dynamic and stochastic problem. Based on real and synthetic data, a case study is conducted to demonstrate the effectiveness of the proposed framework in enhancing the reliability of the microgrid and power resilience under varying operational and environmental conditions. |
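For reference, the k-out-of-N assumption has a textbook closed form when units fail independently with identical availability p (a standard identity, not the paper's degradation model):

```latex
A_{\text{sys}} \;=\; \Pr(\text{at least } k \text{ of } N \text{ units functional})
\;=\; \sum_{i=k}^{N}\binom{N}{i}\,p^{i}(1-p)^{N-i}.
```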
11:30 | Towards an AI trustworthiness assessment framework for railway applications PRESENTER: Asma Ladj ABSTRACT. While artificial intelligence (AI) has the potential to significantly enhance the performance of railway transportation and mobility, ensuring its trustworthiness and safety remains a serious challenge. Indeed, the deployment of AI systems in general, and in railways in particular, gives rise to multidisciplinary concerns spanning ethical, social, economic, and technical dimensions. This paper aims to establish a comprehensive framework for assessing AI trustworthiness in the railway sector in the light of the EU AI Act. Specifically, it explores the parallels between railway risk assessment and AI trustworthiness assessment, adapting the definition of risk to encompass AI-related risks and extending the analysis activities to consider additional trustworthiness attributes. |
10:45 | Maximum likelihood estimation of probability for impact resistance of safety guards PRESENTER: Fabio Pera ABSTRACT. The article examines the testing method described in Annex B of the ISO 14120 standard for assessing the impact resistance of machine guards. The typical testing practice involves firing a single shot from a ballistic cannon at the guard, with sensors measuring the projectile's velocity before and after impact. However, meeting the guidelines for this test presents challenges, including the difficulty of identifying and hitting the "weakest point" of the guard and ensuring the projectile strikes the surface perpendicularly. The key findings, derived from a five-year collaboration between two research institutions, focus on analyzing uncertainties inherent in these standardized testing methods. Two statistical distributions, logistic and Gaussian, are employed to process the data. The traditional approach of creating a histogram before calculating the cumulative distribution function (CDF) was found inadequate because it reduces the number of data points available for accurate curve fitting. To improve this process, the Probit method, already used in the AEP 2920-2016 standard, is introduced as a more effective regression technique for the Gaussian distribution. A comparison is made between results from different regressions, focusing on discrepancies in the tails of the curves, where the divergence between models becomes more significant. The article also discusses methods for estimating the statistical dispersion of test results. Specific examples of trials carried out at the INAIL laboratories in Monte Porzio Catone are provided, showing the application of these methods in practice. These experiments were part of a joint research initiative between the University of Perugia and the Department of Technological Innovations and Safety of Plants, Products, and Anthropic Settlements (DIT). By presenting this research, the article seeks to address the practical limitations of standardized tests and suggests alternative methods to improve the accuracy and reliability of machine guard impact resistance evaluations. |
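The abstract's point about fitting the CDF directly rather than binning into a histogram can be made concrete: a maximum-likelihood fit of a Gaussian CDF (the Probit model) to binary perforation outcomes. The velocities and outcomes below are invented toy data.

```python
# MLE fit of P(perforation | v) = Phi((v - mu) / sigma) to binary test outcomes.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

v = np.array([100, 110, 120, 125, 130, 135, 140, 150, 160, 170.0])  # m/s
y = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1])      # 1 = perforated

def neg_log_likelihood(theta):
    mu, log_sigma = theta
    p = norm.cdf((v - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)           # numerical safety at the tails
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[130.0, np.log(10.0)], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"V50 estimate: {mu_hat:.1f} m/s, dispersion: {sigma_hat:.1f} m/s")
```

Swapping the Gaussian CDF for a logistic CDF gives the competing model discussed, and the difference shows up mostly in the tails, as the abstract notes.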
11:00 | Safety of machinery - Proposals for an improved workpiece clamping in machine tools and its implementation in product safety standards ABSTRACT. The precise merging of the clamped tool and workpiece is crucial to ensure precision and reproducibility in production. Modern CNC (Computerised Numerical Control) technologies enable automated control of these machines, allowing complex parts to be manufactured with high precision and efficiency. This process is critical in various industries, including automotive, aerospace, medical and many others, where high-precision metal parts are required. The safety of workpiece clamping is of crucial importance in metalworking and other manufacturing processes. Improper workpiece clamping can lead to dangerous situations that can cause both personal injury and property damage. Various safety precautions are taken to ensure the safety of workpiece clamping, including the use of safe clamping devices, regular maintenance and inspection, operator training and adherence to safety guidelines. CNC machines may also be equipped with sensors and safety devices to detect unusual vibrations or movements and automatically shut down the machine in the event of a problem. The reliability of workpiece clamping when turning without support elements is particularly dependent on the clamping force applied and the process loads on the lathe chuck. It is therefore essential to correctly determine the process-dependent minimum clamping force required and to ensure that it is maintained during the process. The latter is addressed by innovative sensory jaws (‘iJaws’, Röhm GmbH), which make it possible to record and control the clamping force in the clamping and turning process. Instructive regulations exist for determining the clamping force of jaw chucks. As part of VDW research projects, the interactions between the process, workpiece and clamping system were analysed in more detail with the help of innovative measurement concepts using sensory jaws, and improved instructional specifications were derived. |
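One quantitative reason the minimum clamping force must be monitored in-process is centrifugal unloading of the chuck jaws at speed. A common first-order relation (textbook mechanics; notation assumed here, not taken from the VDW projects) is:

```latex
F_{\text{eff}}(n) \;\approx\; F_{0} \;-\; m_{\text{jaw}}\, r_{c}\,\omega^{2},
\qquad \omega = \frac{2\pi n}{60},
```

where F_0 is the static clamping force, m_jaw the jaw mass, r_c the radius of the jaw centre of gravity, and n the spindle speed in rpm. Sensory jaws such as the iJaws mentioned above allow F_eff to be measured directly during turning rather than estimated.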
11:15 | Numerical investigation of bending-critical eigenmodes and stable operating conditions in the utilization of slim tool extensions: The influence of resonance and nutation phenomena ABSTRACT. State-of-the-art machining of complex integral workpieces requires the utilization of slim tool extensions (STE) in machine tools. Previous experimental investigations indicate that resonant excitation of STE at their bending-critical eigenfrequency can result in complex plastic deformation and subsequent failure. The observed mode of deformation does not necessarily correspond to the actual excitation. However, the significantly increased eccentricity of STE mass causes an abrupt increase in the accumulated kinetic energy of potential fragments released in the event of a resonance catastrophe. This exceeds the retention capacity of standard safety guards by orders of magnitude. A novel approach to resolving this issue is to induce defined failure in STE by means of constructive measures. Therefore, preliminary knowledge of the exact loading conditions is essential. This paper presents findings from numerical analyses with specific focus on the interrelationship between subcritical and supercritical excitation of STE, structural damping, and the rotodynamic phenomenon of synchronous and asynchronous nutation. It is proposed that the interaction of these phenomena has critical influence on the transition from stable to unstable operating conditions. Modal analyses and frequency response analyses on finite element models are conducted to determine bending-critical eigenmodes along with their respective eigenfrequencies of STE and tool holders. The insights gained throughout these analyses are then applied to a transient simulation of run-up experiments, thus facilitating a more profound comprehension of the system behavior in the range of bending-critical eigenmodes and the associated limits of safe operation. The results obtained provide substantial evidence that, beyond the mere phenomenon of resonance, failure of STE due to bending-critical excitation is caused by a complex resulting state of stress that cannot be attributed to mere bending. Furthermore, it could be demonstrated that a distinct relationship between this state of stress and geometric and material influences can be established. |
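The modal analyses referred to solve the standard undamped eigenvalue problem, while the nutation phenomena enter through the speed-dependent gyroscopic term of rotor dynamics (standard forms; notation assumed):

```latex
\big(\mathbf{K}-\omega_i^{2}\mathbf{M}\big)\boldsymbol{\varphi}_i=\mathbf{0},
\qquad
\mathbf{M}\ddot{\mathbf{q}}+\big(\mathbf{C}+\Omega\,\mathbf{G}\big)\dot{\mathbf{q}}
+\mathbf{K}\mathbf{q}=\mathbf{f}(t),
```

where the gyroscopic matrix G, scaled by the spin speed Ω, splits each bending eigenfrequency into forward and backward whirl branches; this speed dependence is what couples synchronous and asynchronous nutation to the bending-critical modes discussed above.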
11:30 | Derivation of an Updated Aging Curve for Polycarbonate Vision Panels Used as Safeguards in Machine Tools ABSTRACT. Safety regulations for machine tools require adequate protection of operators from potential hazards, such as ejected workpiece or tool fragments. To meet these requirements, machine tool manufacturers implement different safety measures. A key component of these measures are polycarbonate (PC) vision panels, which provide essential protection while allowing the operator to observe the machining process. Newly manufactured PC is characterized by its high ductility, allowing it to absorb impacts with considerable kinetic energy from ejected parts in the event of an accident. However, exposure to cooling lubricant (CL) water mixtures causes the onset of specific degradation processes, which result in the accelerated aging of the material. The resulting degradation progressively reduces the ductility of PC, limiting its long-term effectiveness as a safeguard. This behavior is accounted for in the design of PC vision panels by using an aging curve defined in international standards. Nevertheless, the aging curve has become outdated due to the implementation of the REACH regulation, which has caused a substantial change in the composition of CL. This paper provides an in-depth analysis of the aging behavior of PC exposed to a REACH-compliant, mineral-oil-based, and boron-free CL. Three separate aging experiments with elevated temperatures and CL concentrations were performed. The PC specimens were analyzed by means of FTIR spectroscopy, GPC, and impact tests. The primary aging mechanisms identified are hydrolysis and aminolysis. Additionally, a new aging curve is proposed, mapping the experimental results to common machine tool conditions by applying the Arrhenius equation and accounting for first-order reaction kinetics. The aging curve allows, for the first time, the aging of PC vision panels exposed to REACH-compliant CL to be taken into account. Furthermore, the established procedure serves as a framework for developing additional aging curves for other CLs, such as ester-based formulations. |
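The Arrhenius mapping mentioned takes the standard form (a textbook relation; the activation energy and fitted parameters are the paper's own and are not reproduced here):

```latex
k(T)=A\,\exp\!\left(-\frac{E_a}{R\,T}\right),
\qquad
\mathrm{AF}=\frac{k(T_{\text{test}})}{k(T_{\text{use}})}
=\exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_{\text{use}}}-\frac{1}{T_{\text{test}}}\right)\right],
```

so an exposure time t at elevated test temperature corresponds to roughly AF · t under machine tool conditions, consistent with the first-order kinetics the abstract assumes.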
11:45 | AI-Driven Safety Systems: Reducing Risk in Complex Workplaces and High-Stakes Tasks PRESENTER: Francesco Di Paco ABSTRACT. Complex workspaces involving workers, machines, and tools often present residual risks due to both intentional and accidental interactions between these elements. Common incidents include operator misuse of machinery, such as bypassing safety features or removing protective devices. Moreover, the absence of specific auxiliary equipment can further increase hazards. Traditional safety approaches typically rely on written warnings in manuals, with limited technical solutions to address these risks. This study employs a prototype machine assembly, combining a robot and a multimodal lathe, to evaluate and reduce risks in a complex workspace. The area is divided into predefined zones based on tasks and operator presence, allowing for targeted risk assessments before and after integrating IoT sensors such as RFID tags and Computer Vision (CV). These sensors, coupled with Artificial Intelligence (AI), are incorporated into standard safety systems compliant with the Machinery Directive. The research demonstrates how deploying sensors to monitor specific tasks can significantly reduce known hazards. Multiple sensors with different technologies are used to prevent Common Cause Failures (CCF), creating a more reliable and redundant safety system. By integrating traditional safety measures with AI-enhanced sensors, the study offers a proactive solution to mitigate operator errors and machine malfunctions, leading to a safer working environment. This research is part of the ongoing "Sistema smart integrato basato sull'intelligenza artificiale per la gestione della sicurezza degli operatori in processi di produzione (AISAFETY) - BRiC 2022 ID 40" project, co-funded by INAIL (Italy). |
10:45 | Addressing Systemic Risks for National Critical Functions PRESENTER: Jonas Johansson ABSTRACT. The resilience of national critical functions is fundamentally rooted in the safety and security of a large set of interdependent critical infrastructures that supply vital services to society. In a changing climate and a radically shifting European security context, a myriad of threats and hazards have the potential to disrupt services and lead to severe societal consequences. As these infrastructures are highly interdependent, both functionally and geographically, disruptions can lead to cascading effects across infrastructure sectors and geographical borders. National risk and vulnerability assessment and governance efforts for national critical functions must, however, address these challenges in a data-scarce and methodology-sparse environment to provide actionable recommendations to policy- and decision-makers. In a European context, such needs for assessments are clearly outlined, for example, in the Critical Entities Resilience Directive (EU 2022/2557), the requirements for National Risk Assessment for Disaster Risk Management in the EU (EU 1313/2013), and the Flood Risk Directive (EU 2007/60). There are hence fundamental research gaps that must be addressed for nations to advance their ability to effectively manage critical infrastructure risk and resilience. The aim of the oral presentation is to outline these fundamental challenges and suggest promising and actionable approaches to address these gaps. The research is rooted in a collaborative research project between the Department of Homeland Security (DHS), USA, and the Swedish Civil Contingencies Agency (MSB). Here, a functional network approach is suggested to support cross-sector management and governance efforts, providing deeper dependency insights and revealing critical cascading impacts that can occur due to the interdependent nature of national critical functions. This perspective complements and draws upon more traditional research traditions related to critical infrastructure, supply chain risk management, and security of supply, towards improving nations' ability to manage systemic risks of interdependent national critical functions. |
11:00 | Using standards in risk management regulations: a Swedish case study ABSTRACT. Risk management in land-use planning often boils down to practical decision-making situations, such as deciding on safe distances between residential buildings and other types of developments and dangerous goods transportation routes or hazardous industries. The practical approaches to managing risk in land-use planning vary across countries, ranging from prescriptive regulations on managing risk to non-standardized approaches that require risks to be managed without detailing how to do so in practice. The current paper aims to contribute to the discussion on which regulatory approach is preferred by applying the current knowledge base on using standards in risk management to a specific case of recently published government recommendations for managing risk in land-use planning in Sweden. The approach of this paper is to compare the Swedish regulator's recent recommendations with a set of key aspects that should be considered when assessing the use of standards in risk management regulations. It is concluded that a hard regulatory approach is primarily favorable for non-complex land-use planning decision situations where conditions are well-known. A soft approach is more beneficial for complex decision situations characterized by significant uncertainties and an unfamiliar risk canvas. Reviewing the Swedish guideline, it can be concluded that the soft, output standard-type guideline intended for use in all land-use planning situations does not incorporate the current body of knowledge in the field. |
11:15 | Risk Beyond Doctrine - An Empirical Study of NATO's Risk Management Integration in Planning ABSTRACT. This study builds on previous research into the integration of risk management within NATO's authoritative planning and decision-making documents. This paper focuses on how risk management activities are incorporated or omitted in NATO headquarters' planning processes during large-scale NATO exercises. Despite risk having been incorporated in different areas of NATO doctrine and the introduction of the ISO 31000 risk management framework, there are inconsistencies in how NATO documents provide guidance on risk management as part of planning processes. The extent to which this has practical implications warrants further investigation. This study draws on data from three NATO headquarters involved in large-scale exercises. Through the privileged access of an insider researcher, the study includes analysis of a dataset consisting of 16 internal instructional documents, eight interviews, and extensive observational data captured during the planning processes. This dataset offers a comprehensive view of how risk management was incorporated in planning within those headquarters, including explicit internal instructions, varied perceptions of risk, and first-hand observations of how risk was incorporated into planning from multiple perspectives. The initial review of the data reveals variance in how risk is understood, analysed, and managed as part of the planning of military operations. Not only is there variance between the different headquarters' approaches to risk; there is also variance in the approach to risk within each headquarters. All of this can be challenging when these various perspectives are tasked to communicate their identified and analysed "risks" into the same planning process. The findings demonstrate challenges in achieving coherency of risk management across large multinational organizations. Further research should investigate whether there are similar challenges in NATO's decision-making processes and how this affects executive-level decision-making. |
11:30 | Using Portfolio Decision Analysis to Select Reinforcement Actions in Infrastructure Networks PRESENTER: Joaquín de la Barra ABSTRACT. Critical infrastructure networks, such as railway networks, provide essential services whose continuity must be secured. Towards this end, we combine multi-criteria portfolio decision analysis with Probabilistic Risk Assessment (PRA) to construct portfolios of reinforcement actions that contribute cost-efficiently to the attainment of objectives that represent the network's services. Our model admits a range of assumptions about the relative importance of these objectives through incomplete information about the weights associated with the corresponding criteria. It also helps identify which portfolios of reinforcement actions perform best with regard to these objectives at different budget levels. We illustrate our model with a study on the reinforcement of switches at a railway station, which connects several origin-destination pairs with different volumes of planned traffic. If one or more switches are disrupted, some connections may be lost, and the corresponding traffic volume will be affected. We formulate an additive multi-criteria utility function such that the weight of each criterion reflects the planned traffic volume for the corresponding connection. PRA algorithms are used to assess the reliability of these connections. The results help identify the switches where the reinforcement actions should be implemented when the aim is to maximize the station's performance, as measured by the expected enabled traffic volume between the origin-destination pairs. |
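The portfolio logic described in this abstract can be illustrated with a minimal sketch: an additive value function whose weights are planned traffic volumes, connection reliabilities computed from switch failure probabilities (each connection treated as a series system), and exhaustive enumeration of reinforcement portfolios under a budget. All numbers, switch names, and the series-system assumption below are illustrative, not the authors' model.

```python
from itertools import combinations

# Hypothetical switch data: failure probability, reinforced failure
# probability, and reinforcement cost (all invented).
p_fail = {"S1": 0.10, "S2": 0.05, "S3": 0.08, "S4": 0.12}
p_fail_reinforced = {s: p / 5 for s, p in p_fail.items()}
cost = {"S1": 3.0, "S2": 2.0, "S3": 2.5, "S4": 1.5}

# Each origin-destination connection needs all its switches (series system);
# criterion weights reflect planned traffic volumes.
connections = {"A-B": ["S1", "S2"], "A-C": ["S2", "S3"], "B-C": ["S3", "S4"]}
traffic = {"A-B": 120.0, "A-C": 80.0, "B-C": 40.0}

def expected_enabled_traffic(reinforced):
    """Additive multi-criteria value: traffic volume times connection reliability."""
    total = 0.0
    for od, switches in connections.items():
        rel = 1.0
        for s in switches:
            p = p_fail_reinforced[s] if s in reinforced else p_fail[s]
            rel *= 1.0 - p
        total += traffic[od] * rel
    return total

budget = 4.0
feasible = (set(c) for r in range(len(cost) + 1)
            for c in combinations(cost, r)
            if sum(cost[s] for s in c) <= budget)
best = max(feasible, key=expected_enabled_traffic)
print(best, round(expected_enabled_traffic(best), 1))
```

Re-running the enumeration at several budget levels reproduces, in miniature, the paper's question of which portfolios perform best as the budget grows.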
11:45 | Operational disaster management exercises and the paradox of un/securitisation PRESENTER: Stefan Kaufmann ABSTRACT. With increasing regularity, several European countries as well as the European Union have since 2004 organised large-scale operational exercises for transnational cooperation on disaster management. The scenarios of these exercises include terrorist attacks, often involving chemical or biological agents, major accidents in transport and industry, extreme weather, floods, pandemics, cyber-attacks, infrastructure collapses such as large-scale power outages, and occasionally, multiple crisis situations. As such, the exercises can be seen as markers of evolving collective perceptions of communal threats across Europe. That said, a common denominator is that the simulated disasters are of the low-probability/high-impact type. In addition, there is a shift in the focus of security strategies and tactics, from prevention to preparedness and resilience. The latter comes with the caveat that prevention will never be enough to avoid potential crises and disasters. In other words, it is no longer a matter of operationally rehearsing major emergencies in a controlled environment, but of venturing into the unknown. The range of what is considered bad luck, fate, the unexpected, or simply beyond awareness, is shrinking. In this sense, security moves are accompanied by a growing notion of un/insecurity. With a view to the Copenhagen School (Wæver/Buzan), we will in this paper explore to what extent EU-organised large-scale exercises accelerate a "securitisation" process resulting not from a speech act, but from the practice of security management itself. |
10:45 | Methodology for Risk Assessment in Technology Incorporation in the Oil & Gas Industry PRESENTER: Danilo Colombo ABSTRACT. The Oil & Gas sector is continuously seeking new technologies aimed at increasing well productivity and reducing costs and operational risks. Companies also seek to expand their exploratory opportunities, working on those that are still technically or economically infeasible. In addition to the challenges directly associated with technological development, there is an important issue related to the incorporation of new technology: on one hand, there is a desire to apply the new technology as soon as possible, anticipating the capture of its benefits; on the other hand, there is the risk of not having it available by the intended date, delaying production and leading to losses that may exceed the promised benefits. This issue is exacerbated by the long lead time required between contracting and the availability of the necessary technologies. The risk of readiness can be reduced by considering contingency routes. However, if this strategy is not carefully crafted, it may come at the cost of significantly reduced expected benefits. Utilizing new techniques and computational tools to develop dynamic models that assist decision-making is one of the main approaches to adapting to constant changes and the inherent complexities of technological development and technology incorporation. These models allow for a more accurate assessment of risks and opportunities, contributing to the effectiveness and efficiency of innovation processes. Furthermore, the implementation of robust metrics and probabilistic representations of risks enables better management of project portfolios, aligning them with the strategic objectives of organizations and aiming to maximize returns on their investments. It is noteworthy that companies have generated large volumes of data on the evolution of maturity and risk metrics, characterizing the dynamics of their developments, which can be used in project analyses. This paper presents a methodology to support decision-making during the planning, development, and incorporation of new technologies. |
11:00 | Innovative Reliability with Client Partnership in Electrical Intelligent Well Completion PRESENTER: Ahed Qaddoura ABSTRACT. For decades, the oil and gas industry has relied on hydraulic controls in its production systems to develop offshore fields. However, the industry is now entering a new era: the electrification of wells. The reliability of all-electric components must be validated throughout the entire system lifecycle, spanning many years. SLB and Petrobras partnered to develop, in Brazil, a portfolio of technologies that enables all-electric intelligent well completion. An innovative reliability strategy was applied to build demonstrated reliability into the products. The iterative steps of the strategy include generating requirements from safety objectives, verifying that these objectives are met through mitigation plans, and evaluating both system functions and the design of the systems performing these functions. The reliability assessment is also based on a FMECA identifying the main failure modes and listing critical components. Reliability block diagrams for all sub-systems were then constructed, producing reliability estimates and highlighting critical components, and a reliability allocation analysis was used to set the reliability targets for each sub-system. For well completion, we focused on four specific undesired top events: loss of the entire system; loss of integrity; failure to change position when remotely controlled; and permanent loss of telemetry and electric supply. This step complemented the reliability block diagrams, allowing us to choose probabilistic approaches that model the occurrence of basic fault tree analysis (FTA) events. The probability calculations for basic events and their failure distributions were derived, with each FTA basic event modelled using parametric probability distributions. The reliability aggregation model was obtained from the top FTA events and the main failure modes of each subsystem, yielding the final probability density function of each ranked top event. In summary, the reliability strategy is a key deliverable of new product development to meet and exceed client expectations in delivering reliable all-electric well completion systems. |
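The aggregation chain this abstract describes (component distributions, reliability block diagrams, fault-tree top events) can be sketched roughly as below; the Weibull parameters, mission time, subsystem structures, and the independence assumption are all invented for illustration and are not the project's actual figures.

```python
import math

def weibull_rel(t, eta, beta):
    """Weibull survival (reliability) at time t."""
    return math.exp(-((t / eta) ** beta))

t = 5 * 8760.0  # assumed 5-year mission time, in hours

# Hypothetical RBDs: telemetry as two redundant channels (parallel),
# electric supply as three items in series.
r_channel = weibull_rel(t, eta=2.0e5, beta=1.3)
r_telemetry = 1 - (1 - r_channel) ** 2
r_supply = math.prod(weibull_rel(t, eta, 1.1) for eta in (3.0e5, 4.0e5, 2.5e5))

# Top event "permanent loss of telemetry and electric supply" as an AND
# gate over the two subsystem failures (independence assumed).
p_top = (1 - r_telemetry) * (1 - r_supply)
print(f"P(top event at 5 years) = {p_top:.2e}")
```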
11:15 | Applying Bayesian Reliability Demonstration Testing to Downhole Electronics: An Effective Approach for Performance Validation PRESENTER: Pankaj Shrivastava ABSTRACT. The increasing reliability and performance requirements of permanently installed downhole completion tools are necessitating the development of advanced approaches for assessing and demonstrating reliability. Traditional reliability testing methods often rely on frequentist approaches, which require a large sample size to achieve statistically significant results. For high reliability targets, demonstration using classical reliability demonstration test (RDT) approaches is practically challenging, primarily due to the large sample sizes required (cost, test execution complexity, etc.). This paper explores an application of the Bayesian Reliability Demonstration Test (BRDT) as an advanced methodology for evaluating system reliability in real-world conditions. The Bayesian method leverages prior reliability information to reduce the number of samples required to satisfy a high reliability target. The proposed framework leverages a non-parametric Bayesian statistical technique based on the beta distribution to combine prior information, such as historical performance data or expert opinions, with sample data (empirical test results) to update the reliability estimate. In a non-parametric context, prior reliability data only influences the sample size, while the acceleration factor impacts the test time. This paper further examines the acceptance criterion for prior reliability data, with the goal of using a prior reliability distribution that reflects a reasonable level of uncertainty or conservatism, as this helps to avoid overconfidence. By updating beliefs about reliability performance through a dynamic posterior distribution, the Bayesian RDT results in a sample size that is significantly lower than that of a classical RDT, especially when the prior reliability is comparable to the target reliability. A case study involving a permanently installed downhole electronics module is presented to demonstrate the application of this methodology, showcasing its potential to improve reliability estimates, reduce testing costs, and enhance confidence in system performance. The paper highlights the advantages and limitations of Bayesian RDT in ensuring that systems meet stringent reliability criteria, contributing to more efficient and effective product development cycles. |
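A minimal sketch of the beta-distribution sizing logic the abstract describes, assuming a zero-failure test plan: each surviving unit adds one to the beta prior's first parameter, and the required sample size is the smallest n giving the desired posterior confidence that reliability exceeds the target. The prior, target, and confidence values below are illustrative assumptions, not the paper's case-study numbers.

```python
import math
from scipy.stats import beta

a0, b0 = 48.0, 4.0     # assumed prior ~ Beta(48, 4), mean ~ 0.92 (e.g. heritage data)
r_target = 0.90        # reliability target to demonstrate
confidence = 0.90      # required posterior P(R >= r_target)

def posterior_confidence(n_success):
    # Zero-failure test: each surviving unit adds 1 to the alpha parameter.
    return beta.sf(r_target, a0 + n_success, b0)

n = 0
while posterior_confidence(n) < confidence:
    n += 1
print("Bayesian zero-failure sample size:", n)

# Classical zero-failure benchmark: smallest n with r_target**n <= 1 - confidence.
n_classical = math.ceil(math.log(1 - confidence) / math.log(r_target))
print("Classical zero-failure sample size:", n_classical)
```

With a prior mean close to the target, the Bayesian sample size comes out well below the classical benchmark, which is the effect the abstract highlights.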
11:30 | Methodology for Integrating Partially Relevant Data in Reliability Assessment of Oil & Gas Offshore Equipment Under Development PRESENTER: Lavínia Maria Mendes Araújo ABSTRACT. In the development of new equipment for offshore oil and gas operations, obtaining accurate reliability estimates is often challenging due to the lack of specific data. A key source of information in this context comes from historical performance data of similar equipment. However, since these existing devices may differ in design or operate under conditions unlike those for which the new equipment is intended, this data cannot be treated as fully relevant without risking bias in the analysis. To address these limitations, we propose a novel methodology that integrates partially relevant data using Bayesian inference and a weighted likelihood approach. The pipeline introduces a relevance factor that quantifies the similarity between the new equipment and existing systems, thereby reducing reliance on expert judgment. The relevance factor is calculated through a comparative analysis of operational contexts, which includes variables such as well design (type of well, operational fluid, and well direction), operational requirements, and specific characteristics of the equipment design, including materials and geometry. To measure the similarity between these variables, we employ different techniques depending on the type of data (discrete or continuous). The resulting relevance scores are then integrated into a Bayesian framework. As a practical application, we analyze the case of an open-hole expandable packer, using partially relevant data from similar equipment with variations in design and performance history from different operators. The results show that applying the relevance factor leads to a more conservative reliability distribution, highlighting the need for additional equipment-specific testing to reduce uncertainties. By minimizing subjectivity in estimating the relevance factor, this methodology enhances the objectivity of reliability assessments and supports more informed decision-making during the early stages of equipment development. |
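The weighted-likelihood idea in this abstract can be sketched in a conjugate toy setting: with exponential lifetimes and a Gamma prior on the failure rate, raising the legacy data's likelihood to a relevance factor w simply discounts the effective failure count and exposure hours. The distributions and all numbers below are assumptions for illustration, not the paper's actual model of the expandable packer.

```python
# Assumed conjugate toy model: exponential lifetimes, Gamma(a, b) prior
# on the failure rate (per hour). All values are hypothetical.
a0, b0 = 1.0, 1.0e4

legacy_failures = 8        # failures observed on the similar legacy equipment
legacy_hours = 5.0e5       # its cumulative operating hours
w = 0.4                    # relevance factor from the similarity analysis

# Weighted likelihood L(rate)**w keeps conjugacy: Gamma(a0 + w*n, b0 + w*T).
a_post = a0 + w * legacy_failures
b_post = b0 + w * legacy_hours
print(f"posterior mean failure rate = {a_post / b_post:.2e} per hour")
# w = 1 treats the legacy data as fully relevant; w = 0 ignores it, so a
# smaller w widens the posterior, yielding the more conservative estimate.
```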
11:45 | Reliability assessment of well completion equipment considering incremental design modifications PRESENTER: Eduardo Menezes ABSTRACT. Reliability assessment of long-useful-life equipment and devices is a key challenge for the O&G industry. The lack of failure data and the high costs involved in testing components make reliability estimation a very difficult task. Additionally, for new technological developments, a series of design modifications is introduced during the process, which need to be incorporated into the reliability assessment. This work provides a framework to deal with the reliability assessment of new completion equipment, considering the several TRLs of the development process and the inclusion of design changes. A new production packer is presented, considering three designs with incremental changes. Initially, the packer with an expandable sleeve is considered. Then, the feedthrough of hydraulic control lines is analyzed in terms of its impact on reliability. Finally, an anti-burst mechanism is introduced in the design. The whole process relies on a structured approach, beginning with the Failure Modes, Effects, and Criticality Analysis, passing to the elaboration of a fault tree with distributions for its basic events, and closing with the Bayesian update of the parameters by aggregating data from multiple sources. Besides the incremental design changes, this work also showcases the reliability estimation for new technology considering different application scenarios. |
10:45 | A Data-Driven Approach to Develop a National Derailment Risk Model PRESENTER: Xiaocheng Ge ABSTRACT. Railway derailments pose significant risks to safety, infrastructure, and operations. A national quantitative risk model for railway derailments is essential for assessing these risks and their impacts on rail operations. Such a model is critical in the decision-making process for implementing preventive mechanisms and mitigating the consequences of derailments. One key factor influencing the severity of a derailment's potential consequences is the lateral deviation of the train, which can significantly affect the extent of harm to passengers, the workforce, and the public. This lateral deviation is influenced by various factors, including the railway's topography and nearby features, the crashworthiness of rolling stock, the presence of key assets (e.g., line-side equipment or embankments), and the frequency of trains on adjacent tracks. This paper presents the development of a comprehensive derailment risk model that integrates multiple high-accuracy, granular data sources to assess derailment likelihoods, estimate the post-derailment trajectory of train cars, evaluate the escalation of derailments under various conditions, and calculate potential outcomes, referred to as "injury atoms" in the approach. The model incorporates infrastructure data (such as track geometry, route location, and number of tracks), operational data (e.g., train type, frequency, and loading of passengers or hazardous materials), and historical data (including derailment records, types of derailments, and injury statistics). By accounting for a broader range of factors and offering localized insights, the proposed model outperforms traditional derailment risk assessments. It can also be adjusted to reflect real-time operational changes, dynamically enhancing its precision. Case studies illustrate the model's effectiveness in the safety assessment of derailment incidents, providing railway infrastructure operators with a valuable tool for improving safety and optimizing maintenance strategies. This approach offers a significant advancement in data-driven rail safety management and has the potential for adaptation to other transportation sectors for risk mitigation. |
11:00 | Towards applying STPA to autonomous railway systems – a hierarchical safety control structure for GoA2 train operations PRESENTER: Abhimanyu Tonk ABSTRACT. The automation and digitalization of railway systems promise significant improvements in operational efficiency, but they also introduce new challenges in maintaining the safety level. Indeed, as automation progresses through the various Grades of Automation (GoA), it is essential to preserve the high safety targets by effectively managing the interaction between human operators and technical systems (i.e., automated driving systems, automated train operation). Leveraging systems and control theories, which view safety as a control issue and an emergent property arising from interactions among system components, we aim to apply the Systems-Theoretic Process Analysis (STPA) method to identify potential hazards resulting from these interactions. STPA, based on the Systems-Theoretic Accident Model and Processes (STAMP), offers a systematic approach well suited to managing the complexities and dynamic operational scenarios of automated systems. A key element of STPA is its hierarchical control structure, which defines how safety constraints are enforced within the system. In fact, any hazard analysis conducted using the STPA approach is only as effective as the quality of its underlying control structure. This paper presents and discusses the design of a hierarchical control structure for semi-automated train operations (GoA2) in European railways, consisting of two levels: (i) the organizational level, responsible for safety management and policy enforcement, and (ii) the technical system level, responsible for driving, operational controls, and feedback mechanisms. This structure clarifies the control relationships (controllers and controlled processes) between diverse technical and organizational actors and defines the associated information flows. Additionally, it helps identify the modifications introduced at each level as a result of automation, ensuring a clear understanding of how these changes impact railway system safety. |
11:15 | How lucky was that? Development of a model to quantify the element of chance in avoiding train accidents ABSTRACT. The Rail Safety and Standards Board (RSSB) is a world leader in modelling the safety risk arising from the operation and maintenance of rail networks. Over the past few years RSSB has been asked to help with urgent and complex questions that require modelling of situations going beyond what is covered by our existing modelling capabilities, which are therefore not well suited to addressing these questions promptly and robustly. To address these challenges, a programme of work has been set up to update and significantly enhance our current risk-modelling capabilities. One of the projects under this programme has looked at developing a model to assess the likelihood and severity of an actual event resulting in an accident had it occurred in slightly different circumstances. This model has so far been applied to signal passed at danger (SPAD) events (work is underway to further develop it so that it can be applied to others). To make these estimates, a likelihood model was first built to understand the probability of a collision occurring using the characteristics of the trains involved in a SPAD event. This considers the time window during which the SPAD could realistically have occurred. It then considers how the event would have unfolded had the SPAD occurred at different times within that window. For each 'scenario' the model calculated whether there was a collision or not, and the results were then used to give an estimate of the overall likelihood of a collision in the time window. Our presentation aims to showcase this novel risk modelling and analysis technique, which measures the degree of chance (or luck) involved when events do not result in an accident. |
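The counterfactual logic can be caricatured with a toy Monte Carlo: re-run the SPAD at random instants within its feasible window and check, for each, whether the train would have reached the conflict point while the conflicting movement occupied it. The geometry and timings below are invented for illustration; RSSB's actual model is far richer.

```python
import random

random.seed(1)

WINDOW_S = 300.0           # window in which the SPAD could realistically have occurred
CONFLICT = (120.0, 165.0)  # interval when the conflicting train occupies the junction
RUN_TIME_S = 25.0          # time for the SPADing train to reach the conflict point

def collision(spad_time):
    arrival = spad_time + RUN_TIME_S
    return CONFLICT[0] <= arrival <= CONFLICT[1]

n = 100_000
hits = sum(collision(random.uniform(0.0, WINDOW_S)) for _ in range(n))
print(f"estimated collision probability within the window: {hits / n:.3f}")
```

A low estimated probability for an event that in reality was a near miss is exactly the "how lucky was that?" quantity the talk describes.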
11:30 | A POMDP-based approach for obstacle avoidance in autonomous trains PRESENTER: Mohammed Chelouati ABSTRACT. Autonomous trains must operate in highly dynamic environments where ensuring safety remains a significant challenge. Unlike human operators, who can intuitively assess and respond to potential risks, autonomous systems require continuous, real-time evaluation of their surroundings in order to make safe decisions. A key aspect of this is dynamic risk assessment, which is essential for managing risk in unpredictable conditions [1]. This paper presents a safety assurance framework that uses Partially Observable Markov Decision Processes (POMDPs) to provide real-time collision risk management for autonomous trains. The POMDP-based model presented in this work extends the original model developed in [2], which used a simplified state space to demonstrate the feasibility of risk-based decision-making for autonomous trains. In this updated version, we increase the number of states within the same adaptive grid map framework to capture more complex operational scenarios. The adaptive grid map dynamically adjusts according to the system's evolving state space and obstacle proximity, offering more precise risk estimations. By incorporating a greater number of states, the model enhances granularity in risk assessment while maintaining a reasonable balance between accuracy and computational complexity. This extension allows for a more detailed understanding of potential risks in various environmental conditions without sacrificing real-time performance. This approach is specifically applied to the anti-collision function in autonomous trains, demonstrating its capacity to maintain safety in uncertain environments. The results emphasize the need to balance model complexity and decision-making efficiency while ensuring that autonomous systems can manage real-time risks effectively. This framework is crucial for ensuring the safety of autonomous railway systems in all their operational conditions. |
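At the core of such a framework sits a belief update over a grid of possible obstacle states; the sketch below shows that single step in one dimension, with an assumed transition model and detection noise. The paper's adaptive grid map and enlarged state space are considerably more elaborate than this toy.

```python
import numpy as np

n_cells = 10                          # 1-D corridor of cells ahead of the train
belief = np.full(n_cells, 1.0 / n_cells)

# Assumed transition model: an obstacle stays put (0.8) or drifts one cell
# closer to the train (0.2). Columns index the current cell.
T = np.zeros((n_cells, n_cells))
for i in range(n_cells):
    T[i, i] = 0.8
    T[max(i - 1, 0), i] += 0.2

def update(belief, observed_cell, p_hit=0.9, p_false=0.05):
    belief = T @ belief                     # predict with the transition model
    likelihood = np.full(n_cells, p_false)  # assumed detector noise
    likelihood[observed_cell] = p_hit
    belief = likelihood * belief            # correct with the observation
    return belief / belief.sum()

belief = update(belief, observed_cell=6)
print("risk proxy (mass within braking distance):", round(float(belief[:3].sum()), 3))
```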
10:45 | Towards the operationalization of mission-centric frameworks for cyber security risk management in the defence sector PRESENTER: Federico Mancini ABSTRACT. Information and communication technology (ICT) has long been envisioned as a potential force multiplier in military operations. Cyber has even been recognized as a full-fledged domain of operations alongside air, ground, space and maritime. Armed forces that are not able to embrace this change and readily leverage new ICT to achieve information and operational superiority might be at a great disadvantage in future conflicts. At the same time, it is critical that the increased operational effect that new technology might bring does not come at the cost of unacceptable security and safety risks. To support these complex cost-benefit assessments, various mission-centric frameworks for cyber security have been proposed over the last two decades. They all seek to give guidance and tools for eliciting security requirements based on the risk of losing mission-critical capabilities through ICT compromises. This is in contrast with a more classical ICT-centric approach, oftentimes in the form of strict compliance-based checklists. Still, although the underlying principles guiding mission-centric frameworks seem to be well understood and accepted, there seem to be some fundamental hurdles toward making them operational. We shed light on the challenges and how to overcome some of them based on the experiences of the Norwegian and Canadian military research institutions in developing such frameworks. Key findings were: to identify and assess the criticality of ICT systems for mission success, it is necessary to model the relationship between military missions and the technical functions enabled by ICT systems in a way appropriate to specific national needs; a crucial success factor is to establish a partnership with the Armed Forces and to engage key stakeholders throughout the process; and operationalization requires the collection and structuring of large amounts of data, hence a flexible supporting tool is needed. |
11:00 | Learning from Safety Culture to Optimise Cybersecurity Culture ABSTRACT. The cybersecurity threat landscape continues to evolve, accelerated by recent advancements in Artificial Intelligence capabilities and major changes within the modern work environment. The ease of access to and use of sophisticated generative AI tools such as ChatGPT makes it possible to create fake audio, video and text that are extremely realistic and difficult to distinguish from human-generated content. At the same time, the digitalisation and decentralisation of the modern workplace, as well as the substantial rise in remote working because of the COVID-19 pandemic, have considerably increased the potential for security breaches. There is growing awareness within the cybersecurity industry that protection against threats depends on more than complex IT infrastructure and tools. People are still considered to be the cause of most cybersecurity breaches, but people may also be the key to building a successful cybersecurity defence. As the threat landscape changes, it has become evident that the most common approaches to addressing cybersecurity (e.g., awareness training, simulated phishing attacks, requirements for password updates) will not be sufficient. Successful implementation of cybersecurity measures that are sustainable over time requires both a human-centred approach and the adoption of a cybersecurity culture mindset within the organisation. This paper presents a hypothesis about how high-hazard industries, such as nuclear power, have faced comparable threats, which led to the widespread adoption of the safety culture concept. The paper considers how organisations could learn from the implementation of safety culture to support the adoption of a sustainable, human-centred cybersecurity culture. |
11:15 | Growing pains: cyber situation awareness in expanding ecosystems PRESENTER: Nicole Van Deursen ABSTRACT. In The Netherlands we see many organisations joining forces to collaborate on cybersecurity. Participants of such collaborations share knowledge about risks and cyber threat intelligence to increase the collective resilience against cyber threats. In this context, the term situation awareness (SA) is often thrown about. This has increased ever since the evaluation of the first European Directive on Network and Information Security (NIS1) stated that there is a low level of joint situation awareness for cybersecurity in Europe [1]. Joint SA for cybersecurity involves multiple cybersecurity teams in collaborative decision-making about risks and threats. Moreover, with the updated European Directive (NIS2) rapidly approaching, national, regional and sectoral cybersecurity collaborations are preparing for an expansion of their ecosystems, as the NIS2 applies to many more organisations. Some of the entities entering existing information-sharing ecosystems may not have mature cybersecurity operations or risk management. These entities will become part of cybersecurity information-sharing communities and might not be able to effectively create SA from the information that is presented to them, as their security teams may lack experience. In our presentation, we will discuss how a better understanding of the term (joint) cyber SA, based on Endsley's [2] model of SA, supports the challenge of a growing number of cybersecurity teams entering an existing collaboration on cybersecurity. We will present a case study from the education sector where several institutions were newly integrated into an existing community of cyber information sharing, and how that affected joint SA, decision-making and cyber risk management. [1] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52020PC0823 [2] Endsley, M.R. (1995). Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1), 32-64. doi:10.1518/001872095779049543. |
11:30 | An Intelligent Algorithm for Edge Server Deployment Based on the N-1 Security Criterion PRESENTER: Jiankai Wang ABSTRACT. The study proposes an edge server deployment method based on the N-1 security criterion, widely used in power systems, to improve the security and reliability of edge computing systems in the event of single-point failures. The N-1 security criterion requires the system to remain operational without triggering broader system issues in the event of any single equipment failure. This paper designs redundancy mechanisms and backup server schemes to ensure that even if an edge server fails, its workload can be quickly and seamlessly transferred to a backup server, thereby avoiding negative impacts on service quality, especially in terms of latency and performance. This method effectively reduces the security risks that could arise from single-point failures in edge computing systems. Simulation results show that, compared with traditional server deployment methods, the N-1 security criterion-based approach performs significantly better in terms of system reliability, stability, and fault tolerance, substantially improving the security and service continuity of edge computing systems. Additionally, considering that the random nature of the initialization phase in traditional K-Means clustering algorithms may lead to instability in the final results and that servers may face overload issues, this study further proposes an improved K-Means algorithm. By optimizing the selection of initial cluster centres and adjusting the clustering process, the new algorithm more effectively reduces communication latency and balances the load between servers. Experimental results indicate that the improved K-Means algorithm outperforms existing algorithms, including DBCA, K-Means, Top-K, and Random algorithms, in terms of reducing communication latency and achieving load balancing. Moreover, the deployment strategy based on the N-1 security criterion significantly enhances system robustness and security, ensuring stable system operation in the event of a single edge server failure. |
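The two ingredients can be sketched as follows: a deterministic farthest-point seeding (one common way to tame random K-Means initialization; the paper's improved algorithm may differ) followed by standard Lloyd iterations, and an N-1 check that any single server's load fits on its nearest surviving neighbour. Coordinates, capacities, and loads below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
sites = rng.uniform(0, 100, size=(60, 2))   # invented base-station coordinates
k, capacity = 5, 30                         # servers and per-server capacity

# Farthest-point seeding instead of purely random initial centres.
centres = [sites[np.argmin(np.linalg.norm(sites - sites.mean(0), axis=1))]]
while len(centres) < k:
    d = np.min([np.linalg.norm(sites - c, axis=1) for c in centres], axis=0)
    centres.append(sites[np.argmax(d)])
centres = np.array(centres)

# Standard Lloyd iterations from the deterministic seeds.
for _ in range(20):
    labels = np.argmin(np.linalg.norm(sites[:, None] - centres[None], axis=2), axis=1)
    centres = np.array([sites[labels == j].mean(0) if np.any(labels == j)
                        else centres[j] for j in range(k)])

# N-1 check: on any single failure, the nearest surviving server must be
# able to host its own load plus the failed server's load.
loads = np.bincount(labels, minlength=k)
for j in range(k):
    others = np.delete(np.arange(k), j)
    backup = others[np.argmin(np.linalg.norm(centres[others] - centres[j], axis=1))]
    ok = loads[j] + loads[backup] <= capacity
    print(f"server {j}: load={loads[j]}, backup={backup}, N-1 satisfied={bool(ok)}")
```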
11:45 | Space systems as critical infrastructure - A soft systems approach PRESENTER: Christine Große ABSTRACT. Technological developments have enabled private and public actors to access near-Earth space. However, recent disruptions to space systems in countries on NATO's eastern border highlight cybersecurity concerns as different actors, involved in geopolitical conflicts, aim to gain influence in space. Space security has therefore become a major concern for societal protection. Space systems are both critical infrastructure and important sub-systems of other critical infrastructures (e.g. GPS and weather forecasting) dependent on space technology (Fidler, 2018). However, unlike other critical infrastructures, space systems are relatively unexplored from a security perspective (Gheorge et al., 2018). This is a serious knowledge gap, as tensions are increasing due to geopolitical conflicts and risks of a militarisation of space technology. In parallel, the increasing commercialisation and interconnectivity of space services with important societal services raise challenges for societal resilience. By facilitating a better understanding of the relationships between these systems, this paper provides insights into digital risks involving serious societal consequences and areas for enhanced space security work. The paper presents a system model based on a synthesis of the current research front discussed in national and international research literature, reports, and studies. A framework of four perspectives (Große, 2023) guides the systematic analysis to characterise cybersecurity issues and to emphasise relevant areas of security in space systems. The soft systems methodology is used to develop a model useful for actors in the emergency response sectors to increase the understanding of systemic dependencies of space systems and strengthen the resilience of critical infrastructures and services. Based on the analysis and the system model, future research needs for strengthening cybersecurity in space systems are exemplified. The presented study is a direct response to previous research indicating a need for measures to protect and create resilient development in space. |
10:45 | The management of network assets in power systems: an international standard for a risk-informed decision-making process PRESENTER: Thomas Guillon ABSTRACT. Power system network operators face essential investment decisions in the context of the energy transition and the aging of equipment installed decades ago. Asset management is the favored approach for realizing value from assets and making transparent and consistent decisions aligned with organizations' objectives, balancing costs, risks, and asset performance. It relies on a corpus of fundamental principles and organizational requirements for achieving objectives, set out in the ISO 55000 standards. Quantitative approaches with cost-benefit analyses are particularly needed to justify renewal investments and risk mitigation measures such as preventive replacement and replacement conditional on inspections or monitoring. However, risk analysis is based on calculation and expert judgment, which rely on assumptions, models, and data that are not always validated, so results can mislead decision-makers. In order to support asset management stakeholders in their analyses and decisions, IEC TC 123 is dedicated to the management of network assets in power systems and proposes an international standard on a Risk-Informed Decision-Making (RIDM) process. Based on recent advances in risk science, this document intends to support and inform asset managers, decision-makers, and stakeholders about establishing a risk management approach consistent with the asset management objectives and decision-making criteria of the power network organization. This presentation outlines the work leading to the development of the RIDM process and the steps that ensure transparency, consistency, and traceability of risk-informed decisions for managing the risks of power systems' network assets. It highlights the importance of giving weight to the relevant risk management strategies depending on the context and of providing guidance on adequate risk assessment techniques at each stage, as well as essential information and warnings when performing a risk analysis, including the strength of knowledge supporting assumptions, models, and parameters. Finally, this work illustrates how standardization can transform academic research into industrial practice. |
11:00 | Assessing the impact of market clearing on power system security PRESENTER: Andrej Stankovski ABSTRACT. The ongoing decarbonization of the electricity system, the increasing demand due to electrification, and the adoption of intermittent power sources have exposed the power system to unprecedented challenges. Many power-flow models have been developed over the years to assess system security. These models analyze system security by setting the system to stable operating conditions via cost-optimal power flow (OPF) and, subsequently, introducing outages. However, driven by cost minimization, the OPF provides an ideal, albeit unrealistic clearing of the generating units, disregarding the complex interactions between market participants. System security, therefore, may be overestimated. To address this gap, we present a market-clearing model for the security assessment of power systems. The model can solve several optimization problems, namely, economic dispatch, unit commitment, and social welfare maximization. Finally, a re-dispatch optimization module adjusts the dispatch of cleared units based on the system's security constraints. The market model is soft-linked with Cascades, a cascading failure model used for security analysis and expansion planning in power systems. We have tested this framework on the IEEE-118 bus system with 3 independent control zones. The results show that the market clearing, solved as welfare maximization, has a notable impact on system security. Remarkably, compared to simulations with OPF, the market clearing results in increased line failures and demand not served. We show that the primary cause for this behavior is the higher utilization of cross-zonal trading capacity by the market participants. The findings can help identify critical scenarios overlooked in classic OPF-based security simulations. Moreover, operators can use this information to properly allocate reserves and perform efficient expansion planning strategies, preparing the system for changes in the generation mix and market regulations. |
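The flavour of the interaction can be seen in a two-zone toy clearing: cost minimization pushes cheap generation across the zonal boundary until the trading capacity is exhausted, which is the behaviour the abstract identifies as stressing the grid. All costs, capacities, and the net transfer capacity (ntc) below are illustrative, and the paper's market model is far more detailed.

```python
from scipy.optimize import linprog

# Two zones: cheap generation in A, expensive in B (all data invented).
cost = [20.0, 60.0]             # EUR/MWh for gen A, gen B
cap = [150.0, 100.0]            # MW
demand_a, demand_b = 80.0, 90.0
ntc = 50.0                      # cross-zonal trading capacity A -> B, MW

# Variables [gA, gB, flow]; zonal balances as equality constraints.
res = linprog(c=cost + [0.0],
              A_eq=[[1, 0, -1],   # zone A: gA - flow = demand_a
                    [0, 1, 1]],   # zone B: gB + flow = demand_b
              b_eq=[demand_a, demand_b],
              bounds=[(0, cap[0]), (0, cap[1]), (0, ntc)])
gA, gB, flow = res.x
print(f"gA={gA:.0f} MW, gB={gB:.0f} MW, flow A->B={flow:.0f} MW "
      f"(NTC saturated: {abs(flow - ntc) < 1e-6})")
```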
11:15 | Expansion of production capacity and vulnerability in the electrical connection to the National Interconnected System: case of a refinery focused on diesel PRESENTER: Thais Lucas ABSTRACT. Given the growing demand for petroleum derivatives and the expansion of production, an investment of approximately US$ 1.5 billion is planned for 2027 to expand a refinery in Brazil, with a focus on diesel oil production. This scenario creates the need to ensure the reliability and safety of the refinery's electricity supply, which is connected to the Sistema Interligado Nacional (SIN) through the Abreu e Lima substation. This substation plays a crucial role in the continuity of the electricity supply, transforming and distributing energy, and must be prepared to operate safely and efficiently. In this context, it is vital to conduct contingency simulations to assess the electrical system's security requirements and identify potential vulnerabilities, especially considering the projected increase in production. The analysis of these contingencies, along with the consideration of critical scenarios in the production process, will allow the identification of points of failure in the energy supply, ensuring greater system robustness. Based on reports from the system operator and other planning bodies of the SIN, this study proposes to evaluate the vulnerability of the Abreu e Lima substation through power flow simulations. These simulations will be carried out for both the current refinery scenario and the future scenario after the expansion project is completed. The goal is to provide a quantitative basis for planning a more resilient Brazilian electrical system, helping to identify vulnerabilities and propose solutions that ensure the security and robustness of the energy supply, which is essential for the operation of the interconnected system. |
11:30 | Power-to-Gas and risk management in operation: lessons learned from a 1 MW industrial demonstrator ABSTRACT. Power-to-Gas is the process by which electrical energy is converted into chemical energy, in gaseous form. Power-to-Gas primarily relies on electrolysis, producing hydrogen (Power-to-H2) from electricity and water. Electrolysis can be supplemented by a methanation step, allowing hydrogen to react with CO2 to produce methane (Power-to-CH4). As these are still emerging technologies, the management of reliability, maintenance, and safety of these installations must address specific issues, which are still poorly documented. Jupiter 1000 is an industrial demonstrator of Power-to-Gas, commissioned in 2019 by GRTgaz. The installations include two electrolysis technologies and one methanation technology. For the first time in France, the megawatt scale was reached to produce "green" hydrogen. One of the objectives of the project is to demonstrate the feasibility of this type of process and to share the first feedback to promote the development of the industrial Power-to-Gas sector. This communication presents the feedback and results of studies from Jupiter 1000 about reliability, maintenance, and safety for this type of installation. Among the main conclusions, we can highlight the following key issues for the development of Power-to-Gas in Europe: - Re-industrialisation in Europe, with suppliers of equipment and services (including maintenance) adapted to hydrogen, and with a sufficient number of specialist engineers and technicians with the proper skills; - Improving the reliability of hydrogen equipment: electrolysers, compressors and other mechanical systems; - Creation of a specialty in operational safety and risk management for hydrogen production, transport, storage and operating systems; - Development of solutions for the safety of hydrogen systems, in particular for controlling leaks; - Continuing R&D efforts on hydrogen, notably to increase knowledge of dangerous phenomena, materials, measurement and detection. |
11:45 | Electric and Hydraulic Data Fusion with Heterogeneous Graph Neural Networks for Reliable Real-time State Forecasting in Pumped-Storage Hydroelectricity PRESENTER: Raffael Theiler ABSTRACT. Pumped-storage hydroelectricity (PSH) is the most widely used technology for large-scale energy storage. Consequently, hydroelectricity plays an active role in controlling the grid's power frequency, which often requires operation under dynamic conditions leading to rapidly changing system states. To enhance system reliability, forecasting these states is pivotal for ensuring reliable power grid operation, understanding sensor and machine conditions, and detecting anomalies and faults. Hydropower system states depend on the intricate interdependencies between the electrical and hydraulic subsystems, making state forecasting particularly challenging. However, the electromechanical energy conversion in generators and pumps tightly couples these subsystems. Therefore, interdependent information in hydraulic and electrical sensor data should be fused across PSH subsystems to improve state forecasting accuracy. While conventional model-based approaches are slow and only focus on individual subsystems, data-driven methods leverage the fact that the subsystems can each be characterized by underlying networks represented as graphs and enable real-time analysis. Operating on these graphs and treating all sensor data as homogeneous time series, graph neural networks (GNNs) have been applied previously to system state forecasting and sensor fusion. However, the electrical and hydraulic subsystems follow vastly different physical principles and exhibit distinct system dynamics, which traditional (homogeneous) GNNs fail to approximate effectively with a single neural network. In this study, we introduce heterogeneous GNN for hydropower state forecasting. Unlike homogeneous GNNs, our method models subsystems and system boundaries using individually parametrized, jointly trained functions while maintaining effective sensor data fusion from the PSH's subsystems by operating on a unified, system-wide heterogeneous graph. We evaluate our method on a real-world data set from a Swiss hydropower plant, comparing its performance to homogeneous GNNs. We show that leveraging the system's heterogeneity leads to demonstrably improved state forecasting performance and enhanced generalizability compared to the homogeneous case. |
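The heterogeneity argument reduces to giving each relation type its own parameters while composing messages on one system-wide graph. A minimal numpy sketch of one such layer is below; the dimensions, edges, and weights are invented, and the authors' architecture is certainly richer than this.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # feature dimension
x = {"elec": rng.normal(size=(4, d)),    # 4 electrical sensor nodes
     "hyd": rng.normal(size=(3, d))}     # 3 hydraulic sensor nodes

# Edges per relation: (src_type, dst_type) -> (src_index, dst_index) pairs;
# the cross-type edges stand in for the generator/pump coupling.
edges = {("elec", "elec"): [(0, 1), (1, 2), (2, 3)],
         ("hyd", "hyd"): [(0, 1), (1, 2)],
         ("elec", "hyd"): [(3, 0)],
         ("hyd", "elec"): [(0, 3)]}

# One weight matrix per relation type: the heterogeneous part.
W = {rel: rng.normal(scale=0.1, size=(d, d)) for rel in edges}

def hetero_layer(x):
    out = {t: np.zeros_like(v) for t, v in x.items()}
    for (src_t, dst_t), pairs in edges.items():
        for s, t_ in pairs:
            out[dst_t][t_] += x[src_t][s] @ W[(src_t, dst_t)]  # relation-specific message
    return {t: np.tanh(v + x[t]) for t, v in out.items()}      # residual + nonlinearity

h = hetero_layer(hetero_layer(x))        # two jointly applied layers
print({t: v.shape for t, v in h.items()})
```

A homogeneous GNN would collapse all four W matrices into one, which is precisely what the abstract argues fails to capture the distinct electrical and hydraulic dynamics.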
10:45 | The N2O production from ammonia and hydrogen as fuels for shipping purposes PRESENTER: Zoe Nivolianitou ABSTRACT. Ammonia and hydrogen are considered promising alternative fuel options for shipping purposes (Alnajideen et al., 2024; Aneziris et al., 2023). However, the complete transition to these new fuels for power generation on ships is not straightforward (Zanobetti et al., 2023). Accordingly, the EU-funded SUPERALFUEL project is analysing in detail the sustainability of this transformation. A useful indicator of environmental sustainability is the emission of nitrogen oxides (NOx), over 95% of which comes from anthropogenic sources. Some of the released nitrogen would ultimately convert into nitrous oxide, which would offset at least some of the climatic benefits sought by switching maritime shipping fuels. Indeed, this oxide has a global warming potential 300 times greater than that of CO2 (Ravishankara et al., 2009). Using the thermo-kinetic code KIBO developed at the University of Bologna (Pio et al., 2022, 2024), this work will evaluate the amount of N2O produced by the fuels cited above for shipping purposes; these calculations will give clear indications of the effect of this pollutant. References: Alnajideen et al., 2024. Ammonia combustion and emissions in practical applications: a review. Carbon Neutrality, 3, 13. Aneziris, O., Koromila, I.A., Gerbec, M., Nivolianitou, Z., Salzano, E., 2023. A Comparison of Alternative Cryogenic Fuels for Regional Marine Transportation from the Perspective of Safety. Chem. Eng. Trans., 100, 25-30. Pio, G., Dong, X., Salzano, E., Green, W.H., 2022. Automatically generated model for light alkene combustion. Combust Flame, 241, 112080. Pio, G., et al., 2024. Detailed kinetic analysis of synthetic fuels containing ammonia. Fuel, 362, 130747. Ravishankara, A.R., et al., 2009. Nitrous oxide (N2O): the dominant ozone-depleting substance emitted in the 21st century. Science, 326, 123-125. Zanobetti, F., Pio, G., Jafarzadeh, S., Ortiz, M.M., Cozzani, V., 2023. Inherent safety of clean fuels for maritime transport. Process Safety and Environmental Protection, 174, 1044-1055. |
11:00 | A Case Study of System Reliability and Availability of Blue Hydrogen Production PRESENTER: Xueli Gao ABSTRACT. The transition toward a low-carbon future has positioned hydrogen as a critical energy carrier, with blue hydrogen emerging as a bridge between conventional fossil fuels and cleaner alternatives. Blue hydrogen is produced by reforming natural gas with carbon capture and storage (CCS) to reduce CO₂ emissions, representing an intermediary solution between grey and green hydrogen. However, ensuring the reliability of the complex systems involved (e.g. the hydrogen production systems) is critical for economic feasibility, operational safety, and environmental sustainability. This report analyses the system reliability of blue hydrogen production technologies, evaluating the challenges in reliability modelling and assessment specific to these systems. It addresses key issues such as the integration of multiple technologies, data limitations, operational risks, and the performance of critical equipment. Through this analysis, the study highlights the importance of robust reliability engineering frameworks for addressing the challenges of blue hydrogen systems. |
11:15 | Modelling of polymer electrolyte membrane electrolyzer degradation and reliability PRESENTER: Salim Ubale ABSTRACT. Hydrogen production using electrolyzers can contribute to the reduction of global emissions. A Polymer Electrolyte Membrane electrolyzer (PEME) splits water into hydrogen and oxygen, offering advantages in dynamic operation that enable rapid responses to fluctuations in power input and operating conditions. This reduces start-up time and allows immediate hydrogen generation. Ensuring the reliability and safety of PEMEs is critical for efficient hydrogen production. Degradation and failures of electrolysis cells can lead to hydrogen crossover, posing safety concerns, and to corrosion, which reduces gas diffusion and conductivity, affecting performance. Although PEMEs have a lifetime of 40,000-60,000 hours, availability remains low due to frequent operational downtime and maintenance. This paper proposes a Petri net (PN) model which, in addition to reliability assessment, considers the degradation and maintenance processes of the stack. PNs are suitable for modelling complex, concurrent systems, making them ideal for capturing the dynamic interactions within the electrolyzer. By capturing such interactions, the PN approach is used for modelling both normal operations and potential failure scenarios. Such an approach can aid the hydrogen industry in making better asset management decisions, improving electrolyzer availability and safety. It can also inform the risk assessment process, enabling strategic investments in reliability and operational efficiency. |
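A stochastic Petri net of the kind described can be caricatured with one stack token moving between Operating, Degraded, Failed, and Maintenance places via exponentially timed transitions; the rates below are invented, and availability is estimated by simulation rather than taken from the paper.

```python
import random

random.seed(2)
# Timed transitions: (from_place, to_place, rate per hour), all invented.
transitions = [
    ("Operating", "Degraded", 1 / 8000),    # gradual stack degradation
    ("Degraded", "Failed", 1 / 2000),       # degradation leading to failure
    ("Degraded", "Maintenance", 1 / 1500),  # condition-based maintenance call
    ("Failed", "Maintenance", 1 / 72),      # repair mobilization
    ("Maintenance", "Operating", 1 / 48),   # restoration
]

def simulate(horizon_h=40_000):
    place, t, uptime = "Operating", 0.0, 0.0
    while t < horizon_h:
        enabled = [(dst, rate) for src, dst, rate in transitions if src == place]
        total = sum(rate for _, rate in enabled)
        dwell = random.expovariate(total)           # exponential firing time
        if place in ("Operating", "Degraded"):      # hydrogen is being produced
            uptime += min(dwell, horizon_h - t)
        t += dwell
        r = random.uniform(0.0, total)              # pick the firing transition
        for dst, rate in enabled:
            r -= rate
            if r <= 0.0:
                place = dst
                break
    return uptime / horizon_h

runs = [simulate() for _ in range(200)]
print("estimated availability:", round(sum(runs) / len(runs), 4))
```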
11:30 | A Framework for Transforming Process Control System Data from a Hydrogen Fueling Station into HyCReD Data PRESENTER: Cristian Schaad ABSTRACT. Reliability data for hydrogen infrastructure components is essential for developing Quantitative Risk Assessment (QRA) for these technologies, which in turn is necessary for a safer deployment and expansion of the hydrogen market. However, there is currently a lack of hydrogen component reliability data available for these systems, thus limiting the usefulness of insights obtained from these QRA. The Hydrogen Component Reliability Database (HyCReD) has been proposed as a tool for reliability data collection and as a source for future QRAs. In this paper, we develop a digital tool that automatically processes data coming from Process Control System (PCS) in a hydrogen fueling station, detects the relevant failure events for hydrogen systems during its operation, and then logs the event information into HyCReD. To build this tool, we first categorized the station components in hydrogen service, their specific failure modes, and the specific failure mechanisms that are relevant to a QRA. Then, we identified the data available in the station PCS and the methods available for diagnosing the relevant failure events. The resulting tool is divided into three steps: (1) PCS data collection through an API, (2) data analysis for the detection and diagnosis of new failure events, and (3) logging that event into HyCReD. Finally, we discuss the potential for expanding the detection and diagnosis to more complex failure modes present in a hydrogen fueling station. This digital tool is set for implementation and validation on an experimental hydrogen fueling site. The goal for this digital tool is to be applicable to every kind of hydrogen fueling station and to be extendable to similar hydrogen technologies. |
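The three-step pipeline might look schematically like the sketch below: a PCS fetch stub, a rule-based detector (here, a command/position mismatch flagged as a fail-to-close event), and an append-only event log. Field names, the rule, and the output format are assumptions for illustration; HyCReD's actual schema may differ.

```python
import datetime
import json

def fetch_pcs_records():
    """Stand-in for step (1), the PCS API call; returns sensor readings."""
    return [
        {"ts": "2025-01-10T08:00:00", "component": "dispenser_valve",
         "commanded": "open", "position": "open"},
        {"ts": "2025-01-10T08:00:05", "component": "dispenser_valve",
         "commanded": "close", "position": "open"},
    ]

def detect_failures(records):
    """Step (2): rule-based diagnosis; a command/position mismatch is
    flagged as a fail-to-close-on-demand event."""
    for r in records:
        if r["commanded"] != r["position"]:
            yield {"event_time": r["ts"],
                   "component": r["component"],
                   "failure_mode": "fail to close on demand",
                   "detection": "PCS command/position mismatch"}

def log_event(event, path="failure_events.jsonl"):
    """Step (3): append the structured event record to the database feed."""
    event["logged_at"] = datetime.datetime.now().isoformat(timespec="seconds")
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

for ev in detect_failures(fetch_pcs_records()):
    log_event(ev)
```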
10:45 | Search and rescue in the Arctic: The role of vessels of opportunity PRESENTER: Bjørn Ivar Kruke ABSTRACT. The Arctic is characterized by multiple factors that make maritime search and rescue (SAR) in this area a complicated activity. A series of maritime incidents and accidents in the Arctic illustrate the harsh and difficult operating conditions in the region. This demanding context creates challenges both for ships in distress and for the overall SAR operation. Large distances and remoteness in the region entail cross-national cooperation to carry out successful maritime rescue operations, in addition to the invaluable response of the so-called "vessels of opportunity". This study brings to the forefront the role played in Arctic search and rescue operations by vessels that just happen to be in the area. More precisely, the aim of the work is to investigate how the role of vessels of opportunity is included in international agreements and international maritime search and rescue exercises. Data were collected through document analysis of four key international agreements related to maritime search and rescue and 21 exercises, with deeper analysis of five of the exercise reports. Findings show that vessels of opportunity are not mentioned explicitly in international agreements for search and rescue. However, specific agreements, like the IMO Polar Code and SOLAS, have a direct and practical impact on how vessels prepare for unwanted maritime incidents. The study also shows that the inclusion of vessels of opportunity is frequently used as a scenario in tabletop exercises, and cruise ships are typically given this role. Real incidents also show the role vessels of opportunity can have as on-scene coordinator (OSC), coordinating a search and rescue operation within a specified geographical area. Questions of training, coordination and common terminology with other rescue actors arise. Nevertheless, the use of vessels of opportunity shows the importance of international cooperation on search and rescue in the Arctic. |
11:00 | Community-based Polar Bear Risk Perception and Preparedness in Spitsbergen ABSTRACT. With the ongoing warming of the Arctic, landscapes and icescapes and their corresponding biomes are changing. This shifts the distribution and behaviour of polar bears southwards and towards alternative food sources, while at the same time human activity has intensified as the Arctic has become more attractive and accessible. Ultimately, this leads to more human-bear interactions, which can have negative, or even lethal, consequences for one or both parties involved. Proper polar bear safety training and knowledge about polar bears are at the heart of establishing a harmonious coexistence that is safe for both parties. Both at the circumpolar and at the local scale, there is often a lack of coherence when it comes to safety training and polar bear knowledge and awareness. This study conducted survey research in Longyearbyen, Spitsbergen, to obtain an understanding of how locals and visitors to the area experience polar bear safety practices. This is important for developing and maintaining effective risk management by, among other things, finding weaknesses in current training and understanding how fear influences human-polar bear interactions, preparedness and decision-making. |
11:15 | Enhancing Emergency Preparedness in Mass Gatherings through Crowd Simulation and Cascading Effects Analysis PRESENTER: Jiayao Li ABSTRACT. This study emerges from a vital collaboration with the firefighters of France's Service Départemental d'Incendie et de Secours (SDIS), which undertakes the critical role of emergency services in firefighting, providing emergency assistance, and preventing accidents and disasters, particularly in the context of mass gatherings. The primary purpose of our research is to enhance preparedness and proactive response planning during large-scale events through a comprehensive crowd simulation and analysis approach. Our approach unfolds in two layers. The first layer, modeling and assessment, focuses on creating models of different urgency levels of projected scenarios, including their cascading effects. It aims to assess the agility and effectiveness of various emergency response strategies and their capability to mitigate such cascading phenomena. The second layer, analysis and decision-making, is dedicated to identifying critical thresholds where medical responses might be overwhelmed. It seeks to optimize the allocation of medical resources and improve response times, ensuring agile and efficient emergency handling. A practical application of our approach was demonstrated at a major music festival held on the former air base of Cambrai, France. We employed agent-based modeling to develop a crowd simulation that fulfills two expectations: visualizing and monitoring crowd flow within confined spaces, and anticipating the impacts of current and future risks and their cascading effects on both the crowd and emergency responses. This study contributes to refining proactive planning for large-scale events and to improving readiness to respond to emergencies. |
11:30 | Spatiotemporal Crime Analysis for Risk Management Using the Non-Stationary Moving Average Method PRESENTER: Lukas Pospisil ABSTRACT. This paper presents a novel approach to spatiotemporal crime analysis, tailored for risk management and safety applications, by introducing the Non-Stationary Moving Average (NSMA) method. The NSMA method extends the standard Moving Average technique by incorporating non-stationarity, addressing the dynamic nature of crime patterns. By combining temporal smoothing with spatial clustering through the K-means algorithm, this approach enables the identification of distinct crime clusters and provides insights into temporal trends. The proposed methodology is formulated as a multicriteria optimization problem, balancing the objectives of spatial clustering and temporal regularization through a regularization parameter. The problem is solved using a subspace algorithm, similar to the approach used in K-means, which alternates between optimizing cluster centers and cluster moving averages. Applied to real-world crime data from the Czech Republic, this method demonstrates its potential to improve resource allocation and decision-making in crime prevention. The NSMA method contributes to advancing the fields of spatiotemporal analysis and risk evaluation, offering a versatile tool for addressing complex urban safety challenges. |
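A loose sketch of the alternating structure described in this abstract: K-means on incident coordinates, then a per-cluster moving average of weekly counts whose window plays the role of the temporal regularization parameter. The synthetic data and the post-hoc smoothing below only gesture at the paper's joint multicriteria objective and its subspace algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, weeks = 300, 3, 52
# Synthetic incidents: three spatial hotspots, random week of occurrence.
xy = np.vstack([rng.normal(c, 1.0, size=(n // k, 2))
                for c in ((0, 0), (6, 1), (3, 6))])
week = rng.integers(0, weeks, size=n)

# Spatial subproblem: K-means on coordinates.
centres = xy[rng.choice(n, k, replace=False)]
for _ in range(15):
    labels = np.argmin(np.linalg.norm(xy[:, None] - centres[None], axis=2), axis=1)
    centres = np.array([xy[labels == j].mean(0) if np.any(labels == j)
                        else centres[j] for j in range(k)])

# Temporal subproblem: per-cluster moving average; lam controls smoothness.
lam = 4
kernel = np.ones(2 * lam + 1) / (2 * lam + 1)
for j in range(k):
    counts = np.bincount(week[labels == j], minlength=weeks).astype(float)
    trend = np.convolve(counts, kernel, mode="same")
    print(f"cluster {j}: centre={np.round(centres[j], 1)}, "
          f"peak week={int(trend.argmax())}")
```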
11:45 | Recommendations for a significant reduction in construction accidents in Norway PRESENTER: Stig Winge ABSTRACT. Despite the many preventive measures introduced to reduce accidents in the Norwegian construction industry over the last decade, accident rates (fatal and non-fatal) have been relatively stable. The main purpose of this study was to identify and suggest key topics and recommendations to significantly reduce construction accidents in Norway. Materials and methods included expert interviews, an expert workshop, and examination of various research, reports and documents. The research resulted in eight overall recommendations for reducing construction accidents at a national level: (1) more targeted measures towards incident concentrations, (2) improve risk-reduction practices, (3) integrate safety in all construction phases, (4) improve safety culture, leadership and participation, (5) improve safety competence among key actors, (6) coordinate guidance materials, (7) reduce time pressure and production pressure, and (8) strengthen the national organization, coordination and financing of safety work. The results can be useful for other countries and industries aiming to reduce accidents at a national level. |
14:45 | The Independence dilemma in Accident investigations ABSTRACT. A good high-level accident investigation is, both in the literature and among practitioners, considered best performed in a way that allows for a systematic approach within a nonlinear process. In this paper I look at the independent Norwegian investigation board for health services (NIHB). The NIHB is an independent government agency, and its mandate is to investigate serious adverse events and other serious concerns involving the Norwegian healthcare services. The approach of the NIHB is systems-theoretical, and the Board itself decides which unwanted events in or relating to the Norwegian health services it wants to investigate. Currently the NIHB employs a wide range of investigative methods, its reports are public, and it investigates unwanted events with the goal of national learning. The broad approach and the independence of the NIHB have recently received scrutiny from academics and the Norwegian health services, and the paper enters into discussion with those concerns and raises one main question: How do we evaluate the quality and effect of accident investigations at the systems level intended for an entire sector? The paper scrutinizes the accident reports of the NIHB, discusses their effect on the sector and isolates three dilemmas: 1. the dilemma between a systematic approach and adaptation after an event, 2. the dilemma between objective truth and construction of truth, 3. the dilemma between investigation of responsibility and investigation of causes. The methodology used in the paper is a scoping review, using publicly available documents. |
15:00 | Ontology-Driven Integration of System-Theoretic Process Analysis and Model-Based Safety Analysis for Comprehensive Safety Assessment PRESENTER: Max Chopart ABSTRACT. This work aims to improve safety assessments in complex systems by presenting an ontology-driven approach to integrating Model-Based Safety Analysis (MBSA) and System-Theoretic Process Analysis (STPA). The ontology serves as an interface with two main goals: (1) to filter STPA scenarios and identify failure-based cases for MBSA, and (2) to automatically translate the filtered scenarios into MBSA-compatible feared events (i.e., “observers”). By bridging STPA’s hazard identification methodology with MBSA’s failure-based analysis, this approach offers a more efficient and comprehensive framework for system safety and reliability assessment. Even though the improvement of overall safety is a goal shared by both STPA and MBSA, their approaches differ. STPA provides a broad hazard identification framework capturing a range of scenario types, including non-failure cases driven by systemic, human, and organizational factors. In contrast, MBSA focuses on the study of failure propagations. These methodologies are thus complementary and can offer a more complete picture when used together; however, achieving such integration requires a systematic approach to filter and map relevant information between them. In order to categorize STPA scenarios and eliminate those unsuitable for MBSA, the created ontology serves as a common framework, ensuring that only scenarios appropriate for MBSA's failure-based approach are transferred. By automatically converting compatible STPA scenarios into MBSA's format, the ontology lessens the need for human intervention, improving accuracy and enabling more thorough fault propagation analyses. Early findings show that this strategy improves the effectiveness of the safety assessment process while also streamlining the integration of STPA and MBSA. The ontology-driven interface enables the utilization of both methodologies' strengths through a methodical and structured mapping, providing a unified framework that can be tailored to the intricacies of contemporary safety-critical systems. |
15:15 | IP Protection using Simplification and Masking for Model-Based Safety Analysis (MBSA) Model Exchange PRESENTER: Julien Vidalie ABSTRACT. Model-Based Safety Analysis (MBSA) is a growing method for performing safety analysis. It offers closer integration with system modeling environments compared to traditional RAMS approaches. MBSA has proven particularly effective for assessing the safety of complex systems. However, in extended enterprise projects, its adoption can be challenging due to the exposure of sensitive information embedded within the models, which may be subject to intellectual property (IP) protection. This includes detailed insights into the system being modeled, its internal management, and its reconfiguration processes. To address these concerns and enable continued use of MBSA in collaborative projects, models shared between companies must differ from those used internally. We introduce two key activities—simplification and masking—to transform the original model while maintaining the necessary level of detail for effective collaboration. These activities bring together diverse pre-existing model transformation techniques, allowing models to range from "white boxes," where most details are accessible, to "black boxes," where only minimal information is shared. Simplification is the process of reducing the complexity of a model. This process involves eliminating unnecessary details and focusing on essential behaviors, thereby optimizing calculations and improving the overall usability of the model. Masking refers to the practice of concealing certain details or aspects of a model to protect intellectual property. This process ensures that proprietary information remains confidential while still allowing for collaborative work on a project. In this paper, we discuss and illustrate the use of simplification and masking for the exchange of MBSA models. We discuss the possible tradeoffs between IP protection and assurance of correct results. In addition, we show that effective communication between suppliers and integrators is essential to ensure that the shared models comply with all safety-related project requirements, while respecting IP constraints. |
15:30 | Making Systems of Systems Orchestration Safer PRESENTER: Julieth Patricia Castellanos Ardila ABSTRACT. Orchestration, an approach to service composition, has emerged as a promising solution to integrate independent constituent systems (CS) in a System of Systems (SoS). However, safety in SoS orchestrations remains unexplored. In this paper, we introduce SOSoS (Safe Orchestration of Systems of Systems), a process that utilizes the System-Theoretic Process Analysis (STPA) steps extended with the features proposed in the software product line engineering (SPLE) approach to address safety under the inherent variability of SoS. We also demonstrate SOSoS in action by considering a case study from the construction domain. As a result, we define SoS-level safety constraints that could lead to actionable technical recommendations for making systems of systems orchestrations safer. |
15:45 | Ensuring Market Access with a New Compliance Format for AI-Based Functional Safety Systems PRESENTER: Thor Myklebust ABSTRACT. The IEC Test Report Format (TRF) standardizes product testing and compliance, ensuring consistency across international standards and simplifying global certification. However, the growing complexity of Artificial Intelligence (AI) and Functional Safety (FuSa) requires an enhanced approach. We introduce the Compliance Report Format (CRF), an improved framework that builds on the IEC TRF, integrating AI assurance, cybersecurity considerations, and multi-standard compliance management. The CRF streamlines certification by improving traceability, consistency, and efficiency in safety case development and submission. With increasing emphasis on safety cases in standards like UL 4600 and the forthcoming IEC 61508 edition, the CRF aligns with evolving regulatory requirements. It also supports DNV-RP-0671, reinforcing the integration of safety and cybersecurity assurance cases. By promoting a shift-left approach, the CRF enables early AI risk analysis, FuSa integration, and continuous validation, minimizing late-stage compliance burdens. Additionally, by leveraging agile practices and the ‘definition of ready’ concept, it reduces compliance overhead while improving traceability of AI-related safety evidence. A profile-based approach within the CRF tailors safety and security requirements to domain-specific AI and FuSa standards, ensuring flexibility and adaptability. Engineers benefit from guidelines, references to industry standards, and links to scientific research, facilitating efficient compliance. The CRF represents a modernized compliance strategy, supporting engineers, auditors, and Certification Bodies in navigating AI-driven safety and cybersecurity challenges more effectively. |
Panelists: Sarah Duckett, Tom Jansen, Tom Logan, Sanja Mrksic Kovacevic and George Warren
14:45 | Integrating Multicriteria Risk Assessment for Enhancing Urban Resilience to Flooding in a Changing Climate PRESENTER: Lucas da Silva ABSTRACT. This paper aims to propose and implement risk assessment tools within a multicriteria framework to sort flood risks in urban areas, addressing the pressing challenges climate change poses. Floods, as recurring natural disasters, significantly disrupt urban life, leading to economic instability and extensive damage. The increasing severity of these events highlights the need for effective public governance in disaster risk management. However, governments often struggle to map and manage vulnerable areas due to diverse risk factors such as social vulnerability, exposure, and hazard, compounded by factors like population density and urbanization. To tackle these complexities, the research proposes a GIS-based multicriteria model that aids decision-makers in sorting urban flood risks. By incorporating the impact of climate change on intense rainfall, the model enhances understanding of high-risk scenarios in flood-prone urban areas. It enables an integrated evaluation of various factors, including hydrological aspects, infrastructure, social vulnerability, urban mobility, and humanitarian logistics. Furthermore, the findings stress the importance of employing multidimensional risk assessment tools in public policies to promote urban resilience. Engaging multiple stakeholders, including civil society, is crucial for understanding risk perceptions and fostering a preventive, collaborative approach to disaster management. By involving diverse actors, the model not only aids in resource allocation to critical areas but also supports the formulation of inclusive policies that enhance community resilience against climate-induced disasters. This approach aligns with ISO risk standards and the broader context of climate adaptation practices, ultimately aiming to improve urban resilience and protect vulnerable populations from future flood risks. |
15:00 | Hazard Intensity Threshold For Exposure Modeling Of Systems Of Interest To Climate Change PRESENTER: Matthieu Dutel ABSTRACT. The goal of adapting physical assets to climate change is to anticipate and prevent damage to these assets to maintain high levels of system performance. Exposure modeling plays a key role in this adaptation process. While climate data is available, its spatial resolution varies depending on the location of the assets. Information on physical assets also exists, but detailed damage data for a given hazard intensity is often private, particularly regarding specific assets. This lack of information hinders the creation of precise vulnerability curves and accurate hazard intensity thresholds for each asset type and subsystem. Due to these data limitations, it is common practice to use indicators produced by climate services, though the relevance of these indicators for the specific system under study is not always certain. In this paper, we present our approach to selecting a hazard intensity threshold that is relevant for exposure modeling of physical assets to climate change. Our method is based on Boolean Exposure Modeling (BEM), which requires a clear definition of the hazard event (H). This approach ensures a focused response to the question: "Is the system of interest exposed to H or not?". The methodology will be applied to a case study involving administrative divisions of French territory, which contain physical assets. More specifically, the hazard intensity threshold will be determined based on the Eurocode EN 1991-1-5. The result will be a BEM output map covering 30-year periods, using climate model data as input. This outcome enhances the exposure modeling toolbox for adapting physical assets to climate change by providing a model tailored to both the assets and climate change. |
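The Boolean Exposure Modeling idea reduces to a crisp computation once the hazard event H is defined. A minimal sketch, with an invented indicator grid and a placeholder threshold standing in for one derived from Eurocode EN 1991-1-5:

```python
import numpy as np

# Assumed threshold value; in the paper it comes from EN 1991-1-5.
threshold = 38.0
# Toy 30-year climate indicator over a small grid (e.g. annual max temperature).
indicator = np.random.default_rng(1).normal(34.0, 3.0, size=(4, 5))

exposed = indicator >= threshold      # Boolean map: is each cell exposed to H?
print(exposed)
print(f"Share of cells exposed: {exposed.mean():.0%}")
```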
15:15 | Transformation Capacity Expansion and Its Effects on the Vulnerability of Power Transmission Networks PRESENTER: Thais Lucas ABSTRACT. In light of climate change and global warming, transitioning to renewable energy has become a critical aspect of global energy planning, aiming to reduce reliance on fossil fuels and lower carbon emissions. Brazil, with its predominantly renewable energy mix, is well-positioned to lead this transition. However, additional investments in renewable energy infrastructure are expected by the end of the decade, including solar, wind, and hydropower projects. These developments bring about the need for an electrical system that can operate safely, efficiently, and reliably to ensure a continuous energy supply, even as it becomes more complex. An additional challenge Brazil faces is the integration of its power system across a vast geographical area, spanning multiple climate zones and terrains. This integration poses significant challenges in terms of transmission and reliability, requiring robust infrastructure and strategic planning. Implementing and upgrading electrical substations are crucial for the system's operation. These substations play a vital role in maintaining power flow, transforming voltage levels, and ensuring that energy reaches consumers effectively. Therefore, assessing the vulnerability of substations and the transmission networks they connect to is essential for ensuring the reliability of the energy supply across this large, integrated network. This paper proposes a methodology to assess substation vulnerability, applying it to a real case study in a major Brazilian capital city, where the demand for stable energy is critical. Moreover, this approach can be adapted and applied to other large cities facing similar infrastructure challenges. The study considers normal operational conditions, two disruption scenarios for present and future network topologies, and a hypothetical N-2 contingency during a transitional phase. The results highlight critical vulnerabilities within the interconnected system and offer insights into how these challenges can be addressed through targeted investments and planning, particularly in large urban areas that are central to the national grid. |
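The N-2 contingency idea can be sketched as an exhaustive pair-removal scan on a toy topology (all node names hypothetical; a real study would use power-flow analysis rather than mere connectivity):

```python
import itertools
import networkx as nx

# Toy transmission topology: one generator, three substations, two loads.
G = nx.Graph()
G.add_edges_from([("gen", "subA"), ("subA", "subB"), ("subB", "loadX"),
                  ("gen", "subC"), ("subC", "subB"), ("subC", "loadY")])
substations = ["subA", "subB", "subC"]
loads = ["loadX", "loadY"]

# Remove every pair of substations and check which loads lose supply.
for pair in itertools.combinations(substations, 2):
    H = G.copy()
    H.remove_nodes_from(pair)
    lost = [l for l in loads if l not in H or not nx.has_path(H, "gen", l)]
    if lost:
        print(f"N-2 outage of {pair} disconnects: {lost}")
```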
15:30 | Quantitative analysis of economic losses induced by malevolent acts of interference to process facilities PRESENTER: Gabriele Landucci ABSTRACT. Intentional attacks on chemical and process installations have intensified in recent years due to the exacerbation of conflicts in critical areas. These attacks can escalate, affecting nearby areas and potentially triggering domino effects with severe impacts on assets, people, and the environment. Hence, understanding the extent of damages and integrating these findings into conventional safety and economic analyses is crucial. While interest in these issues has grown in recent years, key gaps remain. Existing security studies have primarily focused on impacts on people, with limited attention to economic losses and to the combined effect of safety and security barriers in managing external attack scenarios. This work presents a cost-benefit analysis study, based on probabilistic evaluations, aimed at evaluating protection strategies against intentional attacks on process facilities. A Bayesian Network (BN) approach is used to handle the complexity of attack scenarios. Damages are incorporated in the Bayesian Network, along with the cost of safety and security barriers, through a dedicated cost-benefit function. A demonstrative case study highlights the benefits and limitations of the methodology, showing the influence of barriers and complex domino chains on economic losses and, at the same time, driving the selection of the most effective protection strategy for industrial facilities. |
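A toy version of the cost-benefit logic, enumerating a miniature attack/barrier model by hand the way a Bayesian Network engine would on the paper's much richer scenario structure. All probabilities and costs are invented:

```python
p_attack = 0.05                 # annual probability of an attack attempt
p_barrier_stops = 0.8           # probability the security barrier stops it
loss_success = 5_000_000.0      # loss if the attack succeeds (incl. domino effects)
loss_stopped = 200_000.0        # residual loss even when the barrier works
barrier_cost = 60_000.0         # annualized cost of the barrier

def expected_loss(with_barrier: bool) -> float:
    """Expected annual loss by enumerating the tiny joint distribution."""
    stop = p_barrier_stops if with_barrier else 0.0
    return p_attack * (stop * loss_stopped + (1 - stop) * loss_success)

baseline = expected_loss(False)
protected = expected_loss(True) + barrier_cost
print(f"Expected annual loss without barrier: {baseline:,.0f}")
print(f"Expected annual loss with barrier:    {protected:,.0f}")
print(f"Net benefit of barrier:               {baseline - protected:,.0f}")
```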
14:45 | Petri Net and Profile-Based Stochastic Hybrid Automaton for systems modelling in reliability engineering PRESENTER: Nicolae Brinzei ABSTRACT. As systems become increasingly complex, owing to the large number of their components, the complex behaviour of these components, and the complex interactions between components, it becomes difficult to study their behaviour and to anticipate the reliability and availability of such systems. Multi-state models based on state-transition diagrams are adequate tools to take all these aspects into account and to avoid simplifying assumptions about the behaviour of systems and their components (e.g. independent components, binary behaviour, …). Among state-transition models, we previously developed the Stochastic Hybrid Automata (SHA), which are a suitable tool to address the system characteristics mentioned above and, moreover, can consider the continuous dynamic evolution of physical processes affecting the reliability parameters. More recently, we proposed their extension, called Profile-Based Stochastic Hybrid Automata (PBSHA). The purpose of PBSHAs is to provide a framework in which it is possible to consider the way in which the system is used. A real system is supposed to be used nominally throughout its lifetime, but it may be used differently (in degraded states or overloaded) to occasionally meet performance or availability requirements. Such operating strategies have an impact on the system lifetime. PBSHAs were therefore created to consider these changes in usage profiles. The proposed tool must be able to reproduce the behaviour and results of other well-established methods. That is the purpose of this paper: to compare the results of PBSHA models with those of another method, here Stochastic Petri Nets, using the same experimental design. To do this, the PBSHAs will be developed under PyCATSHOO, a free Python-based tool, and the Stochastic Petri Nets will be built using the GRIF software. We present the two models developed for a case-study system, and the obtained results highlight the modelling capabilities of PBSHA and validate their behaviour and their software implementation. |
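The core PBSHA idea, a failure rate that follows the usage profile, can be sketched with a plain Monte Carlo simulation of one repairable component. Rates and profile are invented, and the rate is held constant over each sampled dwell for simplicity; PyCATSHOO-style tools handle this more rigorously:

```python
import numpy as np

rng = np.random.default_rng(42)
LAMBDA = {"nominal": 1e-4, "overload": 5e-4}   # failure rates per hour (assumed)
MU = 1e-2                                      # repair rate per hour (assumed)
T_MISSION = 20_000.0                           # mission time in hours

def profile(t):
    """Assumed usage profile: overloaded 10% of the time, in periodic bursts."""
    return "overload" if (t % 1000.0) < 100.0 else "nominal"

def simulate_downtime():
    t, down, state = 0.0, 0.0, "up"
    while t < T_MISSION:
        rate = LAMBDA[profile(t)] if state == "up" else MU
        dwell = rng.exponential(1.0 / rate)
        if state == "down":
            down += min(dwell, T_MISSION - t)  # accumulate unavailability
        t += dwell
        state = "down" if state == "up" else "up"
    return down

runs = [simulate_downtime() for _ in range(2000)]
print(f"Estimated availability: {1 - np.mean(runs) / T_MISSION:.5f}")
```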
15:00 | Fault Tree Construction: A Survey of Knowledge-Based, Model-Based and Data-Driven Approaches PRESENTER: Stanley Suan You Lim ABSTRACT. Fault Tree (FT) analysis is a widely used technique in system reliability engineering, providing a structured method for identifying the root causes of system failures. By visually representing failure pathways, FTs help engineers assess potential risks and implement preventive maintenance strategies. However, the construction of FTs remains a significant bottleneck, especially for large and complex systems, due to the intricacies involved in manually modeling failure modes and capturing system behavior, which increase the risk of an incomplete failure analysis. This paper addresses the challenges in both static and dynamic Fault Tree construction by providing a comprehensive survey of construction methods, which we categorize into model-based and data-driven approaches. Model-based methods rely on system descriptions, utilizing system models accompanied by failure descriptions as input. They include low-level model-based descriptions, such as a description of the components with their interconnections, or high-level descriptions using Model-Based Systems Engineering (MBSE). In contrast, data-driven methods generate FTs from operational or historical data using machine learning and statistical techniques. These two approaches differ in the input data they require, and understanding their nuances is essential for effectively applying them to diverse system types. The aim of this review is to provide FT users with a concise overview of existing static and dynamic fault tree construction methods, whereas the majority of review papers about fault trees focus on qualitative and quantitative analysis methods. Additionally, this paper provides a synthesis of existing construction approaches and systematically explores their limitations and strengths to help practitioners select the approach best suited to their constraints. The conclusion of this survey highlights potential future work in the fault tree construction research area. |
15:15 | Reliability of Information Gathering under Dynamic and Stochastic Utility PRESENTER: Kash Barker ABSTRACT. Scheduling tasks across multiple resources is a critical problem in various domains. The scheduling task becomes even more complex when we account for jobs with dynamic utility functions, representing the value or utility provided by a task at any given time. These utility functions may vary over time due to external factors, and the system’s goal is to maximize the total or expected utility while minimizing the overhead incurred by switching between tasks. In this context, the stochastic job scheduling problem models scenarios where jobs can be dynamically assigned to resources, with utility values that may be probabilistic or subject to external influences. Each job has a utility function that varies over time, and switching between tasks or resources incurs a cost—whether time, energy, or some other penalty. This leads to the challenge of optimally scheduling jobs across available resources, accounting for utility fluctuations, and minimizing the negative impact of switching. We explore an application to intelligence gathering via drones, where each job involves surveying a region. The utility function is the amount of information gathered, which varies based on the region’s geography. Switching costs arise from the time lost when relocating drones to new regions, and the objective is to maximize the amount of information detected while minimizing the time lost in switching drones between regions. We explore the issue of the reliability of intelligence gathering when an adversary attempts to disinform the gathering process, represented with dynamic and stochastic utility functions. |
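The utility-versus-switching-cost trade-off can be sketched with a greedy single-drone policy over invented time-varying collection rates; the real problem optimizes over stochastic utilities and adversarial disinformation, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_regions, c_switch = 50, 4, 3   # epochs, regions, switching delay (assumed)
# Toy time-varying information-collection rates per region.
utility = np.clip(rng.normal(1.0, 0.4, size=(T, n_regions)).cumsum(axis=0) * 0.02 + 1.0,
                  0.1, None)

region, total, cooldown = 0, 0.0, 0
for t in range(T):
    best = int(utility[t].argmax())
    # Heuristic: switch only if the utility gap plausibly repays the lost time.
    if best != region and (utility[t, best] - utility[t, region]) * (T - t) \
            > c_switch * utility[t, region]:
        region, cooldown = best, c_switch
    if cooldown > 0:
        cooldown -= 1              # in transit: nothing collected
    else:
        total += utility[t, region]
print(f"Greedy policy collected {total:.1f} units of information")
```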
15:30 | Hardware integrity assessment of the distributed Fast Beam Interlock System (FBIS) at the European Spallation Source PRESENTER: Joanna Weng ABSTRACT. The European Spallation Source (ESS), a cutting-edge research facility under construction in Lund, Sweden, is designed to be the world’s brightest neutron source. The Fast Beam Interlock System (FBIS) is a critical component for ensuring the integrity and protection of the ESS facility. Designed and built by the Safety-Critical Systems (SKS) group at the Zurich University of Applied Sciences (ZHAW), in collaboration with the Machine Protection System (MPS) team at ESS, the FBIS is mainly responsible for stopping the beam when technical problems with the ESS machine or beam anomalies are detected. The FBIS thus plays an essential role in ESS machine protection and is the logic solver element of most protection functions. To ensure the high reliability of the FBIS, a comprehensive analysis was conducted in accordance with the IEC 61508 functional safety standard to assess its hardware integrity. This reliability analysis played an important role in ensuring proper and uninterrupted operation of ESS. This paper presents the analysis methodology developed and outlines the steps necessary to verify the hardware integrity of this complex, distributed system. This includes the calculation of the Probability of dangerous Failure per Hour (PFH) and the evaluation of the architectural constraints by calculating the Safe Failure Fraction (SFF) and Hardware Fault Tolerance (HFT) of the system. These calculations are based on failure rate predictions using the Siemens SN 29500 standard. In addition, a detailed Failure Modes, Effects and Diagnostic Analysis (FMEDA) was performed. The analysis demonstrates that the FBIS meets the corresponding hardware integrity requirements. The developed methodology has been successfully applied to several hundred protection functions at ESS. An example reliability analysis of a complete protection function containing a sensor system and actuators is also shown. |
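The architectural-constraint arithmetic of IEC 61508 is compact enough to show directly. A sketch for a single 1oo1 channel with invented failure rates; in the paper these come from SN 29500 predictions and the FMEDA:

```python
# Standard IEC 61508 split of the failure rate: safe (S), dangerous
# detected (DD) and dangerous undetected (DU). Values are invented.
lam_s, lam_dd, lam_du = 2e-7, 1.5e-7, 3e-9   # failures per hour

# Safe Failure Fraction: share of failures that are safe or detected.
sff = (lam_s + lam_dd) / (lam_s + lam_dd + lam_du)
# For a simple 1oo1 high-demand channel, PFH is driven by the DU rate.
pfh = lam_du

print(f"SFF = {sff:.4f}  ->  {sff:.2%}")
print(f"PFH = {pfh:.1e} per hour")
# For a Type B element with HFT = 0, an SFF of at least 99% is required
# to claim SIL 3 under the IEC 61508 Route 1H architectural constraints.
```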
15:45 | Tracking Reliability and updating the overhaul interval of engineering components: Bayesian approach PRESENTER: Mahesh Pandey ABSTRACT. The Bayesian approach is commonly used to track the progression of a degradation process and update the reliability of a component. However, there is a wide variety of components in which the failure cannot be attributed to a single well-defined degradation process, and inspection techniques are unable to quantify the state of degradation. For such components, only maintenance records pertaining to the timing of corrective and preventive maintenance are available. This paper focuses on developing an adaptive method to update the reliability and revise the maintenance interval based on the survival history of a component. The paper presents a Weibull mixture model to account for heterogeneity in the lifetime data. The posterior model parameters are estimated through Bayesian updating. The proposed method is illustrated using the lifetime data obtained for a group of level control valves used in steam generators of a Canadian nuclear power plant. |
14:45 | Subset Simulation for Extreme Quantile Estimation: An Electricity Market Case Study PRESENTER: Lorenzo Zapparoli ABSTRACT. Estimating extreme quantiles of a model response is crucial in several classes of engineering problems. The need for quantile estimation usually arises from the introduction of a reliability constraint on the response of a model subject to parametric uncertainty. The extreme quantile is estimated to ensure performance and safety under rare, high-stress uncertainty realizations. Examples of such problems are common in civil engineering, aerospace engineering, electrical engineering, and finance. Direct Monte Carlo (MC) methods for extreme quantile estimation are computationally intensive due to the extensive number of samples (model evaluations) required. To address this challenge, the authors propose a novel modification of the Subset Simulation (SS) methodology, which is an advanced Monte Carlo technique originally designed for estimating the probability of rare events. Our modification enables the SS framework to be used for quantile estimation by decomposing the quantile estimation problem into sequential subproblems defined by conditional probabilities. The methodology employs Markov Chain Monte Carlo (MCMC) sampling to iteratively focus on relevant regions of the input space, which map close to the target quantile in the output domain. The modified SS method exhibits logarithmic growth in sample size with the inverse of the quantile probability, compared to the linear growth of traditional MC estimators. We applied the modified SS methodology to an electrical engineering problem involving the estimation of the maximum bid quantity of an electricity market product, subject to a 99.9% reliability constraint. Our results demonstrate that the SS procedure estimates the target quantile with a coefficient of variation (CoV) of 1% while using only one-third of the samples required by the direct MC method. This significant reduction in computational cost highlights the efficiency of the modified SS for extreme quantile estimation. |
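A compact sketch of Subset Simulation repurposed for upper-quantile estimation, following the usual p0 = 0.1 level splitting with Metropolis moves conditioned on exceeding the current level. The response g and the target probability are toy stand-ins for the market model:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p0, p_target, dim, sigma_prop = 1000, 0.1, 1e-3, 5, 1.0
g = lambda x: x.sum(axis=-1)              # toy response; exact quantile known

x = rng.standard_normal((n, dim))
y = g(x)
p = 1.0
while p * p0 > p_target:
    level = np.quantile(y, 1 - p0)        # next intermediate threshold
    seeds = x[y >= level]
    p *= p0
    # Grow a conditional sample on {g(X) >= level} by Metropolis moves.
    chains = [seeds.copy()]
    while sum(len(c) for c in chains) < n:
        cand = seeds + sigma_prop * rng.standard_normal(seeds.shape)
        ratio = np.exp(-0.5 * (cand**2 - seeds**2).sum(axis=1))  # std normal target
        accept = ratio > rng.uniform(size=len(seeds))
        accept &= g(cand) >= level        # stay inside the current subset
        seeds = np.where(accept[:, None], cand, seeds)
        chains.append(seeds.copy())
    x = np.concatenate(chains)[:n]
    y = g(x)
# Within the final subset (probability p), read off the conditional quantile.
q_hat = np.quantile(y, 1 - p_target / p)
exact = np.sqrt(dim) * 3.0902             # z_{0.999} * sqrt(dim) for the toy g
print(f"SS estimate: {q_hat:.3f}   exact: {exact:.3f}")
```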
15:00 | Data-driven Stochastic Model Updating with Diffusion Models PRESENTER: Tairan Wang ABSTRACT. Stochastic model updating is a vital technique in engineering that can calibrate the input parameters of a computational model to reflect the real-world physical system while accounting for the existence of uncertainties. However, traditional methods such as the Bayesian approach often struggle with high-dimensional and nonlinear problems. Thus, there is a trend toward adopting data-driven approaches to solve stochastic model updating problems, given their remarkable capability to handle high dimensionality and nonlinearity. Apart from utilising neural networks as surrogates for forward models, conditional invertible neural networks (cINNs), a type of flow-based deep generative model, can alternatively serve as an inverse surrogate to address stochastic model updating problems. Recently, another group of deep generative models, called Diffusion Models, has become very popular in generation tasks because of their superior ability to handle complex distributions, flexibility in network architecture, and stability in training. In this work, the feasibility of leveraging diffusion models to resolve stochastic model updating problems is investigated. Diffusion models transform a simple latent distribution (e.g., Gaussian noise) into a complex distribution that aligns with observation data through a gradual iterative process. In contrast to cINNs, diffusion models build up complexity in the learned distribution progressively through a series of Markov chains, allowing for more accurate modelling of complex systems with high uncertainty. A 3-DOF spring-mass system is adopted as an example. The training dataset is formed by input parameters generated from the prior distribution and synthetic observation data obtained from the forward numerical model. This work presents diffusion models as a potential alternative to conventional Bayesian approaches for stochastic model updating, with advantages in accuracy, uncertainty quantification, and flexibility for complicated, real-world applications. |
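The inverse-surrogate idea can be sketched as a small conditional denoising-diffusion training loop: a network learns to predict the noise added to parameters theta, conditioned on the observation y, so that reverse sampling would draw theta given y. The forward model, sizes and noise schedule are toy assumptions, not the authors' setup:

```python
import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

# Toy "simulator" standing in for the forward numerical model.
forward_model = lambda theta: theta @ torch.tensor([[1.0], [0.5]])

# Noise predictor conditioned on (noisy theta, observation y, step t).
eps_net = nn.Sequential(nn.Linear(2 + 1 + 1, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(eps_net.parameters(), lr=1e-3)

for step in range(500):
    theta0 = torch.rand(128, 2)                              # prior draws
    y = forward_model(theta0) + 0.05 * torch.randn(128, 1)   # synthetic observations
    t = torch.randint(0, T, (128,))
    a = alpha_bar[t].unsqueeze(1)
    eps = torch.randn_like(theta0)
    theta_t = a.sqrt() * theta0 + (1 - a).sqrt() * eps       # forward noising
    pred = eps_net(torch.cat([theta_t, y, t.unsqueeze(1) / T], dim=1))
    loss = ((pred - eps) ** 2).mean()                        # standard DDPM objective
    opt.zero_grad(); loss.backward(); opt.step()
```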
15:15 | An Inverse Gaussian-based Degradation Process with Covariate-Dependent Random Effects PRESENTER: Antonio Piscopo ABSTRACT. This paper introduces a new inverse Gaussian process-based degradation model with covariate-dependent random effects. The proposed model is suitable for fitting degradation data which cannot be satisfactorily described by treating separately the effect of the covariate and other forms of unit-to-unit variability. The model is applied to degradation data of some integrated circuit devices. Model parameters are estimated by using the maximum likelihood method. To mitigate numerical issues posed by the direct maximization of the likelihood function, the maximum likelihood estimates of the model parameters are retrieved by using the expectation-maximization (EM) algorithm. The probability distribution function of the remaining useful life is formulated by using a failure threshold model. Results obtained by applying the model to the considered integrated circuit device data demonstrate the utility of the proposed model and the practicality of the adopted estimation approach. |
15:30 | Two-Stage Stochastic Project Scheduling with Secondary Risk Reduction under Resource Constraints ABSTRACT. Uncertainty in task durations and risks can significantly impact project schedules and costs. Traditional project scheduling models often assume deterministic parameters, overlooking the complexities introduced by secondary risks. This paper presents a stochastic optimization model that incorporates secondary risk reduction into project scheduling. By modeling secondary risks as random variables with known probability distributions, we effectively capture their inherent imprecision. The model minimizes the expected total cost, including the effects of primary and secondary risks, resource utilization costs, and adjustment costs arising from deviations from the baseline schedule. Non-anticipativity constraints ensure that first-stage decisions, such as task sequencing and resource allocation, remain consistent across all scenarios. A case study demonstrates the application of the model, showing that it effectively handles imprecision in risk analysis and provides robust scheduling solutions that balance efficiency and adaptability in the face of uncertainty. |
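A toy two-stage formulation in the spirit of the model above, written with the PuLP modelling library: first-stage start times are shared across scenarios (non-anticipativity by construction), while second-stage crashing variables absorb scenario-specific risk delays. Tasks, durations and costs are invented:

```python
import pulp

scenarios = {"low": {"prob": 0.6, "risk_delay": 0},
             "high": {"prob": 0.4, "risk_delay": 4}}
dur = {"A": 3, "B": 5}                      # nominal durations; B follows A

m = pulp.LpProblem("two_stage_schedule", pulp.LpMinimize)
start = {j: pulp.LpVariable(f"start_{j}", lowBound=0) for j in dur}       # 1st stage
crash = {(j, s): pulp.LpVariable(f"crash_{j}_{s}", lowBound=0, upBound=2)
         for j in dur for s in scenarios}                                  # 2nd stage
makespan = {s: pulp.LpVariable(f"mk_{s}", lowBound=0) for s in scenarios}

for s, d in scenarios.items():
    # Precedence under scenario s: B starts after A finishes (risk hits A).
    m += start["B"] >= start["A"] + dur["A"] + d["risk_delay"] - crash[("A", s)]
    m += makespan[s] >= start["B"] + dur["B"] - crash[("B", s)]

# Expected cost: duration penalty plus the cost of crashing effort.
m += pulp.lpSum(d["prob"] * (10 * makespan[s]
                + 3 * pulp.lpSum(crash[(j, s)] for j in dur))
                for s, d in scenarios.items())
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: v.value() for v in m.variables()})
```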
15:45 | Bayesian Methods for Bounded Transformed Gamma Degradation Processes with Different Age and State Functions PRESENTER: Fabio Postiglione ABSTRACT. To better describe stochastic degradation processes of real technological units, which are typically bounded for physical reasons, a particular version of the transformed gamma process (TGP), the so-called bounded transformed gamma process (BTGP), has recently been proposed. Like the TGP, the BTGP is obtained from the gamma process by transforming the scales of time and state via two functions, named the age function and the bounded state function. Different functional forms of these functions are available in the literature, and the most suitable ones are to be selected to fit the available wear data, with the aim of predicting the remaining useful life and estimating the residual reliability of the technological units under study. In this work, we apply a Bayesian method, namely the Bayes factor, to select the functional form of the bounded state function that provides the best fit of the degradation data under study, and that can be indexed by up to three parameters. The proposed model selection procedure is able to exploit prior information held by experts from previous experience. More specifically, we use some knowledge of the upper bound of the degradation phenomenon and of the shape of the mean degradation function. Some Markov chain Monte Carlo techniques are adopted for the analysis, and then validated on a real data set consisting of wear measurements of the liners of an engine of a cargo ship. References: Giorgio, M. and G. Pulcini (2024). The effect of model misspecification of the bounded transformed gamma process on maintenance optimization. Reliability Engineering and System Safety 241, 109569-1–109569-18. Giorgio, M., F. Postiglione, and G. Pulcini (2024). Model Selection for Bounded Transformed Gamma Processes: Bayesian Approach. In K. Kołowrocki, E. Dąbrowska (Eds.), Advances in Reliability, Safety and Security, Part 3, pp. 39–47. Polish Safety and Reliability Association, Gdynia. |
14:45 | Adaptive predictive maintenance for the batteries of electric Vertical Take-off and Landing (eVTOL) aircraft using Remaining Useful Life prognostics PRESENTER: Mihaela Mitici ABSTRACT. SPECIAL SESSION DATA-DRIVEN PREDICTIVE MAINTENANCE Electric Vertical Take-off and Landing (eVTOL) aircraft are seen as a promising emerging technology for mobility in congested urban areas. Existing eVTOL designs achieve ranges of 50-100km and a payload of up to 500-800kg. Crucial for eVTOLs is the reliability of the battery management system. In this paper we leverage health condition sensor measurements (e.g., the temperature of the battery), the aircraft flight profile, and the battery charge/discharge profiles to develop data-driven Remaining Useful Life (RUL) prognostics for the eVTOL batteries. We quantify the uncertainty of these prognostics by estimating the distribution of the RUL. In turn, these prognostics inform maintenance decisions for the batteries, e.g., optimal battery replacement moments are identified while we limit the risk of batteries becoming inoperable during operations. The battery replacement strategy is continuously adapted as prognostics are updated with newly available measurements. We apply our methodology to a fleet of eVTOL aircraft equipped with Lithium-ion batteries. Each eVTOL performs a sequence of flights, following various flight profiles. We propose an integer linear program to adaptively replace the batteries of the eVTOLs. This model takes into account the RUL prognostics of the batteries, the flight schedule of the eVTOLs, and the maintenance availability of the eVTOL hub. The RUL prognostics are obtained using a Mixture Density Network (MDN). The results show that the RUL is accurately estimated using MDNs. In particular, the variance of the RUL distribution is limited, especially in the later phases of the battery life. The results also show that prognostics benefit the planning of battery replacement, leading to fewer unexpected, unscheduled battery replacements compared with traditional, time-based maintenance approaches. Overall, our model supports reliable use of the batteries, with continuous monitoring of their health state. |
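The prognostic ingredient, a Mixture Density Network that outputs a full RUL distribution rather than a point estimate, can be sketched as follows. Sizes and training data are placeholders; the predictive variance feeding the replacement program comes from the learned mixture:

```python
import torch
import torch.nn as nn

K, d_in = 3, 8   # mixture components, health-feature dimension (assumed)

class MDN(nn.Module):
    def __init__(self):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(d_in, 64), nn.Tanh())
        self.pi = nn.Linear(64, K)      # mixture logits
        self.mu = nn.Linear(64, K)      # component means
        self.log_s = nn.Linear(64, K)   # component log-std devs

    def forward(self, x):
        h = self.h(x)
        return torch.log_softmax(self.pi(h), -1), self.mu(h), self.log_s(h).exp()

def nll(log_pi, mu, sigma, rul):
    """Negative log-likelihood of RUL labels under the Gaussian mixture."""
    comp = torch.distributions.Normal(mu, sigma).log_prob(rul.unsqueeze(1))
    return -torch.logsumexp(log_pi + comp, dim=1).mean()

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, rul = torch.randn(256, d_in), torch.rand(256) * 500   # placeholder data
for _ in range(200):
    loss = nll(*model(x), rul)
    opt.zero_grad(); loss.backward(); opt.step()
```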
15:00 | Why Prognostics and Health Management and Reliability, Availability, Maintainability and Safety have not married yet PRESENTER: Pierre Dersin ABSTRACT. SPECIAL SESSION DATA-DRIVEN PREDICTIVE MAINTENANCE We argue that tighter integration between Prognostics and Health Management (PHM) and Reliability, Availability, Maintainability, and Safety (RAMS) is instrumental to reap the benefits of predictive maintenance. RAMS and PHM are historically separate research fields although they share common purposes: preventing system failures by keeping assets in good health and optimizing maintenance. How those two disciplines approach failure prediction and uncertainty modeling differs, but those approaches are complementary. PHM algorithms focus on estimating and predicting the health of individual components, such as bearings or pumps, yielding customized component maintenance plans. Yet maintenance decisions must be made at the system level, which necessitates integrated strategies that account for interactions and dependencies among components. This is where RAMS methodologies are most effective, since they model component interactions and assess system-level risks. But traditional RAMS methodologies do not inherently support predictive maintenance. They traditionally rely on average, fleet-level, predefined operating conditions and fail to take advantage of real-time asset monitoring data. As a result, they can only model traditional maintenance strategies, typically scheduled preventive maintenance, and not dynamic predictive maintenance. Thus PHM and RAMS exactly complement each other: PHM provides insight into the exact failure mechanisms of individual components and leverages monitoring data to predict actual component-level performance, while RAMS provides insight into component interactions and predicts average system-level performance. Despite that complementarity, as well as recent progress, integration of the two disciplines has not happened yet. Why? That state of affairs may be ascribed to the following methodological differences: 1. PHM models continuous degradations; RAMS models faults as discrete events. 2. PHM has traditionally taken a deterministic viewpoint; RAMS models are stochastic. 3. Evaluation: results of PHM algorithms are usually evaluated empirically; RAMS relies on more mathematical evaluation methods. Recent trends contribute to bridging those gaps. |
15:15 | Decision-Focused Predictive Maintenance: Bridging the Gap between data-driven RUL Prognostics and Maintenance Planning PRESENTER: Zhuojun Xie ABSTRACT. SPECIAL SESSION DATA-DRIVEN PREDICTIVE MAINTENANCE Data-driven predictive maintenance (PdM) increasingly leverages machine learning techniques to predict remaining useful life (RUL) using abundant sensor data, supporting effective maintenance planning. However, most existing research follows a predict-then-optimize (PtO) paradigm, focusing on prognostic accuracy while overlooking how RUL predictions affect maintenance decisions. We propose a novel Decision-Focused Predictive Maintenance (DFPdM) framework that bridges the gap between RUL prognostics and maintenance planning. This framework creates an end-to-end pipeline that directly connects RUL estimation to maintenance actions. An experiment using the CMAPSS dataset demonstrates that our framework achieves a 9.3% reduction in maintenance costs compared to the PtO approach. This improvement is primarily attributed to the avoidance of unnecessary preventive maintenance, leading to a reduction in average lifetime waste due to preventive maintenance from 20.9 to 11.3 cycles. More importantly, we highlight the distinction between DFPdM and PtO by analyzing the quantile levels of RUL labels and maintenance decisions, demonstrating that DFPdM exhibits greater consistency in unifying estimation and optimization. Interestingly, we also observe that DFPdM achieves acceptable prognostic accuracy, despite this not being the primary training objective. This prognostic accuracy is accomplished by recalibrating a specific quantile of the estimated distribution, rather than relying on the expectation or median as is common in conventional approaches. |
15:30 | Prescriptive maintenance and routing for a fleet of degrading vehicles PRESENTER: Pedro Dias Longhitano ABSTRACT. PAPER INTENDED TO BE PRESENTED IN THE SPECIAL SESSION PREDICTIVE MAINTENANCE The digitalization of the economy in the past decades has made data availability grow and become more important. Within the maintenance field, new technologies are emerging and the possibilities opened up by the Internet of Things are countless. Consequently, in recent years, different terms have appeared in the maintenance scientific community, among which prescriptive maintenance deserves special attention. Although this term lacks a clear, unambiguous definition, in this work it is understood as the combined optimization of maintenance decisions and system usage, and it is applied to a fleet of vehicles. Here, we introduce the notion of degradation management into one of the most important combinatorial optimization problems, the Vehicle Routing Problem (VRP). In this variation of the VRP, a fleet of vehicles with limited payload capacity has a set of points in space to visit each day. Each of those points has known deadlines. Additionally, vehicles are subject to degradation, with a stochastic State of Health (SoH) which evolves as they increase their travelled distance. The SoH is modeled by a gamma process, which is popularly used in this context. Finding the best order of points to visit requires the solution method to account for the long-term nature of the degradation process; solving each day independently, in the way most logistics operations are currently run, can lead to suboptimal solutions and increases the risk of downtime. In our paper we introduce this optimization problem and formalize it using a Mixed-Integer Linear Programming (MILP) notation. We also provide numerical experiments, solving it and discussing the implications of considering degradation while optimizing the exploitation of vehicles in this prescriptive maintenance approach. In conclusion, it is shown that our proposed solution can reduce breakdowns and, therefore, maintenance costs. |
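The degradation side of the problem can be sketched directly: gamma-process wear increments scale with distance travelled, so today's routing changes tomorrow's breakdown risk. Parameters and the failure threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
a_per_km, scale, threshold = 0.02, 1.0, 60.0   # gamma shape per km, scale, wear limit

def breakdown_prob(daily_km, horizon_days, n_sim=10_000):
    """Monte Carlo probability that cumulative wear crosses the threshold:
    one gamma increment per day, with shape proportional to distance driven."""
    shape = a_per_km * np.asarray(daily_km, dtype=float)
    wear = rng.gamma(np.tile(shape, (n_sim, 1)), scale).cumsum(axis=1)
    return (wear[:, :horizon_days] >= threshold).any(axis=1).mean()

# Two candidate routing policies over a 30-day horizon.
print("short routes :", breakdown_prob([80] * 30, 30))
print("long routes  :", breakdown_prob([140] * 30, 30))
```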
15:45 | Prognostic and Energy Management for Multi-Stack Fuel Cell Systems with Stochastic Non-Homogeneous Degradation PRESENTER: Mouhamad Houjayrie ABSTRACT. SPECIAL SESSION PREDICTIVE MAINTENANCE Fuel cell systems represent promising solutions for sustainable power generation. However, they encounter significant challenges concerning durability, reliability, and operational costs. This paper focuses on post-prognosis decisions to enhance the reliability and durability of multi-stack fuel cell systems. Multi-stack systems offer potential solutions to some of these limitations by enabling the distribution of load demand across multiple stacks, which can be optimized through an Energy Management Strategy (EMS). A prognosis phase, estimating the State of Health (SOH) of the system and its Remaining Useful Life (RUL), is necessary for making informed decisions within a degradation-aware EMS. In this work, the degradation process of a Proton Exchange Membrane Fuel Cell (PEMFC) is modelled as a non-homogeneous gamma process. This model is made load-dependent by introducing an empirical model linking the degradation level to the load demand. The prognosis involves estimating the RUL of the PEMFC. Given a known future load demand, the advantage of the gamma process lies in its well-defined probability density function of the failure time, which eliminates the need for the Monte Carlo simulations used with other processes and thereby facilitates straightforward RUL calculations. Then, an energy management strategy for a system composed of two PEMFCs is developed. An objective function that includes the total degradation of the system is proposed and then optimized under constraints using the Sequential Quadratic Programming (SQP) method to supply the load demand. Finally, a maintenance strategy and a replacement policy are proposed to minimize the operation and maintenance costs of the system. The maintenance strategy can be coupled with the EMS for better performance and will be compared to other classical techniques, such as an average load split, providing insights into the efficacy of the proposed approach. |
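Two ingredients of the pipeline above, in toy form: the closed-form failure-time CDF of a gamma degradation process (no Monte Carlo needed) and an SQP load split between two stacks under an assumed quadratic degradation law. All parameter values are invented:

```python
import numpy as np
from scipy.special import gammainc
from scipy.optimize import minimize

a, b, L = 0.5, 0.1, 4.0          # gamma shape rate, scale, failure threshold (assumed)

def cdf_failure(t):
    """P(T <= t) = P(X_t >= L) = 1 - GammaCDF(L; shape=a*t, scale=b)."""
    return 1.0 - gammainc(a * t, L / b)   # gammainc is the regularized lower form

t_grid = np.linspace(1, 300, 300)
t_med = t_grid[np.searchsorted(cdf_failure(t_grid), 0.5)]
print(f"Median failure time: {t_med:.0f} h")

# Degradation-aware load split: each stack's incremental degradation is
# assumed to grow quadratically with load, faster for the weaker stack.
P_dem, soh = 10.0, np.array([0.9, 0.6])     # demand (kW) and current SoH
degr = lambda p: np.sum((p / soh) ** 2)
res = minimize(degr, x0=[5.0, 5.0], method="SLSQP",
               constraints={"type": "eq", "fun": lambda p: p.sum() - P_dem},
               bounds=[(0, 8), (0, 8)])
print("Optimal load split:", res.x.round(2))   # healthier stack takes more load
```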
14:45 | OHS specialists: position and interaction through the company ABSTRACT. Occupational Health and Safety (OHS) specialists are now well-established actors in many companies. Although their practices and profession are increasingly being studied, there seem to be few questions as to how the position they occupy in their company (e.g. headquarters, industrial sites) can influence how they practice their job. They must also interact with other organisational actors who have a safety assignment within their jobs. To explore this topic, our research is grounded in the tradition of activity and organisational sociology. In this research, we follow an empirical, ethnographic approach in a gas company. This company has decentralised part of its safety department to operational divisions, whereas safety was historically centralised at its headquarters. The aim is to reconnect safety with operational activities. Through interviews and observations of the company's diversity of actors, we intend to understand the safety issues they face daily and how they interact with each other. In this article, we share some preliminary outcomes. One of them concerns the management of operational entities, which now incorporates a decentralised safety dimension. Interviews and observations show that this generates some contradictory discourses and, consequently, misunderstandings at operational sites. With the separation of the safety department into the company's three divisions, each one seems to be moving towards greater autonomy, which distances them from the previously unified safety policy. However, this could also produce the emergence of new areas of regulation. This empirical work invites a situated approach to OHS specialists within the company for a better understanding of their daily practices, with greater sensitivity to the different categories and positions of safety professionals. References: Antonsen, S. (2009). Safety culture and the issue of power. Safety Science, 47(2), 183–191. Rae, A., & Provan, D. (2018). Safety work versus the safety of work. Safety Science, 111, 119–127. |
15:00 | Can legal professionals champion safety? A brief historical inquiry PRESENTER: Nadezhda Gotcheva ABSTRACT. Crystal Eastman was an American lawyer who actively advocated for legislative means to prevent accidents and to ensure workplace health and safety. Almost 60 years ago, the safety-conscious work of another American lawyer, Ralph Nader, resulted in the establishment of the US National Traffic and Motor Vehicle Safety Act, requiring the adoption of new vehicle safety standards. Both Eastman and Nader were formally educated as legal professionals. They were committed to identifying and challenging unsafe practices and insufficient accountability within powerful American corporations during the 1960s. Grounded in historical accounts, this contribution aims to elaborate the role of legal professionals as safety advocates, driving regulatory and institutional changes in high-hazard domains to improve workplace and automotive safety. We claim that contemporary safety professionals can draw valuable lessons from these early safety champions, whose professional practice and continuous commitment to speaking up and fighting for safety have led to significant institutional impact on safety in the USA and globally. This study highlights the power of unexpected, bottom-up change, and invites interdisciplinary conversations. The results illustrate how the activism of legal professionals, who are not typically seen as safety influencers, can substantially impact safety decision-making by driving the establishment of governmental regulations and better safety standards. The notion of professional practice for safety is rich and complex. Historical examples show that positive impact on safety decision-making may come from unforeseen professional fields and changemakers, whose relentless dedication and bravery in unmasking and facing unsafe corporate influences and practices continue to teach and inspire. |
15:15 | System 1 and System 2 thinking in safety-related decision-making – a study of pilots' planning and risk considerations PRESENTER: Lisbet Fjaeran ABSTRACT. Much research has demonstrated that people understand risk in two ways: one in which risks and risk-related information go through careful analytical, logical consideration and cognitive effort, and another involving quicker and more intuitive processing on a more affective and experiential basis. This dual way of thinking and processing information, often referred to as System 1 and System 2 thinking, is fundamental to understanding how professionals make safety-related decisions. In technical approaches to risk and safety, System 2 thinking has traditionally been seen as superior, while in practice-based and naturalistic studies System 1 has been identified as the most appropriate when making decisions in complex and time-constrained situations. In this paper, we build on research emphasizing that optimal decision-making requires the combined use of both systems or modes of thinking and argue that these operate in parallel or in sequence. Applying this understanding, we analyze data collected through a questionnaire, interviews and observations in a European regional airline. The research focuses on how pilots, operating in challenging conditions and sometimes complex situations, perceive and respond to risks. In so doing, we discuss the important and nuanced role of cognitive factors and heuristics in operational decision-making, while also stressing how these are influenced by organizational and systemic factors. We discuss how these factors, alone and together, may act to heighten or attenuate risk perceptions and have different consequences for safety-related decision-making and behavior. Connecting research on risk perception and professional practice in aviation, our findings point to the importance of developing regulations, professional values and working conditions that support and promote the use of both systems or modes of thinking. |
15:30 | Investigating the “blues” of safety professionals PRESENTER: Jean-Christophe Le Coze ABSTRACT. “In online forums, at conferences, seminars and in safety publications, people are beginning to question some of the most well-established principles of safety management. Safety as a profession is going through a middle-life crisis (…) We have made significant progress since the days major projects budgeted for a certain number of fatalities, but by and large, the safety profession is frowned upon (…) There is a general rolling of the eyes and resigned shrugging of shoulders, if not outright hostility” (Marriott, 2018, 13). With these words, excerpted from a book entitled “Challenging the Safety Quo”, Craig Marriott, with 25 years of experience as a safety manager in different safety-critical industries, expresses a certain discontent with the profession. He is not alone. His book is one among many published in the past decade by other safety professionals. Explicit titles such as “Safety Sucks” (Goodman, 2021) are examples among many others that develop a critical, sometimes provocative, perspective on their work. The hypothesis of a “blues of safety professionals” is formulated, based on an analysis of these publications in their historical context (Le Coze, 2024). The “blues” captures this discontent with current practices associated with the profession. To explore this idea further, a collaborative study by CRC Mines de Paris and Ineris has been launched, based on a series of interviews with safety professionals from France. The outcomes of these interviews are presented and discussed, questioning the role, identity and activity of safety professionals. References: Goodman, S. A. 2021. Safety Sucks! The bull $H!# in the safety profession they don’t tell you about; Second Expanded Edition. Pale Horse Media Co. Marriott, C. 2018. Challenging the Safety Quo. Routledge. Le Coze, J.C. 2024. Understanding the “blues of safety professionals”. Journal of Occupational Safety and Ergonomics. |
15:45 | Bridging the gap between safety and performance: observations and implications at operational and strategic levels PRESENTER: Stéphanie Tillement ABSTRACT. High-risk organizations are characterized by the need to reconcile major requirements: safety, of course, and performance. How organizations, at both strategic and operational levels, manage (or fail) to articulate safety and performance, in day-to-day work and over time, remains under-documented. Yet understanding this is a major theoretical and practical challenge. This communication reports on research aimed at better defining the concept of “safe performance”. This concept, which stems primarily from observations at the operational level, places our research closest to the work situations experienced by the professionals: in everyday work, they are faced with a network of established and possibly contradictory prescriptions concerning safety, costs, deadlines, quality, productivity, etc. This systemic perspective has theoretical implications: safety is no longer considered and managed in isolation from other requirements. Such an integrated vision implies a step back from two dominant approaches: the first, known as “safety first”, defends an “idealistic” conception of safety, which tends to make performance requirements and the diversity of stakeholders’ interests invisible; the second, summed up by Farjoun and Starbuck's formula, “Faster, better, cheaper... but not safer!”, insists on the incompatibility between safety and economic performance objectives. How, then, can we overcome these oppositions, and propose a more integrated vision of requirements, closer to the reality of work in high-risk organizations: “producing safely”? In addition to investigations at operational level, this paper reports on definitional work undertaken with high-risk organizations’ directors, to identify the implications of a “safe performance” approach at strategic level. Does safe performance have the same meaning according to the activity considered (design, maintenance, production, dismantling)? According to the organization considered (operator, regulator, policy-maker)? How does time influence the articulation of requirements? How do management tools frame the relationships between performance and safety, promoting or preventing the possibility of building safe performance? |
14:45 | Responding to the Weaponization of Energy Dependencies: Hybrid Threats, National Security Interests, and Securitization PRESENTER: Sissel Haugdal Jore ABSTRACT. The geopolitical developments following Russia’s full-scale invasion of Ukraine and the sabotage of the Nord Stream 1 and 2 pipelines have significantly transformed the role of the Norwegian petroleum sector. A direct outcome of this situation is the designation of “transport of gas by pipelines to Europe” as a fundamental national function, which essentially means that gas transport to Europe is recognized as a matter of national security under the Norwegian Security Act. This represents a novel application of the Security Act, where the definition of national security interests has been expanded to include “the relationship with other states and international organizations”. This allows for the consideration of infrastructure or services as essential to Norwegian national security, even if they are not located within Norway or directly coupled to Norwegian domestic safety and security. Our paper explores the expansion of the national security interest concept through the lenses of securitization and weaponization, both of which have played a role in framing the issue as a matter of national security. We analyze how the evolving security landscape, characterized by hybrid threats, lays the ground for these developments, both currently and in the future. Utilizing the Norwegian petroleum sector as a case study, we draw on official Norwegian reports as empirical data. We conclude that the shifting security landscape, with its emphasis on hybrid threats and great power competition, will further drive the weaponization of various sectors, potentially leading to the securitization of new industries. This evolution will have implications for organizations and their risk management strategies. |
15:00 | Societal Safety and Security in Norway: the conceptualizing of securitization in contemporary public documents ABSTRACT. Over the past 25 years, various official documents have shaped the understanding of societal safety and security in Norway (Morsut 2022). These documents demonstrate how resources and political will are mobilized to tackle risks and threats, but they also raise questions about democratic accountability and the potential for overemphasizing certain risks at the expense of others. This paper explores how securitization issues are expressed in contemporary official documents and examines to what extent there is a shift from the notion of societal security and safety to a more territorial concept of national security. By specifically investigating four significant recent public documents (Meld. St. 5, NoU 2023:17, NoU 14 2023, Meld. St. 9 2024-2025), this paper reveals how notions of risks and threats are framed and addressed in relation to contemporary risks such as hybrid threats and the war in Ukraine. The paper highlights the risks associated with over-securitization, where an excessive range of issues are framed as security threats. Through document analysis, it investigates references to 'existential' dangers and the deliberate use of 'security' language, including phrases conveying a 'sense of urgency' and 'imagery of exhortations.' Furthermore, it examines how democratic values and civil rights principles are safeguarded in the face of new laws, regulations, and increased civil and military preparedness. |
15:15 | Responding to A New Geopolitical Reality. NATO and EU Strategies of ‘Whole-of-Society’ and ‘Resilience’ and Implications for Corporate Actors PRESENTER: Susanne Therese Hansen ABSTRACT. The invasion of Ukraine, Russia’s weaponization of energy, the Nord Stream and the Balticconnector incidents accelerated NATO and EU initiatives targeted at enhancing the security of critical energy infrastructures. Central to ongoing security efforts targeted at energy infrastructures is a ‘whole-of-society approach’ and a focus on ‘resilience’, both involving a new role for, and new demands on, corporate energy infrastructure owners and operators. In this paper, we first examine the rationales underlying NATO’s and the EU’s whole-of-society approach and the focus on resilience therein. Second, drawing on the case of Norway and major Norwegian energy companies, we discuss potential implications for corporate actors. We conclude with observations about the need for scholarship to explore the various implications of what we label ‘corporate securitization’, that is, the process through which the activities and functions of corporations become reframed as security policy and become subject to security policy tools. |
15:30 | From organisational culture to communities of practice: Organisational culture and resilience in a context of co-emerging safety and security challenges PRESENTER: Torgeir Kolstø Haavik ABSTRACT. This paper reports from an early phase of a research effort engaging with organisations’ response to emerging security threats in the oil and gas sector, combined with theoretical advances in cyber resilience. The ambition of the paper is to scrutinise the multi-dimensional appropriation of the term ‘culture’ to guide behavioural change and compliance with management expectations, rules, and procedures. In addition, we direct attention towards the mechanism of fostering professional culture in communities of practice. We argue that culture is not first and foremost a (pre-)condition for practice, but rather a pattern resulting from practice over time. This implies a ‘practice approach’ and a ‘work-as-done approach’ to organisational culture, which facilitates communication between scholarly literatures that rarely meet: the safety and security culture literature, and the resilience literature. The discussion will use cyber resilience as a case, as it is widely recognized across industries that state-of-the-art cyber security approaches urgently need to be reinforced by resilience principles. There is a risk that the cultural condition may be “lost in translation” into auditability, so that operationalising safety culture/security culture as a management concept risks serving compliance rather than facilitating resilience. We argue for more focus on communities of practice in organisations to develop an understanding of contextual conditions, professional competence, and discretionary space in organisations. We also suggest how this focus can be inscribed into a further development of theories about resilience in a cyber-/hybrid threat context. |
14:45 | An analysis of RDT-based methods for defining qualification plans for new completion technologies PRESENTER: Rafael Azevedo ABSTRACT. Reliability demonstration tests (RDT) are usually applied in the qualification process of equipment intended for use in O&G applications. Given a target reliability level in the field and an associated confidence level, the RDT method can be used to estimate the number of test samples required, assuming a maximum number of failures occurring during the test. The classical RDT equation is presented in the API recommended practice 17Q, assuming a test for the equivalent product service life. This work provides a useful comparative analysis between the classical RDT and an RDT-based Bayesian approach to qualify completion equipment. The context of completion technologies poses a relevant problem for service companies, since a long mission time and a high reliability level are required, and the classical RDT method may result in unfeasible test plans. A test-to-success approach, i.e., a test in which failure is undesirable, which is common practice in the development of new technologies in the O&G industry, was adopted in the analysis. The results showed that the difference in the modeling of the test significance level alone yields a sample with one unit fewer in the Bayesian approach, while important benefits can be obtained if a suitable method is adopted in defining the Bayesian prior distribution. |
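For orientation, a minimal sketch of the classical zero-failure (success-run) sample-size relation underlying RDT planning; the reliability target and confidence level below are hypothetical, not values from the paper:

```python
import math

def rdt_sample_size(r_target: float, confidence: float) -> int:
    """Smallest n with 1 - r_target**n >= confidence (zero failures allowed)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(r_target))

# Hypothetical completion-equipment target: R = 0.95 over the mission, 90% confidence.
print(rdt_sample_size(0.95, 0.90))  # -> 45 units, illustrating why such plans can be unfeasible
```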
15:00 | Reliability of a new subsea interface for well completion: test results, Bayesian update, and multi-source data PRESENTER: Eduardo Menezes ABSTRACT. One of the major breakthroughs in the O&G industry is the complete electrification of well completion. In this regard, a whole new equipment group has been developed to implement electric completion. One of the most critical components is the subsea interface (SI), which provides power and communication between the downhole and top-side systems. This was a new development for the O&G industry, and a series of analyses, tests, and aggregation of data from various sources were utilized to estimate the reliability of the technology. In a structured approach, the reliability of the SI was obtained for TRL 6, after the accomplishment of the necessary pre-requisites. The process initiates with the Failure Modes, Effects and Criticality Analysis (FMECA), in which the main failure scenarios are outlined. With the results of the FMECA, a Fault Tree Analysis (FTA) is constructed for the several modes of operation of the under-development SI. The FTA establishes the logical connections between the failure modes and the basic events for which the reliability models should be established. These models are updated sequentially, as test results and multi-source data become available, using Bayesian methods. This work presents the estimation of reliability for the SI, describing the relevant steps in a practical application case. It includes results from expert elicitation at the beginning of the development process and reliability tests executed according to planning suited to the new technology. |
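As a hedged illustration of the sequential Bayesian updating the abstract describes, a minimal conjugate Beta-binomial sketch; the prior and the campaign outcomes are invented for the example, not the paper's data:

```python
from scipy.stats import beta

# Hypothetical expert-elicited prior for SI mission reliability: Beta(9, 1), mean 0.9.
a, b = 9.0, 1.0

# Sequential updates as evidence arrives: (successes, failures) per test campaign.
for successes, failures in [(5, 0), (8, 1), (12, 0)]:
    a, b = a + successes, b + failures   # conjugate Beta-binomial update

print(f"posterior mean reliability: {a / (a + b):.3f}")
print(f"90% credible interval: [{beta.ppf(0.05, a, b):.3f}, {beta.ppf(0.95, a, b):.3f}]")
```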
15:15 | Assessing the Reliability and Availability of Offshore Subsea Production Systems: A Comparative Analysis Using ESDs and FTA PRESENTER: July Bias Macedo ABSTRACT. As the demand for oil and gas production from offshore fields grows, the complexity and scale of subsea systems continue to increase. Ensuring the reliability and availability of these systems is critical to minimizing operational risks and maximizing productivity. The layout of subsea production systems plays an important role in achieving these goals, but determining the most efficient configuration remains challenging. This paper discusses four distinct subsea production systems: satellite, manifold, trunkline, and collection ring. To assess the reliability of these configurations, we modeled the systems using a combination of Event Sequence Diagrams (ESDs) and Fault Tree Analysis (FTA), allowing us to capture both the dynamic sequences of events and the potential failure modes within each system. We also considered repair and mobility times for the equipment and pipes, treating these times as exponentially distributed random variables to model downtime and repair logistics uncertainties realistically. By comparing the reliability and availability of each system, we provide a comprehensive evaluation of the trade-offs between design complexity and system performance. The insights gained are expected to guide engineers and decision-makers in selecting subsea production system designs. |
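A minimal Monte Carlo sketch of the kind of availability calculation involved, assuming a single item with exponentially distributed times to failure and repair (including mobilisation); all parameter values are hypothetical:

```python
import random

def simulate_availability(mtbf_h, mttr_h, horizon_h, n_runs=20_000):
    """Monte Carlo availability of one item, exponential failure and repair times."""
    total_up = 0.0
    for _ in range(n_runs):
        t = up = 0.0
        while t < horizon_h:
            ttf = random.expovariate(1.0 / mtbf_h)          # time to failure
            up += min(ttf, horizon_h - t)
            t += ttf + random.expovariate(1.0 / mttr_h)     # repair incl. vessel mobilisation
        total_up += up
    return total_up / (n_runs * horizon_h)

# Hypothetical subsea item: MTBF 40,000 h, mean repair + mobilisation 1,200 h, 5 years.
print(f"mean availability ~ {simulate_availability(40_000, 1_200, 5 * 8_760):.4f}")
```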
15:30 | A Reliability Model Repository for Real-Time Well Integrity Management in Oil and Gas Operations PRESENTER: Danilo Colombo ABSTRACT. Quantitative risk analysis (QRA) is vital for well integrity management, supporting decision-making by assessing the current and future states of well integrity. This involves considering the reliability of well and subsea components and projecting potential integrity loss. To address these needs, this paper proposes the creation of a Reliability Model Repository that interfaces with reliability data sources and real-time well-condition monitoring, and applies them to different types of reliability models that assess and predict integrity, enhancing operational safety. The Reliability Model Repository integrates data-driven approaches to evaluate the current state of well integrity and track its history, which is crucial during the production phase. It combines different types of reliability models: statistical distributions, accelerated life-test models, structural reliability models, and machine learning models to capture the impact of covariates (equipment and well characteristics, environmental and operational conditions). Furthermore, the Reliability Model Repository allows modeling of dependence between failures using conditional probabilities, and modeling of incomplete tests based on the well conditions. This paper provides a real case study showing how the condition of the well is assessed in real time and influences the failure rate of three components (Downhole Safety Valve, Production Tubing, and Production Casing), affecting the quantitative evaluation of well integrity. The results can be used to optimize well operation management, comply with Brazil’s ANP-SGIP regulations, and boost operational safety. |
15:45 | Offshore Oil-Well: A Risk-Based Maintenance Proposal PRESENTER: Danilo T. M. P. Abreu ABSTRACT. In 2022, ninety-two percent of Brazil's oil production came from deep and ultra-deep waters, with approximately 80% originating from the pre-salt fields. These fields, located 300-350 nautical miles from the coast at depths of around 2,000 meters, have reservoirs situated 7,000 meters below the sea surface. They face significant technological and operational challenges, necessitating a balance between sustainable financial results and rigorous safety standards. This balance requires innovations and strict asset maintenance criteria. In this work, the authors propose a risk-based maintenance approach for subsea oil wells, inspired by practices from other industries such as the nuclear industry. The risk tolerability parameters are based on measures that relate to the system’s failure rate and incremental failure probability. The merits of this approach are analyzed through computational experiments using discrete-event simulation (DES) and the Monte Carlo method for reliability, availability, and maintainability (RAM) modelling of safety and production functions. The flexibility of DES allows modelling several critical aspects relevant to the management of subsea oil wells, including periodic test policies, delayed and opportunistic repairs, and complex maintenance prioritization decision-making. The results show how the sensitivity of indicators such as blowout probability, repair costs, and downtime varies as a function of the risk tolerability parameters. As a benchmark, the results for the risk-based scenario were compared to similar results for prescriptive maintenance scenarios, and it is shown that the correct calibration of risk tolerability parameters can lead to better outcomes than typical prescriptive maintenance policies. |
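To illustrate one indicator such a risk-based policy could cap, a minimal sketch of the first-order average unavailability of a dormant safety barrier under periodic testing; the failure rate and test intervals are hypothetical:

```python
# Average unavailability (PFDavg) of a hidden failure mode tested every tau hours,
# first-order approximation PFDavg ~ lambda * tau / 2. Values are illustrative only.
failure_rate = 2e-6          # per hour, hidden failure mode of a safety barrier
for tau_months in (3, 6, 12, 24):
    tau_h = tau_months * 730.0
    pfd_avg = failure_rate * tau_h / 2.0
    print(f"test every {tau_months:>2} months -> PFDavg ~ {pfd_avg:.2e}")
```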
14:45 | Industrial Cybersecurity: Current Trends and Challenges PRESENTER: Ravdeep Kour ABSTRACT. Industrial cybersecurity has become a critical concern in today's interconnected world, as critical infrastructure systems increasingly rely on digital technologies. This paper explores the unique challenges and opportunities presented by industrial cybersecurity, highlighting the need for enhanced cybersecurity measures. The paper discusses the potential consequences of cyberattacks on industrial systems, including disruptions to critical services, economic losses, and even physical harm. To address these challenges, this paper discusses cybersecurity initiatives, standards, guidelines, directives, and acts that can provide a comprehensive framework for cybersecurity and AI governance. A systematic literature review has been conducted in this paper using Scopus and Google Scholar, which provide the foundation for identifying relevant publications. These publications show key trends and themes in industrial cybersecurity research, including the growing importance of education and training, as well as cybersecurity risk assessment and mitigation. |
15:00 | Cybersecurity barriers and performance requirements PRESENTER: Knut Øien ABSTRACT. The cybersecurity barrier management project aims to develop new knowledge and guidance to secure industrial control and safety systems against cyberattacks. With several threat actors targeting the petroleum sector, the number of publicly known cyberattacks is increasing, revealing a larger threat landscape. At the same time, increased digitalisation of the sector has led to increased vulnerability. Cyberattacks against control and safety systems in the petroleum industry can cause physical damage to facilities, harm personnel on board, and affect security of supply to Europe. Barrier management systems have been introduced for traditional safety against accidental events, but to a lesser extent against intentional events such as cyberattacks. In this paper, we focus on the identification of cybersecurity barriers, and the development of corresponding cybersecurity barrier requirements (i.e., how well the barriers should perform). The main research approach is iterative empirical research where the research and exploration are carried out in several iterations in close interaction with industry partners, advisors, experts and professional forums. A literature survey has been performed and initial considerations about the distinction between cybersecurity barriers and non-barriers have been made. The main result described in this paper is a nine-step methodology for identification of cybersecurity barriers and establishment of performance requirements for the barriers. This includes a definition of cybersecurity barriers. However, a definition alone is not sufficient to distinguish between cybersecurity measures being considered cybersecurity barriers versus those considered as non-barriers; expert insights in cybersecurity measures are a prerequisite for the identification of cybersecurity barriers and the establishment of performance requirements. |
15:15 | Cyber sovereignty and the (multitude of the) industrial metaverse PRESENTER: Marte Høiby ABSTRACT. Digital profiling and tracking in the age of the attention economy and surveillance capitalism imply intricate and subtle ways of profiling and influencing people, but also industries, organizations, and societies. The nature and full impact of this is only partially recognized at the civic level, receives rather slack attention at the political/legal level, and is scarcely addressed as a safety/security issue in industrial and societal contexts and the corresponding critical roles and positions of individuals. These aspects carry serious challenges for the ambition of industrial participation in the private-public domain for national security, and for total defense approaches. In this paper, we explore the rise of vulnerabilities introduced by new industrial digitalization paradigms and practices, and their potential impact on society and state security from a cyber sovereignty and resilience perspective. Accompanied by growing concern for industrial and national sovereignty in cyberspace and burgeoning geopolitical instability, this constitutes a gap which urgently needs recognition. By investigating the concept of cyber sovereignty and the growing interest in the industrial metaverse on a global scale, we discuss to what extent cyber resilience thinking may be a counterweight to the vast digital vulnerabilities and challenges that can be envisaged. Although the exploration is brief and the results tentative, reflecting the inherent complexity and dynamism of the field due to technological, economic, and geopolitical factors, we argue that both practical and theoretical (envisaged) approaches to cyber resilience may fall short of the pace of change in the cyber domain. This may have serious implications for the prospect of industrial contribution to national security. |
15:30 | Exploring and developing resilience training scenarios for security of electricity supply PRESENTER: Tor Olav Grøtan ABSTRACT. The electrical energy sector faces large challenges and needs for digital transformations. The introduction of intermittent renewable energy sources, increased demand, and new energy consumption patterns influence grid stability and security of supply. Moreover, the strained geopolitical situation implies new threats and digital vulnerabilities to a complex system comprising a mix of new and old technologies. Existing regimes, design, and operational principles (e.g., “N-1”) are challenged by the urge to facilitate a higher utilization of the existing grid resources. A more risk-based approach is suggested to face these challenges. In other sectors, the disappointments and shortcomings of anticipation-based risk management have incited a strong interest in resilience approaches. The successful adoption of methods for enhancing the cyber-resilience of the electric energy sector requires that the approaches are adapted to the unique characteristics of this sector. However, cyber resilience is not confined to cyber security but includes, from a sociotechnical perspective, countering of digital vulnerabilities in the context of security of supply. Moreover, we see resilience as an adaptive capacity both residing in normal operation and invoked at boundary conditions. Resilience is thus a process and practice, not only observable outcomes. Hence, resilience is an inherent ability that manifests itself at boundaries and margins of operation. These boundaries are not static but influenced by past actions and future strategies. This paper aims to enable distribution system operators (DSOs) to understand and benefit from their adaptive history, grasp their precariously vulnerable present, and envisage their resilient future. The primary method is training on scenarios that clarify boundary conditions for DSOs and foster inherent resilience, supported by a proper learning and strategizing environment using the Training for Operational Resilience Capabilities framework. This paper gives an example of such a scenario. |
15:45 | Building cyber resilience in SMBs - lessons from the project Cyber Innovation Network PRESENTER: Levente Nyusti ABSTRACT. Small to medium-sized businesses (SMBs) form the backbone of Norway’s economy. SMBs are defined as businesses or enterprises with fewer than 100 employees [1] and constitute over 99 percent of Norway's active businesses [2]. Consequently, SMBs frequently hold key positions within our society in sectors such as healthcare, telecommunications, and energy, as well as serving as suppliers for critical infrastructure [3]. Furthermore, cybercriminals might also attack e.g. critical infrastructure through SMBs by way of supply chain attacks [3]. In 2023, Microsoft reported that “82 percent of ransomware attacks target small businesses” [4]. However, SMBs often don’t have the resources to employ dedicated security staff or to outsource their cybersecurity needs. Often, this responsibility falls to the company’s IT professional or the most skilled IT person in the company. The project Cyber Innovation Network (CIN) aims to build cybersecurity competence in SMBs by creating a collaboration between industry, academia, and research institutions. Our project shows that while these SMBs lack the know-how to protect against cyber threats, this type of collaboration can provide effective countermeasures in the ever-evolving threat landscape. The CIN project has received much positive feedback, and this paper details our approach to teaching fundamental cybersecurity principles to SMBs. Furthermore, this paper also presents our approach, experiences, and lessons learned in building cybersecurity competence in SMBs. As a first step, we conducted a survey in which we invited several SMBs to map perceived needs and cybersecurity threats. After an initial assessment of SMB needs, we created a two-day beginner course in risk assessment and cybersecurity. The two days consisted of different sessions, including risk management, contingency plans, and challenges related to supplier acquisition. Furthermore, we discuss feedback from the participants and future work. [1]: https://www.regjeringen.no/globalassets/departementene/nfd/dokumenter/vedlegg/smabedriftslivet-uu.pdf [2]: https://www.cw.no/debatt-sikkerhet-smb/debatt-hvem-tar-vare-pa-norges-godt-over-600000-hjornestensbedrifter/1138337 [3]: https://nsm.no/getfile.php/133675-1592831718/NSM/Filer/Dokumenter/Rapporter/helhetlig_ikt-risikobilde_2017_orig_enkeltsider_low.pdf [4]: https://techcommunity.microsoft.com/t5/small-and-medium-business-blog/making-it-easier-for-small-and-medium-businesses-to-stay-secure/ba-p/3941482 |
14:45 | Modelling impact of defects and faults on railway punctuality PRESENTER: Jørn Vatn ABSTRACT. To prioritise maintenance and renewal in the railway infrastructure, a good understanding of the impact defects and faults have on punctuality is required. The paper presents and compares three different assessment methods. The first approach is based on a direct assessment, where the analyst considers i) the number of trains passing the line during the average duration of the fault situation, and then ii) gives a direct assessment of how many minutes each train will be delayed in this period. With this approach it is hard to assess cascading effects. The second approach is based on a network model where each train journey is modelled in terms of speed, acceleration, opportunities for crossing, etc. The case study is performed on a Norwegian single-track line, hence the limited places for crossings will be important in the model. In such a simulation model it is possible to induce a fault or a degradation which represents either a full stop or a speed reduction. Depending on where and when the failure or defect occurs, the consequences may vary. The last approach is based on analysis of statistics. In Norway there is a system where a so-called “event ID” is linked to each train that arrives late at a station. This means that the train operations centre assigns a specific condition as the reason for the delay. This provides lists of “who to blame” for the punctuality. Faults and defects in the infrastructure are part of this picture, but in addition come delays due to failures of the rolling stock, other trains being delayed causing delayed departure, etc. Finally, the three approaches are compared and discussed with respect to how efficiently they model the impact of defects and faults on punctuality, and thus help optimise maintenance and renewal. |
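A minimal sketch of the first (direct assessment) approach, with hypothetical numbers; as the abstract notes, cascading effects such as missed crossings on a single-track line are not captured this way:

```python
# Direct assessment of delay minutes caused by one infrastructure fault.
fault_duration_h = 4.0        # i) average duration of the fault situation
trains_per_h = 2.5            # trains passing the affected line per hour
delay_per_train_min = 12.0    # ii) directly assessed delay per affected train

affected = fault_duration_h * trains_per_h
print(f"{affected:.0f} trains affected, ~{affected * delay_per_train_min:.0f} delay minutes")
```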
15:00 | Factors Influencing the Performance of Railway Track Maintenance Teams PRESENTER: Rina Peach ABSTRACT. Team functionality plays an important role in the employee performance that drives organisational objectives. Optimal performance within the railway industry requires management systems to be created and implemented to achieve team efficiency and enhance productivity. Various factors linked to High Performance Work Systems (HPWS), relating to human behaviour and function, are investigated to determine the key factors that influence the performance of maintenance teams, focusing specifically on activities linked to the railway industry. A questionnaire was presented to a sample population of 180, of which 131 responses were used for analysis. The respondents were responsible for perway (track) related maintenance tasks, and they ranked the factors of this study on a four-point scale. Corroborating the results through descriptive, frequency, reliability, and validity statistics, the analysis ranked the factors from most to least valued as follows: rewards and recognition (most valued), effective communication, motivation, trust, teamwork, effective leadership, skills, supervisor support, co-operation and co-ordination, adaptation, performance monitoring, shared responsibility, and diversity (least valued). Although every factor is indicated to be relevant in this study, they are valued differently in the railway industry; by leveraging the top items, railway organisations can ensure and maintain high performance. Furthermore, by investigating and implementing the least valued factors, organisations can improve the intrinsic motivation of teams, which will ultimately have a positive influence on team performance. |
15:15 | Smart Gravel or Smart Wireless Sensors in Rail: A Comprehensive Review PRESENTER: Jan Cordes ABSTRACT. The rail industry is undergoing significant transformation with the integration of smart sensor technologies, which are increasingly essential for monitoring, safety, and operational efficiency. This review examines the current landscape of wireless smart sensor solutions used in rail systems, focusing on both their technological capabilities and their limitations. Special attention is given to completely wireless sensors, powered by batteries or through energy harvesting methods. The paper also explores recent advancements in edge AI technology, which enables sensors to process data locally and make predictions without the need for constant communication with central systems. For the purposes of this review, a smart sensor is defined as a device which is powered by battery and/or energy harvesting and is able to execute some form of AI in the field. By reviewing the latest developments in this scope, this work helps assess their impact on predictive maintenance, fault detection, and autonomous rail operations. The papers reviewed in this work are compared by performance, energy efficiency, durability, and scalability, highlighting emerging trends and future research directions. The review concludes with a discussion on the challenges and opportunities in adopting these advanced sensor technologies for rail. |
15:30 | Assessing CO2-e emission of railway crossing during maintenance activities PRESENTER: Zafar Beg ABSTRACT. Railway operation is considered the most sustainable mode of transportation. However, this view often neglects environmental impacts during the maintenance stage, which is a carbon-intensive phase. This study aims to quantify carbon emissions related to the maintenance stage. In this study, we focused on railway crossings (fixed and movable) located on Bandel (track section) 120 near Boden in Sweden as a use case for CO2 estimations. For this study, we utilize 14 years (2010-2023) of maintenance data collected from the Swedish Transport Administration (Trafikverket or TRV) databases. Results indicate that logistics (transportation) during the maintenance stage is the most significant contributor to fuel consumption and CO2-e emissions. Further, the comparison between fixed and movable crossings demonstrates that the environmental impact of movable crossings is substantially higher than that of fixed crossings. The insights of this study can be integrated into decision-making models that combine Life Cycle Cost (LCC) and climate impact to optimize railway crossing replacement strategies. |
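As a back-of-the-envelope illustration of the logistics contribution, a sketch with invented figures; the diesel emission factor is a typical literature value, and none of the numbers come from the TRV data:

```python
# CO2-e from maintenance transport to one crossing, illustrative values only.
EF_DIESEL = 2.68          # kg CO2-e per litre of diesel, typical literature value
visits_per_year = 14      # hypothetical maintenance visits
round_trip_km = 120.0     # hypothetical distance depot <-> crossing
fuel_l_per_km = 0.22      # hypothetical vehicle consumption

fuel = visits_per_year * round_trip_km * fuel_l_per_km
print(f"{fuel:.0f} l/year -> {fuel * EF_DIESEL:.0f} kg CO2-e/year per crossing")
```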
15:45 | Stochastic risk assessment of railway masonry arch bridges in seismic prone areas in Portugal PRESENTER: Carlos Cabanzo ABSTRACT. Risk-based methodologies for analyzing transportation assets have been employed to ensure good performance and long-term safety. Failure of transportation structures leads to major economic consequences, often due to the uncertainties associated with these structures and the effects of unexpected extreme events. Generally, risk is described by hazard intensity, vulnerability, and consequences related to the probability of a given hazard to occur, the susceptibility of a system to be affected by said hazard, and the quantification of the effects, respectively. In Portugal, large masonry arch bridges (MABs) were built during the railway expansion to improve the national network. These structures are expected to continue operating as a crucial part of the railway network without elevated maintenance costs. The present research presents a risk analysis of two of the largest masonry arch bridges built during this period. A risk index is computed based on the combination of code-based hazard curves, seismic fragility curves, and direct consequences. The risk assessment considers material uncertainties, damage states, and peak ground acceleration as the seismic intensity measure. The resulting risk curves provide useful information for prioritizing assets’ interventions and taking preventive actions to maintain the desired performance of the railway system. |
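A minimal numerical sketch of the risk-index combination described here, folding a fragility curve and direct consequences over a hazard curve; the hazard, fragility, and consequence values are hypothetical stand-ins, not the Portuguese MAB data:

```python
import numpy as np
from scipy.stats import norm

pga = np.linspace(0.01, 1.5, 300)                  # intensity measure (PGA), g
annual_rate = 1e-3 * (pga / 0.1) ** -2.5           # hypothetical code-based hazard curve
fragility = norm.cdf(np.log(pga / 0.35) / 0.5)     # lognormal fragility, P(damage | PGA)
consequence = 2.0e6                                 # hypothetical direct consequences, EUR

occurrence = -np.gradient(annual_rate, pga)         # annual occurrence density of PGA
risk_index = np.trapz(fragility * consequence * occurrence, pga)
print(f"annual risk index ~ {risk_index:,.0f} EUR/year")
```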
14:45 | Operational Risk Assessment of Hydrogen Blending in Natural Gas Pipelines: Industrial Applications PRESENTER: Tarannom Parhizkar ABSTRACT. Hydrogen blending in natural gas pipelines is a key strategy for decarbonizing energy infrastructure, transitioning towards cleaner energy systems, and reducing greenhouse gas emissions. However, the distinct chemical and physical properties of hydrogen introduce operational risks, particularly regarding pipeline integrity, system reliability, and component durability. Hydrogen’s lower density and higher diffusivity, as well as its tendency to embrittle certain metals, pose unique challenges. These factors underscore the need for comprehensive operational risk assessments and the implementation of an effective Integrity Management Program (IMP) to maintain safe and efficient pipeline operations. This study investigates the operational impacts of hydrogen-natural gas blends on two critical industrial applications: power generation units and industrial burners. The thermodynamic modeling tool, Thermoflex, is employed to simulate performance under various hydrogen blending ratios. By analyzing system performance metrics such as productivity, efficiency, and availability across different blending levels, the paper identifies the safe and practical range for hydrogen integration. It also evaluates the effects of hydrogen on key operational parameters, offering a comparative assessment of system behavior under different hydrogen concentrations. The results from this research provide insights into optimizing hydrogen blending strategies while managing operational risks, ensuring that both environmental goals and system reliability are achieved. The findings contribute to the growing body of knowledge in hydrogen integration and highlight best practices for industrial applications to support the transition to low-carbon energy systems. |
15:00 | Modeling of Accidental Liquid Hydrogen Spills and Rainout PRESENTER: Davide Rescigno ABSTRACT. Liquid hydrogen (LH2) is a clean energy carrier that is gaining traction for its versatility. Nevertheless, its use may lead to significant risks due to its low storage temperature, low boiling point, rapid vaporization, and high flammability. In the event of a loss of containment, if a portion of the LH2 does not fully vaporize and reaches the ground, the rainout phenomenon occurs. If the release is continuous, a pool of LH2 might be generated, raising the risk of delayed ignition, which may lead to large-scale fires or explosions. Thus, this study aims to understand the behavior of such cryogenic releases to mitigate potential risks. The simulations of LH2 releases involve the analysis of key factors such as the quality of the fluid, operating pressure of the tank, and jet velocity. This study adopts an integral model to predict the diameter of the LH2 droplets and their vaporization rate, the rainout, and the potential formation of an LH2 pool. The simulations help assess worst-case scenarios and determine the LH2 concentration profiles on the ground. The integral model allows for a preliminary evaluation of real-world release scenarios in hydrogen storage and transport. |
15:15 | Improving classification of imbalanced classes in industrial data: Enhancing defect detection for type IV high-pressure hydrogen vessels PRESENTER: Lina Achour ABSTRACT. In the context of our study on industrial data from the manufacturing and testing of type IV high-pressure hydrogen vessels, we have identified a major challenge: a significant imbalance between defect classes. This asymmetry, typical in defect recognition problems, considerably complicates the analysis and prediction of test results, thus requiring the use of algorithms specifically designed to address this issue in the context of process data classification. Our goal is to improve the accuracy and reliability of predictions by taking this asymmetry into account in the data distribution. For this, we have developed a robust methodology, based on minimax optimization, to enhance these algorithms on a dataset representative of our application domain. The initial results obtained are very promising, showing significant improvements compared to traditional classification methods. These findings highlight the potential of these approaches to optimize the analysis of industrial data in the high-pressure hydrogen tank sector. Moreover, this research offers promising perspectives for better understanding and predicting test results in the industry, and to account for data imbalance. By integrating advanced techniques such as machine learning and optimization algorithms, we aim to provide comprehensive solutions that can effectively manage the complexities inherent in industrial data analysis, ultimately contributing to improved safety and efficiency in high-pressure hydrogen vessel operations. |
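For contrast with the paper's minimax methodology, a much simpler cost-sensitive baseline on synthetic imbalanced data (not the authors' method or dataset), showing the class-weighting effect that such approaches improve upon:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the vessel test data: 2% defect prevalence.
X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for w in (None, "balanced"):
    clf = LogisticRegression(class_weight=w, max_iter=1000).fit(X_tr, y_tr)
    print(w, balanced_accuracy_score(y_te, clf.predict(X_te)))
```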
15:30 | Data-driven Bayesian network for risk analysis of urban hydrogen refueling station accident PRESENTER: Jinduo Xing ABSTRACT. Hydrogen refueling station (HRS) safety is receiving increasing attention with the growth of hydrogen energy applications. Existing risk assessment methods for HRS are primarily based on expert knowledge to develop failure processes, which may lead to insufficient accuracy due to potential subjectivity. This paper develops a new hybrid risk assessment method by incorporating the latest HRS accident data and physical knowledge into a Bayesian network (BN) model to analyze the key risk influencing factors (RIFs). In this paper, the latest HRS accident data in HIAD 2.1, covering 1980 to 2023, are collected. 30 RIFs are identified based on the accident reports and physical knowledge. In the structure learning process, the Bayesian search and Peter-Clark algorithms are adopted. The expectation-maximization algorithm is applied in the parameter learning stage to obtain the data-driven BN model. Additionally, K-fold cross-validation is used to test the performance of the different BN models. With these developments, new findings and implications are revealed beyond the state of the art of HRS risk analysis. |
14:45 | Conceptual Resilience Model of Wastewater Treatment Plants in Scotland PRESENTER: Seehan Rahman ABSTRACT. Many of the wastewater treatment plants (WWTPs) in Scotland are situated along the coastline, which makes them vulnerable to sea level rise, and they face key challenges such as ageing infrastructure, flooding, drought, a changing waste profile, and energy consumption in a constantly evolving environment. Robust models that can simulate and predict the resilience of these WWTPs in real time are critical to ensuring consistent and efficient management of wastewater to protect public health and the environment. This research details the development of a comprehensive black-box model which helps to simplify the complex interdependencies within WWTP processes; it utilises a first-order kinetic model, multiple linear regression models, and mass-balance models to forecast the behaviour of a WWTP under stress. The model is designed to process diverse influent characteristics, including Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), and Total Suspended Solids (TSS), as well as operational factors such as hydraulic retention time. It also factors in temperature fluctuations and rainfall events (which play a significant role in WWTP processes), allowing the model to adapt to local climatic challenges, which is crucial in Scotland’s variable climate. The model outputs the key characteristics of the effluent and focuses on the stability of the treatment process, demonstrating the system’s resilience through infrastructure robustness and adaptive capacity. With predictive insights and adaptive approaches, this model aims to provide a robust framework for enhancing the long-term resilience of WWTPs, ensuring environmental compliance and efficient operation under stringent regulatory demands. |
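A minimal sketch of the first-order kinetic component with the usual van't Hoff-Arrhenius temperature correction; the rate constant, temperature coefficient, and influent values are hypothetical, not calibrated to Scottish plants:

```python
import math

def effluent_bod(influent_bod, hrt_days, temp_c, k20=0.35, theta=1.047):
    """First-order removal: S_e = S_0 * exp(-k_T * HRT), k_T = k20 * theta**(T - 20)."""
    k = k20 * theta ** (temp_c - 20.0)
    return influent_bod * math.exp(-k * hrt_days)

for temp in (5.0, 10.0, 15.0):   # a Scottish winter-to-spring range
    print(f"T={temp:>4} C -> effluent BOD {effluent_bod(250.0, 6.0, temp):.0f} mg/l")
```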
15:00 | Hidden safety systems failures and their contribution to catastrophic events: case studies from the energy industry PRESENTER: Lar English ABSTRACT. It is accepted that incremental advances in technology have made equipment increasingly safer to operate across all industries. Although safety improvements are commendable, there are instances where the failure of safety systems has contributed to catastrophic events. Using two case studies from the energy industry, we identify the failures, hidden from the operators, that contributed to serious incidents. In the first example, involving an explosion at the Upper Big Branch Mine in Montcoal, West Virginia, the failure of the ventilation systems resulted in the build-up of explosive gas and dust which was then exposed to a source of ignition. In the second example, involving a gas pipeline explosion in San Bruno, California, a fault in the redundant power supplies resulted in a pipeline pressure increase which was a contributing factor to the subsequent explosion. We explore the possibility that the growing complexity of equipment used to deliver advances in performance, and the correspondingly intricate safety systems that are required, is increasing the likelihood of these hidden failures. The presence of the failures may be known to select company employees but is not communicated to the equipment operators, hence our emphasis on the ‘hidden’ aspect of these failures. The main objective of our paper is to identify instances of safety system faults that acted as contributory causes in catastrophic incidents. In doing so, we highlight how more effort is required for thorough testing of the function of safety systems and the consequences of associated failures. We argue that an improved focus on design, testing, communication, and operator training will do much to avoid the types of safety system faults that have contributed to the disasters detailed in our case studies. |
15:15 | Safety aspects in the electrochemical energy storage systems: critical issues with Li-ion secondary batteries PRESENTER: Roberto Bubbico ABSTRACT. Electrochemical storage systems are used in an increasing number of applications, from portable devices, such as laptops and smartphones, to electric mobility. The possibility of adopting renewable energy sources in a wider and more quantitatively significant range of applications, such as aeronautics and power networks, strongly depends on the availability of efficient and reliable energy storage systems. However, electrochemical energy storage systems are still affected by several problems especially from a reliability and safety point of view, most of them connected with the heat produced during operation. In fact, while the produced heat is usually adequately dissipated toward the external environment during normal operation, under some demanding conditions, the heat generated can be so large that the internal temperature of the cell will dramatically increase. Besides other “minor” consequences, this temperature increase often leads to a full failure of the cell, sometimes with harmful consequences, such as fires and/or an explosion (thermal runaway). In the case of several cells packed in a module or stack, the failure of a single cell can possibly involve additional cells with even more severe consequences. Many aspects and parameters influencing the thermal runaway have been studied and the results reported in the literature are sometimes conflicting. This is due to the complexity of the underlying reactions which often overlap and influence each other. Moreover, the course of the reactions depends on the operating conditions and on the chemistry of the cell. To this end, the main aim of this presentation is to characterize the thermal behavior of some reference electrochemical energy storage systems and to identify, on the one hand, the most critical issues connected with their safety and, on the other hand, the auxiliary systems and their operating characteristics to improve the thermal stability of the cells and of the battery packs. |
15:30 | Optimizing Automated Optical Inspection for Printed Circuit Boards Using Computer Vision: A Comparative Study for Beneficial Imagesize Reduction PRESENTER: Jannis Pietruschka ABSTRACT. The increasing digitalization makes printed circuit boards (PCBs) an almost indispensable component as a flexible internal interconnection technology in electronic devices. To ensure reliable operation in the final product, the quality of each manufacturing step must be closely monitored. Quality control can be carried out through visual inspection by trained personnel or via automated inspection systems. Since the range of defects that can occur during the manufacturing process is highly diverse and customer requirements for the boards vary, this presents challenging conditions for automated optical inspection (AOI). Several studies indicate that the effective image area which meets the resolution requirements for evaluation, given the optical limitations of the camera lens and sensor, is around 15 to 20 cm². As a result, multiple images must be taken with a moving inspection head to fully assess PCBs, which typically have a larger surface area. This significantly limits the integrability into a continuous manufacturing process. Furthermore, real-time evaluation requires that the detection of defects be synchronised with production speed, which is also influenced by image size. Consequently, a compromise must be reached between image resolution and the level of detail captured in the area to ensure reliable analysis of defects on the PCB. As part of the present study, images were generated of various PCBs that had been tinned using the hot-air leveling process. The evaluation is based on a one-stage detection model, which was trained with PCB images at different resolutions and identifies the regions relevant for defect detection. This comprises a comparative study conducted using diverse image preprocessing techniques, with the objective of optimising evaluation speed and accuracy. The long-term objective is to implement the model for practical use, thereby enabling its use as an automated inspection method in continuous manufacturing processes. |
15:45 | Understanding People with Water Insecurity: Insights from the Lloyd’s Register Foundation 2021 World Risk Poll PRESENTER: Joshua Inwald ABSTRACT. Water insecurity, or lacking consistent access to safe water, is a global threat to human health and wellbeing. Here, we examined whether people with water insecurity around the world have unique communication needs due to perceiving less disaster preparedness, trusting information sources less, and linking water safety more to severe weather than to climate change. We analyzed survey data from the 121-country 2021 Lloyd’s Register Foundation World Risk Poll (n > 125,000). We discuss four findings, which showed robustness across multiple analyses. First, self-reported water insecurity aligned with a previously validated water insecurity measure and was most prevalent in countries with lower water-safety performance. Second, participants with (vs. without) water insecurity felt less prepared for disasters. Third, local news and emergency services were the most universally trusted information sources, though participants with (vs. without) water insecurity trusted these sources slightly less. Fourth, water safety concerns were more strongly associated with severe weather concerns than with climate change concerns, especially among participants with (vs. without) water insecurity. Thus, communications targeting people with water insecurity should improve disaster preparedness, use trusted information sources such as local news and emergency services, and highlight connections between water security threats and worsening severe weather. |
16:30 | Optimization of Reliability Test Planning for Complex Systems Using a Constrained Heuristic Approach PRESENTER: Eduardo Novaes ABSTRACT. In the development of critical systems, ensuring high reliability is paramount, particularly in the aerospace, energy, and defense industries. This study presents a novel approach to optimizing reliability test strategies for complex systems using constrained heuristic methods, focusing on minimizing costs and testing time while adhering to reliability and precedence constraints. Our methodology leverages a multi-objective optimization model, where each test is characterized by decision variables such as the number of test specimens, exposure time, and stress levels. These decision variables are constrained by the system's mission requirements and test precedence conditions, ensuring that each test is conducted in an optimal sequence without sacrificing reliability targets. To solve the problem, we implement a genetic algorithm and an exhaustive search approach, allowing for the evaluation of multiple test configurations under Monte Carlo simulations. By integrating advanced computational techniques, we bridge the gap between theoretical optimization frameworks and practical applications, offering a solution for industries aiming to account for the reliability of their systems within the development process. This is particularly valuable in contexts where resources are limited but system reliability must remain uncompromised. |
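A toy genetic algorithm over two of the decision variables mentioned (number of specimens and exposure time), with the reliability constraint handled as a penalty; the cost model and constraint are invented for illustration:

```python
import random

def fitness(units, hours):
    """Total plan cost with a penalty when the evidence constraint is violated."""
    cost = 5_000 * units + 40 * hours * units
    demonstrated = units * hours                 # crude surrogate for test evidence
    penalty = 1e6 if demonstrated < 30_000 else 0.0
    return cost + penalty

# Chromosome = (units, hours); elitist selection, averaging crossover, small mutation.
pop = [(random.randint(1, 20), random.uniform(100, 5000)) for _ in range(40)]
for _ in range(200):
    pop.sort(key=lambda c: fitness(*c))
    parents = pop[:10]
    children = []
    for _ in range(30):
        (u1, h1), (u2, h2) = random.sample(parents, 2)
        u = max(1, (u1 + u2) // 2 + random.randint(-1, 1))
        h = max(50.0, (h1 + h2) / 2 * random.uniform(0.9, 1.1))
        children.append((u, h))
    pop = parents + children

best = min(pop, key=lambda c: fitness(*c))
print(f"best plan: {best[0]} units x {best[1]:.0f} h, cost ~ {fitness(*best):,.0f}")
```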
16:45 | Demonstrating Reliability for Actual Field Load Efficiently PRESENTER: Alexander Grundler ABSTRACT. Empirical life tests are used to demonstrate the reliability of a product. The reliability requirement is often formulated as fixed values of service life and reliability. In addition, the relevant load for reliability analyses and demonstration is usually also fixed (the 95% customer). In actual field usage, however, each customer uses the product differently; as a result, the operational hours per year vary, as does the load endured during that time. Methods need to be developed to take these aspects into account. If the distributions can be estimated, the actual reliability of field usage can be estimated instead of that of an extreme customer. Moreover, the demonstration of reliability has to be done by physical tests. As a result, engineers are faced with the challenge of selecting the most suitable test strategy out of the many available and choosing the optimal parameter setting. To overcome these challenges, the Probability of Test Success can be used as a central, objective assessment metric. Framing reliability tests as hypothesis tests helps in developing easy-to-implement algorithms and procedures for identifying the optimal test in the individual case. This paper combines the planning of efficient accelerated reliability demonstration tests with the estimation of actual field usage reliability. The result is a procedure which enables the identification of the optimal test to demonstrate and estimate the actual reliability of a product. This is accomplished by setting up the required equations and definitions and by developing a Monte Carlo-based method to evaluate them. It is shown how this framework yields advantages in the planning of tests and allows for an easy selection of the appropriate test parameters. |
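A Monte Carlo sketch in the spirit of a Probability of Test Success calculation: given an assumed true Weibull life distribution, how often a zero-failure demonstration test would actually pass; all parameters are hypothetical:

```python
import random

def p_test_success(n_units, t_test, weib_shape, weib_scale, n_sim=50_000):
    """Fraction of simulated zero-failure tests that pass for a given true life law."""
    passes = 0
    for _ in range(n_sim):
        lives = (random.weibullvariate(weib_scale, weib_shape) for _ in range(n_units))
        passes += all(life > t_test for life in lives)
    return passes / n_sim

# Assumed true design life: Weibull(shape 2.0, scale 9,000 h); test: 10 units, 2,000 h each.
print(f"P(test success) ~ {p_test_success(10, 2_000, 2.0, 9_000):.3f}")
```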
17:00 | Confidence Intervals in Optimized Response Surface Designs for Accelerated Lifetime Testing PRESENTER: Marco Arndt ABSTRACT. In Accelerated Lifetime Testing (ALT), products are exposed to increased loads to assess their lifetime within a practical timeframe, thereby expediting reliability evaluation and reducing costs. In the course of this, employing Design of Experiments (DOE) principles in response surface methodology (RSM) offers a robust approach for multivariate modeling of product reliability [1]. However, challenges arise, as with one-dimensional lifetime models (e.g. Wöhler or Arrhenius): the model response variance increases considerably when extrapolating to field load levels for predictions. Counteracting this, recent work documents that established response surface designs (RSDs) can be optimized to minimize extrapolation variance while maintaining model accuracy and size [2-4]. This results in more efficient RSDs, balancing overall runs, model estimation quality and, most notably, testing time. This paper investigates the confidence interval behavior of the estimated model response within optimized RSDs. First, a synthetic model is utilized to generate testing times as input for the RSD, allowing for model parameter estimation and variance analysis under efficiency-enhancing modifications and censoring of the test design. The generic confidence interval behavior of the model response is evaluated numerically. Additionally, an existing study data set using a central composite design (CCD) is analyzed as an example to explore how manipulations and censoring affect confidence interval variation. Therefore, by considering not only the extrapolation variance and the most likely model parameterization deviations but also the confidence interval properties for model parameters, responses, and reliability, this work enhances understanding of the viability of the most efficient RSDs in ALT. [1] Yang: Life Cycle Reliability Engineering [2] Arndt and Dazer - ESREL 2023 [3] Arndt and Dazer - RAMS 2024 [4] Arndt and Dazer - PSAM17 & ASRAM2024 |
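To illustrate the extrapolation issue on a one-dimensional example, a sketch fitting an Arrhenius-type model ln(L) = a + b/T on synthetic data and computing a prediction interval at a field temperature; this is not the paper's RSD machinery:

```python
import numpy as np
from scipy import stats

# Synthetic elevated-temperature life data (hours), Arrhenius-type in 1/T.
T = np.array([360.0, 380.0, 400.0, 420.0])             # test temperatures, K
lnL = 0.5 + 9_000.0 / T + np.random.default_rng(1).normal(0, 0.08, T.size)

x = 1.0 / T
b, a = np.polyfit(x, lnL, 1)                           # slope b, intercept a
resid = lnL - (a + b * x)
s = np.sqrt(resid @ resid / (x.size - 2))              # residual standard error

x0 = 1.0 / 320.0                                        # field temperature, K
se = s * np.sqrt(1 + 1 / x.size + (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())
t95 = stats.t.ppf(0.975, x.size - 2)
print(f"ln-life at 320 K: {a + b * x0:.2f} +/- {t95 * se:.2f} (95% PI, widened by extrapolation)")
```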
17:15 | Degradation analysis of several performance characteristics of capacitors under elevated thermal and electrical stress PRESENTER: Philipp Mell ABSTRACT. Capacitors are electronic components with a wide field of technical applications. Their functionality has to be assured throughout the intended product lifetime. While in operation, the performance characteristics of capacitors slowly degrade. Ultimately, one or more of these characteristics reaches a threshold at which the capacitor is considered failed. Thus, modeling degradation is a key factor in assessing capacitor reliability. While most existing studies addressing capacitor degradation consider long runtimes and few stressors, this paper considers tests performed under different temperatures, humidity levels, voltages, and excitation frequencies (and thus, numbers of voltage pulses). To reproduce realistic operation, repeated charging and discharging under PWM excitation is considered. Superimposed stress via controlled operation above the partial discharge inception voltage is included, as this is rarely explored. In the performed tests, the degradation of several performance characteristics was repeatedly measured. Based on the data, different degradation models are developed and compared. The influence of the considered stress factors on the degradation is estimated and statistically assessed. Unlike existing studies, where fixed performance thresholds are used to reduce the data to soft failure times, this paper focuses on the load dependency of the degradation path itself to allow variable failure thresholds. |
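A minimal sketch of recovering failure times from degradation paths for variable thresholds, on synthetic linear paths; rates, noise, and thresholds are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 2000, 21)                                # measurement times, h
rates = 0.004 * rng.lognormal(0, 0.25, size=5)              # unit-to-unit degradation rates

for threshold in (4.0, 8.0):                                # % capacitance loss
    pseudo_lives = []
    for rate in rates:
        path = rate * t + rng.normal(0, 0.05, t.size)       # measured loss, %
        slope = np.polyfit(t, path, 1)[0]                   # fitted degradation rate
        pseudo_lives.append(threshold / slope)              # threshold-crossing time
    print(f"threshold {threshold}% -> mean pseudo life {np.mean(pseudo_lives):,.0f} h")
```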
17:30 | Reliability in education – a hands-on university experimental course on accelerated life testing PRESENTER: Simon Sehic ABSTRACT. Since the control of most safety-relevant applications is based on semiconductors, their degradation and consequently their failure is the focus of functional safety analyses. Predicting semiconductor lifetime is therefore of great importance. Addressing this, a hands-on experimental course (laboratory or lab course) on accelerated life testing is offered to students of the Bachelor of Engineering in functional safety at the Ruhr West University. The course is conducted in cooperation with the University of Defence in Brno. The learning outcomes, or competences to be achieved, were defined as follows: the students can (1) apply their knowledge of the reliability engineering fundamentals, (2) apply probability and statistical methods to analyse product life cycles, (3) perform hypothesis testing, (4) apply statistical models, tolerance, and confidence intervals, (5) apply sample size determination and regression analysis, and (6) identify, collect, analyse, and manage various types of data to minimize failures and improve performance. The course carries a workload of 3 ECTS credits, combining theoretical and practical parts. The teaching content is structured in seven phases: (1) lecture on the fundamentals of reliability and testing theory, (2) setting up the experiment, (3) conducting the experiment, (4) hypothesis formation, (5) data collection, elaboration, and analysis, (6) validation of the results, and (7) writing a report, including review and revision. The attendance phase includes (1) to (4) and is carried out during a university-wide project week, starting on Monday morning and ending on Friday noon. This is followed by a home working phase, continuing with (5) and closing with (7). Additionally, bachelor theses on improving the experiment or conducting safety analyses on the system are offered as an option to the students. The proposed contribution describes the experiment in detail as well as the student tasks and the underlying didactical concept. |
Panelists: Sandra Seno-Alday, Hermann Steen Wiencke, Lars Bjarne Røvang and Konstantina Karatzoudi
16:30 | A Sobolev trained neural network surrogate with residual weighting scheme for computational mechanics PRESENTER: Ali O.M. Kilicsoy ABSTRACT. Repeated evaluation of system responses through models becomes necessary when quantifying uncertainty about or optimizing such systems. This task can be done accurately with complex numerical models such as finite element models; however, these models carry a high computational cost which scales with the complexity of the observed system. The use of surrogate models is therefore very practical, as they can provide feasible accuracy at lower computational cost. Neural networks represent one type of such surrogate models, whereby a set of data is used to train the neural network model. The incorporation of sensitivity data, called Sobolev training, can improve model performance in accuracy and training time by expanding the loss with additional terms. Each term is weighted by a coefficient, and these weights are optimized in parallel with training through an adaptive scheme. We evaluate the performance of this neural network model in a case study from computational mechanics. |
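A minimal PyTorch sketch of a Sobolev loss with weighted value and sensitivity residuals; the fixed weights w0 and w1 stand in for the adaptive scheme described, and the target function is invented:

```python
import torch

def sobolev_loss(model, x, y_true, dy_true, w0=1.0, w1=0.5):
    """Weighted sum of value residual and derivative (sensitivity) residual."""
    x = x.requires_grad_(True)
    y = model(x)
    dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]  # dy/dx per sample
    return w0 * torch.mean((y - y_true) ** 2) + w1 * torch.mean((dy - dy_true) ** 2)

model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
x = torch.rand(64, 2)
y_true = torch.sin(x.sum(1, keepdim=True))                      # toy response
dy_true = torch.cos(x.sum(1, keepdim=True)).expand(64, 2)       # its analytic sensitivities

loss = sobolev_loss(model, x, y_true, dy_true)
loss.backward()   # gradients flow through both the value and the derivative terms
print(loss.item())
```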
16:45 | Spatial Risk Analysis Integration in Climate Risk Assessments: A Global Review of Methodologies and Practices PRESENTER: Mitchell Anderson ABSTRACT. As climate change intensifies, cities worldwide are increasingly conducting climate risk assessments to inform adaptation planning. However, the effectiveness of these assessments in guiding robust adaptation strategies depends significantly on their integration of spatial risk analysis techniques. This paper presents both a framework for spatial risk analysis and a systematic review of spatial risk analysis integration in climate risk assessments globally, examining 86 reports from diverse urban contexts. Using our proposed framework covering five themes (Risk Source, Exposure, Vulnerability, Consequence, and Risk Communication), our findings reveal substantial variability in the depth and sophistication of spatial risk analysis across different types of climate risk- and adaptation-related reports. Over 50% of assessments failed to capture any spatial risk information, and those that did include some aspect of spatial risk seldom went beyond hazard identification and basic exposure assessments. Common limitations include a narrow focus on flooding and heat risks, inadequate consideration of future scenarios, neglect of indirect and cascading risks, poor spatial resolution, and limited integration of vulnerability factors. These shortcomings have significant implications for the effectiveness of climate adaptation planning, potentially leading to poorly targeted or maladaptive strategies. We propose several recommendations for improving spatial risk analysis in climate risk assessments, including varying communication techniques, broadening hazard consideration, enhancing temporal analysis, integrating cascading risks, improving analytical granularity, and better incorporating vulnerability and uncertainty quantification. |
17:00 | Multi-Unit Analyses across Application Domains PRESENTER: Pavel Krcal ABSTRACT. Reliable systems such as nuclear power plants, blocks in oil refineries, computation clusters, rail transportation, or satellites fail rarely. One could argue that studying accidents of an individual unit brings sufficient insight for risk-based decision making. This might be true in many situations, but at the same time, this approach hides potential risks that originate from the fact that there are multiple units with a similar risk profile that share dependencies. A nuclear power plant might consist of several units located on the same site, a refinery contains almost identical processing blocks, and satellites might form a constellation. Studying the effects of dependencies between units might increase risk understanding and provide new perspectives for system design and operation. Risk models from different industries and applications differ in the mathematical formalisms used, the scope of the analysis, and their size, resolution and complexity. One widespread method utilizes fault trees, possibly combined with event trees. We investigate the feasibility of a multi-unit analysis for applications where models for individual units exist. Methods for the analysis of dependencies between units might differ based on the size and complexity of the models. Large nuclear power plant fault trees might require different algorithms than significantly smaller models of satellites. Our exploration is based on the multi-unit sequence method developed for the nuclear industry. We also include comparisons with analyses that use more dynamic models than fault trees and evaluate them by Monte-Carlo simulations. |
17:15 | Modularized simulation to manage complex systems throughout the lifetime PRESENTER: Siegfried Eisinger ABSTRACT. In a time where technological and societal challenges demand rapid innovation, there is a growing need for rapid design of novel, complex systems both in industry and society at large. Managing the underlying challenges and arriving at systems of suitable quality in acceptable time frames, while managing risks and uncertainties, requires a structured approach using simulation modelling, i.e. "testing, checking, failing and optimizing a digital system is much cheaper and less risky than doing the same on a real system". We argue that this approach must include modular, lifecycle-spanning and assured simulation models that provide decision support across different phases of the system lifecycle. These challenges are discussed in the paper and solutions are presented. The paper concludes that simulation models are essential for managing the complexity of modern systems. Organizations can build Digital Twins that provide decision support across all lifecycle phases through the application of modularization, co-simulation, different simulation approaches, surrogate modelling, and assurance frameworks. These approaches enable stakeholders to collaborate effectively, reduce risks, optimize time to market and achieve more efficient and reliable operations in complex systems. The paper discusses these elements in detail, proposes solutions, provides examples from our recent work and suggests an outlook into the near future. |
17:30 | Gaussian Process Surrogate Models for Efficient Estimation of Structural Response Distributions and Order Statistics PRESENTER: Vegard Flovik ABSTRACT. In this study, we explore the integration of machine learning-based surrogate models with traditional high-fidelity physics simulations to provide enhanced statistical estimates of structural responses under diverse weather conditions. Engineering disciplines often rely on extensive simulations to ensure that structures are designed to withstand harsh conditions, while avoiding over-engineering for unlikely scenarios. Assessments such as Ultimate Limit State (ULS) and Serviceability Limit State (SLS) involve evaluating weather events, including estimating loads not expected to be exceeded more than a specified number of times (e.g., 100) throughout the structure's design lifetime. Although physics-based simulations provide robust and detailed insights, they are computationally expensive, making it challenging to generate statistically valid representations of a wide range of weather conditions. To address these challenges, we propose an approach using Gaussian Process (GP) surrogate models trained on a subset of outputs from simulations. This enables the creation of probabilistic models of structural responses to specific weather conditions, allowing efficient exploration of scenarios while significantly reducing computational costs. By leveraging the GP models, we can efficiently sample the parameter space and estimate the distribution of structural responses, including the uncertainties associated with these predictions. This method is particularly valuable for SLS calculations, where the evaluation of structural responses under a wide range of weather conditions is crucial but difficult to achieve with traditional methods like Environmental Contouring. Our findings demonstrate that even with a limited number of simulations, GP surrogate models accurately predict the statistical distribution of structural responses, providing results comparable to full simulations but at a fraction of the computational cost. This method allows for a more detailed statistical representation of weather-induced structural responses, leading to better-informed design decisions for structures exposed to varying weather conditions, ultimately improving both the robustness and reliability of engineering designs. |
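To make the workflow concrete, here is a minimal, self-contained sketch (our illustrative stand-in, not the study's actual simulator or variables): a GP is trained on a small set of "expensive" evaluations over two weather parameters, then used to sample a large weather space and estimate response statistics, with the GP's predictive uncertainty folded into the samples.

```python
# Illustrative GP surrogate workflow: train on few "simulations", then sample
# the weather parameter space cheaply, including predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def expensive_simulation(hs, tp):
    """Stand-in for a high-fidelity structural response, e.g. peak stress."""
    return 5 * hs**1.3 + 2 * np.sin(tp) + rng.normal(0, 0.2, size=np.shape(hs))

# Train on a small design of experiments over weather parameters (Hs, Tp)
X_train = rng.uniform([0.5, 4.0], [8.0, 16.0], size=(40, 2))
y_train = expensive_simulation(X_train[:, 0], X_train[:, 1])

gp = GaussianProcessRegressor(RBF([1.0, 1.0]) + WhiteKernel(0.1), normalize_y=True)
gp.fit(X_train, y_train)

# Cheap exploration: propagate a large weather sample through the surrogate
X_weather = rng.uniform([0.5, 4.0], [8.0, 16.0], size=(100_000, 2))
mu, sd = gp.predict(X_weather, return_std=True)
samples = mu + sd * rng.standard_normal(mu.shape)  # include GP uncertainty
print("estimated 99th-percentile response:", np.quantile(samples, 0.99))
```

Sampling from the GP's predictive distribution, rather than using only the mean, is what carries the surrogate's own uncertainty into the estimated response statistics.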
16:30 | UncertaintyQuantification.jl: Efficient reliability analysis powered by Julia PRESENTER: Jasper Behrensdorf ABSTRACT. This work presents the latest features and developments in UncertaintyQuantification.jl, a generalised open-source framework for uncertainty quantification. The software is written in the Julia programming language, a modern high-level dynamic programming language ideal for data analysis and scientific computing. Julia's open-source nature provides a significant advantage over proprietary software such as MATLAB, which is frequently associated with high licensing costs. The framework has undergone extensive development since its initial release in August 2020 and now includes a number of numerical algorithms for reliability analysis, sensitivity analysis and metamodeling. While this paper presents all currently available features, the main emphasis is on the recent introduction of imprecise probabilities. Propagation of probability boxes and intervals through any numerical model is a feature we believe sets UncertaintyQuantification.jl apart from other software in the field. Another important element is the ability to interface with cluster job schedulers such as Slurm. This makes the software accessible and scalable for any simulation that requires high-performance computing. The complexity of a model is constrained only by the availability of resources. Illustrative numerical examples from various engineering disciplines are presented throughout the paper to highlight the capabilities of the implemented algorithms. |
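As a concept illustration only (UncertaintyQuantification.jl is written in Julia, and its actual API is not reproduced here), the following Python sketch shows the double-loop idea behind propagating a probability box through a black-box model: an outer loop over the epistemic interval and an inner aleatory sampling loop yield bounds on an output probability.

```python
# Concept-only illustration: double-loop Monte Carlo propagation of a p-box
# (a normal whose mean is known only as an interval) through a black-box model.
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    return x**2 + 1.0  # placeholder for an arbitrary numerical model

mu_lo, mu_hi, sigma = 1.0, 2.0, 0.5
bounds = []
for mu in np.linspace(mu_lo, mu_hi, 21):      # outer loop: epistemic interval
    y = model(rng.normal(mu, sigma, 10_000))  # inner loop: aleatory sampling
    bounds.append(np.mean(y > 5.0))           # probability of exceeding 5.0
print(f"exceedance probability bounds: [{min(bounds):.4f}, {max(bounds):.4f}]")
```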
16:45 | Determine Input Parameters for Instance-Specific Reliability Models in the Circular Factory PRESENTER: Victor Leon Mas ABSTRACT. The vision of the Circular Factory (CF) is to extend product lifecycles by transforming used products into new generations through sustainable practices such as reuse, reconditioning, and remanufacturing. It aims to create perpetually innovative products. Achieving this vision requires developing instance-specific reliability models capable of predicting functional behavior at both subsystem and system levels, supporting decision-making for control of the CF. One challenge in building this reliability model is to identify input parameters that not only allow accurate predictions of functional behavior, but also account for time-dependent changes and interactions at both the subsystem and system levels within the CF. This study introduces a framework to determine the input parameters for instance-specific reliability models in the CF. An angle grinder is used as an exemplary application of the proposed framework. The framework consists of five steps: (1) system decomposition, where the angle grinder is broken down into subsystems and components; (2) component prioritization, which identifies the elements most relevant to the functional behavior; (3) use case analysis, which examines how different operational scenarios affect component performance and failure modes; (4) failure mode identification, which links failure modes to the components; and (5) input parameter extraction, where the necessary input variables for the reliability model are extracted based on the results. This study focuses on tooth breakage, as it serves to illustrate the application of the framework through a single example. Identifying the correct input parameters lays the foundation for developing instance-specific reliability models that integrate various failure modes and subsystems within the CF. However, the framework is limited by its focus on a single failure mode, which restricts its generalizability. Future work should address multiple failure modes and validate the framework across diverse use cases. |
17:00 | Impact of common-cause failures on the availability of connected (r,s)-out-of-(m,n):F systems ABSTRACT. There has been a renewed interest in connected bi-dimensional configurations, since they successfully describe real-life systems such as cellular networks, communication and property security solutions, etc. The computation of their availability has long been known as a delicate task, especially for large systems. Most of the published studies have considered independent failures for (mainly) identical elements. In this work, we investigate the modification of the availability of connected (r,s)-out-of-(m,n):F systems when common-cause failures are present. It turns out that this availability may be larger or smaller than the value obtained in the case of strictly independent failures. The transition between these two behaviors occurs for a component's critical unavailability $q_c$ that depends on the size of the system. Analytical expressions for the asymptotic values of $q_c$ are given in the case of the beta-model and the binomial failure rate model. A comparison with the case of one-dimensional consecutive configurations is also provided. |
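For readers unfamiliar with the system class: a connected (r,s)-out-of-(m,n):F system fails when all components in some r-by-s window of consecutive rows and columns of an m-by-n grid are failed. The sketch below (our illustrative beta-model parameterization, not the paper's analytical results) estimates unavailability by Monte Carlo and contrasts the common-cause case with the strictly independent one.

```python
# Monte Carlo sketch under a simple beta-model assumption: a common shock
# (probability beta*q) fails the whole grid; otherwise components fail
# independently so that each component's total unavailability remains q.
import numpy as np

rng = np.random.default_rng(3)

def system_failed(down, r, s):
    m, n = down.shape
    return any(down[i:i + r, j:j + s].all()
               for i in range(m - r + 1) for j in range(n - s + 1))

def unavailability(m, n, r, s, q, beta, trials=20_000):
    q_ind = q * (1 - beta) / (1 - beta * q)  # independent part given no shock
    fails = 0
    for _ in range(trials):
        if rng.random() < beta * q:
            fails += 1                       # common-cause shock: all down
        else:
            fails += system_failed(rng.random((m, n)) < q_ind, r, s)
    return fails / trials

for q in (0.05, 0.2, 0.5):
    print(f"q={q}: independent={unavailability(4, 4, 2, 2, q, 0.0):.4f}, "
          f"beta=0.1: {unavailability(4, 4, 2, 2, q, 0.1):.4f}")
```

Scanning q makes the crossover visible: below a size-dependent critical unavailability the common-cause variant fails more often, while above it the ordering can reverse, which is the behavior the abstract describes.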
17:15 | Leveraging usage distributions for reliability design in home appliances PRESENTER: Enrico Belmonte ABSTRACT. Understanding the load history a component experiences is fundamental for accurate reliability prediction. Specifically, effective reliability analysis requires detailed knowledge of both the magnitude and duration/frequency of local loads applied to a component. While the magnitude of local loads can be measured using advanced sensors or estimated via simulation tools, determining the duration/frequency of these loads has historically relied on surveys or assumptions. This challenge spans various industries, including home appliances, where reliability is a key attribute valued by consumers. The advent of high-end connected devices, equipped with integrated hardware connectivity, presents a groundbreaking opportunity to capture real-world usage data directly from the field. This paper investigates the usage distributions of multiple home appliances, considering both the technical characteristics of the appliances and user interaction patterns. The findings illuminate the influence of usage distributions on reliability design and calculation, offering new insights for improving product reliability. |
17:30 | Machining of HPT Shrouds and the Impact on Air Flow, EGT Margin, Aero-engine Performance and Reliability PRESENTER: Jose Pereira ABSTRACT. This study explores the dynamic landscape of the aviation industry, which is continually propelled forward by the emergence of innovative technologies and the refinement of more efficient techniques. This perpetual evolution significantly shapes the aerospace sector, where companies are unwaveringly committed to elevating their engines' safety, quality, and mechanical efficiency. Within this commitment, engine approval in the test cell demands adherence to stringent performance parameters set by clients and regulatory bodies. Maintenance, Repair, and Overhaul (MRO) companies commonly face problems with lower-than-expected Exhaust Gas Temperature (EGT) margins, underscoring the critical necessity to enhance processes related to this vital parameter. Concurrently, the study aims to show how to extend the engine's life cycle and augment its overall performance. The methodological approach encompasses a case study of the Scallop grinding process alongside the match grinding process, incorporating a quantitative analysis of results before and after implementation. The collected data show that applying the proposed process yielded improved airflow and direction, coupled with adequate control of the minimum clearance. Consequently, a more efficient and increased airflow is characterized by a higher proportion of air in the air/fuel mixture and a larger volume of air passing through the turbine. As a result, a substantial enhancement in the EGT margin of the evaluated engines is observed, demonstrating an average improvement of 78% compared to the results before the implementation of the process. In conclusion, this method offers significant potential for optimizing Aero-engine Performance and Reliability. It contributes to the knowledge of process and aero engine maintenance, can aid safety professionals, and its implications can extend to various industries where safety and reliability are paramount. |
16:30 | Safe AI vs Safe use of AI PRESENTER: Rune Winther ABSTRACT. Following the progress in the field of AI, the last years have seen a substantial increase in research on safe AI. The premise is that to use AI in safety-critical systems we need to ensure that the AI is safe. In this paper, we will emphasize the difference between requiring an AI to be safe, and requiring that AI is used safely. Using AI to control a system with potentially serious safety risks doesn’t necessarily imply that the AI itself has an impact on safety. The question is what you are actually using the AI for. Is AI used to improve a system’s performance, or specifically to ensure safety? For example, do we use AI in a self-driving car to get the car efficiently from A to B, or to do it safely? We believe there is, too often, an unnecessary mixing of process control (e.g. navigating and driving the car from A to B) and functionality aimed at avoiding accidents (e.g. collisions). The result is a two-fold problem: we cannot fully utilize the potential of AI, and (because) we struggle with demonstrating adequate safety for the AI. It is our opinion that these problems often can be mitigated, but that this requires another way of thinking about AI in safety-critical systems. In this paper we will discuss the importance of system design with respect to realizing the benefits of AI, while minimizing safety concerns related to the AI. In this era, where complexity and interconnecting “everything with everything” is the name of the game, we will show that traditional principles for safe system design (e.g. “high cohesion, low coupling”) are still relevant and perhaps more important than ever. We will also present ideas and discuss issues related to establishing explicit safety arguments for the safe use of AI in safety-critical systems. |
16:45 | Reliability or ethics: Why should the human decision be initial or final? PRESENTER: Dirk Söffker ABSTRACT. Decision support systems or, more recently, human-AI teaming systems vary in one key respect: who makes the final decision when it comes to important matters of life or death? For fundamental legal, safety, moral, or human persuasion reasons, the final decision in cooperative systems is in practice often assigned to humans. The article addresses the complexity of this discussion and weighs the pros and cons of a technological versus a human final decision. Especially in contexts where human behavior is known to be unreliable, the question arises whether it is appropriate to subordinate more reliable decision-makers to human decision-making. The question also arises as to how AI-based assistance and suggestion systems influence the quality and quantity of decisions, favorably or unfavorably. The first results of a study on this question are presented, which replaces the ethical and moral discussion with a reliability-oriented one. This is particularly important as our everyday lives are influenced by the same issues: we have already adapted to technological solutions or, conversely, no longer make final decisions on our own because we trust technological solutions. |
17:00 | AI Security Assurance: Developing framework for Secure and Resilient AI PRESENTER: Ankur Shukla ABSTRACT. The rapid advancement of Artificial Intelligence (AI) technologies has delivered considerable transformative benefits across various industries but has also brought significant security risks. Security assurance of AI systems is critical, particularly as these systems are increasingly integrated into critical infrastructures, healthcare, financial services, and autonomous systems. This paper discusses the challenges, risks, and opportunities related to the use of AI, covering various aspects such as data preprocessing, model training, and deployment. It also presents a conceptual framework for AI security assurance, focusing on evaluating the overall security level based on security requirements, threats, and vulnerabilities, and on mitigating the unique risks posed by AI algorithms, including data integrity and model vulnerability to adversarial attacks. The framework leverages established security standards, regulations, and acts to identify security requirements and provide a structured approach to identifying and addressing AI-specific risks. The paper aims to provide insights on security risks related to AI and to highlight the importance of incorporating security assurance measures throughout the AI system lifecycle. |
17:15 | An LLM-assisted game theoretic approach to community risk reduction PRESENTER: Nolan Feeny ABSTRACT. Risk-reducing measures can be beneficial to a collective or a single actor. Preventing an unwanted event benefits all actors who might end up bearing a consequence of the event, while avoiding a given consequence of the event benefits only those who will be affected by this particular outcome. Balancing preventive and consequence-reducing measures could thus be a contentious issue if several actors are to coordinate their risk-reduction efforts. In this paper, we develop an LLM-assisted game theory model that determines an optimal risk reduction strategy in such a situation. We also discuss the ethics surrounding incorporating LLMs into the research process. The model considers the options of reducing the probability of an unwanted event, benefiting all actors, or reducing the conditional probabilities of the potential consequences of the event, benefiting one or more actors. |
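To make the strategic tension concrete, here is a toy normal-form version of such a game (all numbers and the two-action structure are our assumptions for illustration; the paper's LLM-assisted model is richer): each of two actors either funds prevention, which lowers the event probability for everyone, or protection, which lowers only its own conditional loss. Pure-strategy Nash equilibria are found by checking unilateral deviations.

```python
# Toy two-actor prevention-vs-protection game with illustrative numbers.
import itertools

P0 = 0.10      # baseline probability of the unwanted event
CUT_P = 0.5    # each actor funding prevention halves the event probability
LOSS = 100.0   # baseline consequence borne by each actor
CUT_L = 0.4    # protection leaves 40 % of one's own loss
COST = 2.0     # cost of either measure

def expected_costs(a1, a2):
    p = P0 * CUT_P ** ((a1 == "prev") + (a2 == "prev"))
    loss = lambda a: LOSS * (CUT_L if a == "prot" else 1.0)
    return COST + p * loss(a1), COST + p * loss(a2)

acts = ("prev", "prot")
for a1, a2 in itertools.product(acts, acts):
    c1, c2 = expected_costs(a1, a2)
    best1 = all(c1 <= expected_costs(d, a2)[0] for d in acts)
    best2 = all(c2 <= expected_costs(a1, d)[1] for d in acts)
    if best1 and best2:
        print(f"Nash equilibrium: ({a1}, {a2}), expected costs ({c1:.2f}, {c2:.2f})")
```

With these numbers the unique equilibrium is (prot, prot) even though joint prevention has a lower total expected cost, illustrating why coordinating preventive versus consequence-reducing measures can be contentious.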
17:30 | Traditional vs. AI-based Methods for Detection of Anomalies on Metal Surfaces PRESENTER: Jonas Strohhofer ABSTRACT. Anomalies are deviations from the norm, with varying expression depending on the context. Anomaly detection is an important reliability and quality tool within all technical disciplines but is also critical in many other fields, including medicine, food production, and video surveillance. This paper addresses anomalies on metal surfaces in technical products, such as scratches or particles, while excluding logical anomalies like incorrect packaging counts. Over the past few decades, numerous research efforts have led to the development of a wide range of algorithms to tackle anomaly detection problems. These algorithms vary in complexity, from traditional statistical methods to advanced deep learning (DL) approaches. The growing complexity of DL algorithms is driven by the increasing need for systems that can find unusual patterns in large and complex datasets, where defects are often hard to detect and involve many different factors. However, for simple defects – such as surface scratches – this level of complexity may not be necessary. In fact, it is plausible that simpler, less computationally demanding algorithms may perform equally well, or even better, in these specific cases. In this paper, we aim to conduct a comparative analysis of both traditional, simpler anomaly detection algorithms and more modern, AI-based methods. Our focus is on evaluating these methods specifically for detecting simple scratches or particles on metal surfaces. We hypothesize that for this class of problems, the increased complexity of advanced DL models might provide fewer benefits compared to simpler approaches. This study examines whether complex models offer a real performance advantage for simpler defects or if their higher computational costs outweigh the benefits. |
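As an example of the "traditional" end of the spectrum (our own illustrative baseline, not one of the paper's evaluated algorithms), the sketch below flags scratch pixels on a synthetic metal surface by thresholding deviations from a local median background.

```python
# Training-free baseline: flag pixels deviating from a local median estimate
# of the surface; a thin scratch survives the residual, the texture does not.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(4)
surface = rng.normal(0.5, 0.02, (128, 128))   # homogeneous metal texture
surface[60, 20:100] += 0.3                    # synthetic scratch (bright row)

background = median_filter(surface, size=9)   # robust local background
residual = np.abs(surface - background)
threshold = residual.mean() + 6 * residual.std()
anomaly_mask = residual > threshold
print("flagged pixels:", anomaly_mask.sum(), "(scratch length was 80)")
```

Such a baseline needs no training data and runs in milliseconds, which is exactly the cost argument the abstract raises against heavier DL pipelines.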
17:45 | Air Accident Analysis with AI: Reassessing Flight BA 5390 Using the Accimap and STAMP/CAST Methodologies PRESENTER: Moacyr Machado Cardoso Jr. ABSTRACT. Air accident analysis involves considering human, technical, and climatic factors, as well as political and social influences. Although criminal codes vary between countries, air accident investigations generally prioritize a technical and impartial approach, focusing on process improvement and avoiding the assignment of blame or criminal liability (Wang et al., 2023). Methodologies such as Accimap (Rasmussen, 1997) and STAMP/CAST (Leveson, 2004; 2012) offer a holistic view but require deep historical knowledge to correlate actors and their influences. With the introduction of artificial intelligence tools in investigations, the need arises to assess uncertainty and identify possible 'hallucinations' generated by these systems, which could compromise the accuracy of recommendations (Hulme et al., 2024). This article reanalyzes the BAC One Eleven accident on British Airways flight BA 5390, applying the Accimap and STAMP/CAST methodologies with the assistance of ChatGPT 3.5. The study compares the information generated by AI with the official report from the Air Accidents Investigation Branch (1992), aiming to qualitatively assess the AI's accuracy against a gold standard and discuss the implications of AI use in air accident analysis. |
16:30 | Application of GANs for System Condition Diagnostics: A Case Study on Bridge Structural Health Monitoring PRESENTER: David Coit ABSTRACT. SPECIAL SESSION DATA-DRIVEN PREDICTIVE MAINTENANCE This paper explores the application of Generative Adversarial Networks (GANs) for system condition diagnostics, focusing on bridge structural health monitoring. In traditional bridge maintenance, critical condition data is rare due to proactive interventions, resulting in highly imbalanced datasets that challenge the accuracy of data-driven models. To address this limitation, we propose using conditional GANs (cGAN) to generate synthetic data for rare, critical bridge conditions, enabling more robust reliability assessment and predictive maintenance strategies. Our methodology uses classification trees for comprehensive feature selection from a robust real-world dataset, identifying key factors impacting bridge deck deterioration. Then we leverage the cGAN to learn the underlying patterns and dependencies within observed bridge condition data, thereby providing valuable insights for bridge condition diagnosis. By conditioning the data generation on specific states, such as bridges with a certain age or average daily traffic, the cGAN produces realistic data that mimics the statistical properties of actual observations. Results show that the generated data closely align with the test data, demonstrating the accuracy of the proposed method. By providing a more accurate and complete dataset for critical conditions, the proposed cGAN approach enhances the reliability of bridge condition monitoring. This approach enables better handling of imbalanced datasets, improving the accuracy of machine learning models in predicting failure and deterioration in long-term bridge health monitoring. |
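A minimal cGAN sketch of the conditioning mechanism (toy one-dimensional data and tiny networks of our own choosing; the paper's features and architecture are richer): both generator and discriminator receive the condition, here a normalized bridge age, so that after training one can sample synthetic condition ratings for rare states such as very old bridges.

```python
# Toy conditional GAN: generator and discriminator both see the condition
# (normalized bridge age), so rare states can be sampled after training.
import torch
import torch.nn as nn

torch.manual_seed(0)
Z = 8  # latent dimension

G = nn.Sequential(nn.Linear(Z + 1, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1 + 1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n):
    """Toy data: deck condition rating degrades with age plus noise."""
    age = torch.rand(n, 1)
    rating = 9.0 - 5.0 * age + 0.3 * torch.randn(n, 1)
    return age, rating

for step in range(3000):
    age, rating = real_batch(64)
    fake = G(torch.cat([torch.randn(64, Z), age], dim=1))
    # Discriminator: separate real from generated, conditioned on age
    opt_d.zero_grad()
    loss_d = bce(D(torch.cat([rating, age], 1)), torch.ones(64, 1)) \
           + bce(D(torch.cat([fake.detach(), age], 1)), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()
    # Generator: fool the discriminator for the same conditions
    opt_g.zero_grad()
    loss_g = bce(D(torch.cat([fake, age], 1)), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# Sample synthetic ratings for a rare state, e.g. very old bridges
old_age = torch.full((5, 1), 0.95)
print(G(torch.cat([torch.randn(5, Z), old_age], 1)).detach().squeeze())
```

Conditioning is the only change relative to a plain GAN: the condition vector is simply concatenated to both networks' inputs.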
16:45 | A Condition-Based Maintenance Strategy for a Multi-Component System With Both Continuous and Discrete State Monitoring PRESENTER: Bowen Guan ABSTRACT. SPECIAL SESSION DATA-DRIVEN PREDICTIVE MAINTENANCE State monitoring is the foundation for condition-based maintenance of systems, and advancements in sensor technology have enabled real-time monitoring of the degradation processes in many systems. However, many complex multi-component systems, such as aircraft engine systems and various mechanical systems, cannot rely on sensors for real-time monitoring of all components' degradation states and must instead rely on inspections for determination. To address this, we modeled a multi-component system with both real-time monitored components and components requiring inspections, and proposed a condition-based inspection-opportunistic maintenance strategy. Real-time monitored components are replaced as needed based on their condition, while components requiring inspection are replaced when specific maintenance opportunities arise. A genetic algorithm is used to optimize the parameters of the proposed maintenance strategy, and comparisons were made with purely real-time monitored systems and inspection-based systems. |
17:00 | Offline Learning of Maintenance Policies Using Reinforcement Learning and Historical Maintenance Data PRESENTER: Quang Khai Tran ABSTRACT. SPECIAL SESSION DATA-DRIVEN PREDICTIVE MAINTENANCE In maintenance studies, particularly condition-based maintenance, it is often assumed that the degradation process model is known, allowing for the determination of the optimal maintenance policy. With probabilistic models, we can use parametric approaches or Markov decision processes and their variants to find this optimal policy. However, in practice, when model construction is challenging, it is possible to derive maintenance policies from a dataset of past maintenance records (such as pre- and post-maintenance conditions, actions taken, and associated costs). Offline reinforcement learning, a branch of reinforcement learning, focuses on learning policies from a fixed dataset without gathering new data from the environment. This dataset may include data from various sources corresponding to different behavior policies (assumed to be non-optimal). The goal is to extract a policy better than the behavior policies, which is the setting of off-policy learning. Hence, an off-policy reinforcement learning algorithm is expected to learn a good policy from this fixed dataset. In the literature, offline reinforcement learning with a discounted reward metric and a discrete state space has been studied extensively. However, this kind of problem has received less attention when considering continuous state spaces over an infinite horizon under the average reward metric. In this paper, the relative Q-learning algorithm for the average reward metric is applied to several datasets collected in a continuous-state environment to examine the characteristics of the extracted policies. The datasets include data from expert policies, non-optimal behavior policies, and mixed data. The results show that when the dataset adequately covers the state-action space and the action space is not too large, a near-optimal policy can be learned with relatively little data. Even when using data from non-optimal policies, the learned policy, although sub-optimal, avoids hazardous regions in maintenance operations. |
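The core update is easy to show on a toy problem (discretized degradation states and our own cost numbers; the paper works with continuous states): in relative Q-learning for the average-reward criterion, the value of a fixed reference state-action pair plays the role of the gain and is subtracted inside the temporal-difference target. The dataset is collected once by a random, non-optimal behavior policy and never extended, matching the offline setting.

```python
# Toy offline relative Q-learning for the average-reward criterion.
import numpy as np

rng = np.random.default_rng(5)
N_S = 10              # degradation levels 0..9 (9 = failed)
A_DO_NOTHING, A_REPLACE = 0, 1

def step(s, a):
    """Toy environment, used only to generate the fixed offline dataset."""
    if a == A_REPLACE:
        return 0, -5.0                                  # replacement cost
    s2 = min(s + rng.integers(0, 3), N_S - 1)
    return s2, (-50.0 if s2 == N_S - 1 else -1.0)       # failure is expensive

# Fixed dataset from a random, non-optimal behavior policy (offline setting)
dataset, s = [], 0
for _ in range(5_000):
    a = int(rng.integers(0, 2))
    s2, r = step(s, a)
    dataset.append((s, a, r, s2))
    s = s2

# Relative Q-learning: Q of a fixed reference pair acts as the gain estimate
Q, ref, alpha = np.zeros((N_S, 2)), (0, A_DO_NOTHING), 0.05
for epoch in range(50):
    for s, a, r, s2 in dataset:
        Q[s, a] += alpha * (r - Q[ref] + Q[s2].max() - Q[s, a])

print("replace at degradation levels:", np.where(Q.argmax(axis=1) == A_REPLACE)[0])
```

Subtracting Q[ref] keeps the Q-values bounded under the average-reward criterion, where an undiscounted sum would otherwise diverge.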
17:15 | Robot Fault Detection using Digital Twin and Deep Learning PRESENTER: Haibo Li ABSTRACT. SPECIAL SESSION: DATA-DRIVEN PREDICTIVE MAINTENANCE The reliability and lifetime of robots are critical for modern manufacturing systems, as robots are widely used in manufacturing and their failure could lead to substantial financial losses. Predictive Maintenance (PdM) has emerged as an effective strategy, utilizing historical data and prognostic models to anticipate maintenance needs. Digital twins simulate the behavior of real systems, connect to the real system in real time using sensors, and have shown potential to boost the performance of predictive maintenance algorithms. In this paper, we present an open-source demonstration platform for using a digital twin for robot PdM. A data-driven digital twin is developed first for a robot to predict its temperature during operation. By leveraging time-series data collected in real time, including motor temperature, voltage, and position, a data-driven algorithm is developed to detect abnormality in the motor temperature response. An innovative Recursive Prediction Update (RPU) technique is proposed, which replaces fault-contaminated data with predicted values in real time, significantly enhancing the accuracy of abnormality detection. Results show that the integration of digital twins and RPU improves fault detection performance, offering valuable insights for predictive maintenance under limited failure data conditions. |
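A conceptual sketch of the substitution idea as we read it (the one-step temperature model, threshold and fault profile are all our assumptions; the paper uses a learned digital twin): when the residual between measurement and prediction exceeds a threshold, the measurement is flagged and the prediction is fed back in its place, so later predictions are not corrupted by faulty data.

```python
# Conceptual RPU sketch: flag a measurement when its residual against the
# twin's prediction is large, and feed the prediction back instead of the
# faulty value so subsequent predictions stay clean.
import numpy as np

rng = np.random.default_rng(6)

def predict_next(prev_temp, voltage):
    """Stand-in one-step temperature model (the paper learns this from data)."""
    return 0.9 * prev_temp + 0.5 * voltage

VOLTAGE, THRESH = 10.0, 2.0
clean, history, flags = 40.0, [40.0], []
for t in range(1, 50):
    clean = predict_next(clean, VOLTAGE) + rng.normal(0.0, 0.1)
    measured = clean + (8.0 if 20 <= t < 25 else 0.0)   # injected sensor fault
    predicted = predict_next(history[-1], VOLTAGE)
    if abs(measured - predicted) > THRESH:
        flags.append(t)
        history.append(predicted)   # RPU: substitute prediction for bad data
    else:
        history.append(measured)
print("fault flagged at steps:", flags)
```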
16:30 | Rethinking safe professional practice for the sustainable transition of aviation? Preparing the next generation of aircraft maintenance engineers for future flight. ABSTRACT. In a growing area of interest for research and policy development, it is increasingly understood that the achievement of fossil-free aviation involves a gradual restructuring of the broader aviation industry. Like many industrial sectors, aviation is currently undergoing a green transition, a paradigmatic sectorial shift that, in tackling climate change, seeks to (re)define future flight in the twenty-first century as both a safe and sustainable sector. Aviation is also undergoing a technological transition where digitalization, electrification, unmanned aircraft systems and artificial intelligence are transforming this heavily regulated and traditionally slow-to-change sector into one that must increasingly embrace a faster pace of technological innovation and advancement. At the nexus of sustainable transition and technological advancement is an ever-evolving need for new professional skills to meet the demands of a more environmentally friendly aviation sector. Yet, despite a positive prognosis of expanding employment opportunities, the aviation industry is currently experiencing a problematic global professional shortage. Scholars therefore highlight that a primary challenge for the sustainable transition of aviation is to attract, educate, and retain the next generation of aviation professionals. This paper addresses these concerns in relation to the education and training of the next generation of aircraft maintenance engineers; more specifically, how educating to achieve sustainable future flight will impact and be impacted by perceptions and expectations of safe professional practice within this socio-technical system. By focusing on training organisations, educators, students, and licensed aircraft maintenance engineers working and/or studying in the domain of aircraft maintenance in Sweden, the paper will, through the socio-legal lens of normative pluralism, also examine if, and how, professional decision-making among aircraft maintenance engineers, long defined by professional licensing, legal/regulatory compliance and a strong professional cultural allegiance to putting safety first, endorses – or not – the safe sustainable transition of aviation. |
16:45 | Working stories into memory: case based learning three months later PRESENTER: Sarah Maslen ABSTRACT. There has been interdisciplinary interest in storytelling in work contexts, both informally and in professional development, where facilitators use cases as a pathway into generalisable lessons. Yet there have been enduring questions about whether story-based learning approaches are suited to some forms of knowledge more than others, and whether such methods can have a tangible long-term impact on professional practice. This paper reports on research that evaluated the medium-term pedagogical effectiveness of case-based learning for the building of non-technical capabilities in the service of public safety among working engineers. We developed and delivered discussion case, role play, and serious game training using source material from real and imagined disaster scenarios. We conducted semi-structured interviews three months on. Our analysis examines what participants could recall. Where source material was based on a case or scenario within the participants’ sector, we found that participants had greater recall of technical details from a technical specialist position in the sector. Where source material was drawn from another hazardous industry sector, participants recalled more general safety lessons, often from the perspectives of non-engineering professionals and the public. Workshops involving a role play element (role play cases and the serious game) facilitated lessons about the social nature of engineering, whereas with discussion cases participants took lessons principally related to the management of particular hazards. Participants typically could not name the non-technical capabilities, but in their discussion of the cases in interview they showed knowledge of these non-technical capabilities; i.e., the generic lesson is bound to the narrative, rather than recalled abstractly. |
17:00 | Between operational departments and human resources management: managing professional skills for reliable industrial performance. ABSTRACT. The nuclear industry is characterized by cutting-edge technologies, high-level professionals and particularly high safety standards. If they wish to maintain or even develop their production capacities in a safe manner, companies in this field are faced with major skills management challenges. This involves managing human resources in such a way as to ensure that the career paths of professionals are mastered, as well as being able to recruit competent people when needed. In this respect, skills management seems to be the responsibility of both the company's operational departments, for their mastery of practical and experiential knowledge, and human resources managers, who must identify needs and propose solutions to meet them. This research focuses on a large, long-established company involved, among other things, in nuclear activities. Initial results show the extent to which different conceptions of skills management coexist within the company. On the one hand, nuclear professionals seem to be guided by tradition, with the logic of a long-term career path within the company and a strong commitment to the activity. On the other, human resources professionals refer to a more modern conception of human resources management, deploying standardized techniques. Faced with the issue of maintaining a high level of expertise and ensuring the safety of activities, we need to consider where these two visions meet. Beyond the socio-technical complexity of Perrow's systems, organizational dynamics are fundamental. The aim is to understand how a work organization can - or cannot - regulate practices and conceptions of skills management distributed between different professional groups, with a view to achieving nuclear safety. |
17:15 | Occupational safety and health in new salmon farming concepts PRESENTER: Kristine Vedal Størkersen ABSTRACT. Employees in fish farming work in exposed environments, and salmon farming is one of the most accident-prone industries in Norway. Currently, new production concepts are being introduced. Concepts for the seawater phase are becoming more diverse, expanding from mostly open netpens along the coast to include new designs such as semi-closed and submerged units for coastal and offshore sites, as well as production on land. Along with the new technologies, working conditions are transformed. The objective of this article is to study occupational safety and health (OSH) in new salmon farming concepts. The new concepts involve technologies that reduce manual labor and shield or remove humans from high energies or hazards. When routine operations become remotely controlled, personnel are removed from hazards, but other risks related to monitoring tasks may increase. Also, larger operations still require personnel, involve hazards, and increase uncertainty. In the new salmon farming concepts in general, risks can potentially increase despite improved safety management: this study shows how large investments and new technologies can reduce some hazards, but also lead to new forms of organizing and to deskilling, which may shift the entire foundation for decision making and safety management. |
17:30 | Plant management and organizational reliability, the Plant Manager and his team ABSTRACT. Since the 1980s, the HRO school of thought has been studying the concept of organizational reliability, both in terms of its characteristics and the way it is constructed on a day-to-day basis, in normal or degraded situations, within high-risk organizations. To do so, it adopts a macro (organizational structures and designs) or micro (interactions between groups and operational players) view of organizational reliability, apprehended as a dynamic phenomenon. Through the study of a matrix organization made up of facilities that are complex socio-technical systems, we propose to approach this concept in a situated way, from a managerial perspective. Our comprehensive and descriptive study of facility management teams and structures has led us to the central figure of the Plant Manager. This actor is in charge of regulating the balance between productive activity and safety. This study led us to characterize the managerial function of this actor, as well as the tensions that run through it. We see him as a pivotal actor in organizational reliability, at the crossroads of strategic requirements, emanating from the organization and its environment, and operational requirements specific to the reliable operation of his plant. Based on Mathilde Bourrier's definition of organizational reliability as “the study of organizational conditions enabling a complex organized system to maintain levels of reliability compatible with both safety and economic requirements” (2003, p.200), we propose to explore this concept through Gilbert de Terssac's notion of organizational work (1988). The aim is, from a situated perspective, to understand how the actors who design work rules and those who carry them out on a daily basis interact to enable a high level of organizational reliability to be maintained, whatever the situation, focusing in particular on the central figure of the Plant Manager. |
17:45 | The part that we play: engineers’ perceptions of their responsibility for public safety PRESENTER: Jan Hayes ABSTRACT. In the study of accident causation and prevention, safety research favors organizational explanations that minimize consideration of the agency of individuals. Challenging this framing of how to promote excellent safety performance in organizations, this paper presents an analysis of the perceptions of responsibility among practicing engineers regarding their role in public safety. Drawing on data from professional development workshops for gas sector engineers involving close to 100 participants, we found that engineers adopt a backward-looking responsibility framework to understand accident cases as linked to engineering practice for accident prevention. Many participants explain that they are far from the passive actors an organizational view of accidents would suggest. In a forward-looking responsibility framework, they describe the importance of speaking up, the professional obligation to do so, and strategies to be more effective in articulating recommendations to management. While they do not consider themselves passive actors, the engineers were also acutely aware of the challenges of asserting influence in large organizations. This research shows the complex and nuanced interrelationship between engineering and management decision making, highlighting an under-researched area of safety science. |
16:30 | Climate risk mitigation in Longyearbyen, Svalbard PRESENTER: Marianne Nitter ABSTRACT. The Svalbard Archipelago has experienced the fastest and highest temperature increases in recent decades, due to man-made greenhouse gas emissions, reinforced by the melting of sea ice, which exposes, in particular, the west coast of Svalbard to the warmer temperatures of the ocean, also during the winter. This increase in temperature affects weather patterns, precipitation and the thawing of the permafrost, all of which expose settlements and critical infrastructure to a different risk picture and a subsequent need for climate change adaptation. Some of these threats have already materialised in Longyearbyen, the largest settlement on Svalbard, as the town witnesses an increased frequency of landslides, rockslides, snow avalanches, erosion, and floods. However, the scenarios concerning both future temperature increase and the impact of climate change on society are uncertain, and these scenarios are the starting point for assessing climate risk and developing risk-based climate change adaptation methodologies. This paper discusses the risk-based approach to climate change adaptation in Longyearbyen. More precisely, the paper aims to discuss how a changing climate is reflected in risk and vulnerability analysis and urban planning in Longyearbyen, and what we can learn from these progressed climate changes on Svalbard. Data is collected through document analysis and interviews with stakeholders in Longyearbyen. Findings indicate that the authorities have, over the last decade, implemented both short- and longer-term measures to adapt to a changing climate, but also that uncertainty remains concerning the physical and social consequences of a warmer climate. |
16:45 | Safety in the High-Arctic PRESENTER: Gunhild B Sætren ABSTRACT. Longyearbyen is the world’s northernmost town, with people living there since the beginning of the 1900s. The town, originally built for the coal mining industry, has shifted to tourism and research, and now has about 2,500 inhabitants. What remains of the coal industry is seeing its end, as the last mine will close in 2025. The high Arctic is, moreover, the part of the world where climate change is most rapid. Thus, the town has faced a significantly increased risk of natural hazards in recent years. How the community handles this is important for other places in the world, as the effects of climate change are getting closer and more evident. In this project, we interviewed five residents of Longyearbyen about their perception of safety. The participants have lived in the community for varying lengths of time, ranging from over 20 years to less than 1 year. Preliminary results from our grounded theory analysis suggest that the core categories of “Social Safety” and “Physical Safety” are key factors. While these are distinct, they also overlap in important ways, particularly in relation to societal changes following the decline of the coal industry. This transition, combined with the potential loss of expertise from the coal sector, raises concerns about the community’s ability to manage the growing risks associated with climate change and the increased frequency of natural hazards. |
17:00 | Resilience-based monitoring of climate adaptation PRESENTER: Knut Øien ABSTRACT. Climate change is happening today, so we have to build a more resilient society for tomorrow. This is especially true for Longyearbyen in Svalbard, as the climate is changing more rapidly in the Arctic regions than anywhere else in the world. This paper describes a resilience-based approach for monitoring municipalities' work on climate adaptation, using Longyearbyen, which aims to become climate resilient, as a case. The approach is based on the Critical Infrastructure Resilience Assessment Method (CIRAM) for measuring the resilience level of critical infrastructures, adapted to the follow-up of work on climate adaptation using indicators. This new method is termed the CLimate Adaptation Indicators Method (CLAIM). The paper describes the development and use of the method, which was carried out in close collaboration with the local government. Climate adaptation indicators can help the Longyearbyen local council, and municipalities in general, to visualize, report and communicate the work and effort made on climate adaptation to inhabitants, local politicians, and central authorities. Additionally, they can provide continuity in the work on climate adaptation, covering both short-term and long-term measures addressing the effects of climate change. The paper also discusses how further adaptation of the method can provide an approach for long-term risk governance of climate change adaptation, as part of the project ARCT-RISK (Risk governance of climate-related systemic risk in the Arctic). |
17:15 | Trust in Risk Communication: Local versus National Responses to Climate-related Risks in Longyearbyen-Svalbard PRESENTER: James Badu ABSTRACT. This paper explores the complex relationship between local and national authorities in risk communication within Arctic communities, focusing on the town of Longyearbyen, Svalbard. Through an analysis of interview data, the study examines how inhabitants perceive and trust risk communication regarding climate-related risks such as avalanches, permafrost melting, and erosion. While local authorities are perceived as more trustworthy communicators, national authorities face significant scepticism. We argue that this scepticism stems from a perceived disconnect between national policymakers and the lived realities of Arctic life, leading to perceived conflicting messages and ineffective long-term climate related risk management. The study shows that local trust is reinforced by familiarity, transparency, and historical local knowledge, while national policies are often seen as rigid and lacking contextual sensitivity. The findings show that a more integrated communication strategy would bridge the gap between local and national authorities and emphasize the need for collaborative, context-specific approaches to enhance community resilience in the face of escalating climate-related risks. This study contributes to understanding how trust shapes the effectiveness of risk communication in remote and vulnerable regions like the Arctic. |
17:30 | Lessons from Arctic climate risk governance: importance of durational and representational aspects of climate risk knowledge ABSTRACT. In the Arctic archipelago of Svalbard, the effects of climate change have manifested and will continue to manifest in the form of annual increases in air temperatures and precipitation. This has had, and will have, a cascading impact in the form of increased frequency and intensity of extreme weather and natural events such as avalanches, landslides, permafrost thawing, coastal erosion, river and glacial floods, slush flows and warm spells, as well as shifts and changes in seasonal weather behaviors. This article brings forth critical insights about how risk knowledge needs to change and be combined, by highlighting lessons from investigations into the risk governance of these short-, medium-, and long-term effects of climate change in Longyearbyen. It brings forth a discussion on the uncertainty and ambiguity associated with the durational and representational aspects of knowledge on climate risk and climate-related risks, such as natural events and hazards. It does so through a grounded theory approach relying on mixed methods, combining data from participatory action research, interviews, and autoethnography from Longyearbyen collected during the ARCT-RISK project 2021-2024. It also combines this data with insights from a document study that investigated reports, planning documents, and articles on natural hazard occurrences, prevention, mitigation, and prediction in Longyearbyen from 1989-2022. |
17:45 | Lessons learned from climate risk governance in the hotspot of climate change ABSTRACT. Understanding and adapting to climate change is one of the greatest ongoing challenges society faces. No other area in the world experiences climate change as fast as the Arctic region. Longyearbyen, a settlement in the Norwegian archipelago of Svalbard, is located in the hotspot of climate change. This means that successful strategies for assessing and managing risks in response to climate change in Longyearbyen will serve as an important basis for future climate adaptation in other relevant parts of the world. This paper presents the main findings from the research project Arct-Risk (Risk governance of climate-related systemic risk in the Arctic), which aimed to develop knowledge and tools to understand and manage the effects of climate change on societal security. Five key lessons learned are identified from the project’s research activities: 1) climate prognoses and data must be broken down into appropriate time and geographical units to make them applicable in risk assessments and planning work; 2) methods for identifying and managing uncertainty will improve climate adaptation work and the handling of natural hazard events; 3) applying local knowledge in various parts of climate adaptation and in systems for handling natural hazard events will provide better risk understanding and thus a better basis for decision-making; 4) sensor technology in warning systems for natural hazards and climate change offers a flexible and low-cost solution; and 5) climate adaptation indicators at the municipal level support awareness and follow-up of systematic climate adaptation work. |
16:30 | Digital Twin of an Intelligent Completion System for Anomaly Diagnostic/Prognostic PRESENTER: Danilo Colombo ABSTRACT. Intelligent completion systems (SCI) add value to the oil extraction process, as they aim to increase the reservoir recovery factor. Formed by a set of valves and control and monitoring components, the system is subject to failures during adjustment maneuvers, due to the large number of pieces of equipment and drive mechanisms, or to extreme operational environmental conditions of temperature, pressure, flow and natural contaminants. For this reason, intervention actions to maintain the operational assurance of the SCI are necessary, especially when advance information and data about the health condition of critical components are provided through Prognostic Health Management (PHM), a maintenance approach that allows predictive analysis based on the operating conditions and health of the assets. From this perspective, the adoption of the Digital Twin (DT) technology proposed in this paper allows modeling the dynamic behavior of the SCI in detecting interval control valve (ICV) anomalies. The approach involved the collection of operational data and parameters of the physical asset, followed by the creation of the Digital Model (MD) in dynamic/modular software for the hydraulic control of a producing well in the Brazilian pre-salt with three zones. Numerical validation, development of the diagnostic/prognostic machine learning algorithm and training were established in the off-Board Diagnostic (off-BD) structure, with integration into a database to capture the normal and failure states of the ICV. The implementation of the DT off-BD phase demonstrated promising results by enabling the identification of anomalies in the pressure profile correlated with the gradual opening of the ICV in three operating scenarios. In this sense, the DT will support decisions based on system behavior to predict failures and to manage system health in the face of undesirable events. Furthermore, it is expected to improve the analysis of deviation diagnosis, system integrity and symptom investigation, reliability, system health management and decision making. |
16:45 | Influence of Inspections on the Risk Evaluation of a Subsea Manifold by a Top Logical Model PRESENTER: Adriana Schleder ABSTRACT. This work presents a preliminary risk analysis of a typical subsea manifold for the Brazilian oil basin through a top logical model developed by considering initiating events and their respective event trees. A total of seven initiating events have been identified, including: loss of containment in the section between the connector and the flowline, three separate loss-of-containment events in sections between valves, spurious closure of a single valve, spurious closing of several valves, and pipe plugging in the section between valves. For each of these initiating events, an event tree was developed, defining some barriers and identifying risk sequences. Additionally, five initiating events were identified for which no barriers exist, namely: loss of containment in pipelines before valves, loss of containment in at least one valve, loss of insulation at manifold input connectors, loss of containment in ducts prior to the manifold inlet connectors, and structural deficiency in the manifold protective structure. From these event trees, and using fault trees to quantify their events, the system top logic model was developed. This top logical model is very useful for assessing the influence of inspection plans on equipment risk evaluation. The model supports sensitivity analyses, such as comparing the standard inspection plan with the absence of any inspection plan. These results are valuable for decision-making regarding the definition of inspection plans. Other important variables to consider include the failure detection probabilities associated with the inspection techniques used to assess the system. The purpose of this analysis is to identify optimized inspection plans for the manifold under consideration. |
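A toy quantification of the kind of top logic model described (all numbers and the two-component barrier are our illustrative assumptions, not the paper's manifold data): an initiating-event frequency is combined with a barrier fault tree, and an inspection's probability of detection (PoD) lowers a basic-event probability.

```python
# Toy top-logic quantification: event-tree sequence frequency driven by a
# fault-tree barrier whose valve basic event is reduced by inspection PoD.
def barrier_failure(p_valve, p_sensor):
    """Fault tree OR-gate: barrier fails if the valve OR the sensor fails."""
    return 1.0 - (1.0 - p_valve) * (1.0 - p_sensor)

def sequence_frequency(f_initiator, p_barrier):
    """Event-tree sequence: initiating event AND barrier failure."""
    return f_initiator * p_barrier

F_LOC = 1e-3                # /year, loss of containment between valves
P_VALVE_NO_INSP = 5e-2      # valve degradation probability without inspection
P_SENSOR = 1e-3

for pod in (0.0, 0.6, 0.9):
    p_valve = P_VALVE_NO_INSP * (1.0 - pod)
    f = sequence_frequency(F_LOC, barrier_failure(p_valve, P_SENSOR))
    print(f"PoD={pod:.1f}: risk sequence frequency {f:.2e} /year")
```

Rerunning the top model over a grid of PoD values and inspection intervals is essentially the sensitivity analysis the abstract describes for comparing inspection plans.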
17:00 | Research status and development trend of risk and reliability evaluation of blowout preventer group under ultra-high temperature and high pressure PRESENTER: Tianqi Liu ABSTRACT. In the field of oil and gas drilling, the blowout preventer (BOP) serves as a crucial component of well control equipment, playing a vital role in preventing blowout accidents and ensuring production safety. However, in harsh downhole environments with ultra-high temperatures and pressures, the sealing and shearing capabilities of the BOP system can easily diminish due to component failures, necessitating robust reliability management as the cornerstone of risk prevention and control. This article comprehensively analyzes the risk factors that impact the reliability of BOP systems and summarizes the current testing experiments, risk assessment techniques, and simulation technologies involved in BOP systems. Additionally, it systematically reviews the latest advancements in reliability assessment, fault prevention, and control technologies for BOP systems. Furthermore, it discusses future trends in this field. Research indicates that as drilling technology advances towards extreme conditions such as ultra-deep and ultra-high temperature and pressure wells, enhancing the systematization and intellectualization capabilities of BOP system reliability management technologies is crucial for ensuring the safety of drilling operations and promoting efficient oilfield development. |
17:15 | Research on Phase Transition and Flow Characteristics of Tubing Leakage in CO2 Injection Wells PRESENTER: Zhiming Jiang ABSTRACT. Over the past decade, with the increasing number and scale of Carbon Capture and Storage (CCS) projects internationally, the integrity of CO2 injection wells has become a growing concern. The downhole tubing is highly susceptible to corrosion and perforation due to factors such as high-pressure, low-temperature injection and CO2 corrosion, which can subsequently lead to wellbore integrity failure. This study employs the computational fluid dynamics (CFD) software Fluent to establish a model for investigating the leakage flow characteristics of downhole tubing in CO2 injection wells. The model incorporates NIST-based CO2 property equations, which accurately capture the depressurization phase transition and gas-liquid coexistence phenomena during the leakage process. By studying the flow characteristics of tubing leakage in CO2 injection wells, this paper reveals the variation patterns of temperature, pressure, and flow velocity during the transient leakage process. The findings provide a foundation for understanding the flow characteristics of tubing leakage in CO2 injection wells, supporting the safe and long-term effective operation of CCS projects. |
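The NIST-based property behavior invoked above can be reproduced with open tools: CoolProp implements the Span-Wagner reference equation of state for CO2, the same formulation used in NIST's REFPROP. A brief sketch with illustrative conditions (not the study's):

```python
from CoolProp.CoolProp import PropsSI, PhaseSI

# Tubing condition vs. a depressurized state near a hypothetical leak point.
states = [("tubing", 283.15, 12e6), ("near leak", 263.15, 3.0e6)]
for label, T, p in states:
    rho = PropsSI("D", "T", T, "P", p, "CO2")   # density, kg/m^3
    h = PropsSI("H", "T", T, "P", p, "CO2")     # specific enthalpy, J/kg
    phase = PhaseSI("T", T, "P", p, "CO2")      # e.g. liquid, gas, twophase
    print(f"{label}: phase={phase}, rho={rho:.0f} kg/m3, h={h/1e3:.0f} kJ/kg")
```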
16:30 | Health and Safety in the Norwegian Offshore Wind Industry: Knowledge Gaps and Research Needs PRESENTER: Tone Njølstad Slotsvik ABSTRACT. Offshore wind is a growing industry with inherent safety challenges. Previous research has concluded that the safety of offshore wind maintenance personnel remains an understudied topic, and a literature search conducted as part of this study confirms these findings. Our study explores potential knowledge gaps regarding maintenance personnel safety in the emerging offshore wind industry in Norway. Data collection includes a) a survey sent to members of the Norwegian Offshore Wind Health, Safety and Environment (NOW HSE) working group, and b) notes from researcher-guided group discussions at a NOW HSE workshop. The results show that research participants experience an overall knowledge gap regarding health and safety themes in the industry, particularly related to the coming regulatory framework. We argue that there is a need for developing research-based knowledge, in particular studies with a system perspective considering the whole value chain. Given the study’s relatively limited scope, we argue that more thorough studies of safety knowledge in the industry are needed. |
16:45 | Threats and opportunities in the Norwegian Offshore wind ‘industrial adventure’: Insights into business consortium perceptions PRESENTER: Kjersti Melberg ABSTRACT. Norway has committed to international agreements on reducing greenhouse gas emissions and has set ambitious goals for developing offshore wind as an important part of a future cleaner energy mix. However, recent political and structural challenges have created hurdles for a smooth transition to a viable offshore wind industry (OWI) in Norway, compromising the planned timelines and potentially influencing companies’ willingness to be part of the next Norwegian ‘industrial adventure’. The aim of this paper is to explore and describe how actors within the consortium organizations perceived threats and opportunities related to offshore wind in Norway during the 2023–2024 concession phase. Data consist of ten semi-structured interviews with key informants in two different business consortia representing four organizations, and one informant from an industry network organization. Our findings provide insights into the Norwegian concession phase and the hurdles and challenges that arose in the process of developing the OWI in Norway. Our informants paint a complex picture of threats and opportunities and identify several ‘barriers to entry’, but also point to factors that motivate them to be part of the OWI development in Norway. |
17:00 | Public perceptions of the energy transition: Mental models of Dutch and Norwegian citizens PRESENTER: Gisela Böhm ABSTRACT. We present a study in which we explore the public perception of energy transition pathways, that is, of strategies towards sustainable ways of energy production and use. We focus on people’s mental models about such pathways. Mental models are internal representations of some aspect of the external world. Studying mental models in the domain of energy transition pathways is important because mental models have been shown to be related to public acceptance and policy support. Specifically, we presented a broad collection of potential pathways towards the energy transition, comprising individual behaviors (e.g., walking and cycling), technologies (e.g., wind farms) and policies (e.g., regulations) to participants and asked how they perceived these pathways to be causally linked. Participants visualized their mental models using a standardized tool (M-Tool) that allowed them to draw a diagram that reflected their causal perceptions. We measured mental models in this way among a sample of experts (N=25), and representative samples of Dutch (N=299) and Norwegian citizens (N=313). The results show that both Dutch and Norwegian citizens focused particularly on renewable energy technologies whereas experts focused more on policy pathways. Moreover, mental models were related to political orientation and to worry about climate change. Mental models of politically more right-leaning participants focused less on individual behavior than those of their left-leaning counterparts. Higher worry about climate change was related to a stronger focus on individual behavior as well as on policy pathways in the mental models. We discuss these results with respect to their policy implications. |
17:15 | Energy Policies for the Swiss Energy Transition: A Socio-Political Multi-Criteria Decision Analysis Framework PRESENTER: Peter Burgherr ABSTRACT. The transformation of the Swiss energy system is a prerequisite for reaching the Energy Strategy 2050 and the national net-zero greenhouse gas emission targets by 2050. Long-term energy scenarios are often evaluated regarding their technical feasibility, cost optimization, and environmental impacts. However, the socio-political acceptance of a transition pathway can pose a barrier to its political feasibility and implementation. Therefore, it is crucial to disaggregate policy packages into their constituent policies and to carry out a detailed analysis to understand their implementation and consequential risks. The main objectives of this research are twofold: which policies empower the Swiss energy transition, and how well do they perform on socio-political aspects? For this purpose, a Multi-Criteria Decision Analysis (MCDA) framework was developed, which comprises four distinct steps. First, relevant policies are identified and compiled into a policy database. The presented case study covers wind and solar PV policies, including the recent coat decree that gives energy precedence over other national interests, as well as geothermal policies, an underdeveloped area of the Swiss energy strategy. Then, a set of seven criteria is developed to assess the performance of policies concerning different aspects of risk: for example, the risk that a policy will not materialize, the risk that a policy will increase economic inequality, or the risk that a policy will be rejected in a public referendum. Next, the criteria are quantified based on literature data, own calculations and estimations, or expert judgment. Finally, the performance of the policies is compared using the PROMETHEE outranking method, resulting in a policy ranking and robustness assessment. In conclusion, this case study demonstrates that the best-performing policies are not simply those that accelerate renewables' deployment by overriding other national interests (e.g., biodiversity), because acceleration must be balanced against societal and political acceptance. |
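The final step, PROMETHEE outranking, reduces to computing pairwise preference flows. A minimal PROMETHEE II sketch with the "usual" preference function is given below; the policies, weights, and scores are hypothetical placeholders, not the study's data:

```python
import numpy as np

def promethee_ii(scores, weights, maximize):
    """Net outranking flows (PROMETHEE II) with the 'usual' preference
    function: any positive difference counts as full preference."""
    X = np.asarray(scores, dtype=float)      # alternatives x criteria
    X = np.where(maximize, X, -X)            # turn cost criteria into benefits
    n = X.shape[0]
    pi = np.zeros((n, n))                    # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            pi[a, b] = np.sum(weights * (X[a] > X[b]))
    # net flow = (leaving flow) - (entering flow), averaged over rivals
    return (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)

weights = np.array([0.5, 0.3, 0.2])          # must sum to 1
scores = [[0.8, 0.4, 0.6],                   # e.g. solar PV decree
          [0.5, 0.7, 0.5],                   # wind permitting reform
          [0.3, 0.6, 0.9]]                   # geothermal exploration support
maximize = np.array([True, True, True])
print(promethee_ii(scores, weights, maximize))  # higher net flow = better rank
```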
17:30 | Risks, Rewards and Reactors: Perceptions of Nuclear Energy and its Alternatives in India PRESENTER: Prerna Gupta ABSTRACT. India’s energy transition is critical to reducing global carbon emissions. While nuclear power is a key focus of the Indian government’s energy plans, little is known about how the diverse Indian public views the risks and benefits of nuclear power compared to other technologies, including wind, solar, hydro, coal, oil, and gas. Utilizing data from a national survey, our analysis reveals that nuclear energy is perceived as the riskiest of the surveyed energy technologies, while its benefits are seen as similar to those from fossil energy, and rated lower than renewables. These findings align with observations in EU and US surveys. The study also shows significant sub-national variations in these perceptions, emphasizing the need to account for India's diversity in energy policy and technology assessments. Our results on the drivers of energy risk perception notably diverge from studies conducted in the global north. Contrary to those findings, our data does not show the commonly observed inverse relationship between perceived risk and benefit, suggesting a more limited applicability of the affect heuristic in developing countries with energy deficits. Moreover, cultural worldview explanations often invoked in the global north also fall short in the Indian context. Individuals with strong egalitarian values perceive greater benefits from energy technologies, including nuclear, contrasting with studies in the global north, where high egalitarianism correlates with higher perceived risk and lower benefit. Similarly, and unlike in the EU and US, high communitarian values are linked to a perceived higher benefit from nuclear and coal. We also explore the role of economic and political values through a new political value scale, finding that individuals with a people-centered development orientation perceive greater risk across all energy technologies. Conversely, those with nationalist development values perceive higher benefits from solar and coal but not from nuclear energy. |
16:30 | Possible measures for the safety of process plants in case of cyber attacks PRESENTER: Gabriele Baldissone ABSTRACT. With the digitalization of industrial plants, cybersecurity is becoming an increasingly relevant problem. The most commonly used method for managing this problem is to use computer protection systems (e.g. firewalls), whose purpose is to make it impossible, or at least difficult, to intrude into and alter computer systems. In the process industry, intrusion into information systems can lead to malicious alteration of plant parameters, leading to significant risks to the safety of people and property. For these reasons, this paper presents two additional barriers that can be adopted to prevent intrusions into computer systems from causing accidents. The first proposed barrier is a digital twin: the measured variables are compared with the values obtained from the digital twin, and a deviation between the two indicates the presence of a problem. The second potential barrier lies in the ability of control room operators to recognize deviations in process variables following an intrusion, and to take corrective measures, including manual ones. |
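The first barrier hinges on detecting sustained deviations between measurement and twin rather than single noisy samples. One standard way to do this, offered here as our illustration rather than the authors' detector, is a two-sided CUSUM on the residual:

```python
import numpy as np

def cusum_alarm(measured, predicted, drift=0.5, threshold=5.0, n_cal=30):
    """Two-sided CUSUM on the measurement-minus-twin residual. Returns the
    first index where a sustained deviation is signalled, or None. `drift`
    and `threshold` are in units of the residual's calibration-period std."""
    r = np.asarray(measured, float) - np.asarray(predicted, float)
    r = (r - r[:n_cal].mean()) / (r[:n_cal].std() + 1e-12)  # assumes clean start
    s_hi = s_lo = 0.0
    for i, x in enumerate(r):
        s_hi = max(0.0, s_hi + x - drift)   # accumulates upward shifts
        s_lo = max(0.0, s_lo - x - drift)   # accumulates downward shifts
        if s_hi > threshold or s_lo > threshold:
            return i
    return None
```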
16:45 | An innovative approach for cyber-risk identification in process facilities PRESENTER: Matteo Iaiani ABSTRACT. Cyber-attacks on Industrial Automation and Control Systems (IACS), such as the Basic Process Control System (BPCS) and the Safety Instrumented System (SIS) in chemical, petrochemical, and offshore Oil & Gas facilities, are a major concern due to their potential severe consequences for human safety, property, and the environment. These impacts are comparable to those of major accidents caused by safety-related issues. In this panorama, the ISA/IEC 62443 series of standards provides a systematic and practical approach to addressing cybersecurity issues of IACSs. In particular, it requires the evaluation of the actual level of cyber risk of a facility and the implementation of proper cybersecurity countermeasures for its reduction. This requires the identification of all the impacts that can result from deliberate malicious attacks on the BPCS and SIS, including those on the physical plant, and the evaluation of their consequences and likelihood. However, the aforementioned series of standards provides neither specific methods nor guidelines for conducting the proposed approach. In the framework outlined, the scope of the present study includes the understanding and modeling of the dynamics of IACSs when targeted by cyber-attacks, and the response performance of the protection strategies. The proposed innovative approach identifies all possible cyber-attack paths for the facility analyzed, which forms the basis for defining the entire network of cybersecurity events: from the emergence of the threat, analyzed in terms of foreseen attack scenarios, through its development and the intervention of preventive security measures and systems, to attack effects in terms of process and storage equipment damage. Overall, the proposed approach fills the gap in the availability of tools aimed at supporting cybersecurity risk assessment (e.g., the ISA/IEC 62443 series of standards) in chemical and petrochemical facilities, as well as in offshore Oil & Gas facilities. |
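Attack-path identification of this kind can be illustrated as path enumeration over a directed graph of zones and channels. A toy sketch with a hypothetical topology, not the facility model of the paper:

```python
import networkx as nx

# Hypothetical IACS connectivity: nodes are zones/components, edges are
# exploitable channels between them.
G = nx.DiGraph()
G.add_edges_from([
    ("internet", "corporate_IT"), ("corporate_IT", "DMZ_historian"),
    ("DMZ_historian", "BPCS"), ("USB_vector", "engineering_ws"),
    ("engineering_ws", "BPCS"), ("engineering_ws", "SIS"),
    ("BPCS", "valve_actuation"), ("SIS", "trip_function_blocked"),
])

threat_sources = ["internet", "USB_vector"]
physical_impacts = ["valve_actuation", "trip_function_blocked"]

# Enumerate every simple attack path from a threat source to a process impact.
for src in threat_sources:
    for dst in physical_impacts:
        for path in nx.all_simple_paths(G, src, dst):
            print(" -> ".join(path))
```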
17:00 | Cyber Risk Governance: An Exploratory Study of Practitioners' Perceptions, Practices, and Needs PRESENTER: Jean Bertholat ABSTRACT. Due to their apparent complexity and perception as subjective practices, cyber risk assessments are often underestimated—even among cybersecurity professionals. Yet, they are essential for informed risk management decisions and ensuring sustainable business success. Widely recognized in domains like finance, construction, healthcare, and the military, risk analysis provides significant benefits when used as a foundation for focused countermeasures and planning strategies. This study explores how professionals within organizations perceive the nature and objectives of cyber risk assessments and cyber risks based on their experience. Drawing from a literature review and the analysis of a questionnaire distributed to a heterogeneous panel—including security experts, organizational managers, executives, and other stakeholders—we evaluated how these diverse groups perceive the practice of cyber risk assessments: is it considered an administrative burden or a truly valuable tool for the company, and why? We also assessed the heterogeneity in the usage of the term "cyber risk" and the practices associated with it, which leads to differing needs and tools among professionals. These findings underscore the importance of making cyber risk assessments more accessible, understandable, and relevant for all involved parties. The study introduces, motivates, and contributes to ongoing efforts to optimize the practice of cyber risk assessments, aiming to improve the resilience and security of organizations in the face of growing digital threats. |
17:15 | Decoding CyberSecurity: How Terminology Shapes the Field PRESENTER: Jean Bertholat ABSTRACT. Cybersecurity stands at the forefront of protecting today’s digital infrastructures and personal data. While significant technical and organizational advances have been made, it is crucial to periodically step back and question whether our current cybersecurity philosophy, concepts and models remain fit for the purpose of resilience in a rapidly evolving environment. This collection of essays critically examines the prevailing understanding of cybersecurity, contrasts it with an alternative vision, and explores the foundational model on which this practice is based. Rather than offering a definitive new definition, the intention is to “shake things up” and provoke a deeper conversation on whether a shift in paradigm is warranted. By challenging the assumptions held by researchers, practitioners, and policymakers, this work aims to foster innovative thinking that can guide cybersecurity toward a more resilient and adaptive future model. Ultimately, it is a call to reassess current approaches and inspire further debate, research, and action, ensuring that our strategies remain aligned with the emerging challenges of an ever more interconnected digital world. This article serves as an initial exploration into the meaning of the term “cyber()security”, analysing how its usage and interpretation evolve, shaped by the communities of practice that have adopted it. |
17:30 | The future cyber-threats and our resilient and organic industry-systems ABSTRACT. Resilience has become a cornerstone for safeguarding critical systems (OT) in industries transitioning towards Industry 5.0. Just as the human body relies on interconnected systems to detect, respond, and recover from disruptions, industrial systems must operate with similar adaptability. Standards like IEC 62443 provide the foundation for achieving this resilience by integrating cybersecurity, safety, and operational reliability. The presentation and extended abstract explore the parallels between human systems and industrial processes, highlighting why resilience is a critical principle for future industrial systems: responding to failures, recovering rapidly, and sustaining a minimum level of functionality are paramount. As industries embrace the principles of Industry 5.0, which emphasize human-centricity, sustainability, and collaboration between humans and machines, the need for a standardized, robust, and risk-based approach becomes even more critical. Through practical insights and industry examples, insights are given into how organizations align resilience-focused strategies with IEC 62443 to enhance the protection of critical infrastructure. In this way they steer towards a future with systems that are secure, adaptable, and aligned with the human-centric vision of Industry 5.0. |
16:30 | Evaluating Environmental Risk Assessment Parameters and Processes for Genetically Engineered Crops in Select Case Studies PRESENTER: Nicholas Loschin ABSTRACT. In recent years, advancements in biotechnology, such as gene editing and CRISPR, have led to the creation of novel food and agriculture products such as genetically engineered purple tomatoes and gene-edited mustard greens. At the same time, the regulatory system and risk assessment process have struggled to keep pace with these rapid innovations and techniques. We chose a subset of products to investigate how environmental risk assessment has been conducted, including genetically modified Bt corn, purple tomato, and papaya, as well as gene-edited mustard greens and waxy corn. These case studies were investigated to understand the parameters, processes, and procedures involved in undergoing an environmental risk assessment by U.S. federal agencies. We also compare the parameters used in environmental risk assessment with those deemed important by diverse stakeholders in the U.S. when evaluating novel agrifood technologies. We find that the environmental risk assessment process has been applied to a subset of the selected case studies, including Bt corn, while other case studies have been deemed exempt from this process. Among other parameters, this presentation will review how studies on environmental exposure, non-target organism hazard, horizontal gene transfer, gene flow, and impacts on threatened and endangered species have been applied to each case study or deemed exempt by regulatory authorities. These findings reveal both strengths and limitations of the environmental risk assessment process as applied to genetically engineered crops in the U.S. |
16:45 | Understanding Households’ Management of Refrigerator Temperature in UK Households: A Multimethod Study PRESENTER: Can Cheng ABSTRACT. Proper refrigerator temperature regulation is crucial for food safety, reducing waste, conserving energy, and preventing foodborne illnesses. Mismanagement can lead to bacterial growth, spoilage, and financial losses, posing significant public health risks. This study addresses gaps in understanding UK households’ refrigerator management by combining self-reported data from the Food Standards Agency’s (FSA) Food and You 2 survey conducted in 2021-2022 with over 4,000 participants and observational data from the FSA’s Kitchen Life 2 (KL2) study conducted in 70 households in 2021-2022. KL2 is the first to use observational data with temperature detection to analyse food safety behaviours in households. The effect of age on knowledge and behaviour was assessed using descriptive analysis, ANOVA, and logistic regression. A path model explored how knowledge influenced behaviours, and further analysis examined the impact of prolonged fridge door opening on temperature fluctuations. Results revealed that 61.6% of participants knew the recommended temperature range, but monitoring practices varied. Understanding the recommended refrigerator temperature range does not necessarily increase the frequency of temperature checks across different age groups. Age showed a non-linear relationship with both knowledge and monitoring behaviours. Adults aged 25–64 demonstrated greater knowledge than those under 24 and over 65. While age was generally associated with improved monitoring behaviours, this trend was less consistent among individuals over 65. Fridges frequently exceeding 8°C showed significant temperature changes when their doors were left open for over 60 seconds. Our findings highlight the need for targeted interventions and the importance of refrigerator alarms to mitigate the risks. We expect our findings to help regulatory bodies in developing effective risk communications. |
17:00 | Is it time to intervene? Linkages between DALY and WTP in health-economics models PRESENTER: Constanza De Matteu Monteiro ABSTRACT. To date, several methodologies can quantify health impacts and value health-risk reductions, with some approaches specifically designed to evaluate regulation impact. Nonetheless, as these methodologies do not cover all the multifaceted criteria pertinent to food regulation, policy makers are often challenged with questions such as “when” and “how” food policies and other public health interventions should take place. Thus, we develop an expanded approach to cost-benefit analysis as a framework for health-economic assessment that can support answering both questions for decision problems. This approach explores the linkages between the metrics “disability-adjusted life year” (DALY) and “willingness to pay” (WTP), emphasizing the potential of combining methods for measuring the impact of regulatory tools such as subsidies and taxes and informing food-related policy decisions. A case study on the lentil market illustrates the application of this approach, presenting annual attributed DALYs and welfare variations with and without fiscal intervention for alternative scenarios examining the impacts of increased consumption across the lentil supply chain. |
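One generic way to link the two metrics, offered here as an illustration in our own notation rather than the authors' exact specification, is to monetize the annual health gain of an intervention i through a WTP value per DALY averted and set it against the welfare and fiscal terms:

```latex
NB_i \;=\; \Delta\mathrm{DALY}_i \cdot \mathrm{WTP}_{\mathrm{DALY}}
\;+\; \Delta CS_i \;+\; \Delta PS_i \;-\; C_i ,
```

where ΔDALY_i is the reduction in attributed DALYs, ΔCS_i and ΔPS_i are the consumer- and producer-surplus variations under the fiscal scenario, and C_i its administrative cost; "when" then corresponds to NB_i > 0 and "how" to ranking candidate interventions by NB_i.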
16:30 | Towards a rapid triage method addressing the potential for PTSD conditions following mass violence events PRESENTER: Jon Tømmerås Selvik ABSTRACT. Numbers from past mass violence events, such as terrorist attacks or mass shootings, show that a significant fraction of those affected develop a posttraumatic stress disorder (PTSD) condition. For effective treatment, early identification is important. A method for rapid screening of the potential for PTSD development among individuals exposed in the event allows appropriate treatment to begin at an early stage. Such treatment could effectively mitigate consequences and reduce the number of individuals developing PTSD. Rapid triage methods for trauma prioritization following mass violence events already exist, but none of them specifically includes the risk of PTSD development. A starting point towards a rapid triage method with a PTSD focus is to consider risk-influencing factors as key elements of the method. Relevant peritraumatic factors are considered, such as: incident type (I), exposure level (E), trauma duration (D), and perceived threat level (T). Each of these peritraumatic risk factors can be given a role in the assessment of possible PTSD development. One way to assess the factors is to assign a numerical value to each of them, reflecting the trauma burden, in order to prioritize the most affected for follow-up. An aggregated score can then be established from the IEDT assessments, reflecting the potential for developing PTSD conditions. A practical example of how such a scoring system could be designed is given in the paper. |
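A minimal sketch of such an IEDT scoring scheme follows; the 0-3 rating scales, weights, and cutoffs are illustrative assumptions, not the scheme proposed in the paper:

```python
# Illustrative IEDT scoring sketch; weights and cutoffs are hypothetical.
WEIGHTS = {"I": 1.0, "E": 1.5, "D": 1.0, "T": 2.0}

def iedt_score(incident, exposure, duration, threat):
    """Each peritraumatic factor is rated on a 0-3 scale; the aggregated
    score is a weighted sum reflecting the trauma burden."""
    ratings = {"I": incident, "E": exposure, "D": duration, "T": threat}
    if not all(0 <= v <= 3 for v in ratings.values()):
        raise ValueError("ratings must be on a 0-3 scale")
    return sum(WEIGHTS[k] * v for k, v in ratings.items())

def triage_category(score, cutoffs=(4.0, 9.0)):
    """Map the aggregated score to a follow-up priority class."""
    if score < cutoffs[0]:
        return "low"
    return "elevated" if score < cutoffs[1] else "priority follow-up"

s = iedt_score(incident=2, exposure=3, duration=1, threat=3)
print(s, triage_category(s))  # 13.5 -> priority follow-up
```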
16:45 | Primary healthcare access in the face of climate change PRESENTER: Darcy Glenn ABSTRACT. There is a global physician shortage. News reports going back to 2021 note the impact of early retirement, post-COVID burnout, and a lack of interest among new graduates in entering the family care field. The ability to get a timely appointment is the #1 barrier to primary care access in New Zealand. Climate change has the potential to exacerbate the situation. Sea-level rise is expected to flood coastal clinics and homes: where do affected patients seek care? Additionally, climate migration will introduce new patients into the healthcare system. Do receiving communities have enough resources to handle these, potentially, high-needs patients? Neither climate impact will be limited to immediately exposed practices; both will have larger implications for the wider primary care network. We will calculate the travel distance between GPs' offices and residences. Demographic information on ethnicity, gender, and socioeconomic status is connected to the residents. Gender(s), cultural competencies, full-time-equivalent hours, and office hours outside of a typical 9-5 were applied to the doctors' offices. Access will then be calculated by matching patients with doctors using the Roth-Peranson optimization (Roth and Peranson, 1999). This method allows patients to select doctors that meet their needs on 4 of the 6 A's of access: accessibility, availability, accommodation, and acceptability. Doctors' preferences can be adjusted to match the systemic inequalities found in the surveyed data on access. We will then stress test the system by removing clinics that are flooded or inaccessible due to sea-level rise and recalculating accessibility. We will also use population projection scenarios to estimate where and when people are expected to move to New Zealand and recalculate accessibility. We expect a potential loss of accessibility for the directly affected population and a wider decrease in services due to crowding. |
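Roth-Peranson builds on patient-proposing deferred acceptance (Gale-Shapley) with clinic capacities, extended with couples handling and reversion steps. A minimal sketch of the deferred-acceptance core, with hypothetical preference lists:

```python
# Patient-proposing deferred acceptance with clinic capacities: the core of
# the Roth-Peranson mechanism. Preference lists here are hypothetical.
def deferred_acceptance(patient_prefs, clinic_prefs, capacity):
    rank = {c: {p: r for r, p in enumerate(prefs)}
            for c, prefs in clinic_prefs.items()}
    assigned = {c: [] for c in clinic_prefs}
    nxt = {p: 0 for p in patient_prefs}      # next clinic each patient tries
    free = list(patient_prefs)
    while free:
        p = free.pop()
        if nxt[p] >= len(patient_prefs[p]):
            continue                         # p has exhausted acceptable clinics
        c = patient_prefs[p][nxt[p]]
        nxt[p] += 1
        if p not in rank[c]:
            free.append(p)                   # clinic finds p unacceptable
            continue
        assigned[c].append(p)
        assigned[c].sort(key=rank[c].get)    # clinic keeps its most preferred
        if len(assigned[c]) > capacity[c]:
            free.append(assigned[c].pop())   # bump least-preferred patient
    return assigned

patients = {"p1": ["gp_a", "gp_b"], "p2": ["gp_a"], "p3": ["gp_a", "gp_b"]}
clinics = {"gp_a": ["p2", "p1", "p3"], "gp_b": ["p1", "p3"]}
print(deferred_acceptance(patients, clinics, {"gp_a": 1, "gp_b": 2}))
```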
17:00 | Advancements in RAMS Processes through Digitalization and AI in Transportation Systems ABSTRACT. The RAMS discipline, encompassing Reliability, Availability, Maintainability, and Safety, has been a critical concern in the transport sector for years, extensively discussed in both industry and academia. The evolution of RAMS is driven by technological advancements and digitalization, particularly evident in railway Control, Command, and Signalling (CCS) systems. A specific challenge lies in adapting risk management and assessment approaches to increased digitalization and the use of Artificial Intelligence (AI). Digitalization and AI offer opportunities to enhance RAMS processes by improving predictive maintenance, optimizing operational efficiency, and enhancing safety control through real-time data analysis and decision-making support. CENELEC standards EN 50129 and EN 50126 have established comprehensive approaches to safety and reliability in the railway industry, crucial for the quality of safety documentation and proper validation of system safety (safety case). This study examines the potential impacts of increased digitalization and AI evolution on RAMS processes and the effects on safety requirements for railway signalling systems. It also explores whether the interrelation between Safety and RAM requirements changes in this context. Typically, Safety requirements are governed by regulations, while RAM requirements are managed through companies' operational and maintenance strategies. The RAMS management in railways will undoubtedly evolve. This presentation is based on a limited literature study on RAMS practices and general project experiences from SINTEF projects, including the ERTMS-NI project in Norway, which deals with RAMS processes at different levels. The presentation concludes with preliminary findings and possible implications on safety documentation for transportation systems, especially in railway. |
17:15 | Graph Neural Networks for Anomaly Detection in Wind Turbines PRESENTER: Luca Pinciroli ABSTRACT. The development of methods for anomaly detection in renewable energy systems is challenged by the complex spatio-temporal correlations among the measured signals, the variability of the operational and environmental conditions, and the presence of control systems that may hide anomalies in the signal patterns. To address these challenges, in this work we use Graph Neural Networks to identify variations in the relationships among signals. Specifically, we propose a method which employs graph attention networks to dynamically consider interdependencies among signals, and gated recurrent units to capture temporal dependencies and system dynamics. The proposed method is compared to other state-of-the-art methods on a synthetic case study based on simulated wind turbine data. The obtained results show that the proposed method outperforms the comparison methods by more than 10% in terms of accuracy and reduces the detection delay by more than 50%. |
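A plausible rendering of the described architecture, not necessarily the authors' exact model, is graph attention over the sensor graph at each time step, a GRU over time, and anomaly scoring by one-step-ahead prediction error. A sketch using PyTorch and PyTorch Geometric:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GATGRUDetector(nn.Module):
    """Graph attention per time step over the sensor graph, a GRU over time,
    and anomaly scoring by one-step-ahead prediction error."""
    def __init__(self, n_nodes, hidden=32):
        super().__init__()
        self.gat = GATConv(in_channels=1, out_channels=hidden,
                           heads=2, concat=False)        # average the heads
        self.gru = nn.GRU(input_size=n_nodes * hidden,
                          hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_nodes)               # predict next signals

    def forward(self, x, edge_index):
        # x: [T, n_nodes] signal matrix; edge_index: [2, n_edges] sensor graph
        T, n = x.shape
        h = torch.stack([self.gat(x[t].unsqueeze(-1), edge_index).reshape(-1)
                         for t in range(T)])             # [T, n * hidden]
        out, _ = self.gru(h.unsqueeze(0))                # [1, T, 64]
        pred = self.head(out[0, :-1])                    # forecast steps 1..T-1
        return (pred - x[1:]).pow(2).mean(dim=1)         # anomaly score per step
```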
17:30 | Simulation and Risk Assessment of CO2 Buried Pipeline Leakage and Diffusion Using PHAST PRESENTER: Yan Shang ABSTRACT. Carbon dioxide pipeline leaks can pose significant threats to human health and environmental safety. This study utilized PHAST software to simulate far-field CO2 dispersion, validated against experimental results. Key factors analysed include leakage characteristics (direction and size), pipeline parameters (operating pressure and fluid temperature), and environmental conditions (wind speed, atmospheric stability, and ambient temperature), with thresholds of 0.5% and 5.0% CO2 volume as safety and lethal limits, respectively. Results show middle perforations cause wider horizontal dispersion near the ground, while high-consequence areas (HCAs) expand with increasing operating pressure, leak size, and ambient temperature, but decrease with higher wind speed and atmospheric stability. Leakage size was identified as the most critical factor, with HCAs growing exponentially with size. This study provides a scientific basis for CO2 leakage risk assessment and offers guidance to enhance pipeline safety in CCUS systems. |
16:30 | Improving the predictability of hazardous scenarios by natural language processing PRESENTER: Igor Kozine ABSTRACT. The completeness and predictability of hazardous scenarios identified by risk identification methods are persistent issues in risk analyses. As stated in Taylor (2025), 16 out of 71 major hazard accidents in process plants were predictable but not predicted. This demonstrates the importance of improving the predictive power of hazard identification. One route to improvement is to carry out both a post-accident analysis that is as exhaustive as possible and a predictive accident analysis. Recent advances in natural language processing (NLP) allow significant improvements in the analysis of accident reports. In combination with graphical tools, it is now even possible to automatically output causal diagrammatic models of accidents and visualize them on a multi-scenario accident diagram, as described in Cardenas et al. (2024). We have now made a step forward and explored the application of NLP to improving predictive accident analysis. This analysis identifies deviations from expected or normal conditions, the subsequent events following these deviations, and their interactions leading to an accident; the expected or normal conditions are typically outlined in specifications and procedures. Additionally, we combined the predictive analysis with error mode checklists to generate cause-consequence diagrams, which are pictorial causal models of predicted accident scenarios. This paper demonstrates how NLP can assist hazard identification and predictive analysis by analysing accidents involving injuries to people on ships and platforms. References: Taylor, J.R. (2025). Using accident anatomy analysis and small language models to support systematic lessons learned analysis and improve the completeness of hazard analysis. Chemical Engineering Transactions, in press. Cardenas, I.C., Taylor, R., Kozine, I., & Fenn, A. (2024). Accident analysis reinforced by natural language processing and text mining. Submitted for publication. |
16:45 | Large Language Models for Extracting Failed Components from Consumer Electronics Feedback PRESENTER: Jean Meunier-Pion ABSTRACT. Over the past few years, Large Language Models (LLMs) have demonstrated a strong capability in natural language understanding, opening new opportunities for reliability analysis based on text data. Meanwhile, customer review data offer valuable insights into system failures, but the unstructured nature of natural language makes failure information extraction challenging. In this study, we address the problem of failed component extraction from customer reviews of tablet computers, aiming to detect failures at a component level to assess both system and component reliability. We propose a novel approach using LLMs for this task and frame it as a multi-label classification problem. Our method combines the design of a prompting strategy with the use of pre-trained lightweight LLMs to automatically extract the desired information. We conduct a comparative evaluation of state-of-the-art non-proprietary LLMs on this task. To support this work, we introduce a newly annotated dataset of 1,215 customer reviews, of which 356 mention at least one failure, annotated specifically for component failure detection. This fine-grained failure detection framework aims to enable more accurate reliability assessments by pinpointing individual component failures within the broader system context. Our preliminary results show the potential of LLMs to leverage unstructured textual data for component-level reliability analysis. |
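The prompting strategy can be illustrated as follows; the component label set, model choice, and prompt wording are placeholders, not the paper's protocol:

```python
from transformers import pipeline

# Hypothetical component label set and model choice.
COMPONENTS = ["battery", "screen", "charging port", "speaker", "camera"]

generator = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")

def extract_failed_components(review: str) -> list[str]:
    """Frame failed-component extraction as multi-label classification:
    prompt the LLM, then map its free-text answer back onto the label set."""
    prompt = (
        "Which of these components failed according to the review: "
        + ", ".join(COMPONENTS)
        + "? Answer with a comma-separated list, or 'none'.\n"
        + "Review: " + review + "\nFailed components:"
    )
    out = generator(prompt, max_new_tokens=30, do_sample=False)
    answer = out[0]["generated_text"][len(prompt):].lower()
    return [c for c in COMPONENTS if c in answer]

print(extract_failed_components(
    "Great tablet at first, but the screen started flickering "
    "and now the charging port is loose."))
```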
17:00 | Integrating Automatic Speech Recognition and Natural Language Processing with Root Cause Approach to Improve Mining Projects PRESENTER: Pablo Viveros ABSTRACT. This research explores the integration of Large Language Models (LLMs) and Automatic Speech Recognition (ASR) technologies into Root Cause Analysis (RCA) to enhance decision-making in complex engineering environments, particularly mining operations. Traditional RCA methods, such as Ishikawa diagrams and the "Five Whys," often face limitations related to scalability, reliance on structured data, and the labor-intensive nature of manual processes. By leveraging advanced AI capabilities, this study presents a novel step-by-step approach that combines ASR for accurate transcription of unstructured verbal data with LLMs for automated causal analysis and solution generation. The approach was validated through a case study involving critical mining systems, demonstrating its ability to identify root causes and propose actionable solutions with reduced time and improved consistency. Metrics such as user satisfaction were evaluated using the Technology Acceptance Model (TAM) questionnaire, which showed high operator satisfaction and usability. The findings underscore the potential of AI-driven RCA frameworks in streamlining workflows, reducing cognitive load, and improving decision-making processes. While challenges such as model biases and the need for human oversight persist, this research lays a foundation for future advancements in AI applications for RCA and complex problem-solving in engineering domains. |
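A minimal sketch of the ASR front end and the hand-off to an LLM; the model size, file name, and prompt are illustrative assumptions, not the study's setup:

```python
import whisper  # openai-whisper

# Transcribe a maintenance debrief recording, then build an RCA prompt.
asr = whisper.load_model("base")
transcript = asr.transcribe("shift_debrief.wav")["text"]

prompt = (
    "You are supporting a root cause analysis of a conveyor stoppage at a "
    "mining site. From the transcript below, produce a five-whys chain that "
    "ends in a root cause, then propose one corrective action.\n\n"
    "Transcript:\n" + transcript
)
# `prompt` is then handed to any instruct-tuned LLM (local or hosted); the
# output should be reviewed by a human analyst before being acted upon.
print(prompt[:300])
```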
17:15 | IEC common data dictionary for functional safety in the process industries PRESENTER: Shenae Lee ABSTRACT. A key aspect of Industry 4.0 is machine-to-machine communication, which enables more efficient data sharing across different companies and business units. Industry 4.0 technologies like the asset administration shell (AAS) and OPC UA in combination with AutomationML have the potential to be deployed for various purposes over the whole lifecycle of systems. However, it is challenging for machines to identify and exchange domain-specific data if a standardized semantics for the domain is not established. In response, standardized data repositories for multiple industrial domains have been developed; the most common examples are the IEC Common Data Dictionary (CDD) and ECLASS. The IEC CDD is designed to provide standardized product data in a machine-readable format for any IEC and ISO standards. However, the current version of the IEC CDD includes a very limited set of data related to IEC 61511 on functional safety in the process industries and, moreover, lacks a dedicated information model specific to IEC 61511. This presents a challenge for achieving semantic interoperability among different stakeholders within the functional safety domain. For this reason, the ongoing research project Automated Process for Follow-up of Safety Instrumented Systems (APOS 2.0) has taken the initiative to create a domain-specific ontology as the foundational groundwork for creating the IEC 61511 data dictionary to be integrated into the IEC CDD framework. The main objectives of the present paper are to report on the ontology for functional safety in the process industries that is currently under development and to propose a set of classes and properties to be included in the IEC CDD for IEC 61511. |
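A fragment of what such an ontology might look like in RDF/OWL, with a placeholder namespace and candidate classes and properties of our own choosing, not the APOS 2.0 ontology:

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import OWL, XSD

FS = Namespace("http://example.org/iec61511#")  # placeholder namespace
g = Graph()
g.bind("fs", FS)

# Candidate classes for an IEC 61511 functional-safety dictionary.
for cls in ("SafetyInstrumentedSystem", "SafetyInstrumentedFunction",
            "SafetyIntegrityLevel", "ProofTest"):
    g.add((FS[cls], RDF.type, OWL.Class))
    g.add((FS[cls], RDFS.label, Literal(cls)))

# A SIF is assigned a SIL (object property) and a proof-test interval
# (datatype property).
g.add((FS.hasSIL, RDF.type, OWL.ObjectProperty))
g.add((FS.hasSIL, RDFS.domain, FS.SafetyInstrumentedFunction))
g.add((FS.hasSIL, RDFS.range, FS.SafetyIntegrityLevel))
g.add((FS.proofTestInterval, RDF.type, OWL.DatatypeProperty))
g.add((FS.proofTestInterval, RDFS.domain, FS.SafetyInstrumentedFunction))
g.add((FS.proofTestInterval, RDFS.range, XSD.duration))

print(g.serialize(format="turtle"))
```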
Development of small-scale testing for the particle penetration of personal protective equipment using a standardised combustion from a cone calorimeter PRESENTER: Edvard Aamodt ABSTRACT. Several studies have established a connection between the firefighter occupation and an elevated risk of cancer and other illnesses, attributed to the harsh environment and exposure to airborne combustion products. This especially concerns airborne particles small enough to penetrate protective garments and human skin. These particles also often contain polycyclic aromatic hydrocarbons (PAHs), which are known carcinogens. When developing new textiles for personal protective equipment (PPE), it is therefore important to document their ability to block particle and PAH penetration. Despite this, no relevant, standardized and cost-efficient test method currently exists. This study introduces a novel method specifically designed for screening PPE textiles, filling critical gaps in available test methods, to facilitate an improved understanding of the protective ability of firefighter garments in preventing carcinogen exposure. In the proposed method, the PPE textiles are exposed to fire smoke from the burning of PVC plastic, polyurethane foam and spruce wood, in a standardised set-up using the cone calorimeter. The smoke passes through an exposure tunnel with the PPE textile mounted on it, while particle concentration, PAH content and temperatures are systematically measured on each side of the textile. The method shows promising results for generating “standardised” smoke and for documenting particle penetration through PPE textiles. However, challenges related to repeatability and the costs involved are discussed. |
Constellation of Mini sounders for Meteorology deployment and renewal scenarios study with Petri nets PRESENTER: Anton Roig ABSTRACT. The rapid expansion of satellite constellations, with a significant number expected to be operational within the next decade, presents both opportunities and challenges. These constellations are crucial for enhancing global communication, particularly in underserved areas, and for advanced Earth observation capabilities. However, the increasing number of satellites also exacerbates concerns regarding orbital debris and congestion in critical Low Earth Orbits and Geostationary Earth Orbits. Sustainable integration and efficient utilization of these constellations within the current space framework are essential. This paper explores deployment and renewal strategies for satellite constellations, focusing on the weather constellation CMIM as a case study. The study evaluates various scenarios, analyzing factors such as satellite type, quantity, reliability, and redundancy within satellites or between the satellites of an orbital plane. A simulation-based approach, employing Petri nets combined with Monte Carlo simulations, is used to evaluate the impact of these factors on system performance, while also focusing on defining possible degraded revisit scenarios. During the project phase 0, the simulation models provide a comprehensive comparison of performance and cost metrics across multiple scenarios. Key considerations include in-orbit stock management, renewal launch frequency, and redundancy strategies. The results contribute to optimizing service availability and ensuring the long-term efficiency and sustainability of satellite constellations in an increasingly populated space environment. |
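For intuition, the Petri-net/Monte Carlo evaluation can be approximated by a simple discrete-event loop with exponential failures and a fixed launch cadence; all parameters below are hypothetical, and the real model tracks far richer state (redundancy, degraded revisit modes, costs):

```python
import random

def availability(n_sats=12, mttf_years=5.0, horizon=10.0,
                 launch_interval=1.0, batch=2, min_operational=10,
                 runs=500, dt=0.01):
    """Fraction of time the constellation keeps at least `min_operational`
    satellites, under exponential failures and periodic renewal launches.
    A crude stand-in for the Petri-net dynamics."""
    deficit = 0.0
    for _ in range(runs):
        alive, t, next_launch = n_sats, 0.0, launch_interval
        while t < horizon:
            # each live satellite fails with probability ~ dt/MTTF per step
            alive -= sum(random.random() < dt / mttf_years
                         for _ in range(alive))
            if t >= next_launch:                  # periodic renewal launch
                alive = min(n_sats, alive + batch)
                next_launch += launch_interval
            if alive < min_operational:
                deficit += dt
            t += dt
    return 1.0 - deficit / (runs * horizon)

print(f"estimated availability: {availability():.3f}")
```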
FRAMalyse: an open tool to quantitatively analyze and evaluate the characteristics of models derived by the Functional Resonance Analysis Method PRESENTER: Niklas Grabbe ABSTRACT. As modern socio-technical systems become increasingly complex, there is a growing need for innovative risk and safety management models and methods. The Functional Resonance Analysis Method (FRAM) is a recent approach designed to address this complexity. Interest is rising in expanding the capabilities of software tools that support FRAM analyses. Four well-known examples of FRAM-related tools are the FRAM Model Visualizer, the FRAM Model Interpreter, myFRAM, and DynaFRAM. These tools aid in modeling, simulation, visualization, and interpretation of system variabilities. However, they often lack a user-friendly interface for effective practical analysis and evaluation of a FRAM model’s characteristics as well as communication of analysis results. To address this gap, this paper introduces a new open software tool—FRAMalyse. Developed to enhance the quantitative analysis and evaluation of FRAM models, FRAMalyse is particularly useful for managing the complexity of large-scale FRAM models. This initial version aims to empower practitioners and decision-makers to explore FRAM models systematically, efficiently, and effectively, potentially increasing the adoption and usability of FRAM across different domains and industries. The paper explains the functionalities of FRAMalyse, provides application purposes, and gives an outlook for possible enhancements in the future. |
Navigating Security and Safety Challenges in Autonomous Vehicle Systems: A Risk-Based Assessment Framework PRESENTER: Aqsa Rahim ABSTRACT. In the past few years, the advancement and adoption of autonomous vehicles (AVs) have rapidly increased, and as a result the safety and security of AVs have become a major challenge. These vehicles depend on various sensors, AI systems and connectivity systems, all of which are vulnerable to security threats and safety risks. The autonomous driving system faces various security challenges, including cyberattacks on the AV software, communication systems or cloud-based platforms. These security threats could affect the vehicle's control systems and lead to catastrophic failures. In addition, system and sensor errors could cause failures of the AI decision system, which pose risks not only to passengers but also to other road users. To ensure the safety and reliability of AVs, a comprehensive risk assessment framework is needed that not only evaluates the impact of such incidents but also incorporates advanced technologies for their timely detection and prevention. This paper focuses on developing a risk-based framework to assess and mitigate the challenges associated with AV technologies, emphasizing the safety and security dimensions. We examine key safety standards and explore techniques such as real-time risk monitoring, machine learning-based threat detection, and resilient system design. Through a focus on risk management practices, we aim to establish guidelines for the secure and safe integration of autonomous vehicles, paving the way for widespread adoption and public trust in this transformative technology. |
Predictive maintenance for preventing pressure drifts of gas delivery stations ABSTRACT. GRTgaz owns and operates more than 5,000 gas pressure reduction and delivery stations in France. Since 2020, digital systems have been installed on these stations to record downstream pressure measurements, with the possibility of remote access to these data. GRTgaz then seized the opportunity to leverage these data to identify potential pressure drifts and anticipate certain maintenance operations. The objective is to intervene preventively before any consequences arise for downstream customers, and to avoid emergency interventions, especially during on-call periods (nights and weekends). The developed algorithm includes: a smoothing phase, which pre-processes the measurement data; a calibration phase, which automatically defines the acceptable limit values for the measurements; a prediction phase, which estimates whether future measurements are likely to exceed the limit values; and an alert phase, which initiates preventive maintenance if necessary. For the prediction phase, linear regressions have been proposed, as well as an AI-based technique exploiting a Long Short-Term Memory (LSTM) approach, a type of Recurrent Neural Network (RNN). The initial experiments were conducted on about one hundred stations where the first digital pressure recorders were installed. The results obtained from the first two years of experimentation are promising, as they allowed the prediction of more than half of the excessive pressure drifts up to five days in advance, with a false positive rate below one third. GRTgaz is currently deploying the predictive maintenance policy for gas delivery stations, in parallel with the gradual implementation of the digital pressure recorders. This policy is expected to cut the number of emergency corrective maintenance operations by more than half. |
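A minimal sketch of the four phases for a single station, using the linear-regression variant; the smoothing window, fitting window, and limit values are illustrative, not GRTgaz's calibrated settings:

```python
import numpy as np

def drift_alert(pressure_hourly, limits, horizon_h=5 * 24, window_h=14 * 24):
    """Smooth the hourly downstream-pressure series, fit a linear trend on
    the recent window, and alert if the extrapolation crosses a limit value
    within the horizon (five days here)."""
    # smoothing phase: 24 h moving average
    p = np.convolve(pressure_hourly, np.ones(24) / 24, mode="valid")
    # prediction phase: linear regression on the last `window_h` samples
    y = p[-window_h:]
    slope, intercept = np.polyfit(np.arange(window_h), y, 1)
    future_t = window_h - 1 + np.arange(1, horizon_h + 1)
    forecast = intercept + slope * future_t
    # alert phase: trigger preventive maintenance if a limit will be crossed
    lo, hi = limits  # calibration-phase output, assumed given here
    return bool((forecast < lo).any() or (forecast > hi).any())

rng = np.random.default_rng(1)
series = 50.0 - 0.001 * np.arange(24 * 60) + rng.normal(0, 0.05, 24 * 60)
print(drift_alert(series, limits=(48.5, 52.0)))  # slow downward drift -> True
```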
Defining a launch base Minimum Equipment List using a MBSA approach PRESENTER: Javier Collado Borrego ABSTRACT. The French space agency (CNES) guarantees safety during launch operations at Europe’s spaceport in French Guiana. Safety engineers track the launcher’s trajectory and are able to stop the flight by activating the launcher’s flight termination system. According to the French Space Operations Act, safety-related missions performed by the launch base, like acquiring the external flight trajectory, sending telecommands to the launcher… need to meet the Fail-Operational criterion. This means that at least two active and independent operational systems (redundancy) performing the mission are needed at T0 to authorize the lift-off, so that the mission can continue in case of any single failure during the launcher’s flight. Before the countdown, dependability engineers define with each launch base sub-system manager a Minimum Equipment List for the perimeter under his responsibility (telecommunications, telemetry, tracking the trajectory, power supply…). This allows a rapid Go / No-Go decision by the manager in case of an equipment failure, especially if it occurs close to lift-off time. If another failure happens, a decision meeting with all sub-system managers and a safety engineer is mandatory to assess the effects of the first failure on all the other launch base sub-systems. The main challenge is to be able to quickly take into account all the connections between equipment items, and the possible consequences of combined failures. This paper presents a new organizational approach for constructing a launch base Minimum Equipment List using an MBSA model validated with an FTA approach, in order to allow rapid and secure decision-making in case of multiple failures affecting different pieces of equipment. |
The Value of Digital Inventory Governance for Spare Parts: Case Study from the Norwegian Oil and Gas Industry PRESENTER: Farbod Farshchi ABSTRACT. This research explores the governance of digital inventory platforms for spare parts in the Norwegian oil and gas industry, focusing on both centralized and decentralized configurations. Research methods, including an extensive literature review, comprehensive industry interaction, and detailed interviews with additive manufacturing experts, highlight key characteristics of digital inventory platforms and identify the factors that influence their optimal formation. Additionally, it evaluates the risk implications of these configurations, demonstrating how centralized models ensure data integrity and compliance with industry standards, while decentralized models potentially facilitate innovation and increased responsiveness within the value chain. Effective governance of these platforms can not only boost operational efficiency but also strengthen the industry's resilience amid a dynamic and turbulent global business environment. |
Expansion of k-out-of-n failure time distributions in dependent environments PRESENTER: Lizanne Raubenheimer ABSTRACT. The use of k-out-of-n systems has become increasingly popular, especially for fault-tolerant systems. The failure time distribution for a k-out-of-n system can be constructed as the distribution of the (n+1-k)th order statistic. This is easily constructed under an independence structure; however, it is well known that many of these systems are subject to common failure environments. As such, we specify a Marshall-Olkin multivariate exponential distribution to model common failures and develop expressions for the time to failure using a shock model approach. Expressions are developed for the symmetric model, the Binomial Failure Rate model and the General model. |
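For reference, the standard independence result that the expansion starts from: with common component cdf F, a k-out-of-n system fails at the (n+1-k)th order statistic of the component failure times, so

```latex
F_{T_{\mathrm{sys}}}(t) \;=\; \Pr\bigl(T_{(n+1-k)} \le t\bigr)
\;=\; \sum_{j=n+1-k}^{n} \binom{n}{j}\, F(t)^{j}\,\bigl(1-F(t)\bigr)^{n-j}.
```

The Marshall-Olkin construction then replaces independence with common-shock dependence; in the bivariate case the joint survival function is

```latex
\bar{F}(t_1, t_2) \;=\; \exp\bigl(-\lambda_1 t_1 - \lambda_2 t_2 - \lambda_{12}\,\max(t_1, t_2)\bigr),
```

where the common-shock rate λ₁₂ induces the dependence that the paper's shock-model expressions generalize to n components.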
A hybrid method for future capacity and RUL prediction of lithium-ion batteries considering capacity regeneration PRESENTER: Guisong Wang ABSTRACT. Accurate prediction of remaining useful life (RUL) is critical to the reliability and safety of lithium-ion batteries. However, challenges frequently arise when using measured data for RUL prediction, such as degradation data being significantly influenced by noise and difficulties in estimating the uncertainty induced by capacity regeneration. To address these issues, a hybrid method to predict battery future capacity and RUL is proposed by combining adaptive variational mode decomposition (AVMD), permutation entropy (PE), a long short-term memory (LSTM) network and a Bayesian neural network (BNN). Specifically, the AVMD algorithm is employed to decompose the battery capacity data into the aging trend sequence at low frequencies and the noise and capacity regeneration sequences at high frequencies. AVMD adaptively optimizes the number of decomposition stages and the balancing parameters through kernel estimation for mutual information and the relative energy density gradient as the objective function. PE is utilized to adaptively separate the high-frequency and low-frequency sequences while eliminating the noise sequence. Prediction models based on LSTM and BNN are then respectively developed to forecast the aging trend sequence and the capacity regeneration sequence. The proposed hybrid method demonstrates broad applicability and minimal prediction error, as verified by application to a lithium-ion battery dataset. |
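Permutation entropy, used above to sort the decomposed modes into trend versus noise, is compact enough to state exactly. A sketch with conventional order and delay values (the paper's settings may differ):

```python
import math
import numpy as np

def permutation_entropy(x, order=4, delay=1):
    """Normalized permutation entropy in [0, 1]: near 1 for noise-like
    sequences, near 0 for regular trends, which is what makes it useful
    for filtering decomposed modes."""
    x = np.asarray(x)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # ordinal pattern of each embedded vector
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    H = -(probs * np.log(probs)).sum()
    return H / math.log(math.factorial(order))  # normalize by max entropy

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=500)))        # ~1 -> noise mode
print(permutation_entropy(np.linspace(1.0, 0.8, 500)))  # ~0 -> trend mode
```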
Towards industrial autonomy: a four-dimensional Level of Autonomy (LoA) Framework PRESENTER: Josepha Berger ABSTRACT. Industrial systems are transitioning towards greener, digital, and autonomous solutions, resulting in significant changes to their design and operation. This path to full autonomy faces several challenges, especially in integrating modern and legacy equipment at industrial sites, causing incompatible communication standards and diverse software systems. Each site presents unique requirements, necessitating close cooperation between technology providers and site operators. Site operators need a thorough understanding of the opportunities, limitations, and safety risks associated with increased autonomy. Additionally, the physical design of sites must be suitable for the integration of autonomous machines, alongside potential combinations of autonomous, semi-autonomous, and manual equipment. Communication challenges can arise when certain machines rely on manual operation, complicating the overall system's functionality. Beyond technical hurdles, increased autonomy requires adjustments in business-wide operations, including safety management, logistics, dispatch, procurement, product and document management, fleet management, and the refinement of operator skillsets. To address these complexities, we propose a four-dimensional Level of Autonomy (LoA) framework that helps in identifying and prioritizing key areas for enhancing autonomy. Unlike existing models that focus solely on system-wide or individual machine autonomy, our LoA framework integrates dimensions for machine driving, machine manipulation, system operation, and system mission. The operational dimension considers the orchestration of autonomous driving and manipulation of both individual machines and entire fleets, while the mission dimension emphasizes the management of multiple connected mixed fleets working towards a unified system goal. Dimensions of autonomy are crucial because they highlight areas where human involvement is necessary and provide insights into strategies needed to enhance autonomy or assess the current level of system autonomy. Using safety as an example: as autonomy increases, the sophistication of safety considerations also increases. A comprehensive LoA framework benefits stakeholders, including original equipment manufacturers (OEMs), suppliers, and system integrators, by providing a unified approach for implementing autonomous systems. |
An Updated Scoping Review of Research on Gastrointestinal Effects of Emulsifiers, Stabilisers, and Thickeners PRESENTER: Ellen Bruzell ABSTRACT. Emulsifiers, stabilisers, and thickeners (ESTs) are common food additives used to modify the texture and extend shelf-life of various food products. Recently, several ESTs have come under scrutiny due to concerns about their potential negative impact on gastrointestinal health, which has led to manufacturers replacing some of the ESTs with others. In 2023, the Norwegian Scientific Committee for Food and Environment (VKM) published a systematic scoping review of research on gastrointestinal effects of the ESTs: agar (E 406), sodium alginate (E 401), carrageenan (E 407), processed Eucheuma seaweed (E 407a), sodium carboxymethyl cellulose (E 466), gellan gum (E 418), guar gum (E 412), and xanthan gum (E 415). The review identified 14 eligible studies, while 214 studies were excluded due to insufficient substance-specific data. No studies with sodium alginate or gellan gum were identified. Of the 14 included studies, one was a human study and 13 were animal studies. Ten of the included studies had a high risk of bias and none had a low risk of bias. None of the studies addressed chronic exposures. The current update aims to incorporate two key elements: (i) an updated literature search covering studies published from March 2023, and (ii) direct inquiries to the authors of the 214 excluded studies to obtain the necessary substance-specific information. By reaching out to authors, we aim to gather additional data, potentially allowing for the inclusion of more studies. Scoping reviews are particularly valuable when there is uncertainty regarding the extent of the research literature that can answer a specific question. They allow researchers to map the existing evidence and determine if a systematic review or risk assessment is feasible. In cases where data gaps or insufficient evidence may hinder the reliability of a complete risk assessment, conducting a scoping review first helps avoid unnecessary expenditure of resources. |
Enhancing System Dependability in Offshore Ammonia Production Through Curtailment Minimization PRESENTER: Kwangu Kang ABSTRACT. This study investigates the dependability optimization of an offshore floating platform for ammonia production, powered by offshore wind energy, with a focus on minimizing energy curtailment. The primary objective is to ensure stable ammonia production and maximize system reliability by efficiently managing fluctuating energy inputs. While large-scale battery systems can help reduce curtailment, their cost increases significantly with capacity. Therefore, this research balances curtailment reduction with economic feasibility through system optimization. The platform comprises 67 wind turbines (15 MW each) producing a total of 3.47 TWh of electricity annually. The optimized system includes an 80 MW AC/DC converter, a 1 MW - 4hr Li-Ion battery, a 20 MW PEM electrolyzer, and a hydrogen storage tank with a 160-ton capacity. The results show that 12.32% of the total electricity is used for ammonia production (428 GWh/yr), 70.22% is sold to the onshore grid (2,438 GWh/yr), and only 17.48% (607 GWh/yr) is excess, demonstrating effective curtailment minimization. Operational simulations, based on wind data from the Yeonggwang wind farm in 2020, validate the system’s dependability. The energy management strategy prioritizes ammonia production and continuous operation by using stored hydrogen and battery reserves during low wind periods. This ensures stable, reliable system performance while minimizing energy waste and providing economic benefits through grid electricity sales. This research contributes to dependability by demonstrating how an offshore renewable energy system can be optimized for continuous, reliable operation, even with the inherent variability of wind power, while minimizing curtailed energy and ensuring economic sustainability. |
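The reported energy split can be verified by simple arithmetic; the following sketch merely recomputes the abstract's percentages against the 3.47 TWh total (it is not the authors' optimization model):

    total_twh = 3.47  # annual output of the 67 x 15 MW turbines (from the abstract)
    shares = {"ammonia production": 0.1232,
              "grid sales": 0.7022,
              "excess (curtailed)": 0.1748}
    for use, frac in shares.items():
        # reproduces ~428, ~2437, and ~607 GWh/yr, matching the abstract to rounding
        print(f"{use}: {frac * total_twh * 1000:.0f} GWh/yr")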
Degradation Behavior and Failure Mechanisms of Silicone Rubber in a Simulated Marine Atmospheric Salt Spray Environment PRESENTER: Rui-Yuan Wang ABSTRACT. Silicone rubber, widely used as a sealing material in aerospace and marine equipment, plays a crucial role in ensuring operational reliability and safety. However, in the harsh marine atmospheric environment characterized by high salinity and humidity, the degradation behavior of silicone rubber differs significantly from that in inland environments, with the failure mechanisms still remaining unclear. To address the challenges of prolonged natural aging and unclear degradation behavior, this study established an accelerated aging platform in the laboratory and designed a neutral salt spray accelerated aging test. Through comprehensive analyses of macroscopic physical properties (mass loss, compression set, etc.), microstructure (XPS, SEM, EDS), and mechanical properties (tensile strength), the study systematically explores the performance degradation of silicone rubber under simulated marine atmospheric conditions with high salt spray and summarizes the failure mechanisms at various stages. The results indicate that the degradation of silicone rubber in a neutral salt spray environment can be divided into three stages, with Cl⁻ ions playing a pivotal role. In the initial stage, Cl⁻ rapidly penetrates the material; in the middle stage, the degradation slows down, and differentiation between the inner and outer layers becomes apparent; in the final stage, the degradation stabilizes, and the microstructure exhibits numerous through-holes, ultimately leading to material failure. These findings provide a basis for predicting the remaining service life of silicone rubber and help improve its reliability and durability in marine environments. |
Optimizing 5G Network Availability: Comparison Between Reliability Block Diagrams and Markov Chains Approaches PRESENTER: Ikram Temericht ABSTRACT. As digital platforms become essential for critical services, ensuring the resilience of 5G networks and future systems is increasingly important. 5G is the first fully virtualized generation, which, while enhancing flexibility and scalability, also adds complexity to its architecture. This virtualization introduces new challenges in maintaining continuous availability, recovering quickly from disruptions, and adapting to evolving demands. In sectors like industrial IoT and B2B applications, even brief outages can have serious consequences, making availability a key metric for evaluating network resilience. In this article, we focus on enhancing availability to bolster the resilience of 5G virtual networks and beyond with respect to budget constraints. To achieve this, we introduce a Reliability Block Diagrams (RBDs) based model for 5G services and compare it to a Markov chains model. While Markov chains are effective for modeling state transitions and probabilistic behaviors, they can become computationally complex and challenging to apply in highly interconnected and virtualized system contexts. We demonstrate, on the other hand, that RBDs are particularly suited for addressing the complex and highly virtualized architectures of 5G networks because of their low-complexity computations. We also introduce an open-source framework to compute the availability of 5G services and apply it to a service chain to evaluate both models. The framework aims to ensure that the required availability levels are achieved at minimal cost. Through this approach, we demonstrate that strategically placed redundancy can significantly enhance system resilience. Our analysis demonstrates that RBDs, with their simpler and more scalable computations, are particularly effective for assessing the availability of virtualized systems. RBDs provide a more flexible and efficient solution for modeling complex 5G networks due to their linear scalability, making them a well-suited option for availability calculation for repairable 5G network systems and beyond. |
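For readers unfamiliar with the RBD calculus underlying the comparison, a minimal sketch follows; the three-block service chain and the component availabilities are illustrative assumptions, not values from the paper:

    import math

    def series(avails):
        # A series RBD: every block must be up for the service to be up.
        return math.prod(avails)

    def parallel(avails):
        # A parallel RBD (redundancy): at least one block must be up.
        return 1 - math.prod(1 - a for a in avails)

    # Hypothetical virtualized 5G service chain: RAN -> UPF -> edge application,
    # with the UPF optionally deployed redundantly on two servers.
    ran, upf, app = 0.999, 0.995, 0.998
    a_plain = series([ran, upf, app])
    a_redundant_upf = series([ran, parallel([upf, upf]), app])
    print(a_plain, a_redundant_upf)  # placed redundancy raises availability

The computation stays linear in the number of blocks, which is the scalability argument made for RBDs over state-space Markov models.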
Non-parametric accelerated degradation data analysis for predicting lithium-ion battery degradation under various working conditions PRESENTER: Cong Wang ABSTRACT. Predicting the degradation and lifetime of lithium-ion batteries is essential for ensuring their reliability and safety. However, complex working conditions and varied degradation mechanisms result in diverse degradation patterns. Traditional parametric modeling methods often lack flexibility due to their fixed assumptions about degradation modes and stress correlations. To address this, we propose a non-parametric accelerated degradation data analysis method for predicting battery degradation and lifetime under various working conditions. First, we apply Gaussian process regression to adaptively fit degradation trajectories under different conditions. Then, these trajectories are connected through our proposed two-dimensional transformation, which scales the degradation trajectory in both time and degradation dimensions using two independent coefficients. An iterative strategy is employed to estimate these coefficients under different conditions. Subsequently, adaptive-order polynomial regression is developed to establish the relationship between the transformation coefficients and working conditions, utilizing a comprehensive loss function to ensure robustness and monotonicity. Finally, applications to experimental data demonstrate the effectiveness and superiority of our proposed method compared to several conventional parametric approaches. |
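A minimal sketch of the two main ingredients, GP trajectory fitting and the two-coefficient time/degradation scaling, is given below; the kernel choice, the synthetic data, and the coefficient values are assumptions:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Noisy capacity-fade trajectory under a reference condition (synthetic).
    t = np.linspace(0, 500, 50)[:, None]                       # cycles
    y = 1.0 - 2e-4 * t.ravel() + 0.005 * np.random.randn(50)   # normalized capacity

    gp = GaussianProcessRegressor(kernel=RBF(100.0) + WhiteKernel(1e-4))
    gp.fit(t, y)                                   # non-parametric trajectory fit

    # Two-dimensional transformation: scale time by alpha and degradation by
    # beta to map the reference trajectory to another working condition.
    alpha, beta = 1.8, 1.2   # illustrative coefficients (estimated iteratively in the paper)
    t_new = np.linspace(0, 500, 50)[:, None]
    y_ref = gp.predict(t_new / alpha)
    y_new = 1.0 - beta * (1.0 - y_ref)             # scaled degradation amount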
Agent-Based Modelling for Safe Evacuation in Conflict Zones: Integrating Cyber Risks into Evacuation Strategies PRESENTER: Shushan He ABSTRACT. In conflict zones, it is critical to ensure the safe evacuation of civilians. Traditional evacuation strategies have typically focused on addressing physical threats, such as armed conflicts, terrain, and weather conditions. However, with the increasing use of digital infrastructure, significant cybersecurity risks have emerged. Cyberattacks can disrupt communication systems and spread misinformation, misleading civilians and hampering evacuation efforts, thereby increasing safety risks. At the core of risk analysis is the identification, assessment, and mitigation of uncertainties, particularly in volatile and complex environments. In evacuation strategies, risk analysis must account for two primary dimensions: physical risks and cyber risks. Physical risks include threats such as armed conflict, terrain obstacles, and climate conditions, while cyber risks include issues such as information warfare, malware attacks, and the collapse of communication networks, all of which can lead to inaccurate information, delayed decision-making, or direct disruption of evacuation efforts. We argue that modern evacuation strategies must adapt not only to dynamic and unpredictable environments but also to the rising risks of cyberattacks. We propose an agent-based modelling (ABM) framework to simulate various evacuation scenarios in conflict zones, incorporating cyber risk analysis. The framework simulates the behaviour of civilians, conflict actors, and cyber attackers, accounting for physical risks as well as the cyber risks posed by communication disruption or misinformation caused by cyberattacks. We test the proposed ABM framework for different evacuation strategies under two conflict scenarios and assess the resulting evacuation times. We show how the proposed agent-based model can examine the impact of misinformation on the evacuation process and measure the resilience of that process. We show that, by integrating cyber risks with physical threats, the model helps decision-makers design more flexible, adaptive, and resilient evacuation strategies, particularly when facing the dual challenges of complex conflicts and cyber threats. |
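A deliberately simple sketch of the kind of agent loop such a framework builds on is shown below; the agent count, misinformation probability, and delay penalties are illustrative assumptions, not the paper's calibration:

    import random

    def simulate_evacuation(n_agents=1000, p_misinfo=0.2, seed=0):
        """Toy ABM step: misinformed agents take a detour, delaying evacuation.
        All parameters are illustrative assumptions."""
        rng = random.Random(seed)
        times = []
        for _ in range(n_agents):
            base = rng.uniform(30, 90)           # nominal evacuation time (minutes)
            if rng.random() < p_misinfo:         # cyberattack spreads misinformation
                base *= rng.uniform(1.5, 3.0)    # rerouting / hesitation penalty
            times.append(base)
        return sum(times) / n_agents             # mean evacuation time

    for p in (0.0, 0.2, 0.5):
        print(p, round(simulate_evacuation(p_misinfo=p), 1))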
Decarbonizing Port Operations: A Case Study Mapping the Sustainability Impacts of Green Ammonia Production and Bunkering in Estonia PRESENTER: Georgi Hrenov ABSTRACT. The maritime transport sector is vital to the global economy, facilitating the movement of goods and resources through ports, which are key hubs in global supply chains. However, the sector's rapid growth poses environmental challenges, as maritime activities contribute up to 4% of global greenhouse gas (GHG) emissions. To address this, the International Maritime Organization (IMO) has set targets to reduce emissions by at least 50% by 2050 compared to 2008 levels. One proposed solution is green ammonia (GA), a carbon-neutral fuel derived from renewable energy sources. This study explores GA's potential to reduce maritime transport's environmental footprint and support the United Nations Sustainable Development Goals (SDGs). Focusing on existing port infrastructure, the research evaluates the impact of GA production and bunkering on port sustainability (PS) through a literature review and a case study of Estonia. The findings identify opportunities and challenges for integrating GA into port infrastructure and inform a sustainability framework for zero-carbon operations that balances environmental, social, and economic goals. |
Machine Learning Explanation using Counterfactuals - why your axioms matter. PRESENTER: Joseph Omae ABSTRACT. The performance of Machine Learning (ML) models has grown remarkably in recent years. However, this comes with increased model opacity, which limits their adoption for decision support in critical domains with significant impact on people’s lives. Users are often faced with a trade-off between model accuracy and transparency. Moreover, increasing regulatory pressure, such as the UK GDPR, mandates a right to explanation when using ML models. While there is no explicit definition of what an explanation entails, attempts have been made to meet these requirements by offering more insight into how black-box models make decisions using Explainable Artificial Intelligence (XAI). Empirical studies have shown that Counterfactual Explanations (CE) can be instrumental in model explanation. This XAI approach aims to provide model users with the minimal, realistic, and plausible changes one needs to make to a model’s input to change its prediction from an undesired to a desired class. Researchers broadly agree that generated explanations should be close to the initial input, although closeness still lacks a precise definition. We note that CE generation is subjective, and as such, various users will supplement closeness with other axioms such as sparsity, feasibility, data-manifold closeness, and feasible paths. We aim to conduct a case study involving key stakeholders to establish the axioms prioritized when using AI for decision support. From a policy perspective, ML can be used by high-integrity asset management companies in the utility industry to predict system failure. It is of interest to decision-makers to understand how best this can be avoided without much disturbance to normal processes. We propose CE as a potential solution to this problem. We aim to conduct our study in collaboration with an asset management company, and the findings will motivate the development of more robust CE methods that satisfy stakeholder preferences. |
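A minimal sketch of counterfactual generation under one specific choice of axioms (closeness measured by L1 distance over a candidate pool) is shown below; the toy model and data are assumptions, and real CE methods optimize rather than enumerate:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy model standing in for a black box (the axioms apply to any classifier).
    X = np.random.RandomState(0).randn(200, 3)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    def counterfactual(x, model, candidates, desired=1):
        """Pick the closest candidate (L1 distance, which favours sparsity)
        whose prediction flips to the desired class -- one 'closeness' axiom
        among the many the abstract discusses."""
        flipped = candidates[model.predict(candidates) == desired]
        if len(flipped) == 0:
            return None
        return flipped[np.argmin(np.abs(flipped - x).sum(axis=1))]

    x = np.array([-1.0, -1.0, 0.0])
    print(counterfactual(x, model, X))  # nearest candidate of the desired class

Swapping the distance or restricting the candidate pool is exactly where the stakeholder-specific axioms (feasibility, data-manifold closeness, feasible paths) would enter.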
Recent Trends in Japanese Attitudes Toward Climate Change and Related Actions: Analysis of Three Survey Waves ABSTRACT. The purpose of this study is to classify Japanese people according to the characteristics of their attitudes toward climate change risk, to characterize the resulting typology, and to track changes over time in each segment's share of the Japanese population. The data come from Internet surveys conducted in March 2017, March 2020, and February 2024. Cluster analysis of responses to 13 climate change attitude items in the 2017 survey identified the following five segments: indifferent (46.6%), alarmed (35.1%), skepticism (4.1%), agreement tendency (11.5%), and disagreement tendency (2.6%). The “indifferent” segment holds undecided attitudes toward climate change across all items, while the “alarmed” segment is concerned about climate change and shows a positive attitude toward countermeasures. In the 2020 and 2024 surveys, the “indifferent,” “alarmed,” and “agreement tendency” segments were confirmed again, while the “skepticism” segment's skeptical attitude toward climate change weakened, and a “cautious” segment was extracted in place of the “disagreement tendency” segment. The “indifferent” and “agreement/disagreement tendency” segments, which lack clear attitudes toward climate change, accounted for 60% of the Japanese population as of 2017 but decreased to around 45% in 2020 and 2024. On the other hand, the proportion of people who are wary of climate change and have a clear attitude on countermeasures decreased from 35% in 2017 to 20% after 2020, while the newly emerged “cautious” segment accounted for 30%. Since these typologies show differences in attributes such as age and gender, as well as in the rate of implementation of countermeasure actions, it is necessary to consider each segment in the design of communication activities related to climate change policy in Japan. In particular, it is important to seek effective outreach measures for the “indifferent” segment and for the “skeptical” segment, which is small in number but always present. |
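For illustration, a cluster analysis of this kind can be sketched as follows; the synthetic Likert responses and the use of k-means are assumptions (the abstract does not state which clustering algorithm was used):

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for responses to 13 climate-attitude items (1-5 Likert).
    rng = np.random.default_rng(1)
    responses = rng.integers(1, 6, size=(1000, 13)).astype(float)

    # Five segments, matching the number reported for the 2017 survey wave.
    km = KMeans(n_clusters=5, n_init=10, random_state=1).fit(responses)
    sizes = np.bincount(km.labels_) / len(responses)
    print({f"segment {i}": round(s * 100, 1) for i, s in enumerate(sizes)})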
Interactive learning in safety: how serious games can enhance retention and collaboration PRESENTER: Justin Larouzée ABSTRACT. Training employees on critical safety protocols is a challenge in many industries, with traditional methods often resulting in disengagement and poor retention. Gamification offers a promising alternative by using game mechanics to foster active participation, thereby enhancing memory retention and learning outcomes. This communication explores the theoretical foundations of gamification in training, focusing on its ability to create immersive, interactive learning environments that stimulate motivation and engagement. However, applying gamification to safety raises important questions about balancing entertainment with the gravity of the content. Are playful elements compatible with the responsibility of ensuring employee safety? To address this, we propose a concrete experiment: a low-tech orienteering-based training for Second Intervention Teams (ESI) in an industrial setting. The gamified training integrates fire safety challenges into a real-world navigation task, prompting participants to recall safety procedures while working together to achieve goals. After an initial full-scale experiment, observations and debriefings with the trainees show that this approach improves engagement, teamwork, and safety protocol retention. However, the experiment also highlights the limitations of gamification, particularly the need for careful design to prevent trivializing serious safety content. This study concludes by reflecting on the conditions under which gamification can serve as an effective complement to traditional safety training, without compromising the importance and seriousness of the subject matter. |
FEMA’s National Risk Index: Experiences and good practices to support national risk assessment implementations PRESENTER: Konstantinos Karagiorgos ABSTRACT. This study examines the development, challenges, and best practices of FEMA’s National Risk Index (NRI), a comprehensive tool designed to assess natural hazard risks at the community level across the United States. By leveraging authoritative nationwide datasets, the NRI aims to provide insights into areas most susceptible to 18 natural hazards. Through semi-structured interviews with experts involved in its development, we identify key challenges, including inter-agency reluctance, scientific complexities in integrating various risk components, and data management obstacles. The interviews reveal that reluctance from some subject matter experts stemmed from concerns over resources and the long-term impact of the project. Scientific challenges primarily involved defining a composite risk formula that incorporates hazard frequency, social vulnerability, and community resilience, with ongoing debates about the inclusion of different data sets and methodologies. Data management challenges included the processing of large-scale GIS datasets and addressing the limited historical data for certain hazards. The study highlights the critical role of multidisciplinary collaboration and iterative development in overcoming these hurdles, as well as the importance of transparency in data selection and methodology. Despite these challenges, the NRI is considered a successful tool for facilitating disaster risk management by providing open and accessible data to various stakeholders, including emergency planners, government agencies, and the public. The study concludes that collaboration, effective communication, and an integrated risk framework are essential for future implementations of national risk assessments. |
The meaning of integration in risk management PRESENTER: Lars Nyberg ABSTRACT. Increasing societal interconnectedness due to large-scale processes such as globalisation, urbanisation and dependence on critical infrastructure results in complicated risk patterns and puts high demands on risk management practices. Global policy agreements such as Agenda 2030 and the Sendai Framework for Disaster Risk Reduction stress the need for more proactive approaches and reduced silo-thinking, which necessarily leads to the involvement of a greater number of stakeholders and to higher expectations for the scientific community to engage in inter- and trans-disciplinary research. From the 1970s onwards, interest in the concept of integrated risk management has resulted in an increasing number of scientific papers. The objective of this paper is to examine how the concept of integration is used in different types of risk management. The paper is based on a literature review covering a 20-year period, in which close to 1,100 references were included. Many types of risks are represented in the literature on integration, with a predominance of risks in the private sector (business risk, project risk, etc.), natural risks (flood risk, climate risk, etc.), and risks related to health and healthcare. The preliminary results show that many types of integration are addressed. It may concern integration between risk management and other societal or business processes and objectives, such as strategic planning and decision-making, business and urban growth models, and population health. Integration can also refer to aspects of the risk itself and to the risk management process, such as the mixing of top-down and bottom-up approaches, interdisciplinary perspectives, and better stakeholder involvement. The study is part of the INTEFF project on integrated and effective risk management funded by the Swedish Civil Contingencies Agency. |
Bridging Inexperienced Risk Assessors and RAPEX Method with Semantically Enriched Injury Scenario Diagrams PRESENTER: Xiaodong Feng ABSTRACT. The Rapid Exchange of Information System (RAPEX) method conducts risk assessments by constructing injury scenarios and plays an essential role in effectively protecting European consumers from product injury and rapidly implementing product recalls. Currently, usage of the RAPEX method in Japan is limited, as less experienced risk assessors in Japan lack knowledge of how to construct injury scenarios using the RAPEX method. This paper introduces an approach to transform narrative injury data into semantically enriched data graphs that describe injury scenarios. A sample of 2503 injury records from the National Institute of Technology and Evaluation of Japan was used in the study. An ontology-based approach was employed to interpret injury-related factors, including product names, injury mechanisms, and hazards, and to further identify cause-effect relationships. A graph database, Neo4j, was used to store injury scenario diagrams and to query injury factors to determine their frequency of occurrence. Less experienced risk assessors could then conduct risk assessments based on the RAPEX method. The results indicate that ontology-based injury scenario diagrams created by experienced experts provide additional exploration capabilities. By combining ontology-based injury scenario diagrams, a knowledge-empowerment bridge can be built between free-text product injury data and less experienced risk assessors, which also offers the potential for data reuse in different countries or geographic areas. |
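A minimal sketch of the kind of frequency query the abstract describes, written with the official Neo4j Python driver, is given below; the node labels, relationship types, property names, and connection credentials are all assumptions made for illustration:

    from neo4j import GraphDatabase

    # Labels, relationship types, and properties below are hypothetical; the
    # paper's ontology may use different terms.
    query = """
    MATCH (p:Product)-[:HAS_HAZARD]->(h:Hazard)-[:CAUSES]->(i:Injury)
    RETURN p.name AS product, h.type AS hazard, count(i) AS frequency
    ORDER BY frequency DESC LIMIT 10
    """

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    with driver.session() as session:
        for record in session.run(query):
            print(record["product"], record["hazard"], record["frequency"])
    driver.close()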
Reliability and Degradation Analysis of Complex Systems Using Stochastic Petri Nets and Monte Carlo Simulations in Modelica PRESENTER: Sandra Gyasi ABSTRACT. This study investigates the anticipated transient behavior of molten salt reactors (MSRs), focusing on the effects of a reduction in mass flow caused by primary fuel pump degradation. The analysis evaluates the impact of this degradation on reactor power, core temperature, and overall safety, addressing a gap in existing research on MSRs under such conditions. An integrated approach is employed, combining Stochastic Petri Nets (SPNs) and Monte Carlo simulations within the Modelica environment. SPNs model the probabilistic transitions of the fuel pump between functional, degraded, and failed states using Weibull distributions. These stochastic models are dynamically coupled with the deterministic MSR physical model, which uses validated empirical data to capture continuous system behaviors like heat transfer and mass flow. The results reveal significant power increases and temperature fluctuations in the core during degraded states, providing critical insights into reactor safety and performance under adverse conditions. This work offers a novel framework for modeling the reliability of MSRs under uncertainty, contributing to improved reactor safety, optimized maintenance strategies, and enhanced understanding of transient behaviors in advanced nuclear systems. |
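A minimal sketch of the stochastic part, Monte Carlo sampling of Weibull-timed transitions between the pump's functional, degraded, and failed states, is shown below (in Python rather than Modelica, with illustrative shape and scale parameters):

    import random

    # Weibull (shape k, scale lam in hours) per transition: illustrative values.
    TRANSITIONS = {"functional": ("degraded", 2.0, 8000.0),
                   "degraded":   ("failed",   1.5, 2000.0)}

    def sample_path(rng):
        """One Monte Carlo realization of the pump's state trajectory."""
        state, t = "functional", 0.0
        history = [(t, state)]
        while state in TRANSITIONS:
            nxt, k, lam = TRANSITIONS[state]
            t += rng.weibullvariate(lam, k)  # stochastic firing time of the SPN transition
            state = nxt
            history.append((t, state))
        return history

    rng = random.Random(42)
    failure_times = [sample_path(rng)[-1][0] for _ in range(10000)]
    print(sum(failure_times) / len(failure_times))  # mean time to pump failure

In the paper, each sampled state change feeds the deterministic reactor model, so that power and core temperature respond to the degraded mass flow.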
Bringing fun into fear? On the usefulness, opportunities, and challenges of serious games in risk management education PRESENTER: Mathilde de Goër de Herve ABSTRACT. While risk management is a rising topic on the policy agenda, notably due to climate change, the discipline of risk management in higher education is still in its early phases. Risk management in general and disaster risk management in particular can be a delicate topic to teach because of the complexity of systems involved and the uncertainties associated with anticipating the future. Hence there is a need for innovative pedagogical tools, and one trending option is to make use of serious games. The purpose of the here presented study is two-fold. First, to discuss the usefulness of using serious games in risk management education. Second, to identify relevant mechanisms and challenges in order to elaborate more efficient serious games devoted to risk management students. This exploratory study is based on written interviews of two teaching groups in Sweden who use gamification in order to improve the learning of university students. The teachers are asked the following four questions: (i) Why do you use serious games in your teaching?; (ii) What features are needed to ensure successful learning thanks to games (e.g. characteristics of the game, context in which the game is presented to the students, etc.)?; (iii) How do you determine whether and what student learn from playing serious games?; (iv) What are the challenges you encounter when using games in your teaching? The study will take place in the Autumn 2024 and is part of the INTEFF (Integrerad och effektfull riskhantering I ett sammankopplat samhälle) project funded by the Swedish Civil Contingencies Agency (MSB). |
Probabilistic Anomaly Detection Beyond Observed Environmental Conditions PRESENTER: Jaebeom Lee ABSTRACT. Research on developing systems that use various sensors to monitor the real-time condition of bridges—structures that incur significant social costs in the event of damage or collapse—has been actively conducted. Notably, the structural response of bridges is influenced by environmental factors such as temperature, underscoring the importance of developing technologies capable of detecting critical events like stiffness degradation. To address this challenge, techniques have been developed to isolate the effects of environmental factors using structural response data. However, many structural health monitoring systems face limitations in performance when dealing with data outside the range of pre-acquired conditions. This study aims to propose a probabilistic artificial intelligence-based approach that incorporates not only measurement data but also expert knowledge to enhance structural health monitoring. By applying the proposed method to actual bridge data, the research demonstrates its potential to improve the detection of anomalies in structural responses. |
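One plausible reading of the approach is a Gaussian-process response model whose predictive uncertainty widens outside the observed temperature range; the sketch below illustrates that idea with synthetic data and assumed parameters (the paper's actual model may differ):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Healthy-state training data: natural frequency vs. temperature (synthetic;
    # a prior mean encoding expert knowledge could be subtracted first).
    temp = np.random.RandomState(0).uniform(-5, 30, 200)[:, None]
    freq = 2.5 - 0.003 * temp.ravel() + 0.01 * np.random.RandomState(1).randn(200)

    gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1e-4)).fit(temp, freq)

    def is_anomaly(t_c, observed, n_sigma=3.0):
        """Flag a response outside the predictive band -- including at
        temperatures beyond the observed range, where uncertainty widens."""
        mean, std = gp.predict(np.array([[t_c]]), return_std=True)
        return abs(observed - mean[0]) > n_sigma * std[0]

    print(is_anomaly(35.0, 2.30))  # a temperature outside the training range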
Reliability assessment of centrifugal pumps using warranty-derived field failure data PRESENTER: Andressa Nicolau ABSTRACT. This study presents a reliability assessment of centrifugal pumps utilizing warranty data to analyze field failures, emphasizing the importance of reliability in determining product quality and competitiveness. Many manufacturers, often lacking adequate resources, launch products without proper reliability assessments, leading to elevated repair costs associated with warranty returns. This research capitalizes on warranty data as a valuable and cost-effective means to evaluate product reliability and implement continuous improvement. It specifically examines the reliability of hydraulic pumps produced by a leisure equipment company with over 50 years of experience, employing failure data collected by Customer Service. The Statistical Theory of Reliability was applied through probabilistic modeling up to the first failure, with the Weibull model fitted via Maximum Likelihood estimation and validated through the Anderson-Darling test. The analysis utilized MINITAB® software (closed source) and was replicated in RStudio (open source), providing an accessible alternative for data analysis. Given the uncertainty and censoring inherent in the warranty data, results from the Weibull model were compared to those from the Kaplan-Meier model, underscoring the necessity for refinement in failure data interpretation. Complementary tools such as Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) were employed to identify the underlying causes and effects of failures. Findings revealed a reliability rate of approximately 40% halfway through the warranty period; however, this may not accurately reflect true product reliability due to data limitations. The study advocates for future research to enhance the methodology and explore its applicability for small and medium-sized enterprises (SMEs), ultimately promoting a culture of reliability with significant social impact. |
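A minimal sketch of Weibull maximum-likelihood estimation with right-censored warranty data, the core of the analysis, is given below; the data are synthetic and the 24-month warranty window is an assumption:

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic warranty data: time to first failure in months; units still
    # working at the end of the warranty window are right-censored (event=0).
    t = np.array([2.0, 5.5, 7.1, 11.0, 12.0, 12.0, 3.3, 9.8, 12.0, 6.4])
    event = np.array([1, 1, 1, 1, 0, 0, 1, 1, 0, 1])  # 1 = failure observed

    def neg_loglik(params):
        k, lam = np.exp(params)            # enforce positive shape and scale
        z = t / lam
        # failures contribute the density, censored units the survival function
        ll = event * (np.log(k / lam) + (k - 1) * np.log(z)) - z**k
        return -ll.sum()

    res = minimize(neg_loglik, x0=[0.0, 2.0])
    k_hat, lam_hat = np.exp(res.x)
    print(f"shape={k_hat:.2f}, scale={lam_hat:.1f} months")
    # Estimated reliability halfway through a hypothetical 24-month warranty:
    print(np.exp(-(12.0 / lam_hat) ** k_hat))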
Developing A Business Model For Sustainable Additive Manufacturing In The Oil & Gas Industry PRESENTER: Mandana Jorshari ABSTRACT. The oil and gas industry faces mounting pressure to adopt sustainable practices due to its significant environmental impact, including greenhouse gas emissions and resource depletion. Additive Manufacturing (AM), or 3D printing, offers a transformative solution by minimizing material usage and reducing emissions through localized production. This study aims to develop a sustainable and economically viable business model to integrate AM into the oil and gas value chain. Using a mixed-methods approach, quantitative data analysis was combined with qualitative insights gathered via the Delphi method, including expert opinions from industry specialists. The research evaluates the benefits of AM, such as improved operational efficiency, reduced waste, and supply chain optimization, while identifying the challenges of its adoption in the Norwegian oil and gas sector. Findings indicate that AM holds great potential to enhance sustainability, lower operational costs, and streamline the supply chain. This study provides strategic recommendations and case studies, presenting a framework for transitioning to sustainable manufacturing practices in the oil and gas industry through innovative business models. |
System Health Evaluation for Enhanced Resilience of Telecom Networks PRESENTER: Manelle Nouar ABSTRACT. In today's increasingly digital society, telecommunication networks are critical in maintaining seamless connectivity. These networks must be resilient, demonstrating robustness and rapid recovery from disruptions. However, they face growing challenges, such as accelerated obsolescence and component shortages, exacerbated by supply chain constraints. These risks pose significant threats to network stability and can lead to extended downtime if not correctly managed. Our research focuses on server obsolescence, a key issue for the Orange Group. The aging of server components can severely affect the performance and reliability of telecommunication networks. This work aims to predict servers' overall health by analyzing the obsolescence of critical components such as RAM and CPUs. For instance, an outdated CPU can have a cascading effect on other functional components, leading to performance degradation. Moreover, a server is not necessarily obsolete because some components are outdated. In fact, components do not all have the same lifecycle and may come from different manufacturers, resulting in varying lifecycles across the system. Assessing the server's overall health is essential in determining whether it requires replacement or upgrading. To achieve this, we have implemented proactive maintenance strategies to predict when individual components become obsolete and to forecast the server's overall health. The aim is to model the complex dependencies between these components using knowledge graphs, identifying the components that primarily influence server obsolescence, and integrating AI with advanced methods such as predictive analytics and machine learning. Understanding these relationships enables more accurate failure prediction and proactive maintenance, maximizing server life and reducing unplanned downtime. This approach identifies critical performance thresholds and informs maintenance planning. Ultimately, this work contributes to building a more resilient network infrastructure by predicting server degradation, facilitating timely interventions, and minimizing service interruptions, thus improving operational efficiency and reliability. |
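A toy version of the dependency idea, an obsolescence date propagated through a component graph, can be sketched as follows; the components, edges, and end-of-support years are illustrative assumptions, far simpler than the knowledge graphs and predictive models the abstract describes:

    import networkx as nx

    # Directed edges point from a component to the parts that depend on it;
    # end-of-support (eos) years are invented for illustration.
    g = nx.DiGraph()
    g.add_nodes_from([("CPU", {"eos": 2026}), ("RAM", {"eos": 2028}),
                      ("NIC", {"eos": 2027}), ("server", {"eos": 2032})])
    g.add_edges_from([("CPU", "server"), ("RAM", "server"), ("NIC", "server")])

    def effective_eos(node):
        """A component's effective end-of-support is capped by everything it
        depends on -- the cascading effect described in the abstract."""
        deps = list(g.predecessors(node))
        own = g.nodes[node]["eos"]
        return own if not deps else min(own, min(effective_eos(d) for d in deps))

    print(effective_eos("server"))  # 2026: the CPU drives server obsolescence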
Risks, their occurrence and their prevention through the implementation of hydrogen in the energy-intensive industry PRESENTER: Stefan Lohberger ABSTRACT. Hydrogen shows great promise for reducing the CO2 emissions of energy-intensive production processes. Due to the immense amount of energy required to melt raw materials, for example in glass and aluminum production, switching from natural gas to (green) hydrogen or mixing the two fuels would significantly impact decarbonization and emissions. However, there are plant and process risks associated with the implementation of hydrogen that need to be considered. This applies to hydrogen storage, hydrogen distribution, and the melting furnace. It is also unclear whether and to what extent hydrogen affects the quality of the product. Against this backdrop, the question arises of what attention the decarbonization of energy-intensive processes requires on the plant and process side. To summarize the existing knowledge, the use of hydrogen in the energy-intensive industry, differences from the current state of the art, and the associated risks are first presented. A literature review on hydrogen damage, its mechanisms, and phenomena is carried out to explain the interaction between hydrogen and certain materials. In the next step, material databases and experimental test results for materials relevant to such industries will be used to compile assessments of their compatibility. To ensure the safety of people, machines, and products in the event of a fault, measurement options for hydrogen leaks are discussed, and their operating conditions, advantages, and disadvantages are described. Finally, recommendations for the glass industry are given to ensure plant and process safety as well as economic efficiency. |
P2PNeXt: Advancing Crowd Counting and Localization Using an Enhanced P2PNet Architecture PRESENTER: Thomas Golda ABSTRACT. Accurate crowd counting and localization are essential for ensuring public safety and managing risks in densely populated areas, such as during large events or in urban environments. They enable authorities to monitor and manage large gatherings effectively, thereby preventing overcrowding and potential accidents. In emergency situations, accurate crowd data can facilitate quicker and more efficient responses by enabling the identification of high-density areas that may require immediate attention. From the computer vision perspective, these are crucial capabilities, demanding both precision in object counting and accurate spatial localization of individuals. In this study, we propose an enhancement to P2PNet, a point-based framework for crowd counting, by integrating a modern neural network architecture, ConvNeXt, as the backbone. We explored two primary directions for the backbone integration: utilizing a feature pyramid to combine various feature maps, and employing a single feature map from ConvNeXt, bypassing the feature pyramid. Initial experiments indicated that the single-feature-map approach, particularly with the very first feature map, yielded superior results. However, through a few critical modifications to the feature pyramid module (including bilinear interpolation for upsampling, batch normalization across convolutions, and the inclusion of ReLU in the decoder), the feature pyramid approach ultimately outperformed the single-feature-map method. The revised feature pyramid, especially the first feature map output from the decoder module, achieved the best results across multiple datasets. In this way, our research contributes to the broader understanding of risk assessment and management, offering a robust solution for precise crowd density estimation and localization. |
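A sketch of one top-down decoder step incorporating the three modifications named in the abstract (bilinear upsampling, batch normalization across convolutions, ReLU in the decoder) is given below; the channel sizes are illustrative, not the paper's configuration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FPNDecoderBlock(nn.Module):
        """One top-down step of a revised feature pyramid: bilinear upsampling,
        batch-normalized convolutions, and ReLU activations (channel sizes are
        assumptions, not the paper's exact configuration)."""
        def __init__(self, lateral_ch, top_ch, out_ch=256):
            super().__init__()
            self.lateral = nn.Sequential(nn.Conv2d(lateral_ch, out_ch, 1),
                                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            self.top = nn.Conv2d(top_ch, out_ch, 1)
            self.smooth = nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1),
                                        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

        def forward(self, lateral_feat, top_feat):
            top = F.interpolate(self.top(top_feat), size=lateral_feat.shape[-2:],
                                mode="bilinear", align_corners=False)
            return self.smooth(self.lateral(lateral_feat) + top)

    block = FPNDecoderBlock(lateral_ch=192, top_ch=384)
    out = block(torch.randn(1, 192, 64, 64), torch.randn(1, 384, 32, 32))
    print(out.shape)  # torch.Size([1, 256, 64, 64])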
Climate-proof Urban Infrastructure Systems: Integrating Resilience and Sustainability Principles in Expansion and Management PRESENTER: Srijith Balakrishnan ABSTRACT. The increased severity and frequency of extreme weather events stemming from rising global temperatures have substantial impacts on the built environment and pose significant societal risks. In several cities, such risks are compounded by aging infrastructure, infrastructure interdependencies, and growing stresses from rapid urbanization. In such circumstances, it becomes crucial to align infrastructure asset expansion and management strategies with sustainable and resilient transformations, enabling urban infrastructure systems to meet user expectations and thrive against unanticipated shocks. However, to date, research on sustainability and resilience in the context of infrastructure asset expansion and management has taken independent directions. Further, there is a considerable gap between theory and practice when it comes to climate-adaptive infrastructure. Thus, in this work, we propose an integrated system design framework for expanding and managing critical infrastructure networks in regions with significant climate-induced risks. Specifically, we develop a multi-objective infrastructure network optimization model that aims to develop optimal resource allocation solutions accounting for sustainability (economic viability, environmental efficiency, and social equity) and resilience (robustness, redundancy, resourcefulness, and rapidity). We demonstrate an application of the proposed framework by studying the interdependent stormwater drainage and road networks in Chennai, a city that faces substantial risk of flooding during heavy rainfall. In doing so, we develop strategic insights on how investments in network expansion and management impact overall system performance. These include interventions in the stormwater drainage system, such as adding drainage links, conducting preventive maintenance including repair and desilting, and enhancing existing network capacity; and in the transport network, such as implementing grade separation, installing permeable pavements, and performing preventive road maintenance. While the use case is specific to drainage and transport networks, the study develops insights on the practical challenges of integrating resilience and sustainability aspects into traditional infrastructure asset expansion and management approaches. |
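For intuition, the multi-objective flavour of the allocation problem can be sketched with a toy Pareto enumeration; the candidate interventions, costs, and gains below are invented for illustration and bear no relation to the paper's model:

    from itertools import combinations

    # (name, cost, resilience gain, sustainability gain) -- all illustrative.
    options = [("add drainage link", 40, 8, 3), ("desilting programme", 15, 4, 2),
               ("permeable pavements", 25, 3, 6), ("grade separation", 60, 9, 2),
               ("preventive road maintenance", 20, 5, 4)]
    budget = 80

    def pareto_front(portfolios):
        """Keep portfolios not dominated on (resilience, sustainability)."""
        return [p for p in portfolios
                if not any(q[1] >= p[1] and q[2] >= p[2] and q[1:] != p[1:]
                           for q in portfolios)]

    portfolios = []
    for r in range(1, len(options) + 1):
        for combo in combinations(options, r):
            if sum(o[1] for o in combo) <= budget:
                portfolios.append(([o[0] for o in combo],
                                   sum(o[2] for o in combo),
                                   sum(o[3] for o in combo)))
    for names, res, sus in pareto_front(portfolios):
        print(res, sus, names)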
Methodology for Predicting Remaining Useful Life of Casing in Subsea Wells PRESENTER: David Semwogere ABSTRACT. The corrosion process can be modelled as a stochastic process. We use a Wiener process with an adaptive drift that changes with time-varying operating conditions. The operating conditions that affect the drift, such as temperature, change with the production or shut-in mode of the well; these are modelled as covariates. For the high-grade steel casing alloys specifically used in subsea well construction, pitting corrosion is the predominant corrosion type. The protective chromium oxide layer shields the underlying metal from corrosion. Before the layer is breached, we assume uniform corrosion behavior; however, once this layer is broken, localized pit corrosion ensues. These pit corrosion developments are modelled as a Poisson process γ whose intensity assumes a normal distribution. Since flow in the annulus is considered to be static, the corrosion products in the pits accumulate, slowing down the corrosion. The effect of both the protective chromium oxide layer and the accumulated corrosion products is modelled as an exponential link function depending on the quantity of metal loss, λ(t;θ). The degradation indicator is the remaining metal loss, given by the modified Wiener process equation. The model properties are determined by comparison with metal thickness logs performed on the casing of an actual subsea well and other well lifetime data. Here, the Brownian motion represents the measurement errors in the log measurements. The remaining useful life (RUL) is the time it takes for the remaining metal thickness to reach the metal thickness of the burst and collapse limits. These limits depend on the expected operations on the well. This model is more realistic and highly relevant for most mature subsea wells, which are not equipped with annular sensors, while at the same time allowing monitoring data to be incorporated. |
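A minimal simulation sketch of the described construction, a Wiener process whose drift switches with the operating mode and is damped through an exponential link in the accumulated metal loss, with RUL read off as the first passage time, is given below; all parameter values are assumptions:

    import numpy as np

    rng = np.random.default_rng(7)
    dt, t_max = 0.1, 60.0      # years
    limit = 6.0                # allowable metal loss (mm) before the burst limit
    theta, sigma = 0.4, 0.15   # link parameter and diffusion (assumed values)

    def sample_rul(mu_production=0.35, mu_shut_in=0.10, p_production=0.7):
        """Metal loss as a Wiener process whose drift depends on the operating
        mode and is damped exponentially as corrosion products accumulate."""
        loss, t = 0.0, 0.0
        while t < t_max:
            mu = mu_production if rng.random() < p_production else mu_shut_in
            drift = mu * np.exp(-theta * loss)   # exponential link in metal loss
            loss += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            loss = max(loss, 0.0)
            t += dt
            if loss >= limit:
                return t                          # first passage time = RUL
        return t_max

    ruls = [sample_rul() for _ in range(2000)]
    print(np.mean(ruls), np.percentile(ruls, 10))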
Remaining useful life prediction for train bearings based on a BiLSTM-KAN PRESENTER: Yiwei Zheng ABSTRACT. As the bearing is one of the key components of the train bogie, accurate remaining useful life (RUL) prediction and timely maintenance play a vital role in the safe and reliable operation of the train. The operating environment of rail transit trains is complex, and the vibration signals of train bearings are non-linear and non-stationary. Meanwhile, the safety requirements of rail transportation systems are demanding, and time-series RUL prediction of bearings must cope with long sequences and multiple data sources. For the complex degradation process of rail transit train bearings, a hybrid RUL prediction method combining bidirectional long short-term memory (BiLSTM) networks and Kolmogorov-Arnold Networks (KAN) is proposed. In the BiLSTM network, a KAN replaces the fully connected layer, improving parameter utilization and enhancing the ability to extract nonlinear pattern information from the BiLSTM hidden state. Compared with traditional time-series prediction methods, the proposed method offers better prediction accuracy and stronger interpretability, making it more suitable for predicting train bogie bearing RUL in high-safety-requirement scenarios. |
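A compact sketch of the architecture is given below; the KAN layer here uses Gaussian basis functions as a simplified stand-in for the spline parameterization, and all sizes are illustrative:

    import torch
    import torch.nn as nn

    class KANLayer(nn.Module):
        """Minimal KAN-style layer: each input-output edge gets its own
        learnable univariate function, here a weighted sum of Gaussian bases
        (a simplified stand-in for the spline-based KAN)."""
        def __init__(self, in_dim, out_dim, n_basis=8):
            super().__init__()
            self.centers = nn.Parameter(torch.linspace(-1, 1, n_basis),
                                        requires_grad=False)
            self.weights = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

        def forward(self, x):                       # x: (batch, in_dim)
            basis = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2) / 0.1)
            return torch.einsum("bik,oik->bo", basis, self.weights)

    class BiLSTMKAN(nn.Module):
        def __init__(self, n_features=4, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                                bidirectional=True)
            self.head = KANLayer(2 * hidden, 1)  # KAN replaces the FC layer

        def forward(self, x):                       # x: (batch, time, features)
            out, _ = self.lstm(x)
            return self.head(torch.tanh(out[:, -1]))  # bounded input for the bases

    model = BiLSTMKAN()
    print(model(torch.randn(8, 100, 4)).shape)      # torch.Size([8, 1])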
Paving the Way to Inequality: Evidence from San Francisco on the Role of Pavement Maintenance in Social Vulnerability PRESENTER: Jingran Sun ABSTRACT. Urban infrastructure, particularly roads, plays a crucial role in shaping social dynamics and resilience in cities, yet the long-term effect of their maintenance on social vulnerability is not well understood. While much research focuses on infrastructure disruptions and their impact on societies, this study investigates how prolonged neglect of road systems can influence social vulnerability in cities, using San Francisco as a case study. We hypothesize that keeping roads in poor condition for a prolonged duration diminishes a region's economic attractiveness, leading to lower property values and accelerating social segregation. To explore this, we combined panel datasets on street pavement conditions, socioeconomic vulnerability, and property prices at the census tract level, spanning the years 2010 to 2023. By constructing appropriate regression models, we examined the impact of road conditions on social vulnerability over time. Our findings reveal that mismanagement of urban streets is statistically correlated with social segregation, as neglected roads not only impede economic activities but also result in increased economic segregation in cities. This deterioration in infrastructure not only diminishes local resilience but also worsens social vulnerability, leading to further societal disparity. The findings underscore the importance of infrastructure equity as one of the driving forces for creating resilient and inclusive communities in cities. |
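One standard way to implement such a panel regression is two-way fixed effects; the sketch below uses synthetic data and assumed variable names (e.g. a pavement condition index 'pci'), so it illustrates the estimation strategy rather than the paper's exact specification:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the census-tract panel (2010-2023).
    rng = np.random.default_rng(0)
    tracts, years = 50, range(2010, 2024)
    df = pd.DataFrame([(t, y, rng.uniform(40, 90))
                       for t in range(tracts) for y in years],
                      columns=["tract", "year", "pci"])
    df["svi"] = 0.8 - 0.004 * df["pci"] + rng.normal(0, 0.05, len(df))

    # Two-way fixed effects: tract and year dummies absorb time-invariant tract
    # traits and common shocks, isolating the pavement-condition association.
    model = smf.ols("svi ~ pci + C(tract) + C(year)", data=df).fit()
    print(model.params["pci"], model.pvalues["pci"])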
Honey traceability in Italy: consumers’ perception and information needs PRESENTER: Stefania Crovato ABSTRACT. The European Commission has recently identified honey fraud and lack of traceability as critical issues in the apiculture sector. Developing innovative traceability systems is highly recommended to improve food safety controls throughout the production process and enhance transparency, giving consumers better access to information. Understanding consumers’ preferences and perceptions is crucial for designing effective communication tools tailored to their needs, increasing satisfaction, and building trust in the production processes. This study aimed to investigate Italian consumers’ preferences regarding honey purchases and their need for information related to product traceability. The research was developed in two phases using both quantitative and qualitative social research methodologies. In the first phase, a quantitative online survey was conducted to assess consumers' expectations and knowledge of honey and beekeeping practices. In the second phase, consumers tested a traceability system implemented through the European project BPRACTICES. This system, based on the QR Code/RFID technology, provided a web page with detailed honey-related information suggested by beekeepers. Consumers’ satisfaction with the system was evaluated using focus group discussions and paper and pencil interviews. Additional insights into consumers’ information needs and preferences were gathered. The findings highlight consumers’ growing demand for transparency, quality, and sustainability in the honey sector. Participants expressed a desire for more information about honey origins and production practices and stated they were willing to pay a higher price for a package of honey if it offered such details. The traceability system was well-received, with participants appreciating the opportunity to learn more about beekeepers and their practices. This research underscores the importance of meeting consumers' increasing expectations for traceability, which may boost confidence in the authenticity and quality of honey products. |
TechMap: Measuring digital literacy PRESENTER: Sílvia Luís ABSTRACT. The goal of this poster is to present the ongoing project TechMap. Digital technologies have become an integral part of our daily lives. Accessing medical prescriptions, making bank transfers, and renewing official documents are examples of services that tend to be carried out digitally. Those who do not “go digital” typically face many obstacles and have minimal alternative services. The COVID-19 pandemic was an alarming example. Faced with increasing numbers of infections, public health institutions relied on digital services to manage services and communicate with citizens, assuming that everyone had the means and proficiency to access such information and to perform the required actions in case of infection. As a result, issues of digital exclusion and digital inequality have been the focus of growing attention and concern. Measuring citizens’ digital literacy is of utmost importance to ensure they have the minimum level required to access the services and information provided digitally. Back in 2015, when UNESCO set the 2030 Agenda, this need was underlined, and one of the agenda indicators was, precisely, to assure that citizens achieve minimum literacy skills; in particular, “Sustainable Development Goal indicator 4.4.2: Percentage of youth/adults who have achieved at least a minimum level of proficiency in digital literacy skills”. Despite this, no adequate measures were developed and, in 2019, UNESCO made a call for the creation of such a measure. Until now, no research team or company has focused on developing an adequate measure to assess the minimum level of digital literacy as addressed by the UNESCO framework. The TechMap project is developing and validating a self-report measure to assess minimum digital literacy. TechMap contributes to innovating current scholarly work on digital literacy by introducing a new multidimensional, mixed-method approach based on frameworks and guidelines established by UNESCO. |
Citizens’ risk perceptions and protective behaviours relating to mosquito-borne diseases in North-Eastern Italy PRESENTER: Giulia Mascarello ABSTRACT. Mosquitoes can transmit diseases such as West Nile, Dengue, and Zika, with serious impacts on global health. Health authorities adopt prevention and containment measures to limit their proliferation, and citizens play an important role in risk prevention. Understanding citizens' perceptions of these insects and the behaviours adopted to protect themselves from bites is crucial for improving risk communication activities and promoting correct practices. The present study aimed to investigate the risk perceptions, knowledge, and behaviours of residents in North-Eastern Italy regarding mosquitoes and to profile respondents according to perceived risks. The survey was conducted through a semi-structured questionnaire administered via CAWI (Computer Assisted Web Interviewing) and CATI (Computer Assisted Telephone Interviewing), involving a sample of 1001 respondents. The results highlighted that citizens have good knowledge of the role of mosquitoes as vectors of various pathogens and of the correct measures to adopt for health protection. Some knowledge gaps emerged regarding the ecology of mosquitoes and the diseases that can be contracted in the area of residence. The perception of individual risk is higher among women, those with a high school diploma, and residents of lowland areas. Respondents with a higher perception of risks adopt protective behaviours but show a lower level of knowledge. The social data collected, integrated with epidemiological knowledge of mosquitoes and pathogens in North-Eastern Italy, allowed the design of targeted control actions at a local level and the strengthening of tailored risk communication interventions, starting from the knowledge gaps and the population groups most at risk. |
Ensuring safety over time: The role of risk-based management, socio-technical performance, and contextual changes PRESENTER: Yara Kharoubi ABSTRACT. Flooding has always posed a significant risk to the low-lying Netherlands, and it remains a challenge exacerbated by rising sea levels and more severe extreme weather conditions due to climate change. To mitigate this risk, the Netherlands has developed an extensive system of flood defences, including dikes, dunes, and storm surge barriers. These defences are subject to strict safety standards based on acceptable failure probabilities. This presentation focuses on storm surge barriers and their robust management over time to achieve the required safety levels. Storm surge barriers protect against extreme storm events (ranging from 100 to 10,000-year occurrences) as their gates close to prevent the propagation of high water levels. Due to this critical role in such low-occurrence events, storm surge barriers are designed to meet high reliability requirements over their long lifetime. For this purpose, Rijkswaterstaat, the Directorate General for Public Works and Water Management, developed a specific risk-based asset management approach. This approach aims to ensure the barriers meet stringent safety standards specifying acceptable failure probabilities as requirements. Nonetheless, the management of storm surge barriers depends on the approach's effective application in practice and its adaptation to contextual changes over time (e.g., sea-level rise pressuring the system, operating teams, and maintenance). Accordingly, the presentation covers three main points: (1) the development of this approach for storm surge barriers, traced within a framework that supports flood authorities in managing their critical assets; (2) the application of the approach in practice, revealing the role of social and technical aspects in effectively managing these assets; and (3) an approach to analyse the influence of contextual changes on the asset and its management, enabling adaptations that sustain performance at the required levels. |
What is risk? Towards an ontology that would mitigate risks related to the information and communication technologies ABSTRACT. Information and communication technologies (ICT), particularly in their increasingly autonomous AI forms, are widely acknowledged as risks. Much of the research on minimizing ICT-related risks focuses on hypothetical super-AI and/or alignment with human values. There are, however, a number of other underappreciated ICT problems, some of them related to how Big Data are un/structured. Some of the solutions to this problem come from ontology: a discipline providing a controlled vocabulary for entities and their relations within a domain. The proliferation of various ontologies, however, also brings problems: ambiguities, circular and inadequate definitions, use-mention errors, confusions of reality with our thoughts or perceptions of reality, human and technical idiosyncrasies, duplicities, data silos, etc. A promising approach to address these problems has emerged: a top-level ontology, certified through a long, voluntary peer-review process by a community of experts and stakeholders, that has been successfully applied to dozens of specific domains (ISO/IEC 21838-1:2021 and ISO/IEC 21838-2:2021, based on the Basic Formal Ontology (BFO)). However, before dealing with risk in a disciplined way conformant with the ISO standards based on BFO, we need a good account of what risk is. This is by no means a trivial task, and here we attempt to address it via a discussion of both the ontology and risk science literatures. |
Graph-Based Modeling for Mitigating Cascading Cyber-Attacks in Interconnected Industrial Control Systems PRESENTER: Marios Samanis ABSTRACT. The research focuses on the cascading effects of cyber-attacks on industrial control systems (ICS) and the critical interdependencies within interconnected infrastructures. This study emphasizes the modeling and simulation of cyber-attacks to reveal how attackers can exploit system interdependencies to maximize disruption. By analyzing how interconnected systems, such as water and power networks, rely on each other, the research uncovers vulnerabilities that arise from their mutual dependencies. The goal is to enhance resilience by developing robust defense strategies to mitigate cascading failures caused by cyber-attacks. To model these interdependencies, graph models were developed where system assets are represented as nodes, and their interdependencies as edges. Simulations using graph convolutional networks were conducted to evaluate how attackers could optimize strategies to exploit unseen vulnerabilities. This approach can be extended to incorporate real-world vulnerability and exploit information, allowing for a more granular identification of asset criticality. These models provide valuable insights into both attack scenarios and defense mechanisms, enabling a better understanding of the systemic risks posed by cyber-attacks, especially in complex ICS estates composed of diverse equipment types. The research also explores the use of testbeds to simulate cyber-physical systems, essential for testing and verifying the behavior, performance, and security of these systems. Three testbed approaches were adopted: two based entirely on software simulations and another hybrid approach using software to simulate industrial processes while employing real PLCs for process control. While physical testbeds offer high-fidelity simulations, they are resource-intensive and impractical for testing extreme conditions. Software simulations, in contrast, provide an efficient, cost-effective, and flexible alternative. In conclusion, this research provides a framework to anticipate cascading failures caused by cyber-attacks and offers strategies to bolster the resilience of critical infrastructure systems against such threats. |
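A minimal sketch of the graph representation, with a deliberately simple reachability rule standing in for the graph-convolutional analysis, is shown below; the assets and dependency edges are illustrative assumptions:

    import networkx as nx

    # Interdependency graph: an edge u -> v means asset v depends on asset u.
    g = nx.DiGraph([("power_substation", "scada_server"),
                    ("scada_server", "plc_pump"),
                    ("plc_pump", "water_treatment"),
                    ("power_substation", "plc_valve"),
                    ("plc_valve", "water_treatment")])

    def cascade(compromised):
        """Everything reachable from the initially compromised assets fails --
        a toy propagation rule standing in for the GCN-based analysis."""
        failed = set(compromised)
        for node in compromised:
            failed |= nx.descendants(g, node)
        return failed

    # Which single compromise maximizes disruption? (the attacker's view)
    for node in g.nodes:
        print(node, len(cascade({node})))

Real vulnerability and exploit data would replace the all-or-nothing edges with conditional failure probabilities, which is where the criticality ranking described in the abstract comes from.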
Counterfeit electronics in Industry 4.0: risks and detection PRESENTER: Giovanna Mura SESSION: Human-Machine Interaction in Industry 4.0: Ergonomics, Security, and Counterfeiting ABSTRACT. Counterfeit electronic components are fraudulent copies, imitations, or substitutes marked as genuine, or parts altered without the legal right to do so, intended to mislead or defraud. The presence of counterfeit electronics in critical systems can cause reliability risks and human safety and security problems. Counterfeit devices have been detected in defence systems, radiation detectors, secure communications devices, avionic and space applications, medical devices, high-speed train brakes, and airport landing light systems, where failures could have been catastrophic. In critical sectors, the challenge of avoiding counterfeit parts and materials becomes particularly acute when obsolescence necessitates the use of parts from sources other than the original manufacturer. The original manufacturers may no longer produce the required parts, or their authorised wholesalers may not have them in stock. Industry 4.0 can be easily affected by counterfeiting, considering the massive use of IoT devices and the large amounts of generated data that could be disclosed, causing safety risks and know-how loss. Counterfeit electronics, if not detected, can lead to software, hardware, network and information security problems and disastrous system breakdowns during field operations. Counterfeiting is a complex and variable activity, making detection a challenging task. The range of counterfeit electronics, from recycled to tampered parts, adds to this complexity, highlighting the need for advanced detection methods. This work aims to raise awareness about the risks of using electronics from unauthorised suppliers, which could lead to the procurement of counterfeit devices. It also provides an overview of methodologies to mitigate these risks when procuring components from non-certified suppliers. Importantly, it proposes a non-destructive detection approach that relies on electrical measurements and machine-learning algorithms that could potentially be trained in field operation, offering a potential low-cost solution to this pervasive problem, at least for simple electronic devices. |
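The proposed detection approach might be prototyped along the following lines; the electrical-measurement features and class distributions below are synthetic assumptions, and a random forest stands in for whatever classifier the authors employ:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic electrical-measurement features per device (e.g. leakage
    # current, threshold voltage, I-V slope); real features would come from
    # bench measurements of genuine and suspect parts.
    rng = np.random.default_rng(3)
    genuine = rng.normal([1.0, 0.7, 2.0], 0.05, size=(300, 3))
    counterfeit = rng.normal([1.1, 0.6, 1.8], 0.12, size=(300, 3))  # shifted, noisier
    X = np.vstack([genuine, counterfeit])
    y = np.array([0] * 300 + [1] * 300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")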
Fatigue reliability assessment of structural details using a Bayesian Network and spectral approach ABSTRACT. Fatigue cracking is a critical form of deterioration affecting ship and offshore structures, often leading to significant safety hazards and economic consequences. A comprehensive assessment of fatigue behavior is essential to ensure the reliability and integrity of these structures. However, estimating fatigue life is inherently uncertain and complex due to (i) the randomness in the encountered sea states, (ii) the variability in material properties, and (iii) the effects of inspection and maintenance interventions. To address these complexities, this paper presents a probabilistic risk assessment framework that integrates a sampling-based crack growth model with Bayesian networks. The crack growth model addresses factors (i) and (ii), employing a spectral approach to characterize the uncertainty in fatigue loading; in addition, the sampling-based method reflects the effect of the loading sequence experienced by the structures on fatigue accumulation. The Bayesian network model maps the relationships among factors (i)-(iii) and quantifies the associated fatigue-induced risks. This approach allows fatigue predictions to be updated with real-time data from in-service inspection and maintenance activities, thus improving the accuracy of fatigue life estimation. A numerical example is provided to demonstrate the application of the proposed framework. It shows that by effectively addressing factors (i)-(iii) within a dynamic risk assessment framework, this work directly links fatigue life estimation to decision-making processes, enabling efficient intervention planning for ship and offshore structures. |
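As an illustration of the sampling-based crack growth idea, the sketch below runs a block-wise Paris-law simulation with random material parameters and random stress ranges drawn per load block, so the loading sequence matters. All parameter values (Paris constants, stress-range distribution, crack sizes) are hypothetical stand-ins, and the spectral load characterization is replaced here by simple random draws:

```python
# Block-wise Paris-law crack growth with random parameters; values are
# hypothetical stand-ins (units: MPa*sqrt(m) for dK, m/cycle for da/dN).
import numpy as np

rng = np.random.default_rng(42)
n_samples = 2000
a0, a_crit = 5e-3, 0.02            # initial / critical crack size (m)
Y, m = 1.0, 3.0                    # geometry factor, Paris exponent
blocks, cycles_per_block = 200, 1000

failed = np.zeros(n_samples, dtype=bool)
for i in range(n_samples):
    C = np.exp(rng.normal(np.log(1e-11), 0.4))   # lognormal Paris coefficient
    a = a0
    for _ in range(blocks):
        dS = rng.weibull(2.0) * 100.0            # random stress range (MPa)
        dK = Y * dS * np.sqrt(np.pi * a)         # stress intensity range
        a += C * dK**m * cycles_per_block        # crack growth in this block
        if a >= a_crit:
            failed[i] = True
            break

print("estimated P(failure within 200 blocks):", failed.mean())
```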
Multimodal Transport Path Optimization for Perishable Cargos: Considering Refrigeration Failure and Uncertain Railway Loading Demand PRESENTER: Qianli Ma ABSTRACT. The transportation of perishable cargo through existing multimodal freight networks has increased significantly. The strict quality requirements of perishable goods have driven a focus on efficient transshipment of refrigerated containers, and refrigeration failure at intermediate nodes strongly accelerates cargo quality degradation. This research proposes a multimodal transportation path optimization model to enable efficient transportation, with the objective of minimizing both average cost and quality degradation. Considering the fixed schedules of waterway and railway transportation, the impact of refrigeration supply and failure on quality degradation is explored. The non-dominated sorting genetic algorithm II (NSGA-II) is used to determine the optimal paths. A numerical experiment is conducted for the import of apples from the Port of Antwerp to Lanzhou, China. Results indicate significantly higher quality degradation under refrigeration failure than under the normal supply state, and that limiting transfer time at the intermediate node to less than 7% of the total time benefits the freshness of perishable products. |
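As a concrete illustration of the NSGA-II machinery, the sketch below implements its core non-dominated sorting step over candidate paths scored on the paper's two objectives (average cost and quality degradation); the candidate paths and their scores are hypothetical placeholders:

```python
# Pareto (non-dominated) filtering, the core step of NSGA-II selection.
import numpy as np

def non_dominated(points):
    """Boolean mask of Pareto-efficient rows (both objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    efficient = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        if efficient[i]:
            # Any point worse-or-equal in all objectives and strictly worse
            # in at least one is dominated by p.
            worse = np.all(pts >= p, axis=1) & np.any(pts > p, axis=1)
            efficient[worse] = False
    return efficient

# Columns: [average cost, quality degradation] for six candidate paths.
scores = np.array([[10.0, 0.30], [12.0, 0.20], [9.0, 0.45],
                   [11.0, 0.25], [14.0, 0.18], [10.5, 0.40]])
print("Pareto-optimal paths:", np.where(non_dominated(scores))[0])
```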
A 10-year analysis of global maritime accidents: from a spatial and temporal perspective PRESENTER: Chengpeng Wan ABSTRACT. Maritime accidents pose challenges to global shipping safety, affecting human lives and economic activities. Hotspot identification and analysis of maritime accidents is crucial for accident prevention, as it enables the identification of high-risk areas and supports targeted safety interventions. However, accurately pinpointing high-risk areas and implementing effective safety measures remain persistent challenges for the maritime industry worldwide. Based on global ship accident data from 2013 to 2022, this study employs advanced analytical techniques, including Kernel Density Estimation (KDE) and Emerging Hot Spot Analysis (EHA), to identify maritime accident hotspots. The KDE method, which does not consider the temporal dimension, is used to explore the spatial distribution characteristics of ship accidents in two dimensions. In contrast, the GIS-based three-dimensional spatiotemporal analysis method (i.e., emerging hot spot analysis) considers both spatial and temporal factors, allowing for a dynamic analysis of the spatiotemporal evolution of hot spots. The combination of KDE and EHA enables a comprehensive analysis of accident hotspots. The results reveal that Europe, the English Channel, and the Strait of Malacca have consistently been accident hotspot regions. Additionally, the Mediterranean, the Singapore Strait, and the waters around China and Japan are areas where shipping accidents have continued to emerge as significant safety concerns. These regions have been identified as requiring particular attention in maritime safety management. Based on the EHA, this study also provides a detailed classification of hotspot patterns, enabling a comprehensive understanding of the spatio-temporal evolution of these accidents. Moreover, the study highlights the importance of implementing precise, region-specific safety interventions to proactively prevent accidents and enhance overall maritime safety. |
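For intuition about the KDE step, the sketch below applies scipy's gaussian_kde to synthetic accident coordinates clustered around two of the regions named above; the data are random placeholders, not the 2013-2022 accident records used in the study:

```python
# Two-dimensional kernel density estimation on synthetic accident positions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Synthetic (longitude, latitude) pairs, clustered in two hotspot areas.
lon = np.concatenate([rng.normal(1.5, 0.8, 300), rng.normal(101.0, 1.0, 200)])
lat = np.concatenate([rng.normal(50.5, 0.5, 300), rng.normal(2.5, 0.7, 200)])

kde = gaussian_kde(np.vstack([lon, lat]))
# Density at a query point: higher values indicate accident hotspots.
print("density near the English Channel:", kde([[1.4], [50.3]])[0])
print("density near the Strait of Malacca:", kde([[101.2], [2.4]])[0])
```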
Effectiveness Analysis for Unmanned Emergency Supplies Delivery System of Systems Based on FDNA PRESENTER: Zhenkai Hao ABSTRACT. Rapid and efficient disaster management is crucial following natural disasters, with emergency supplies delivery being a pivotal task that directly impacts the survival and safety of affected individuals. As unmanned systems are increasingly applied to emergency supplies delivery, it is essential to manage and allocate the various equipment from a systematic perspective to leverage the advantages of unmanned systems. A key research problem is evaluating the effectiveness of the emergency supplies delivery system of systems (ESDSoS). This paper assesses the effectiveness of the ESDSoS using a functional dependency network analysis (FDNA) method and further analyzes directions in which unmanned systems could enhance the effectiveness of this system. Case studies are provided to illustrate the points made in the paper. |
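The sketch below illustrates one widely cited two-parameter FDNA formulation, in which each dependency carries a strength parameter alpha and a criticality parameter beta, and a receiver node's operability is the minimum of a strength-weighted average over its feeders and the tightest criticality constraint. The network topology and all parameter values are hypothetical placeholders, not the ESDSoS model of the paper:

```python
# Operability propagation under a two-parameter FDNA formulation.
import numpy as np

# feeders[node] = list of (feeder_node, alpha, beta); operability is 0-100.
feeders = {
    "dispatch_center": [],
    "cargo_drone":     [("dispatch_center", 0.6, 30.0)],
    "ground_robot":    [("dispatch_center", 0.4, 50.0)],
    "delivery_sos":    [("cargo_drone", 0.7, 20.0), ("ground_robot", 0.5, 40.0)],
}

def operability(node, levels):
    deps = feeders[node]
    if not deps:
        return levels[node]             # leaf nodes take their own level
    # Strength-of-dependency term: weighted average over feeders.
    sod = np.mean([a * operability(f, levels) + (1 - a) * 100 for f, a, _ in deps])
    # Criticality-of-dependency term: tightest feeder constraint.
    cod = min(operability(f, levels) + b for f, _, b in deps)
    return min(sod, cod)

# Degrade the dispatch center to 40% and see the system-of-systems effect.
print(operability("delivery_sos", {"dispatch_center": 40.0}))
```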
The Impact of Disruptive Versus Non-Disruptive Climate Protests on Climate Change Attitudes and Policy Support ABSTRACT. Recent climate protests have increasingly employed disruptive tactics to draw public attention, raising questions about their effectiveness in driving climate action. Using a survey experiment (N = 986), this exploratory study finds that while disruptive protests elicited a stronger emotional response, they did not lead to increased support for climate action. Instead, a potential for increased polarization emerged, with U.S. conservatives showing greater favorability for the status quo following exposure to disruptive protests. Conversely, exposure to non-disruptive protests increased climate policy support and decreased climate change fatalism, suggesting that non-disruptive approaches may be more conducive to fostering positive climate attitudes and policy preferences. |
Research on Resilience and Security of Cluster Unmanned Systems Based on Backup Complementary Strategy PRESENTER: Lin Shen ABSTRACT. With the continuous development of technology, traditional large-scale unmanned systems exhibit significant limitations when used for entertainment applications in urban airspace, including low visibility and insufficient visual impact. Moreover, individual products may be unavailable, unreliable, or even completely ineffective in environments with numerous electronic devices. Cluster unmanned systems based on backup complementary strategies can dynamically optimize their usage strategies according to mission conditions, and their strong resilience lets them maintain better performance and cost-effectiveness in scenarios with functional variability and complex modes. Cluster unmanned systems have therefore become a new application paradigm, widely used in corporate events and festivals, with a significant impact on future usage concepts. This paper reviews the research progress of engineered resilient systems and of cluster unmanned systems centered on drones, and analyzes the meanings of, and interrelationships between, resilience and security in cluster unmanned systems. On this basis, the paper uses Markov-model analysis to verify the resilience of cluster unmanned systems and proposes corresponding safeguard strategies, providing theoretical support for the further development and application of related technologies. |
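As a minimal illustration of the Markov analysis, the sketch below models a cluster with a backup-complementary strategy as a three-state chain (nominal, degraded with backups covering, failed) and computes the probability that the mission remains operable after a number of time steps; the transition probabilities are hypothetical placeholders:

```python
# Three-state Markov chain for cluster availability under a backup strategy.
import numpy as np

# States: 0 = nominal, 1 = degraded (backups covering), 2 = failed.
P = np.array([
    [0.95, 0.04, 0.01],   # nominal stays nominal or degrades
    [0.30, 0.60, 0.10],   # degraded recovers via backups or fails
    [0.00, 0.00, 1.00],   # failure is absorbing
])

state = np.array([1.0, 0.0, 0.0])     # start in the nominal state
for _ in range(50):
    state = state @ P                 # propagate one time step

# Probability the cluster is still delivering its function after 50 steps.
print("P(mission still operable):", state[0] + state[1])
```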
Bearing fault diagnosis based on lifelong learning under cross operating conditions PRESENTER: Shixin Jiang ABSTRACT. Rolling bearings are core components widely used in industry, and their failure poses a serious threat to the safety of machines and staff. At present, time-varying operating conditions and catastrophic forgetting pose great challenges for bearing fault diagnosis: a model maintains good performance only under the same conditions as in its offline training phase, and if it is trained directly on data acquired from a new operating condition, it suffers catastrophic forgetting and performs poorly on previous operating conditions. To solve these problems, this paper proposes a bearing fault diagnosis method based on lifelong learning, implemented with a Residual Network with Convolutional Block Attention Module (Res-CBAM) and Elastic Weight Consolidation (EWC). As the basic fault diagnosis model, Res-CBAM adaptively extracts fault features. The introduction of elastic weight consolidation lets the model retain the feature extraction ability learned on past conditions while learning the fault features of a new condition, thereby solving the catastrophic forgetting problem. Experimental results show that the proposed method performs well in fault diagnosis across operating conditions. |
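For concreteness, the sketch below shows the EWC mechanism in PyTorch: after training on one operating condition, a diagonal Fisher estimate anchors the parameters while the model trains on the next condition. The tiny fully connected model, the synthetic data, and the penalty weight are hypothetical placeholders standing in for the Res-CBAM network and real vibration data:

```python
# Elastic Weight Consolidation across two operating conditions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))
loss_fn = nn.CrossEntropyLoss()

def fisher_diagonal(model, data, labels):
    """Estimate the diagonal Fisher information from squared gradients."""
    model.zero_grad()
    loss_fn(model(data), labels).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

# After condition A: remember parameter values and their importance.
x_a, y_a = torch.randn(128, 64), torch.randint(0, 4, (128,))
fisher = fisher_diagonal(model, x_a, y_a)
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

def ewc_loss(model, data, labels, lam=100.0):
    """Task loss on the new condition plus the EWC quadratic penalty."""
    penalty = sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return loss_fn(model(data), labels) + (lam / 2) * penalty

# One optimization step on condition B without forgetting condition A.
x_b, y_b = torch.randn(128, 64), torch.randint(0, 4, (128,))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
ewc_loss(model, x_b, y_b).backward()
opt.step()
```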
Fatigue State detection of navigational officer ABSTRACT. The fatigue experienced by navigational officers unfolds in intricate temporal sequences with interdependencies. Addressing the prevalent deficiency of fragmenting fatigue states in most seafarer fatigue investigations, a novel approach for fatigue detection of navigational officers is proposed: the Gauss Mixture Hidden Markov Model (GM-HMM). This approach identifies the fatigue dynamics of navigational officers by utilizing electroencephalography (EEG) data obtained from a bridge simulator, alongside the dynamic characteristics inherent in ship-navigation fatigue. The GM-HMM operates within a probabilistic framework to discern the fatigue states of navigational officers, capitalizing on the physiological insights gleaned from EEG signals. Empirical evidence underscores the heightened efficacy of the GM-HMM in fatigue detection compared with a traditional logistic regression model. Consequently, this model not only enhances the efficacy of shipborne real-time fatigue warning systems but also mitigates the maritime perils stemming from human factors, which might otherwise precipitate accidents of considerable consequence. |
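As an illustration, the sketch below fits a two-state Gaussian-mixture HMM to synthetic feature sequences, assuming hmmlearn's GMMHMM; the "EEG features" are random placeholders for the band-power features one would extract from real bridge-simulator recordings:

```python
# Two hidden fatigue states (alert / fatigued) inferred by a GMM-HMM.
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(1)
# Synthetic sequence: alert segments then fatigued segments; two stand-in
# EEG features per sample (e.g. theta/beta power ratio, alpha power).
alert = rng.normal([0.8, 1.0], 0.1, size=(300, 2))
fatigued = rng.normal([1.6, 1.8], 0.2, size=(300, 2))
X = np.vstack([alert, fatigued])

model = GMMHMM(n_components=2, n_mix=3, covariance_type="diag",
               n_iter=50, random_state=1)
model.fit(X)
states = model.predict(X)          # most likely fatigue-state sequence
print("inferred state changes at index:", np.where(np.diff(states) != 0)[0])
```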
The need for systematic approaches in risk assessment of safety-critical AI-applications in machinery PRESENTER: Franziska Wolny ABSTRACT. The integration of artificial intelligence (AI) into safety-critical machinery applications in industrial environments presents substantial challenges for conformity assessment and safety certification. Unlike traditional control systems, AI's data-driven nature and non-trivial behaviour complicate the assurance of compliance with established safety standards. This contribution highlights the specific challenges with respect to the new European Machinery Regulation (2023) and the AI Act (2024). We present corresponding developments in standardisation and research and discuss to what extent safety cases can underpin the safety evaluation and conformity assessment of today's industrial applications. |
Ratcheting behavior and fatigue life prediction of austenitic stainless steel throughout its entire life cycle PRESENTER: Jian Li ABSTRACT. Austenitic stainless steel is stainless steel with an austenitic structure at room temperature; it is non-magnetic, has high toughness and plasticity and strong rust and corrosion resistance, and is widely used for pressure components in pressure vessels, pressure pipelines, nuclear power systems, aerospace, and other fields. In actual service, pressure components are often subjected to the combined action of temperature and cyclic load, which can produce not only ratcheting deformation but also time- and temperature-dependent creep damage, resulting in creep-fatigue interaction that degrades the service life and structural reliability of the component. In this study, the ratcheting behavior and fatigue life prediction of austenitic stainless steel were investigated throughout its entire life cycle, and a corresponding constitutive model was established to predict fatigue life under complex loads. This study provides a basic theory for evaluating the ratcheting behavior of austenitic stainless steel over its whole service environment. |
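As a minimal illustration of the kind of constitutive ingredient such models build on, the sketch below implements a 1D Armstrong-Frederick kinematic-hardening update under a stress-controlled, non-symmetric cycle, whose non-zero mean stress drives ratcheting. The material parameters, the load cycle, and the simple fixed-point stress update are hypothetical placeholders, not the constitutive model developed in the study:

```python
# 1D Armstrong-Frederick ratcheting sketch; all values are hypothetical.
import numpy as np

E, sig_y = 200e3, 250.0            # elastic modulus and yield stress (MPa)
C, gamma = 60e3, 300.0             # kinematic hardening / dynamic recovery

eps_p, X = 0.0, 0.0                # plastic strain and backstress
ratchet = []
for cycle in range(20):
    # Stress-controlled, non-symmetric cycle: mean stress drives ratcheting.
    for sigma in (350.0, -200.0):
        # Simple fixed-point iteration pushing the stress state back onto
        # the yield surface f = |sigma - X| - sig_y = 0.
        for _ in range(200):
            f = abs(sigma - X) - sig_y
            if f <= 1e-6:
                break
            d_eps_p = np.sign(sigma - X) * 0.5 * f / (E + C)
            eps_p += d_eps_p
            # Armstrong-Frederick rule: dX = C*d_eps_p - gamma*X*|d_eps_p|
            X += C * d_eps_p - gamma * X * abs(d_eps_p)
    ratchet.append(eps_p)

# Per-cycle accumulated (ratcheting) strain stabilizes after a few cycles.
print("ratcheting strain per cycle:", np.diff(ratchet)[-3:])
```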
Uncertainty-based flood risk assessment in rainfall-runoff processes PRESENTER: Chengjun Cao ABSTRACT. This study examines the propagation of uncertainty from rainfall estimation to urban flooding, with the aim of improving the overall reliability of early warning systems in urban environments. The study is divided into three main parts. In the first part, the spatial interpolation method kriging with external drift (KED) is used to estimate rainfall in the study area based on precipitation data from satellites and rain gauge stations. Uncertainty is then quantified using statistical metrics to assess the reliability of the spatial rainfall estimation, and this serves as a basis for a model that evaluates the performance of rain gauges. The second part incorporates the estimated rainfall data and its quantified uncertainties into a hydrological modeling framework to simulate water levels in the urban catchment during various rainstorm events. An uncertainty analysis based on the rainfall uncertainty is also performed on the model outputs to assess how rainfall variability affects flood prediction. By assessing the range and likelihood of different water level scenarios, this step contributes to the development of more robust flood risk assessment and warning strategies. The third part of the study presents a conceptual framework that integrates urban water level data with a city-scale road network model, aiming to assess the potential impact of flooding on urban transportation and traffic congestion. Overall, this study emphasizes the importance of uncertainty analysis - from environmental inputs to urban system responses - in improving the reliability of flood prediction and emergency management. The methodology provides insights for engineers and decision makers on disaster prevention in the face of increasing climate uncertainty. |
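For concreteness, the sketch below assembles and solves the KED system directly: the ordinary kriging equations are extended with unbiasedness constraints on a constant term and on an external drift covariate (a stand-in "elevation-like" variable here). The station data, the exponential covariance model, and its parameters are hypothetical placeholders:

```python
# Kriging with external drift (KED) solved as an extended linear system.
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(12, 2))                # gauge coordinates (km)
drift = 50 + 10 * xy[:, 0] + rng.normal(0, 2, 12)    # external drift values
z = 5 + 0.08 * drift + rng.normal(0, 0.5, 12)        # observed rainfall (mm)

def cov(h, sill=1.0, rng_km=5.0):
    """Exponential covariance model of separation distance h."""
    return sill * np.exp(-h / rng_km)

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
n = len(z)
# KED system: [[C, F], [F.T, 0]] [w; mu] = [c0; f0], with F = [1, drift].
F = np.column_stack([np.ones(n), drift])
A = np.block([[cov(d), F], [F.T, np.zeros((2, 2))]])

def predict(x0, drift0):
    c0 = cov(np.linalg.norm(xy - x0, axis=1))
    rhs = np.concatenate([c0, [1.0, drift0]])
    w = np.linalg.solve(A, rhs)[:n]                  # kriging weights
    return w @ z

print("KED rainfall estimate at (5, 5):", predict(np.array([5.0, 5.0]), 100.0))
```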