10:45 | Bayesian Updating for Reliability with Imprecise Probabilities: Julia Implementation and Application to the NASA Langley UQ Challenge 2019 PRESENTER: Lukas Fritsch ABSTRACT. Model updating has emerged as a technique of great importance in a number of engineering contexts where measurements are employed to infer the parameters of systems. In many applications, the limited availability of experimental data, coupled with significant model uncertainties, presents a considerable challenge for inferring accurate model parameters, particularly in the context of hybrid uncertainties. Nevertheless, accurately quantifying these uncertain parameters is of key importance in order to ensure the reliability of systems within an engineering context. This highlights the need for flexible yet robust methodologies to address systems with imprecise probability models in the context of stochastic model updating. In light of these considerations, we present a Julia implementation of a Bayesian updating technique within a structural reliability framework. The objective is to infer parameters represented by imprecise probabilistic models and obtain imprecise failure probabilities. The methodology is illustrated with reference to the NASA Langley UQ Challenge 2019, which demonstrates the application of Bayesian techniques in updating the uncertainty model of a black-box system and conducting subsequent reliability analysis. A two-step Bayesian updating procedure is employed here. This procedure uses the Transitional Markov Chain Monte Carlo algorithm and applies model reduction techniques to construct an approximate likelihood function from coefficients of time series data using the Euclidean and Bhattacharyya distances. |
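For readers unfamiliar with distance-based likelihoods, the construction this abstract sketches is typically written as a Gaussian kernel over a stochastic distance in the stochastic model updating literature. The form below is background notation under that assumption, not necessarily the talk's exact formulation; the width parameter is a user choice.

```latex
% Approximate likelihood built from a distance d (Euclidean or
% Bhattacharyya) between observed features y_obs and simulated
% features y_sim(theta); \varepsilon is a user-chosen width.
L(\theta) \;\propto\; \exp\!\left(
  -\frac{d^{2}\big(\mathbf{y}_{\mathrm{obs}},\,
  \mathbf{y}_{\mathrm{sim}}(\theta)\big)}{2\varepsilon^{2}}
\right)
```

Transitional MCMC then samples a sequence of tempered versions of the resulting posterior, which suits the broad, possibly multimodal posteriors that imprecise probability models induce.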
11:00 | A Python framework for arithmetic with uncertain numbers PRESENTER: Yu Chen ABSTRACT. Modern risk analyses carefully distinguish between variability and incertitude. Partial knowledge about input distributions and their dependencies is often encountered in engineering contexts. Probability bounds analysis provides a bounding approach that makes it possible to propagate partial information about random variables and perform rigorous calculations, without requiring overly precise assumptions about distribution or dependency specifications. Uncertain numbers, as an umbrella term, refer to a general class of uncertainty representations, which may be probability distributions, intervals (i.e. sets of real values defined by an upper and a lower bound), or probability boxes (p-boxes), which represent sets of probability distributions bounded by two distribution functions. This framework enables rigorous computing through probability bounds analysis within the popular Python environment and provides a suite of techniques for the characterisation, propagation, and validation of uncertainty in a user-friendly manner. This study presents a comprehensive tool for managing uncertainty in engineering computations in both intrusive and non-intrusive ways. |
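As a flavour of what probability bounds analysis computes at its lowest level, here is a minimal interval-arithmetic sketch in Python. This is illustrative only and not the framework's actual API.

```python
# Minimal sketch (not the talk's API): endpoint-based interval arithmetic,
# the building block under intervals and p-boxes.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Addition: bounds add directly.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        # Multiplication: take the extrema over all endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Example: an uncertain load combined with an uncertain factor.
load, factor = Interval(2.0, 3.0), Interval(-1.0, 4.0)
print(load + factor)   # Interval(lo=1.0, hi=7.0)
print(load * factor)   # Interval(lo=-3.0, hi=12.0)
```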
11:15 | Subinterval sensitivity for high dimensional models PRESENTER: Dawid Ochnio ABSTRACT. This paper introduces an interval-based non-probabilistic sensitivity analysis method, named subinterval sensitivity: a powerful, reliable and rigorous method that is best suited to quantifying the importance of inputs purely with respect to their mathematical model. The method has only recently and partially appeared in the literature, and its scalability to high-dimensional models is claimed here for the first time. We apply subinterval sensitivity to quantify and rank the importance of the parameters of a trained neural network model while drawing comparisons with the established Sobol' sensitivity analysis method. Sensitivities of the parameters of a trained neural network can shed light on the overparametrization and explainability of the neural network surrogate model. |
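The abstract does not spell out the algorithm, but one plausible reading of subinterval sensitivity is: pin one input to each of k subintervals of its range, propagate, and measure how much the output range shrinks on average. The hypothetical Python sketch below follows that reading and uses brute-force vertex propagation, which is exact only for monotone models; a rigorous implementation would use proper interval propagation.

```python
# Hypothetical sketch of a subinterval sensitivity measure (the paper's
# exact definition may differ). Inputs whose restriction shrinks the
# output range the most are ranked as most important.
import itertools
import numpy as np

def output_range(f, bounds):
    # Crude range estimate from the vertices of the input box
    # (exact for monotone f, approximate otherwise).
    vals = [f(np.array(v)) for v in itertools.product(*bounds)]
    return min(vals), max(vals)

def subinterval_sensitivity(f, bounds, i, k=4):
    lo, hi = bounds[i]
    edges = np.linspace(lo, hi, k + 1)
    full_lo, full_hi = output_range(f, bounds)
    widths = []
    for a, b in zip(edges[:-1], edges[1:]):
        sub = list(bounds)
        sub[i] = (a, b)            # pin input i to one subinterval
        r_lo, r_hi = output_range(f, sub)
        widths.append(r_hi - r_lo)
    # Larger average shrinkage => input i matters more.
    return 1.0 - np.mean(widths) / (full_hi - full_lo)

f = lambda x: 3.0 * x[0] + 0.5 * x[1]        # toy monotone model
print(subinterval_sensitivity(f, [(-1, 1), (-1, 1)], i=0))  # ~0.64
print(subinterval_sensitivity(f, [(-1, 1), (-1, 1)], i=1))  # ~0.11
```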
11:30 | Optimal decisions under unidentifiable ambiguity ABSTRACT. Decision makers often need to decide on dynamic issues where (Knightian) uncertainty (also known as ambiguity) is highly persistent (unidentifiable) over time. Under such circumstances new information either does not arrive, or arrives too late, which is why expert beliefs do not update from the time of the first decision onwards. In other words, information arrives too slowly for interventions to be updated and adjusted meaningfully. Examples include (i) uncertainty about tipping point thresholds impacting climate change and their corresponding cascading damages; (ii) excavation with potential for environmental or seismic disasters; and (iii) investments in technology and human-capital-promoting institutions under an ambiguous possibility of obsolescence. In such situations, decision makers (representing the preferences of stakeholders) consult experts about the unknown likelihoods of different scenarios and make decisions thereafter. The challenges are to distinguish preferential components from beliefs, and to weigh the different scenarios against each other over time when uncertainty is unidentifiable or persistent. I develop a framework based on smooth ambiguity preferences in dynamic decision environments where uncertainty is unidentifiable, i.e. information updating is too slow. I describe the procedure by which the stakeholder reports their preferences over risk and ambiguity, whereupon the experts report optimal decisions based on their beliefs over different scenarios. The advantages of the framework are (i) perfect separation between preferences and beliefs, (ii) perfect separation between risk and ambiguity preferences, and (iii) parsimony when uncertainty is unidentifiable. When ambiguity is identifiable but highly persistent, the optimal decision under unidentifiable ambiguity can serve as a limiting case of the optimal decision under short- to medium-term time horizons. |
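For context, the smooth ambiguity model this framework builds on is standardly written (Klibanoff, Marinacci & Mukerji, 2005) as below; this is the textbook form, cited as background rather than the author's exact dynamic objective.

```latex
% Smooth ambiguity preferences: u captures risk attitude, \phi captures
% ambiguity attitude, and \mu is the second-order belief over the expert
% scenarios \pi. The separation of u, \phi and \mu is what delivers the
% "perfect separation" properties claimed in the abstract.
V(f) \;=\; \int \phi\!\left( \int u\big(f(s)\big)\, d\pi(s) \right) d\mu(\pi)
```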
11:45 | Error-Modelling for the Quantification of Total Uncertainty PRESENTER: Peter Kuhn ABSTRACT. We generate uncertainty estimates for an image segmentation model trained to recognize icebergs. Like most deep learning methods, the segmentation model is based on a maximum likelihood approach and thus generates point predictions minimizing prediction error. In high-risk contexts, such as the detection of icebergs, such point predictions are insufficient, because users also want to know how reliable or certain the model deems a prediction to be. In the last decade there have been great advances in the domain of uncertainty quantification. However, the state-of-the-art methods developed in this domain, like Monte Carlo Dropout, require deep modifications of the model’s architecture. These methods are thus inapplicable in contexts where we want to quantify the certainty of predictions from a fixed model architecture, or where the model is a black box supplied by third parties or only accessible via an API. The proposed solution to these restrictions is the use of a second-order model to predict the prediction error of the first-order or base model. Our chosen second-order model was a U-Net architecture. We show that second-order models outperform naïve uncertainty methods, like predictive entropy, along standard benchmarks and can even reach performance comparable to state-of-the-art methods like Monte Carlo Dropout. As second-order models are much easier to implement, they offer a practically useful tool for uncertainty quantification. We verified our results on the Drone UAV dataset. |
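A minimal sketch of the second-order idea, assuming a stand-in convolutional net in place of the paper's U-Net and a toy stand-in for the frozen (possibly black-box) base segmenter; all names and shapes here are illustrative.

```python
# Sketch of second-order error modelling: the base model is frozen and a
# second model learns to predict where the base model's predictions are
# wrong, giving a per-pixel uncertainty map without touching the base
# architecture. Not the authors' exact setup.
import torch
import torch.nn as nn

# Stand-in for the fixed / black-box first-order segmenter.
base_model = nn.Conv2d(1, 1, 3, padding=1).eval()

# Stand-in for the paper's U-Net second-order model.
second_order = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(second_order.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(images, labels):
    with torch.no_grad():                       # base model stays fixed
        base_pred = (base_model(images) > 0.5).float()
    error_map = (base_pred != labels).float()   # 1 where the base model errs
    loss = bce(second_order(images), error_map)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

imgs = torch.randn(4, 1, 32, 32)
lbls = torch.randint(0, 2, (4, 1, 32, 32)).float()
print(train_step(imgs, lbls))
```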
10:45 | Climate risk disclosures. The risk perception perspective PRESENTER: Konstantina Karatzoudi ABSTRACT. Organisations and institutions publish climate risk disclosures to provide information related to their carbon footprint and exposure to climate-related risks. While frameworks for climate-related risk disclosures, such as the Task Force on Climate-related Financial Disclosures (TCFD), have advanced this practice, challenges remain. Criticisms especially focus on some disclosures being more performative than substantive, aimed at improving public image rather than driving meaningful impact. This paper explores in depth the mechanisms and relationships through which climate risk disclosures shape risk perceptions and transfer responsibilities, examining how they affect the understanding of both stakeholders and the broader public, and the extent to which they enable more risk-informed decision making. To achieve this objective, we draw on insights from contemporary risk science, with a main focus on risk perception research and current climate risk disclosure practices. |
11:00 | Perceptions of climate-change related health risks in the Netherlands PRESENTER: Veerle Cannemeijer ABSTRACT. Background: One of the important areas planetary health focuses on is the interaction between climate change and health. The changing climate impacts our health in several ways. For example, new allergies might develop, and extreme heat causes increased death rates. To minimize these impacts, measures must be taken. To stimulate adherence to guidelines that minimize climate-change related health risks, citizens need to be aware of these risks. Effective risk communication seems essential, and to achieve it, in-depth insight into the perceptions of citizens is needed first. This study explores these perceptions in the Netherlands. Aim: We aim to understand how people living in the Netherlands perceive different climate change-related health risks, to ultimately make recommendations about how to communicate effectively about the health impacts of climate change. Methods: We conduct a survey study examining the perception of planetary health risks in the Netherlands. The data collection for the survey study is planned for April 2025. Results: A preliminary analysis of the survey study will be presented. These results will show the risk perceptions of the public in relation to several possible health risks related to climate change. We show which risks are perceived as most severe and most likely to happen. In addition, we examine if and how perceptions differ across subgroups (for example, age groups, gender, etc.). Special attention is paid to vulnerable groups, such as the elderly, children, individuals with chronic diseases, and migrants. Conclusion: Our study will show which risks are perceived as most severe and/or most likely to happen, thereby providing valuable input for risk communication strategies. |
11:15 | Public understanding of climate change and policy responses from Malaysian Borneo PRESENTER: Elspeth Spence ABSTRACT. Countries in South-East Asia are especially vulnerable to the impacts of the climate crisis, with increased flooding, heatwaves and drought all affecting people more regularly in these regions. It is clear that reducing emissions and engaging with a portfolio of climate responses is crucial to try to limit the effects seen globally. In this research we conducted five deliberative workshops in Malaysian Borneo to determine how communities perceived climate change and the range of possible strategies to tackle the issue. Malaysia is well known for its status as a mega-biodiverse country, containing rare species not found elsewhere in the world due to its tropical environment and vast rainforests. Climate change was of concern to our participants; however, a range of other factors were more important to them, such as the economy, access to water and electricity, and the development of Sabah more broadly. Severe flooding, heatwaves and soil erosion were attributed to poor infrastructure planning and loss of trees rather than to climate change alone. When discussing possible strategies to combat climate change, respondents wanted better local public transport and were supportive of tree planting and conservation. All groups stressed the need for government to take action and tackle climate change, with acknowledgement by some that attention had to be paid to development in Sabah, which differs even from West Malaysia, let alone countries in the Global North. Although Malaysia will have a consultation process for its national climate change bill, participants do not find these activities transparent enough and have low trust in government and industry to act in the communities' interest. It is imperative that local communities are given the opportunity to be included in decision-making around the appropriate solutions for their regions regarding climate action, and to make a meaningful contribution. |
11:30 | Measuring the Impacts of Visualisation Tools on Environmental Risk Perception PRESENTER: Christina Carrozzo Hellevik ABSTRACT. Well-established definitions of risk stress that risk is a notion we associate with a potential threat to an object or system we value, its magnitude and its likelihood (e.g., Kaplan and Garrick, 1981; Boholm & Corvellec, 2010). This leaves room for a subjective and potentially biased (Slovic, 1987) evaluation, which will naturally be influenced by the type of information or evidence available, the way in which it is communicated, and our prior knowledge and opinions. Slovic (1987) and Tversky and Kahneman (1974) draw attention to our tendency to overlook frequencies of occurrence when assessing risk under uncertainty and, instead, to resort to heuristics or shortcuts. In addition to individual biased risk perceptions, Kasperson et al. (1988) also highlighted a social effect of risk amplification. More recent literature on man-made environmental changes has discussed the opposite effect, the social attenuation of risk (Pidgeon, 2012). Zhao & Luo (2021) systematise some of the cognitive biases that may inhibit pro-environmental behaviour in the face of climate change and increase polarisation. The 'debiasing' strategies they propose include visualisation tools. To explore this, we carried out an experiment to assess whether exposure to a visualisation tool has any effect on environmental risk perception. To allow the results to generalize, we used three different topics with all participants, namely marine plastic pollution, sea-level rise and drinking water safety, and three different tools. For each tool, a short report was provided highlighting the same information as in the tool. Participants were randomly assigned to the treatment or no-treatment version of each task (i.e., a report, or a computer with the visualisation tool), with the topics presented in random order. After each task, the participants were asked to answer a questionnaire about their risk perception of the topic. |
10:45 | 3 bottom up initiatives for enhancing data-driven asset management decision making ABSTRACT. Asset management in critical infrastructure systems, such as the Eastern Scheldt Storm Surge Barrier, requires reliable data for effective decision-making, as renovations and maintenance are technically and logistically difficult. Traditional approaches to decision making often rely on manually logged failure data and complex failure distributions, which may not reflect the real-world performance of assets or may give a false sense of significance. This abstract presents three initiatives for improving asset management decision-making. Firstly, we demonstrate the value of using SCADA (Supervisory Control and Data Acquisition) data as a more precise and continuous source of information. SCADA data, being automatically recorded, provides a higher-fidelity dataset that reduces human error and allows for more insight into asset performance. This shift away from manually logged failure data enables a more accurate characterization of operational behavior, offering significant advantages for reliability analysis. Secondly, we challenge the conventional use of bi-Weibull distributions for failure modeling. While widely accepted in reliability engineering, bi-Weibull distributions may not always represent the observed failure mechanisms effectively. We propose a transition to normal failure distributions, which align better with the available data on the storm surge barrier's components. This methodological shift simplifies the modeling process, reduces parameter uncertainty, and yields more interpretable results for decision-makers. Finally, we focus on renovation prioritization for the hydraulic subsystems. Machine learning techniques are applied to interpret defects identified during inspections of the pistons' outer layer, enabling a data-driven approach to assessing the risk of failure. By analyzing these defect patterns, the machine learning models generate a prioritization order for renovation activities. This allows for more proactive maintenance planning. These practices demonstrate how data-driven innovations can refine asset management and optimize decision-making. The case serves as a compelling example of how modern data science methods can transform traditional reliability engineering frameworks. |
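To illustrate the proposed distribution comparison, here is a short scipy sketch fitting both candidate models to synthetic failure ages and comparing them by AIC; the authors' actual data and selection criterion are not stated, so everything below is a stand-in.

```python
# Sketch of the model-choice step: fit Weibull and normal distributions
# to failure ages and compare via AIC. Ages are synthetic; SCADA-derived
# failure ages would replace them in practice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ages = rng.normal(loc=20.0, scale=3.0, size=60)   # synthetic ages (years)

# Weibull fit with location fixed at zero (2 free parameters).
shape, loc, scale = stats.weibull_min.fit(ages, floc=0)
ll_weib = np.sum(stats.weibull_min.logpdf(ages, shape, loc, scale))

# Normal fit (2 free parameters).
mu, sigma = stats.norm.fit(ages)
ll_norm = np.sum(stats.norm.logpdf(ages, mu, sigma))

aic_weib = 2 * 2 - 2 * ll_weib
aic_norm = 2 * 2 - 2 * ll_norm
print(f"Weibull AIC: {aic_weib:.1f}, Normal AIC: {aic_norm:.1f}")
# Lower AIC wins; on data like these the normal model is preferred.
```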
11:00 | Integrating risk estimates into the planning of preventive maintenance for large portfolios of bridges PRESENTER: Josia Meier ABSTRACT. Road managers must anticipate preventive maintenance intervention needs for all bridges in their network. This must be done years ahead of time, so that there is sufficient time for engineering offices to perform the appropriate detailed investigations of the structures, to coordinate the interventions with those on other objects and schedule them, to obtain financing, and to prepare projects. Preventive maintenance interventions are, however, not always executed at the optimal time due to multiple factors, including variability in early overviews of upcoming intervention needs, lack of resources to conduct detailed investigations and lack of resources to carry out the interventions. Consequently, some interventions are executed earlier than required and some later, leading to either higher than necessary expenditures or higher than necessary risks. While existing bridge management systems do an admirable job of predicting when future interventions are required, there is potential to improve how they help determine which investigations or interventions are to be postponed if necessary. The work presented in this paper meets this challenge by demonstrating how standardized overviews of bridge-related risks could be integrated into these systems, where the risk estimates are made using fault trees and standardized estimates of the probabilities and consequences of base events. The top events of the fault trees are service-related events associated with the detection of situations that would cause interruptions to service, e.g., the discovery of an excessively large crack that would cause a manager to reduce traffic loads on the bridge until at least detailed engineering investigations could be conducted. The consequences of each top event are approximated using parameters that enable quick estimation for all bridges in a network, e.g., the expected user costs due to increased travel time from traffic deviations. The method is demonstrated on four highway bridges in Switzerland. |
11:15 | Challenges and research directions in multi-project, multi-actor infrastructure construction management PRESENTER: Sonia Di Cola ABSTRACT. Infrastructure construction programs are often highly complex due to the involvement of multiple projects and actors. The presence of multiple projects introduces interdependencies that necessitate the coordination of schedules, resources, and priorities, as well as the implementation of risk management strategies. Simultaneously, the involvement of multiple actors gives rise to power dynamics, challenges in communication and engagement, and competing interests that must be effectively managed. A systematic literature review is conducted to explore how the challenges associated with managing multiple projects and multiple actors are addressed within the field of infrastructure construction management. Key themes, strengths, and limitations within the literature are identified. Based on this analysis, critical research gaps are outlined, and potential directions for future research are proposed. The main findings highlight six key gaps that may serve as a foundation for future investigations aimed at enhancing the management of infrastructure construction programs. |
11:30 | Decentralized Physical Infrastructure Networks: A Catalyst for Critical Infrastructure Resilience? PRESENTER: Boris Petrenj ABSTRACT. The growing complexity and interdependency of Critical Infrastructure (CI) systems have worsened their vulnerability to disruptions and made their resilience a key focus in both academic discourse and industrial practice. In recent years, Decentralized Physical Infrastructure Networks (DePIN) have emerged as a concept that combines physical infrastructure with blockchain technology to create decentralized networks for various applications, applying the principles of Web3 (the decentralized internet) to the physical world. DePIN aims to establish decentralized networks for managing physical assets and services more efficiently, transparently, and securely. Unlike traditional centralized infrastructure, DePIN distributes decision-making, resource management, and system control across a network of sovereign (autonomously owned) but functionally interconnected nodes. This paper examines the impact of DePIN on CI resilience by analyzing how decentralization can enhance the ability of CI systems to cope with disruptions. We explore the potential benefits of improved flexibility, redundancy, and adaptability for CI resilience. In the first step, we examine the key characteristics of DePIN and its resilience-enhancing features. Subsequently, we examine real-world applications of DePIN in sectors such as energy, transportation, healthcare, and supply chains to assess its practical impact on resilience. By adopting the perspectives of Complex Adaptive Systems (CAS) theory and Resilience Engineering, we gain a deeper understanding of DePIN's potential to enhance CI resilience. While DePIN presents significant potential for improving resilience, there are also challenges and limitations related to coordination, interoperability, and governance that must be addressed to fully realize these benefits. The paper also outlines the current advantages and drawbacks of infrastructure decentralization, considering potentially associated economic and social impacts. Finally, we identify future research directions that can help leverage DePIN's characteristics and design principles to develop the next generation of resilient, adaptable, and sustainable infrastructure systems. |
11:45 | Multi-Layer Conceptual Framework for Multi-Dimensional Interdependencies in Critical Infrastructure Systems PRESENTER: Paulina Zurawska ABSTRACT. Critical infrastructure systems, such as water and energy systems, are fundamental to supporting our daily lives, yet their complexity and dynamic nature present significant challenges for analysis and modeling. Traditional approaches often fail to capture the multi-dimensional interdependencies that evolve over time across various infrastructure types. To address this gap, this paper introduces the Multi-Layer Conceptual Framework, which provides a new perspective for analyzing and modeling interdependent infrastructure systems. The model distinguishes interdependencies based on infrastructure type (water, energy, transport), contextual layer (technical, institutional, and societal), and analysis time frame (short, mid, and long term). The framework enables the identification of cross-layer interactions, co-evolution patterns, and emergent behaviors within and across infrastructure systems. The model's applicability is demonstrated through a case study on the energy transition, showing how the framework is used in the problem of new energy source deployment, considering social attitudes toward different alternative energy sources, the impact of new policies, and technological development. |
10:45 | Modern Reliability Methods and Tools for New Mobility: Valeo perspective PRESENTER: Marco Bonato ABSTRACT. As a major tier-1 supplier, Valeo is actively involved in the fast-paced changes shaping the modern automotive industry. The advent of electrification and of autonomous and connected driving comes with new challenges for the reliability of automotive parts. This concept of “novelty” embraces many aspects related to durability: mission profiling, failure mode detection and prevention, determination of failure root causes, physical models for accelerated design validation tests, analyses of warranty field data, development and comparison of validation specifications, testing and simulations, etc. This presentation illustrates the new methods and tools available within the reliability community that have been deployed in recent years by Valeo. Focusing on thermal system components, we emphasize the application of these so-called modern reliability tools throughout the whole process of product development: from concept to support phase. We illustrate the application of the various reliability approaches grouped according to the three branches of the discipline: experimental, predictive, and operational reliability. Several case studies will illustrate the deployment of the new techniques: * Monitoring of product failure during accelerated bench tests * Analysis of big road load data (vehicle measurements) * Reliability predictions from a small sample size * Bayesian networks * Fatigue simulation * Data mining for faster reliability predictions * Data-driven model-based systems engineering for automatic FMEA |
11:00 | Towards a first implementation of the internal Weibull Distribution in Power BI and the related software architecture ABSTRACT. It is essential for companies to maintain reliability data for prediction. For this reason, it is important to perform reliability calculations under defined assumptions. The starting idea here is the dependency of both the data and the reliability information on external sources; this idea constitutes the starting point of this study. The Weibull distribution function was calculated within the company, and the function graph was obtained and displayed in Power BI. This paper shows the resulting architectural topology of the reliability data software and presents the Weibull distribution of failed parts. The curve shows the linearized distribution of failure counts for the related failure categories. The advantage of performing Weibull analysis without external software is that it develops internal know-how and keeps the calculation under the company's own control, which allows an independent view. The development of the software and the maintenance of the data are among the more complex tasks in gaining valuable results and predictions for the devices in the field. |
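For reference, the two-parameter Weibull CDF and the standard Weibull-plot linearization, which is presumably the "linear distribution" plotted in Power BI (an assumption on our part):

```latex
% Two-parameter Weibull CDF with shape \beta and scale \eta:
F(t) = 1 - \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right]
% Standard Weibull-plot linearization: plotting the left-hand side
% against \ln t gives a straight line with slope \beta and intercept
% -\beta \ln \eta, so both parameters can be read off a linear fit:
\ln\!\left(-\ln\big(1 - F(t)\big)\right) = \beta \ln t - \beta \ln \eta
```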
11:15 | Approach for a reliability proof of autonomous vehicles compared to human drivers through real-world road tests PRESENTER: Dr. Melani Krolo ABSTRACT. Real-world testing, along with extensive simulation, is essential for validating autonomous vehicle performance under real operating conditions. However, proving the reliability of autonomous vehicles compared to human drivers through real-world road tests has been challenging due to the lack of a comparable database on human driving behavior within similar Operational Design Domains (ODDs). A study by UMTRI collected human ride-hail data, including crash statistics, in San Francisco from 2016 to 2018, enabling a comparison between autonomous vehicles and human driving behavior over several years. The approach presented in this report evaluates the reliability of autonomous vehicles based on real-world tests on public roads, using data from Waymo autonomous vehicles operating as robotaxis in San Francisco. An analysis of disengagement data revealed improvements in later developmental stages for disengagements due to software discrepancies. The reliability of Waymo driverless vehicles in San Francisco was demonstrated and compared to that of human drivers from ride-hail services in the same ODD. Based on DMV data as of February 2024, it was shown that Waymo driverless vehicles in San Francisco are at least as reliable as human drivers with similar driving behavior in the same ODD. Additionally, even under conservative considerations, the reliability of human drivers was established with a high confidence level. |
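For orientation, a standard zero-failure (success-run) demonstration bound of the kind such reliability proofs often rest on; the study's actual statistical treatment may differ.

```latex
% Success-run bound: observing zero crash events over n independent
% exposure units demonstrates reliability R at confidence level C if
R \ge (1 - C)^{1/n}
% Equivalently, with zero events over m driven miles, the one-sided
% upper confidence bound on the event rate \lambda follows from
% P(\text{0 events}) = e^{-\lambda m}:
\lambda \le \frac{-\ln(1 - C)}{m}
```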
11:30 | New reliability validation test for radial cooling motors – an oven home appliance application PRESENTER: Alberto Miele ABSTRACT. This paper presents the steps followed to define a new reliability validation test for shaded-pole motors used in the cooling systems of built-in ovens. These motors are fundamental for maintaining low temperatures on sensitive components such as electronics, especially in high-temperature conditions such as pyrolytic self-cleaning, where oven temperatures can reach up to 450°C. A Design of Experiments (DoE) approach was applied, considering temperature, rotational speed and supply voltage as the key stress factors. Axial play, measured as the motor's movement along the shaft axis, proved to be a consistent indicator of upcoming failures. The test showed that temperature had the highest impact on the mean time to failure of the component under investigation. By evaluating the acceleration factor using the Arrhenius model, it was possible to propose a new test procedure achieving a 20% decrease in validation time while keeping the target of 90% reliability with a 90% confidence level. The stress-strength analysis showed that the cooling motor under study has a reliability of 99.55% over 10 years of usage in a pyrolytic oven, with a 90% confidence level. |
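The Arrhenius acceleration factor referred to above takes the standard form (temperatures in kelvin); the paper's fitted activation energy is not given here.

```latex
% Arrhenius acceleration factor between use temperature T_use and test
% temperature T_test, with activation energy E_a and Boltzmann
% constant k_B. T_test > T_use gives AF > 1, i.e. the test ages the
% part faster than field use by the factor AF:
AF = \exp\!\left[\frac{E_a}{k_B}
      \left(\frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{test}}}\right)\right]
```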
11:45 | Modeling bimodal behavior of Nitinol cycle-to-fracture distribution PRESENTER: Katrin Wolff ABSTRACT. Nitinol, a nickel-titanium alloy with shape-memory and superelastic properties, is employed in medical devices where its reliability to 10^8 load cycles or beyond is crucial for device safety and effectiveness. Recently, bimodal behavior of Nitinol was reported [1] for rotary bending fatigue tests on thin wires where, at a narrow range of alternating strains, fractures occur at either early or late cycle numbers, with a gap of several decades and without discernible difference in failure mode. Here, we show that under realistic bending conditions and on samples representative of the finished device, bimodal behavior may occur over an even larger range of alternating strains. Based on fatigue-to-fracture tests of ultra-high-purity Nitinol specimens to 10^7 cycles, we phenomenologically model early and late fractures at different (alternating) strain levels and compare S-N curve extrapolations that either neglect or account for bimodality. We show that failing to account for bimodal behavior may result in inaccurate model predictions and subsequently drastically overestimated device high-cycle durability. Finally, we run high-cycle experiments (10^8 cycles) and confirm the predictions of the bimodal model. [1] Weaver et al. 2023, Shape Memory and Superelasticity |
10:45 | Simultaneous Prediction of Causes and Consequences in Hydrogen-Related Accidents Using Transformer-Based Multi-Task Learning PRESENTER: July Bias Macedo ABSTRACT. Hydrogen is emerging as a vital alternative in the global transition to sustainable energy. However, its inherent characteristics—such as high flammability—pose significant safety challenges in the operation of hydrogen systems. To prevent future accidents, a thorough understanding of hydrogen-related incidents is crucial. In this work, we propose a novel approach utilizing transformer-based multi-task learning to simultaneously predict accident causes and consequences from unstructured accident narratives within the Hydrogen Incident and Accident Database (HIAD). By fine-tuning a pre-trained BERT model, the accident narratives are processed to identify root causes while estimating the potential consequences. The proposed multi-task model leverages shared representations, enabling efficient learning of both causal and consequence patterns from historical hydrogen incidents. Our results show that multi-task learning enhances the model’s ability to generalize across multiple prediction tasks, outperforming single-task models. By suggesting likely causes and consequences, this methodology supports risk identification and assessment, contributing to the safer adoption of hydrogen technologies. |
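A schematic of the shared-encoder, two-head architecture the abstract describes; the checkpoint, class counts and variable names below are illustrative stand-ins, not the authors' configuration.

```python
# Multi-task sketch: one shared BERT encoder, two classification heads,
# and a summed loss so causes and consequences are learned jointly.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskAccidentModel(nn.Module):
    def __init__(self, n_causes=10, n_consequences=5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-uncased")
        hidden = self.encoder.config.hidden_size
        self.cause_head = nn.Linear(hidden, n_causes)
        self.consequence_head = nn.Linear(hidden, n_consequences)

    def forward(self, **inputs):
        # [CLS] token embedding as the shared narrative representation.
        h = self.encoder(**inputs).last_hidden_state[:, 0]
        return self.cause_head(h), self.consequence_head(h)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskAccidentModel()
batch = tok(["Hydrogen leak ignited at a valve flange."],
            return_tensors="pt", truncation=True, padding=True)
cause_logits, cons_logits = model(**batch)

loss_fn = nn.CrossEntropyLoss()
# Joint training objective (labels omitted in this sketch):
# loss = loss_fn(cause_logits, cause_labels) + loss_fn(cons_logits, cons_labels)
```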
11:00 | Enhancement of a Hydrogen Incident and Accident Database Using Large Language Models PRESENTER: Gianluca Tabella ABSTRACT. Hydrogen holds significant potential for decarbonizing various industries, including energy and mobility. However, the limited availability of accident data poses a significant challenge to effective safety risk analysis and assessment. This study leverages large language models to address the critical task of filling gaps in the Hydrogen Incidents and Accidents Database (HIAD) 2.1, a prominent repository of hydrogen-related unwanted events. A three-step Artificial Intelligence-driven algorithm is proposed: (i) a preprocessing phase to standardize and prepare an event description, (ii) a processing phase utilizing OpenAI's sentence embedding technology to extract semantic relationships, and (iii) an enhancement phase employing trained multilayer perceptrons to impute missing data. The algorithm demonstrates promising results in predicting categorical entries and is applied to enhance the entire database, with a specific focus on the 2019 fueling station fire in Sandvika (Norway). This case study highlights the proposed algorithm's potential to improve our understanding of hydrogen-related incidents and contribute to enhanced risk management strategies. |
11:15 | A Semi-Automated Framework for Coding Fatal Accident Data in Mines Using Natural Language Processing PRESENTER: Bibhuti Bhusan Mandal ABSTRACT. Mining remains one of the most perilous industries, with frequent fatalities caused by a range of occupational hazards. Traditionally, identifying the causes of such fatalities has relied on manual coding of accident reports, which is time-consuming, inconsistent, and prone to human error. As the volume of data grows, especially in developing countries, it becomes imperative to automate this process to ensure timely safety interventions. Advances in Natural Language Processing (NLP) and Machine Learning (ML) provide promising solutions for semi-automated coding, reducing manual effort while improving accuracy. This study utilizes NLP and ML models to predict the causes of fatalities in Indian mines using accident data from the Directorate General of Mines Safety (DGMS) reports from 2016 to 2022. The dataset consists of 401 fatal accident descriptions spanning seven years. Accident descriptions were pre-processed and vectorized using the Bag of Words approach. Five machine learning models were compared: Naïve Bayes, Logistic Regression with and without adjusted weights, Support Vector Machines, and Random Forest. Each model was trained to predict accident causes based on textual descriptions. The models were assessed based on their accuracy in classification, using an 80/20 train-test split for validation. The study utilized a semi-automatic classification approach. If the maximum probability assigned to any class exceeds a predetermined threshold, the instance is classified automatically. Conversely, if the maximum probability is below the threshold, the instance is filtered for manual review. Among the models evaluated, Logistic Regression with Adjusted Weights outperformed the standard Logistic Regression model with an accuracy of 84%. It maintained a precision of 80%, a recall of 83%, and an F1-score of 80%, demonstrating its robustness in handling imbalanced data and effectively identifying positive cases. By implementing these models, manual coding efforts can be significantly reduced, enabling quicker data processing and enhancing safety oversight in mining operations. |
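A compact sklearn sketch of the described pipeline, with dummy reports and a hypothetical threshold; the study's 80/20 split, feature settings and tuned threshold are not reproduced here.

```python
# Sketch of the semi-automatic pipeline: Bag-of-Words features, a
# class-weighted logistic regression, and a probability threshold that
# routes low-confidence reports to manual review.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = ["roof fall at coal face injured loader operator",
           "dumper overturned on haul road at night",
           "electrocution while repairing drill panel",
           "fall of person into unfenced mine shaft"]
causes = ["ground movement", "transportation", "electricity", "fall of person"]

clf = make_pipeline(CountVectorizer(),
                    LogisticRegression(class_weight="balanced", max_iter=1000))
clf.fit(reports, causes)

THRESHOLD = 0.6   # hypothetical cut-off

def classify(description: str):
    probs = clf.predict_proba([description])[0]
    best = int(np.argmax(probs))
    if probs[best] >= THRESHOLD:
        return clf.classes_[best]   # coded automatically
    return "MANUAL_REVIEW"          # filtered for an expert

print(classify("worker injured by roof fall near coal face"))
```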
11:30 | Knowledge Graph Construction of Large Language Model Retrieval Augmented Generation for Oil and Gas HSE Professional Q&A System PRESENTER: Yiyue Chen ABSTRACT. In oil and gas HSE (Health, Safety, and Environment) management, supervision and inspection are essential to stable production. During inspections, dealing with complex professional knowledge is challenged by heavy manual review and low efficiency. Currently, while Large Language Models exhibit powerful text comprehension and generation capabilities, their performance is limited in the HSE field due to a lack of specialized knowledge and data resources. There is a contradiction between the demand for professional answers and the non-targeted responses that lack a reference basis. Therefore, knowledge graph technology is introduced to enrich knowledge resources and enhance question-answering effectiveness. This study focuses on the construction of an HSE policy document knowledge graph to enhance the accuracy, recall, and pertinence of LLM question-answering. Firstly, HSE-related policy documents are collected, and ontology model concepts are designed, including classification, applicable objects, production links, business areas, etc. Secondly, ontology model relations are designed, including part-of, kind-of, instance-of, attribute-of, meet-above, etc. Thirdly, policy document entities are identified and relationships extracted to construct the domain knowledge graph. Finally, the vectorized content of sliced policy documents is stored in the vector knowledge base, providing rich and accurate external knowledge sources for the large model. Question-answering tests demonstrate that the constructed knowledge graph significantly enhances the performance of LLM answering. In small-scale queries, it can directly answer standard specification requirements. In standard queries, the most applicable files are matched based on multi-dimensional information. In reasoning questions, it supports chain-of-thought analysis and provides the legal basis step by step. Overall, the construction of the knowledge graph effectively addresses the challenge of evidence-based question-answering by large models within the professional field, offering valuable knowledge support to technicians and managers during production activities and HSE inspections. |
10:45 | Towards A Methodological Framework For Early Qualitative Assessment Of The Ecological And Economic Costs Of Digital Twins In Industrial Maintenance PRESENTER: Kenza Elbaroudi ABSTRACT. Over the last decade, the use of digital twins (DTs) has expanded significantly across a variety of sectors. They often take various forms, with an emphasis on the underlying technologies (IoT, Cloud Computing, AI, virtual reality, etc.). However, one question remains: once deployed, will these digital twins be utilized in a manner that is both cost-effective and sustainable? Digital twins originated in maintenance applications, which is where these technologies are currently the most mature. Previous work shows that in the literature, most articles concern predictive maintenance applications, which implies the frequent use of artificial intelligence and therefore the management of large volumes of data. However, in recent years, we have seen the emergence of virtual reality technologies for training and augmented reality for intervention assistance, which require both significant hardware and software resources. The aim of this article is to propose a qualitative methodology for classifying industrial maintenance digital twins. This methodology will enable the assessment of the economic and environmental costs of DTs, making it possible for decision makers to ask the right questions even at the earliest stages of the DT's conception. This is especially helpful since, at those early stages, designers often lack quantitative information. This approach offers a more accessible starting point for eco-design, unlike more quantitative methods, such as Life Cycle Assessment (LCA), which require higher precision in data collection and a significant amount of time. This qualitative methodology will be applied to recent literature, providing a preliminary analysis of the digital twins currently in operation. |
11:00 | Sustainable maintenance scheduling: reducing urgent interventions and workload outside of business hours PRESENTER: Claudia Fecarotti ABSTRACT. Nowadays, Original Equipment Manufacturers (OEMs) are transitioning from product-based to service-oriented strategies, with emphasis on sustainable maintenance approaches in which unscheduled interventions are minimized. Preventive and Condition-Based Maintenance (CBM) policies are key to enhancing operational efficiency, reducing costs, and meeting customer demands. If adequately developed, these policies can reduce the need for unscheduled corrective interventions, with a positive effect on system availability and performance, as well as on workforce welfare. Despite advances in preventive and CBM strategies, managing high-tech systems remains highly challenging. They consist of multiple heterogeneous components, each with unique degradation and failure patterns, and different criticality with respect to the overall system functionality. As such, these systems are better maintained under a combination of Age-based (ABM), Condition-based (CBM), Inspection-based (IBM), or Failure-based Maintenance (FBM) policies, and the urgency of an intervention depends on the components' criticality. To address the maintenance of such complex systems, we propose and evaluate four distinct maintenance policies that integrate scheduled preventive, semi-urgent, and opportunistic maintenance. We distinguish between critical and semi-critical components, the failure of the former causing the system to stop, while the latter only reduces system performance. We introduce semi-urgent maintenance visits to replace semi-critical components upon failure, as well as components that, although not yet failed, have reached critical degradation levels. These semi-urgent visits are scheduled within a few days to occur during business hours, to minimize disruption and workforce strain. Additionally, opportunistic maintenance is incorporated into these visits, allowing multiple components to be replaced simultaneously. Scheduled visits still take place for routine tasks, but they can be rescheduled if a recent semi-urgent visit has occurred, thus improving efficiency. Each policy is assessed through numerical experiments to determine its effectiveness in reducing costs and corrective maintenance, demonstrating the potential for improved maintenance outcomes in complex, high-tech systems. |
11:15 | A roadmap to integrate the sustainable impact of Industry 4.0 technologies in maintenance policies PRESENTER: Mouhamadou Mansour Diop ABSTRACT. Maintenance decision-making has traditionally focused on economic criteria, yet the growing demand for carbon neutrality highlights the need to address all three dimensions of sustainability (economic, environmental, and social) within manufacturing industries. Although Industry 4.0 (I4.0) enabling technologies are widely recognized for their potential benefits, their full sustainability impacts remain poorly understood. Existing studies often emphasize their positive contributions but lack precise quantification of both their positive and negative effects. Moreover, these analyses tend to focus exclusively on the use phase, neglecting impacts during manufacturing and end-of-life stages. This article proposes a structured roadmap for evaluating the lifecycle impact of I4.0 technologies on maintenance policies. By considering multiple scenarios, this approach quantifies their effects across all dimensions of sustainability, ensuring that the benefits realized during use outweigh the negative impacts from manufacturing and disposal. To illustrate its applicability, a preliminary use case is presented using a vibration test bench equipped with IoT sensors. Looking ahead, these sensors are set to generate fault data under varying conditions, which will be used to test maintenance scenarios. Additionally, as outlined in the roadmap, a life cycle assessment (LCA) is planned for the sensor to provide a comprehensive assessment of its sustainability impact. This case study serves to demonstrate the roadmap’s relevance and its potential to support sustainable maintenance decision-making, laying the foundation for integrating I4.0 enabling technologies into maintenance strategies while avoiding undesirable rebound effects that could compromise sustainability goals. |
11:30 | Ensuring Personal Data Compliance by integrating Legal Constraints into Digital Twin Design Methodology PRESENTER: Nathalie Julien ABSTRACT. Advances in digital twin technology have brought new use cases involving the presence of personal data. Considering the importance of this data, ensuring compliance with the applicable legislation is mandatory. Through European provisions, and especially the GDPR, this study aims to point out the various legal implications linked to the presence of personal data in the digital twin. Ensuring that processing operations comply with European regulations is not only a necessity, but also a key factor in risk management. In this paper, we propose an innovative approach in which these constraints are formalized and integrated into an existing design methodology for digital twins. This study focuses on industrial digital twins in connection with maintenance operations. When used for maintenance purposes, a digital twin necessarily implies data sharing. These data flows must be taken into account and anticipated right from the design stage. It is necessary to conform to a framework that is comprehensible and accessible to the twin's designers, in order to respect the binding rules linked to data processing in the architecture of the digital twin. To achieve a complete compliance result adapted to the specific use of the industrial digital twin, three steps are discussed. Firstly, the legal constraints are identified directly from the use cases of digital twin technology. Secondly, the provisions of the GDPR are taken as a starting point to propose concrete measures that can be integrated when designing digital twins (record of processing activities, DPIA, erasure policy). Finally, all these compliance operations are brought together in a proposed methodology directly applicable to the design of digital twins. The design of this framework represents an innovation in the field of digital twins, in that it complies with European standards and provides a framework for digital twin design. |
11:45 | Integrated model for Maintaining Operational Conditions and managing obsolescence PRESENTER: Marc Zolghadri ABSTRACT. Maintenance in Operational Condition (MCO) covers all technical and logistical support activities aimed at ensuring the availability of equipment for mission accomplishment. For complex systems, the management of MCO and obsolescence is a strategic challenge to ensure long-term performance and reliability. Although these two areas are often treated separately, the link between MCO and obsolescence remains little explored in previous research. The aim of this article is to demonstrate that MCO, without systematic monitoring of the health of critical components, is a limited and ineffective discipline. Since obsolescence is inevitable, it is essential to couple MCO activities with specific remediation strategies. To this end, this article uses SADT (Structured Analysis and Design Technique) models to formalize and structure the interactions between obsolescence management and MCO practices. Particular focus is given to the models developed by [Vrignat et al., 2024], which offer a detailed approach to the couplings between these two domains. These models help optimize decision-making and operational processes, by integrating technical and regulatory imperatives, to ensure more resilient and proactive management of complex systems. |
10:45 | Scope preparation for human reliability analysis benchmark of ASEAN nuclear research reactors PRESENTER: Wasin Vechgama ABSTRACT. The ASEAN Network on Nuclear Power Safety Research (ASEAN NPSR) investigated the operating culture profile of Nuclear Research Reactor (NRR) operators using Hofstede's culture indices in order to verify the homogeneity of the operating culture before initiating the Human Reliability Analysis (HRA) benchmark project. The Thailand Institute of Nuclear Technology (TINT) and the Korea Atomic Energy Research Institute (KAERI) together developed a specific HRA framework for NRRs to serve the HRA benchmark project. This study presents the scope of preparation for the international HRA benchmark in ASEAN NRRs, which have limited human error data, using an analytic approach and a practical approach. In the analytic approach, since ASEAN NRRs follow specific procedures to manage tasks during emergencies, the TAsk COMplexity (TACOM) score is recommended as a linkage for estimating the Nominal Human Error Probability (NHEP). In the practical approach, the HRA framework for NRRs suggests estimating prior NHEPs from the Human Reliability data EXtraction (HuREX) and updating them with the observed human errors using the EMpirical data-Based crew Reliability Assessment and Cognitive Error analysis (EMBRACE). The maximum of the two approaches is expected to provide the final HEPs as a human error database for the international HRA benchmark in ASEAN NRRs. |
11:00 | APPLYING HUMAN ERROR IDENTIFICATION TO ENHANCE STROKE CARE: A BRAZILIAN EXPERIENCE PRESENTER: Moacyr Machado Cardoso Júnior ABSTRACT. Stroke is the second leading cause of death and the third leading cause of combined death and disability worldwide, with a global cost exceeding $721 billion. In Brazil, it is the leading cause of death, resulting in 11 deaths per hour. Stroke is a medical emergency that requires prompt treatment, as up to 1.9 million neurons can be lost every minute without intervention. Ischemic stroke can be treated with medications and procedures, positively impacting mortality and disability rates. Chemical thrombolysis should be performed within 4.5 hours of symptom onset, while mechanical thrombectomy may be indicated between 8 and 24 hours if medication fails. Stroke patient care is complex, and Human Reliability Analysis techniques, such as SHERPA, help identify vulnerabilities and improve the care process. Results: Seventeen critical activities were identified and classified during the service. The most frequent errors were A2 (22%) and C4 (13%). The most critical activities were opening the patient's medical record for the incorrect specialty (activity 1.1), making decisions about thrombolysis (activity 7), and the unavailability of technology (activity 10). Actions: The protocol was updated and redesigned. The training model was restructured and included individual feedback. Signage was also implemented on the patient arrival form, along with a communication system with the internal regulatory center to ensure differentiated reception in the emergency room for patients with potential for thrombolysis. The application of the method and the systemic changes resulted in improvements in service flow and response time in critical tasks (1 to 9). Although the number of patients is still small, in the first half of 2024, 11 patients were treated at the hospital and 10 received care according to the protocol. Conclusion: HTA and SHERPA structured the prediction of human errors and the reorganization of activities along ergonomic principles, optimizing the implementation of the stroke protocol in the health service. |
11:15 | Are Human Reliability Analysis techniques able to account for cultural dimensions? PRESENTER: Caroline Morais ABSTRACT. Human Reliability Analysis (HRA) is a method to evaluate human error risk in safety-critical tasks, according to influencing factors such as time available, human-machine interfaces, layout and training. The set of performance influencing factors varies depending on the technique used. Checking the datasets used to validate the quantified relation between human errors and influencing factors, it was noticed that most of the data points were obtained in a few countries, especially in the northern hemisphere. Are those relations valid for other countries? Could culture be an influencing factor per se? Cultures can be dimensionalised according to Hofstede's model (2011), which consists of the measurement of the Power distance index (PDI), Individualism vs collectivism (IDV), Uncertainty avoidance (UAI), Motivation towards Achievement and Success (formerly Masculinity vs femininity - MAS), Long-term orientation (LTO) vs short-term orientation, and Indulgence vs restraint (RES). Recent research has found evidence relating two of those dimensions to propensity to trust (PT), with a negative effect on safety outcomes in Brazil. Based on the hypothesis that cultural dimensions can affect some influencing factors, this paper evaluated the correlation of those three factors from Migueles and Zanini's research (2024) against each performance influencing factor from Petro-HRA, the most used HRA method in oil & gas installations in Brazil in 2024. The authors discuss the importance of extending this study to the original datasets used for HRA before researchers consider creating a new method or an extension of an HRA technique that accounts for culture. |
11:30 | Application of Phoenix Human Reliability Analysis Methodology to Model External Operation with FLEX Strategies consideration PRESENTER: Tingting Cheng ABSTRACT. This paper examines the application of the Phoenix Human Reliability Analysis (HRA) methodology to model external operations in nuclear power plants during Beyond-Design-Basis External Events (BDBEEs), incorporating Diverse and Flexible Coping Strategies (FLEX). BDBEEs, such as those observed during the Fukushima disaster, challenge conventional safety systems and HRA methodologies due to extreme conditions like seismic events and station blackouts (SBOs). FLEX strategies, which leverage portable equipment and multi-phase operational protocols, are critical in addressing these scenarios. The Phoenix HRA methodology integrates Event Sequence Diagrams (ESDs), Fault Trees (FTs), and Bayesian Belief Networks (BBNs) to quantify Human Error Probabilities (HEPs) and model operator performance. This paper conducts a case study of an SBO caused by an earthquake. First, the operation of mobile power supply vehicles, a key FLEX equipment, is modeled alongside conventional SBO recovery operations. Second, critical tasks, failure modes, and performance-influencing factors (PIFs) are identified. Third, the methodology quantifies HEPs for securing FLEX equipment and evaluates overall recovery operations under environmental challenges such as road inaccessibility. Quantitative results demonstrate the impact of environmental factors on HEPs, providing insights to enhance human reliability during FLEX operations. This study contributes to advancing human-system interaction modeling under extreme scenarios and offers practical guidance for integrating FLEX strategies into the Phoenix HRA methodology. The findings aim to support the HRA community in addressing the unique challenges posed by BDBEEs. |
11:45 | Human Failure Probability Estimation via the Task Reliability Index: Sensitivity to Simulator Data Evidence PRESENTER: Markus Porthin ABSTRACT. A novel approach for collection and use of reliability-relevant evidence from main control room simulators has recently been proposed. The Task reliability index (TRI) captures both qualitative and quantitative performance indications in a formalized manner, thus giving a more accurate picture of the difficulty of the tasks than the one obtained using traditional failure count based approaches. The TRI, combining both plant response and human-centered observations, is aimed to be used for updating of human error probability (HEP) estimates using Bayesian inference. Table-top exercises using literature data show promising results, but the approach is yet to be validated and tested in real-life applications. This presentation explores the response of the TRI for HEP estimation through a series of numerical sensitivity studies, covering diverse ranges of HEP values and possible obtained evidence. Both informative and non-informative prior distributions are used to demonstrate the behavior of the estimate in typical use cases. Cases where evidence reinforces prior expectations as well as cases of conflicting evidence will be explored. Other practical topics concerning the implementation of the measure are also discussed. |
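For intuition, here is the conjugate beta-binomial update that underlies traditional failure-count HEP estimation, which the TRI evidence is meant to generalize; the TRI's actual likelihood is richer than a simple failure count, so this is background rather than the method itself.

```latex
% Conjugate Bayesian update for an HEP p with a Beta(a, b) prior after
% observing k failures in n simulator trials:
p \mid \text{data} \;\sim\; \mathrm{Beta}(a + k,\; b + n - k)
% Posterior mean as a point estimate; the prior acts as a + b
% pseudo-trials, so sparse simulator evidence shifts it only gently:
\hat{p} \;=\; \mathbb{E}[\,p \mid \text{data}\,] \;=\; \frac{a + k}{a + b + n}
```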
Dominic Balog-Way, Ann Bostrom, Frederic Bouder, Rui Gaspar, Katherine McComas and Magda Osman
10:45 | Probabilistic Vulnerability Assessment framework for Road Networks Considering Social Variables PRESENTER: Hrishikesh Dev Sarma ABSTRACT. Community wellbeing is dependent on access to a variety of critical infrastructure. When access is disrupted, it can result in societal impacts that are often difficult to quantify. The Capability Approach has been used in several research domains to quantify community wellbeing as it provides a degree of objectivity in identifying metrics that reflect individual freedoms and opportunities. This study proposes a methodology for vulnerability assessment of communities based on the level of access they have to critical infrastructure and uses a Capability Approach-based framework to estimate access-vulnerability under flood hazard. A Monte Carlo simulation-based model is proposed that generates a synthetic population using weighted random sampling, mirroring individual characteristics drawn from census data. The access of this population to infrastructure deemed necessary for their continued wellbeing based on the Capability Approach is then evaluated using travel cost as a metric. The model simulates disruptions in the network through stepwise random link or node removals, incorporating randomized flood distributions with varying return periods. The proposed methodology is applied to the County Fingal region of Ireland and presented as a case study, specifically focusing on three central human capabilities: life, bodily health, and bodily integrity. Vulnerability indices are computed for various locations based on these randomized scenarios, reflecting the varying degrees of risk faced by different demographic groups. The findings emphasize the importance of social variables in understanding and mitigating the impact of access disruptions on communities. |
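As a rough illustration of the simulation loop described above, the sketch below generates a weighted synthetic population and evaluates travel cost to a critical facility before and after random link removals. The network, the facility, and the census shares are hypothetical placeholders, not the Fingal case-study data.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)

# Hypothetical census marginals: demographic profiles with population shares.
profiles = ["young_mobile", "adult_mobile", "older_reduced_mobility"]
weights = np.array([0.30, 0.50, 0.20])  # assumed shares, must sum to 1

# Weighted random sampling of a synthetic population of 1000 individuals.
# In the full model each individual's origin and needs would follow from
# their profile; here the profile list is only generated for illustration.
population = rng.choice(profiles, size=1000, p=weights)

# Toy road network: nodes are zones, 'hospital' is a critical facility.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 4), ("B", "hospital", 3),
                           ("A", "C", 2), ("C", "hospital", 8)])

def travel_cost(G, origin="A", target="hospital"):
    try:
        return nx.shortest_path_length(G, origin, target, weight="weight")
    except nx.NetworkXNoPath:
        return np.inf  # access lost entirely

# Stepwise random link removal emulating a flood disruption scenario.
baseline = travel_cost(G)
edges = list(G.edges())
rng.shuffle(edges)
for u, v in edges[:2]:
    G.remove_edge(u, v)
print("baseline cost:", baseline, "| disrupted cost:", travel_cost(G))
```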
11:00 | What do novice drivers need to know today? Revision of the research foundation – the GDE matrix ABSTRACT. Many European countries today base their driving education upon the GDE matrix. GDE is a theoretical, summarized overview of knowledge that has emerged through many years of research, and it shows what the driver's competence should consist of. The matrix was published in 1999, and since then a lot of research in the pedagogical and psychological fields has been done. Therefore, it is time to revise the GDE matrix and what driver competence should consist of. The context has changed, society and research have changed, and it is appropriate to revise the matrix and update it according to recent research and what we know in 2024. In line with the GDE matrix, my study has a theoretical approach. I have done a scoping review of recent research to revise the matrix. The reasons for this are several: Firstly, the GDE matrix is completely dependent on what has been researched. It depends on which disciplines have laid down the premises and how the research has been conducted. The matrix is based on traffic psychology, but recent years have shown us more plural research (mostly from pedagogy). Today, we know from international research that younger people take fewer risks, are more performance-focused, more value-based, and have a higher degree of accountability. We know that everything rests on the willingness to make safe choices. It is not knowledge or skills that in the end lead our choices; it all depends on the will to do so, meaning it is important to facilitate inner motivation (Pont, Moorman, & Nusche, 2008; UNESCO, 2022). The upper levels in the matrix (social environments and personal skills for living) are based on research on humans in general (who we are, how we develop and socialize), and the matrix should be revised according to recent research. |
11:15 | Traffic safety behaviours and attitudes among ATV/ UTV users in Norway PRESENTER: Özlem Simsekoglu ABSTRACT. In recent years there has been increasing use of ATVs (All-terrain vehicles) and UTVs (Utility-terrain vehicles) in Norway, especially by young males. In parallel, the number of traffic accidents involving ATVs/UTVs has increased significantly on Norwegian roads. Therefore, there is a need for research focusing on understanding the underlying reasons for high accident involvement among ATV/UTV users. International research identifies high risk-taking tendencies among young people, driving ATVs on tarmacked public roads, and travelling with a passenger as common risk factors associated with accidents involving ATV/UTV riders. Traffic safety attitudes are often related to traffic behaviours; therefore, the present study aimed to examine the traffic safety behaviours and attitudes among ATV/UTV drivers and make some comparisons between the two groups on the measured variables. An online survey was used to collect data from 262 respondents (214 ATV users, 48 UTV users) in 2024. Results show that exceeding legal speed limits is a common risk factor for both ATV and UTV users; for ATV users, carrying a passenger also appears as a high-risk factor. Regarding attitudes, both ATV and UTV users reported relatively unsafe attitudes towards obeying the rules for driving an ATV/UTV. Many of them reported that it is acceptable to bend the rules, especially regarding speed limits, when driving an ATV/UTV. The safest attitudes were related to drinking and driving. In terms of gender differences, male respondents, as expected, reported more unsafe behaviours and attitudes than female respondents, especially related to exceeding speed limits. Overall, the findings indicate the need to improve traffic safety behaviours and attitudes, especially regarding speed limits, among ATV/UTV drivers in Norway. The findings of the study could be useful for improving both the traffic rules/regulations and training targeting ATV/UTV users. |
11:30 | How disruption information and simulation approaches benefit stakeholders in transport PRESENTER: Corinna Koepke ABSTRACT. The vulnerability of transport processes has become visible in various disruptions over recent years, such as extreme weather events, the pandemic, and the war in Ukraine. Additionally, according to the International Energy Agency, the transport sector is globally responsible for more than 7,000 Mt of CO2 emissions, with a large portion due to heavy trucks. These trucks are still widely used for goods transportation but are also a popular choice in case of disruptions. The EU project SARIL brings together researchers and stakeholders in the transport domain to study the impact of certain disruptive events, propose better handling strategies supported by technical solutions, and enable sustainable transport even in the case of disruptions. This paper presents the SARIL tools and the respective improvements they bring to the businesses of four main roles in the transport sector. An information interface that receives information about disruptive events, such as forest fires, flooding events and reduced infrastructure capacities, forms the basis for the other SARIL tools. Traffic simulations along with sensor data from the infrastructure provide decision support for (1) infrastructure and (2) traffic managers. A network-based simulation environment enables (3) strategic logistics managers to plan routes considering various management strategies such as synchro-modal approaches. The latter also serves (4) operational logistics managers in combination with detailed route attributes for more resilient and sustainable route planning. |
11:45 | A methodology to find the importance of winter road characteristics on winter road accidents ABSTRACT. Various factors such as road geometry, precipitation, freezing temperature, and ice on the road surface increase the risk of different types of road accidents in winter. This study proposes a methodology to classify and model winter road accidents and determine the importance of each input variable (locational characteristics, road characteristics, and winter weather characteristics). This methodology utilizes a machine learning method for multi-class classification and then applies an approach to identify the important input variables affecting the classification of winter road accidents. The methodology has seven main stages and starts with data analysis, which gives a general overview of the dataset and is a major stage in using machine learning algorithms. Next, four different classes regarding personal injuries are defined for road accidents. After dividing the dataset into training and testing sets, categorical variables need to be transformed into numerical variables to be understood by machine learning algorithms. Then, different models for multi-class classification need to be trained and tested to find the model with the best performance, based on various evaluation metrics and plots of the training and testing process. Finally, the recursive feature addition method can be used to rank the importance of input variables in classifying severe road accidents in winter. |
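The recursive feature addition step could look roughly like the sketch below: features are tried in decreasing order of a base model's importance and kept only if they improve held-out performance. The random-forest classifier and macro-F1 metric are assumptions for illustration, not necessarily the paper's exact choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def recursive_feature_addition(X, y, feature_names):
    """Rank features by adding them one at a time in importance order,
    keeping a feature only if it improves macro-F1 on a held-out set."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    base = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    order = np.argsort(base.feature_importances_)[::-1]

    kept, best = [], 0.0
    for idx in order:
        trial = kept + [idx]
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_tr[:, trial], y_tr)
        score = f1_score(y_te, model.predict(X_te[:, trial]), average="macro")
        if score > best:          # keep the feature only if it helps
            kept, best = trial, score
    return [feature_names[i] for i in kept], best

# Synthetic stand-in for a 4-class winter accident severity dataset.
X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
print(recursive_feature_addition(X, y, [f"x{i}" for i in range(8)]))
```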
10:45 | Digitalization in the Norwegian customs. How does digitalization influence experience-based intuition? PRESENTER: Olga Kvalheim ABSTRACT. Norwegian Customs is investing heavily in digitalization, in line with the requirements of the EU's tax and customs union (Tolletaten, 2020). Digitalization is expected to contribute to more efficient utilization of scarce human resources, as well as making operations fast, transparent, convenient, and effective. It should also provide benefits to businesses and users of the platforms (Tolletaten, 2020). The initiative is extensive and has an ambitious progress plan. Digitalization can be understood as a process of transforming the structure, processes, people skills and culture of the entire organization so it can use digital technologies to create and offer products, services and experiences that customers, employees and partners find valuable; it is about doing new things through digitalization that could not be done before (El Sawy et al., 2016; Tolletaten, 2020). However, there are many uncertainties on the customs officers' side. Our research revealed some interesting challenges regarding how digitalization influences customs officers and border control work, such as control functions, professional pride in experience-gained knowledge and their “hunting” instinct. These challenges led to the following research question: How is digitalization compatible with experience-based intuition? The paper addresses the current experiences, perceptions, and concerns of customs officers in Norway related to the ongoing digitalization process. The paper is based on qualitative data in the form of individual and focus group (FGI) interviews of customs employees. Note: the abstract is intended for the Special session on Societal safety and security risks and law enforcement agencies |
11:00 | Model utility – A critical review of the Norwegian Customs compliance categorization (for Special session on Societal safety and security risks and law enforcement agencies) ABSTRACT. If we think there is some validity to the line from George Box that “All models are wrong, but some are useful”, it might seem inconsiderate to criticise a model that is considered very useful in the way that it inspires and directs the policy considerations of a public institution. Still, it is important that we as safety researchers and scientists do not take a model's stated premises to be correct. One such example is the value of revisiting and criticizing Heinrich's very influential models of accidents in organizations. Revisiting models may also prove clearly worthwhile, as some models, such as the Tragedy of the Commons as published in Science, may turn out to have no scientific or indeed empirical basis at all. The approach in this paper is that a model of human action might, as suggested by Lakoff and Johnson, be considered to have some of the same properties as a root metaphor: while it sheds light on a subject and enlightens our knowledge and understanding, it also leaves some nuances of the topic very much in the dark. In this paper, we explore the model known as “The compliance pyramid” used in various public agencies. The agency in focus in the paper is the Norwegian Customs, where the Compliance pyramid is used to clearly delineate between different types of actors, describe their motivations and their potential actions, and also provide some explanation for the Customs on how they should approach the public. |
11:15 | How do leaders in societal safety organizations influence employee well-being and performance during an onboarding stage? PRESENTER: Alexander Trengereid ABSTRACT. We live in a rapidly changing society. Today’s world faces continuous challenges that threaten societal security and safety. To effectively navigate organizational change, societal security organizations – such as policing agencies and customs – must adopt leadership styles that support both reform and stability. The onboarding of new employees is one example of organizational change, necessitating leaders to not only drive change but also prioritize employee well-being and maintain overall performance. This scoping review seeks to map existing literature on the effect of leadership styles and practices in facilitating employee well-being and organizational performance during the onboarding stage of societal safety organizations in the public sector. By drawing on scientific literature on leadership theory, this article investigates whether certain leadership styles – such as transformational, situational and/or servant leadership – promote employee well-being in societal safety organizations during onboarding. Existing literature on the topic is mapped by summarizing evidence on leadership styles, including their nature, volume, and overall impact. A comprehensive search was conducted across several databases to collect relevant literature. The analysis indicates that various leadership styles contribute differently to organizational dynamics, with employee-oriented leadership proving to be the most effective in creating a workplace that prioritizes employee welfare and satisfaction. Central to an employee-oriented leadership style is the creation of a workplace environment that prioritizes employee safety, well-being and personal growth. By enabling such an environment, both new and long-serving employees feel valued and supported, motivating them to embrace change and adapt to emerging challenges. Such leadership styles improve job satisfaction and reduce workplace risks. This study aims to identify research gaps and generate valuable insights for future research on leadership and socialization in societal safety organizations. The findings may assist leaders in developing training programs and interventions that strengthen employee well-being and performance and enhance organizational safety. |
11:30 | Customs at a Distance and Abstract Police: Theoretical and Empirical Perspectives on Customs Risk Management ABSTRACT. In today’s globalized world, national customs agencies face expanding responsibilities that extend far beyond their traditional role as revenue collectors. Increasingly, customs are made responsible for securing society from harmful threats, while recent trade developments place additional pressure on customs to minimize disruptions to the supply chain. Belgian customs authorities now operate within this delicate balance, utilizing advanced risk management techniques to address security concerns while facilitating trade. This paper explores how Belgian customs have embraced data-driven methods to enhance their customs control practices, moving towards a model of ‘customs at a distance’ and ‘abstract police’ as described in the literature. Drawing on empirical data, including semi-structured interviews and fieldnotes, this paper examines the experiences of Belgian customs officers with risk management through these fundamental perspectives, analyzing the challenges posed by these transformed customs practices, including their implications for knowledge, autonomy, and inter-organizational relations within the customs administration. Through this analysis, the study provides valuable insights into the risk practices of customs, underscoring their key role in today’s security environment. In an era of accelerating global trade and increasingly sophisticated criminal networks, customs authorities remain on the frontline, adapting their practices to meet the demands of a rapidly changing security landscape. |
11:45 | Framework and Typology for Cross-Sector Modes of Cooperation in Societal Safety ABSTRACT. [Note: intended as contribution to "Special session on societal safety and security risks and law enforcement agencies"] Many societal safety and security risks exert influence beyond a single organization’s mandate. They span multiple sectors, exist for various durations, and vary in level of extraordinariness. The management of these risks requires interorganizational cooperation. This cooperation can be organized in multiple ways – modes of cooperation. Yet, there are no frameworks in the societal safety and security literature that account for the variety in these modes, especially in cross-sector cooperation and for low-intensity threats. This talk presents a typology of modes of cooperation, based on three dimensions that describe the level of extraordinariness, duration of cooperation, and whether an organization contributes within or outside its own sector. The typology is founded on a systems-theoretical approach to cross-sector, interorganizational cooperation. It helps to identify the differences in modes of cross-sector cooperation between, e.g., customs agencies, first responders, and critical infrastructure managers, thus strengthening the theoretical foundation for better management of cross-sector cooperation. |
10:45 | Establishment of the Basis for Safety Evaluation and Risk Assessment of Ships Using Ammonia as Fuel PRESENTER: Hyunjoon Nam ABSTRACT. Ammonia fuel presents a greater potential for the formation of a toxic atmosphere than an explosive atmosphere, distinguishing it from conventional and other alternative fuels. Pioneers shall obtain acceptance from the Administration by demonstrating that the design of ships using ammonia as fuel can achieve a level of safety equivalent to that of ships using conventional fuels. However, there is no agreed basis defining which safety criteria should be satisfied, through which safety functions, and against which release scenarios. This poses challenges in decision-making not only for those involved in ship design but also for those responsible for safety evaluation and risk assessment, and for granting approval based on the evaluation results. A basis was originally prepared for an ammonia-fuelled gas carrier project and is refined through this paper, providing technical justification to ensure that safety functions are appropriately designed to meet safety criteria within a specific context. The basis categorizes operational situations into normal operation, accidental situations, and emergencies, and focuses on minimizing the probability of crews being exposed to a toxic atmosphere in each situation. |
11:00 | A STPA based risk analysis for winter navigation in ice in the Northern Baltic Sea PRESENTER: Liangliang Lu ABSTRACT. Navigation in ice is a critical component of maritime traffic management in the Northern Baltic Sea, where a unique winter navigation system facilitates safe and efficient operations amidst challenging conditions. Despite its importance, existing literature lacks a comprehensive investigation into the structural framework of winter navigation systems and associated hazard and risk analyses. This study addresses this gap by employing Systems Theoretic Process Analysis (STPA) to develop a detailed control structure for winter navigation in ice-covered waters. Through a series of expert workshops, we identify unsafe control actions and potential loss scenarios. The resulting STPA control structure serves as a foundation for analyzing near-miss incidents and accidents in ice conditions, enabling us to pinpoint the underlying factors contributing to potential maritime incidents. This research not only enhances the understanding of winter navigation but also aims to inform the development of indicators and strategies for reducing navigation risks, thereby improving maritime safety in ice-prone areas. |
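A core STPA step mentioned above is the enumeration of candidate Unsafe Control Actions (UCAs) by crossing control actions with STPA guidewords before expert screening. A minimal sketch, with hypothetical control actions for a winter navigation control structure (the actual control structure in the paper comes from the expert workshops):

```python
from itertools import product

# Hypothetical control actions issued within a winter navigation system.
control_actions = [
    "assign icebreaker assistance",
    "issue speed restriction",
    "grant permission to proceed in convoy",
]
# Standard STPA guidewords for how a control action can be unsafe.
guidewords = [
    "not provided when needed",
    "provided when unsafe",
    "provided too early / too late",
    "stopped too soon / applied too long",
]

# Cross-product enumeration of candidate UCAs for later expert screening.
uca_candidates = [
    {"id": f"UCA-{i + 1}", "control_action": ca, "guideword": gw}
    for i, (ca, gw) in enumerate(product(control_actions, guidewords))
]
for uca in uca_candidates[:4]:
    print(uca["id"], "|", uca["control_action"], "|", uca["guideword"])
```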
11:15 | Operational Resilience Assessment of Multimodal Container Ports: A Systematic Sensitivity Analysis PRESENTER: Jinglin Zhang ABSTRACT. Evaluations of port operational resilience currently focus on risks from individual components and/or sub-systems as isolated entities, often overlooking the ripple effects across the components (e.g. the various transportation modes) within a port. Essential components such as liner shipping, feeder shipping, railways, and trucking form the operational backbone of a multimodal container port. As port functionality becomes more complex with the development of new technologies, the need to accurately assess, measure, and sustain port operational resilience has intensified. However, the ripple effect of disruptions poses significant challenges in assessing this from a global, systematic perspective. In response, a resilience assessment framework is newly designed specifically for multimodal container ports. This framework comprises four main elements: a System Dynamics (SD) simulation for simulating the operations of a multimodal container port; a resilience quantification model that translates system performance into a resilience metric based on three key criteria; an Evidential Reasoning (ER) based evaluation model that aggregates the resilience of various subsystems into an overall port resilience level; and a Global Sensitivity Analysis (GSA) technique, which employs Sobol sampling and Sobol sensitivity analysis to quantify the impact of both individual and combined disruptions. A series of disruptive scenarios, informed by historical failures and field investigations, is investigated to quantify port resilience at different levels and guide rational countermeasure development. The experimental findings highlight the impact of traffic congestion, yard crane incidents, and liner quay crane accidents on port resilience. It is also evident that the most significant disruptions are not formed by failures of individual components at a local level; rather, they interact with each other, causing ripple effects. This framework, for the first time to the authors’ best knowledge, offers crucial insights for bolstering long-term resilience in container port operations from an overall port systematic perspective. |
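For the GSA element, the Sobol workflow can be sketched with the SALib package: a Saltelli design is propagated through the model, and first-order and total-order indices are computed. The three-input toy resilience function below is a stand-in for the SD simulation, with assumed coefficients and an interaction term to mimic the ripple effect.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical disruption-intensity inputs to a port-operations model.
problem = {
    "num_vars": 3,
    "names": ["traffic_congestion", "yard_crane_outage", "quay_crane_outage"],
    "bounds": [[0, 1], [0, 1], [0, 1]],
}

def port_resilience(x):
    """Stand-in for the SD simulation and resilience metric: resilience
    degrades with each disruption and through an interaction term."""
    c, yc, qc = x
    return 1.0 - (0.30 * c + 0.20 * yc + 0.25 * qc + 0.40 * yc * qc)

X = saltelli.sample(problem, 1024)             # Sobol (Saltelli) design
Y = np.apply_along_axis(port_resilience, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))   # first-order indices
print(dict(zip(problem["names"], Si["ST"])))   # total-order, incl. interactions
```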
11:30 | Accident scenarios for safety risk management of ammonia fuelled ships PRESENTER: Marta Bucelli ABSTRACT. Ammonia is deemed to be a promising fuel to reduce carbon emissions from shipping, as well as a viable alternative solution as a global hydrogen carrier. Several initiatives are ongoing to demonstrate the use of ammonia in fuel cells and internal combustion engines for use on offshore vessels. While the interest in ammonia increases, so do the concerns regarding its safety. Ammonia is highly toxic to humans and to marine life and, at certain concentrations, when mixed with air, could explode if ignited. Although ammonia has been safely transported as a chemical and fertilizer for decades, it has been stored in dedicated carriers. A limited number of transfer and handling operations have been performed in this time, and those were carried out by highly trained crews and operators. The potential large-scale implementation of ammonia in the maritime environment and its handling by different users introduce emerging risks and a potential for stricter requirements. This work presents a bibliographic approach for the definition of accidental scenarios for safety risk management of ammonia fuelled offshore vessels and ammonia carriers. A screening of historical accidental events potentially resulting in ammonia releases is performed, and a statistical analysis of the causes and consequences of the relevant events is provided to support tailored and effective risk management. |
11:45 | Decision Model for Marine Hose Spare Parts: A Case Study of Offshore Terminals on the Brazilian Coast PRESENTER: Rodrigo José Pires Ferreira ABSTRACT. Oil and its derivatives play a crucial strategic role in Brazil and globally. The transfer of these hydrocarbons between vessels and maritime terminals is a high-risk activity due to the potential for leaks and environmental damage. Effective management of physical assets, including the proper handling of spare parts for maintenance, is essential. Maritime hoses are among the critical spare parts for offshore terminals. While extensive research exists on their mechanical strength, construction materials, and computer simulations, there is a lack of studies on determining the optimal inventory levels for these components. This work introduces a decision support model designed to optimize the inventory of spare maritime hoses and demonstrates its application in offshore terminals along the Brazilian coast, which serve regions with millions of residents. The model was user-friendly and significantly mitigated the economic, social, and environmental risks for the company. Socially, it reduced the risk of oil and derivative shortages, which could otherwise increase food costs and impact logistics reliant on diesel trucks. Environmentally, it lessened the risk of accidents by improving the reliability of oil transfer lines that extend across vast stretches of sea. Economically, the model led to an estimated reduction of approximately R$ 20 million in fixed asset investments and a R$ 5 million annual decrease in inventory maintenance costs. Additionally, it minimized potential lost profits and improved budget forecasts for future years. The optimized inventory levels reduced the risk of stockouts by over 22% for each type of hose. |
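A minimal sketch of one common way such a spare-parts decision can be framed, assuming Poisson-distributed hose failures during the replenishment lead time and a target service level; the paper's actual decision model is not reproduced here, and all numbers are illustrative.

```python
from scipy import stats

def min_stock_for_service_level(failure_rate_per_year, lead_time_years,
                                target_service_level=0.98):
    """Smallest spare-hose stock s such that P(demand <= s) during the
    replenishment lead time meets the target service level, assuming
    Poisson-distributed failures (an assumption, not the paper's model)."""
    demand = stats.poisson(failure_rate_per_year * lead_time_years)
    s = 0
    while demand.cdf(s) < target_service_level:
        s += 1
    return s

# e.g. 6 hose failures per year and a 4-month replenishment lead time
print("spare hoses to stock:", min_stock_for_service_level(6.0, 4 / 12))
```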
10:45 | Adapting High Reliability Management Framework for Enhancing Resilience in Autonomous Ships PRESENTER: Kwi Yeon Koo ABSTRACT. The increased integration of autonomous systems in maritime operations has advanced ship technology, but it also introduces new challenges for ensuring their safety and reliability. To address these challenges, this study explores the application of the High Reliability Management (HRM) framework. Rather than attempting to eliminate risks, HRM emphasizes maintaining reliable operations by continuously expanding and updating risk models to stay adaptive, even in the face of volatility, uncertainty, complexity, and ambiguity. An important aspect of HRM is its emphasis on the essential role of human operators in maintaining system safety through their ability to detect, interpret, and respond to emerging issues. Within HRM, resilience characteristics are essential as they reflect a system's capacity to adapt, recover, and maintain functionality under unexpected disruptions, often relying on the operators' expertise and decision-making capabilities to implement these characteristics effectively. This study aims to identify key resilience characteristics (RCs) specifically for the Remote Operation Centres (ROCs) of autonomous ships, addressing their specific challenges such as maintaining situation awareness, ensuring reliable communication, and enabling effective decision-making under dynamic conditions. By embedding these RCs within the HRM framework, this study leverages HRM's principles—such as anticipation, robustness, and recovery—to systematically strengthen ROCs' operational capacities. This alignment aims to provide a basis for improved response to unpredictable disruptions, enhanced coordination in multi-vessel operations, and reduced risk of system failures during critical operations. Ultimately, these advancements contribute to the safe and reliable design of autonomous ship systems, positioning ROCs as resilient hubs capable of managing complex and high-risk maritime environments. |
11:00 | Formalizing Testing of Collision Avoidance Systems using Signal Temporal Logic PRESENTER: Tom Arne Pedersen ABSTRACT. The development of Maritime Autonomous Surface Ships (MASS) necessitates the integration of intelligent automated systems, which requires comprehensive testing methodologies to ensure their capability and reliability. Simulation-based testing is an important tool for collecting evidence for validating the safety and reliability of Collision Avoidance (COLAV) systems, ensuring safe navigation. The International Regulations for Preventing Collisions at Sea (COLREG) provide a set of rules applicable to all ships, including remotely operated or autonomous ships, to prevent collision at sea. A significant challenge lies in transforming these formal, highly human-centric requirements into machine-readable acceptance criteria for efficient testing. This study explores formulating the COLREG rules using Signal Temporal Logic (STL) to create an evaluation framework tailored for simulation-based testing. The paper presents STL formulations of COLREG rules 2, 6, 8, 13, 14, 15, 16 and 17, with a particular emphasis on rule 17, which outlines the actions for the stand-on vessel. Rule 17 mandates that the stand-on vessel maintain its course and speed unless a collision becomes imminent, necessitating evasive action. The proposed approach offers a structured method for the automatic verification of COLAV systems against established international standards. In a simulated encounter, it is demonstrated how STL can be utilized to verify the compliance and effectiveness of collision avoidance algorithms in safety-critical situations. |
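To illustrate the idea of quantitative STL monitoring, the sketch below checks a simplified reading of rule 17 ("while the situation is not in extremis, the stand-on vessel keeps its course") on a simulated trajectory, using the standard robustness semantics (min over time for "always", max(-a, b) for implication). Thresholds and the trajectory are hypothetical, and this is not the paper's actual formulation.

```python
import numpy as np

def always(rho):               # G: worst-case robustness over the horizon
    return np.min(rho)

def implies(rho_a, rho_b):     # a -> b  ==  (not a) or b, quantitatively
    return np.maximum(-rho_a, rho_b)

# Simulated stand-on vessel trajectory (all numbers invented):
t = np.arange(0, 60.0, 1.0)                             # seconds
course_dev = np.abs(np.deg2rad(2.0) * np.sin(t / 20))   # rad, deviation from set course
rng_to_giveway = 800.0 - 5.0 * t                        # m, closing range

SAFE_RANGE = 300.0             # below this, collision deemed imminent (assumed)
COURSE_TOL = np.deg2rad(5.0)   # tolerated course deviation (assumed)

# "While not in extremis, keep your course":  G( (range > SAFE) -> |dev| < tol )
rho_premise = rng_to_giveway - SAFE_RANGE   # > 0 while situation not in extremis
rho_keep = COURSE_TOL - course_dev          # > 0 while course is kept
robustness = always(implies(rho_premise, rho_keep))
print("rule 17 robustness:", robustness, "-> pass" if robustness > 0 else "-> fail")
```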
11:15 | Mission Reliability Assessment for Autonomous Vehicles Considering the State Dependence of End-to-End Latencies PRESENTER: Luyao Wang ABSTRACT. As concerns about the reliability of autonomous vehicles (AVs) continue to rise, various statistical metrics have been developed to evaluate their long-term and average failure behaviors. However, these metrics often overlook the unique characteristics of AVs' specific mission performance. The AV system, equipped with an intricate computing system, is significantly influenced by end-to-end latencies, spanning from sensors to control signals. Previous research has focused on the impact of latencies within individual subsystems, particularly the control subsystem, without examining these impacts at the broader computing system level. Additionally, the state dependence of these latencies remains unexplored. This paper introduces a mission reliability assessment method specifically designed for AVs, considering the state dependence of end-to-end latencies. We propose a dedicated metric for mission reliability in AV systems, tailored to capture the features of end-to-end latencies. We apply a hidden Markov model to analyze the transition process of end-to-end latencies and estimate the mission reliability. The effectiveness of our method is validated through two numerical simulation cases, demonstrating its capacity for real-time evaluation and offering significant benefits for the online management and operational maintenance of AVs. |
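A rough sketch of the latency-state idea: a Markov chain over latency regimes (a simplification of the paper's hidden Markov treatment) is simulated to estimate the probability that the end-to-end latency never exceeds its budget over a mission. States, transition matrix, and budget are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latency regimes (ms) and state-dependent transition matrix.
states = np.array([30.0, 60.0, 120.0])      # nominal / loaded / congested
P = np.array([[0.90, 0.08, 0.02],
              [0.15, 0.75, 0.10],
              [0.05, 0.25, 0.70]])
DEADLINE_MS = 100.0                          # end-to-end latency budget
MISSION_STEPS = 500                          # control cycles per mission

def mission_reliability(n_missions=2000):
    """Monte Carlo estimate of the probability that the end-to-end
    latency never exceeds its budget during a whole mission."""
    ok = 0
    for _ in range(n_missions):
        s = 0                                # start in the nominal state
        for _ in range(MISSION_STEPS):
            s = rng.choice(3, p=P[s])        # state-dependent transition
            if states[s] > DEADLINE_MS:
                break                        # deadline miss: mission failed
        else:                                # no break: mission succeeded
            ok += 1
    return ok / n_missions

print("estimated mission reliability:", mission_reliability())
```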
11:30 | Challenges and opportunities in remote operations of automated passenger ferries identified using the CRIOP method PRESENTER: Jooyoung Park ABSTRACT. The launch of MF Estelle, the world’s first commercial autonomous passenger ferry, in Stockholm in 2023 has accelerated the need to establish a Remote Operation Centre (ROC) to manage multiple vessels with fewer human supervisors, reducing operational costs. This transition, driven by technological development and practical insights from MF Estelle’s operations, presents significant challenges - particularly in ensuring safety when replacing onboard human operators with remote systems. To address these challenges, the CRIOP method (Crisis Intervention and Operability Analysis) was applied for the first time to a ROC for autonomous ships, emphasizing Human Factors and a Human-Centred Design approach. This paper outlines MF Estelle’s current operations, explores potential ROC concepts and development phases, and presents the CRIOP workshop activities. During the workshop, MF Estelle’s operator shared his challenges and concerns regarding the ROC. Additionally, the checklist and scenario analysis identified key issues, including (1) conducting task analysis to support safer and more human-centred ROC design and ferry operations, (2) ensuring situational awareness (SA) for ROC operators using tools like alarms and CCTV, (3) designing effective communication between ROC, passengers, and VTS/emergency centres, and (4) mitigating critical scenarios such as fires on the ferry, fires in the ROC, and high-traffic collisions through robust design, training, and organizational measures. Finally, the paper proposes recommendations for human factor engineering and design to mitigate these challenges and support the safe and reliable operation of autonomous ferries. Key human factors questions addressed include: (1) Who will the remote operators be, and what will their responsibilities entail? (2) How will passengers be managed when the vessel is unmanned? (3) How will ROC operations handle emergencies or dangerous situations? |
11:45 | Towards real-time safety monitoring for autonomous inland waterway vessels: The SeaGuard tool PRESENTER: Panagiotis Katsos ABSTRACT. Autonomous operation has the potential to significantly enhance inland waterway transport by facilitating a shift to zero-emission propulsion and contributing to its competitiveness against alternative transport modes such as road and rail. Autonomous vessels integrate hardware, advanced digital and software systems, as well as humans-in-the-loop, and therefore constitute complex Socio-Technical Systems, whose safety can be affected by random faults as well as vulnerabilities to intentional cyber-attacks. Despite technological advancements that allow for crewless or remotely controlled vessels, autonomous or remote control needs to be enhanced with risk awareness to ensure that associated uncertainties can be managed in real-time, and that autonomous operation is both safe and resilient. To address these challenges, the EC-funded Horizon Europe project AUTOFLEX (AUTOnomous and small FLEXible vessels) develops the SeaGuard tool, which is intended to perform real-time monitoring and risk assessment given faults, unsafe system interactions, and cyber-security threats, with the aim of facilitating reversion to a safe state within a specified time window by proposing appropriate risk control measures in the form of decision support to operators and relevant stakeholders. To achieve this, SeaGuard integrates detection of anomalies, either in the form of cyber-attacks or faults, with real-time risk assessment and evaluation of candidate risk control measures. This paper describes the functions required for SeaGuard to accomplish its objectives, the approach that will be implemented for assessing the safety level, as well as a high-level overview of the supporting methodological framework. SeaGuard is expected to significantly contribute to the feasibility of autonomous operations in inland waterways and, by extension, to the competitiveness of this transport mode against land-based transportation. |
10:45 | Axtreme: A package for Bayesian surrogate modeling and optimization of extreme response calculation PRESENTER: Sebastian Winter ABSTRACT. Engineers often need to understand the long-term behaviour of complex models in stochastic settings, such as when performing Ultimate Limit State (ULS) calculations. The time/cost of running the models means that directly calculating the value of interest is often infeasible. Bayesian surrogate modelling with Design of Experiments (DOE) is one approach to this computational challenge and offers advantages over traditional methods such as environmental contours. However, despite its advantages, the adoption of Bayesian surrogate modelling with DOE in engineering has been limited, due in part to the technical expertise required to implement these methods. To address this, we have developed Rax, an open-source Python package extending state-of-the-art Bayesian optimisation frameworks for these types of engineering challenges. Rax enables engineers to build accurate surrogate models, compute Quantities of Interest (QOI), and conduct DOE to minimize uncertainty in these calculations efficiently. The package provides a flexible toolkit of ready-to-use functions, helpers, and tutorials, all built on top of robust, industrial-grade frameworks. By reducing the technical barriers to applying Bayesian surrogate modelling and DOE, this package makes advanced uncertainty quantification techniques more accessible, improving decision-making and design efficiency for engineers. In this paper, we demonstrate the application of Rax to a real-world engineering use case, showcasing its effectiveness in streamlining ULS calculations and enhancing decision-making under uncertainty. |
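The underlying adaptive-DOE loop can be illustrated generically with a Gaussian process surrogate that is refined where its predictive uncertainty is largest. This is a plain scikit-learn sketch, not the Rax API; the response function and all settings are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_response(x):
    """Stand-in for a costly structural response model."""
    return np.sin(3 * x) + 0.5 * x

# Initial design of experiments: a handful of expensive model runs.
X = rng.uniform(0, 3, size=(4, 1))
y = expensive_response(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
grid = np.linspace(0, 3, 200).reshape(-1, 1)

# Adaptive DOE: repeatedly evaluate the model where the surrogate is
# most uncertain, shrinking uncertainty in the quantity of interest.
for _ in range(8):
    gp.fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(std)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_response(x_next).ravel())

gp.fit(X, y)
_, std = gp.predict(grid, return_std=True)
print("max predictive std after adaptive DOE:", std.max())
```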
11:00 | Enhancing Robustness in Deep Material Networks by Incorporating Statistical Volume Elements PRESENTER: Abhinav Anil Khedkar ABSTRACT. The modeling of nonlinear materials with arbitrary microstructures is often addressed using multiscale techniques like FE². However, FE² becomes computationally prohibitive when applied to large-scale models, which are essential for accurately capturing the complex behavior of intricate material structures. To overcome this challenge, surrogate models have emerged as efficient alternatives, offering significant computational savings while maintaining accuracy in representing the material's microstructural behavior. One such promising approach is the Deep Material Network (DMN), a data-driven method that enhances computational homogenization by learning the microstructure topology without relying on micromechanical assumptions. This work focuses on the interaction-based DMN, which further extends its capabilities by drawing analogies from micromechanics to model interaction mechanisms within an arbitrary network architecture. This enables the network to generalize across a broad range of material behaviors. In this study, we extend the state-of-the-art DMN framework by addressing an often-overlooked aspect: the inherent uncertainty in microstructural features. Such uncertainties, arising from variability in the Representative Volume Element (RVE), significantly affect the macroscopic material properties. Incorporating these uncertainties into the model is critical for ensuring robust predictions. We propose a novel methodology that integrates Statistical Volume Elements (SVE) as a source of microstructural uncertainty. The DMN is trained using a hybrid approach, where both the network parameters and the model's robustness against microstructural variations are optimized simultaneously. Our approach ensures that the homogenized outputs are not only representative of the microstructure but also exhibit reduced sensitivity to inherent uncertainties. The trained DMN models are subsequently implemented in a Finite Element (FE) framework to assess their performance in practical simulations. This training paradigm facilitates the incorporation of various data sources, even those that exhibit uncertainty, thereby expanding the applicability of DMN in real-world scenarios. |
11:15 | Risk management strategies, solutions, and recommendations for bioplastics production challenges: A co-production initiative PRESENTER: Samuel Domingos ABSTRACT. Introduction: Bioplastics that are both bio-based and biodegradable are increasingly recognized as sustainable alternatives to traditional fossil-based plastics. Their production is expected to grow significantly in the coming years, provided that the barriers to their production and commercialization are effectively addressed. These include technological, knowledge, economic, regulatory, supply stability, and behavioural challenges. Method: With the goal of co-producing risk management strategies and potential solutions for the barriers currently hindering the production and commercialization of bioplastics, we conducted a series of semi-structured interviews and a collaborative workshop with key actors in the bioplastic industry, including industry professionals and researchers. Participants were invited to think about and engage in discussions aimed at identifying and overcoming these barriers. Results: Participants emphasized the need for conditions and funding schemes that promote multidisciplinary research collaborations and improve knowledge transfer between academia and industry. They highlighted the importance of lowering production costs through technological advancements, regulatory simplifications, and fair tax policies. Additionally, they identified the need for policy and regulatory tools that ensure competitive fairness and favour companies that adhere to best practices and disseminate accurate information. Participants also stated that for bioplastics to succeed, enhancing consumer awareness and trust is essential, which requires improving communication campaigns and ensuring that claims are supported by solid evidence. Lastly, they mentioned the importance of addressing the seasonality of raw materials and competition from other industries (e.g., pharmaceutical). According to them, this can be achieved through inter-agent collaborations, optimized resource management, and innovative strategies to tap into unexplored feedstock sources. Conclusion: Addressing challenges in bioplastics production and commercialization requires collaborative efforts from industry and academia. By fostering multidisciplinary research, enhancing consumer trust, and developing equitable tools, the bioplastics sector can unlock its potential and contribute to sustainability. These results and the challenges of co-producing risk management strategies will be discussed. |
11:30 | Robust Prognostics for Composite Structures Facing Unforeseen Impacts PRESENTER: Mariana Salinas Camus ABSTRACT. In recent years, Prognostics and Health Management (PHM) has gained prominence due to the increasing complexity of engineering systems and structures. Many of these systems lack comprehensive physical models that accurately describe their degradation processes, and there is limited engineering experience regarding their real-world operational behavior. Consequently, ensuring that predefined safety standards are met throughout the operational lifecycle has become critical. To train prognostic models, it is essential to gather data that captures the degradation process accurately. However, varying operational conditions can significantly impact degradation, underscoring the need for robust prognostic models capable of adapting to different operational scenarios and unexpected events. This work presents a novel adaptive prognostics model, the Adaptive Hidden Semi-Markov Model (AHSMM), designed to provide reliable Remaining Useful Life (RUL) predictions for engineering systems and structures under diverse loading conditions. Specifically, acoustic emission (AE) data from glass fiber-reinforced polymer (GFRP) specimens subjected to fatigue loading were used to train the model. To assess the robustness of the AHSMM, GFRP specimens were tested under the same fatigue conditions, but with the inclusion of multiple impacts simulating real-world phenomena such as hail. The AHSMM demonstrated high performance, and its robustness and predictive accuracy were compared against other state-of-the-art models, such as Long Short-Term Memory (LSTM) networks. |
11:45 | Automated Crack Detection Using Maximum Angular Multiscale Entropy and Machine Learning PRESENTER: Caique Emanuel da Silva Nunes ABSTRACT. To ensure the integrity of civil infrastructures, such as bridges, walls, and ceilings, it is essential to conduct regular inspections to identify cracks and fissures, preventing the progression of structural failures. The lack of proper monitoring can result in severe failures, posing substantial risks to public safety and causing economic impacts. However, inspections carried out by experts are often costly and labor-intensive. Despite advances in computer vision as a monitoring alternative for civil structures, the precision of these techniques still requires improvement. This work aims to contribute by using the Maximum Angular Multiscale Entropy (MAMSE) algorithm as a feature extraction technique. MAMSE represents an innovative approach to identifying patterns in complex systems by exploring directional factors such as flow and variations across multiple scales. Unlike traditional models, and even some that account for heterogeneity factors, which assume only isotropic patterns and thus limit classification accuracy, MAMSE captures anisotropic patterns, which are common in many phenomena. The method will be applied to a previously categorized dataset containing real images of civil infrastructures. After extracting the MAMSE features, these will be used as input to machine learning models for image classification. The performance of the models will be compared and evaluated using established metrics to assess the effectiveness of the proposed method in classifying images with and without cracks. The ability to detect anisotropic patterns is expected to yield more accurate and faster results, allowing more effective differentiation between images that do and do not contain cracks. |
10:45 | Synthetic Monitoring Data Generation for Fault Detection in Wind Turbines PRESENTER: Arthur Henrique de Andrade Melani ABSTRACT. The effective detection of faults in wind turbines is crucial to ensure their reliability and reduce downtime. However, the availability of real-world monitoring data representing various fault scenarios is often limited, making it difficult to test and validate fault detection algorithms. This paper presents a method for generating synthetic wind turbine monitoring data using OpenFAST, an open-source simulator developed by the National Renewable Energy Laboratory (NREL). The simulator is used to model the dynamic behavior of a wind turbine under both normal operating conditions and specific fault scenarios, such as rotor imbalance. By leveraging OpenFAST’s ability to simulate the physical response of a wind turbine to environmental conditions and mechanical faults, we can create a comprehensive dataset that mimics real-world monitoring data. This dataset covers various operating conditions, including different wind speeds and directions, enhancing the generalizability of the data for fault detection purposes. The generated data is intended to support the development and testing of fault detection tools, providing a benchmark for algorithms that rely on monitoring data to predict, detect, and diagnose failures in wind turbines. The synthetic dataset aims to fill the gap between theoretical models and real-world applications, facilitating the design of more robust and accurate fault detection methods. This study demonstrates the potential of using high-fidelity simulations for reliability analysis and underscores the value of synthetic data in advancing predictive maintenance strategies for renewable energy systems. |
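OpenFAST itself is out of scope for a short snippet, but the flavour of the generated data can be suggested with a toy signal model: broadband sensor noise plus a once-per-revolution (1P) harmonic whose amplitude grows with rotor imbalance. This numpy stand-in is not OpenFAST output, and all parameters are invented for illustration.

```python
import numpy as np

def simulate_vibration(rpm=12.0, fs=100.0, duration_s=60.0,
                       imbalance=0.0, seed=0):
    """Synthetic nacelle vibration: noise plus a 1P harmonic whose
    amplitude grows with rotor imbalance, and a small 3P blade-pass
    component. A crude stand-in for simulator output."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration_s, 1 / fs)
    one_p = rpm / 60.0                                   # rotor frequency, Hz
    signal = (0.05 * rng.standard_normal(t.size)         # sensor noise
              + imbalance * np.sin(2 * np.pi * one_p * t)      # 1P signature
              + 0.02 * np.sin(2 * np.pi * 3 * one_p * t))      # 3P component
    return t, signal

t, healthy = simulate_vibration(imbalance=0.0)
t, faulty = simulate_vibration(imbalance=0.15)
# A fault detection algorithm would look for excess energy at 1P in `faulty`.
```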
11:00 | A digital twin model for structural and environmental health monitoring of offshore wind turbines ABSTRACT. Offshore wind energy turbines, including both bottom-fixed and floating technologies, are exposed to environmental loads coming from different sources such as wind and wave forces, tides, temperature forces, and ice forces. Structural health monitoring (SHM) technologies offer unique opportunities to assess the structural integrity of offshore wind turbines in extreme climatic environments. In recent years, digital twin technology has become increasingly prominent in the offshore wind energy sector, revolutionizing the SHM of wind turbines. Digital twins, which are virtual replicas of physical assets, enable real-time data integration and simulation for continuous monitoring and predictive maintenance of wind turbines. Building a digital twin model for SHM of offshore wind turbines involves the integration of advanced sensors, IoT technologies, 5G or satellite communication, as well as powerful computational capabilities. This paper aims to outline the development of a digital twin model for structural and environmental health monitoring of offshore wind turbines in a step-by-step manner. The physical data obtained from various sensors (such as accelerometers, strain gauges, temperature sensors, etc.) is integrated with a digital finite element (FE) model to establish a normal operational profile for the wind turbine. Machine learning algorithms are then utilized to create a comprehensive digital twin model of the wind turbine. Finally, a user-friendly interface is developed for operators to visualize the remaining useful life of components and evaluate maintenance needs. The digital twin model offers numerous benefits, from optimizing turbine performance under varying environmental conditions to the early detection of faults, thereby reducing downtime and maintenance costs. |
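The "normal operational profile" step can be sketched as follows: a baseline distribution is learned from healthy multi-sensor data, and new snapshots are flagged when their Mahalanobis distance exceeds a quantile threshold. This is a generic stand-in for the FE-model-plus-machine-learning pipeline described above; the data are synthetic and the sensor channels are assumed.

```python
import numpy as np

def fit_normal_profile(X_normal):
    """Learn the normal operational profile from healthy sensor data."""
    mu = X_normal.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_normal, rowvar=False))
    return mu, cov_inv

def anomaly_score(x, mu, cov_inv):
    """Mahalanobis distance of a new sensor snapshot from the profile."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(3)
# Synthetic healthy data: e.g. strain, temperature, acceleration channels.
X_normal = rng.normal([0.0, 20.0, 5.0], [0.1, 1.0, 0.5], size=(1000, 3))
mu, cov_inv = fit_normal_profile(X_normal)

# Threshold set at the 99th percentile of healthy scores.
threshold = np.quantile(
    [anomaly_score(x, mu, cov_inv) for x in X_normal[:200]], 0.99)
new_snapshot = np.array([0.6, 24.0, 5.2])
print("fault flag:", anomaly_score(new_snapshot, mu, cov_inv) > threshold)
```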
11:15 | Seismic fragility curves for onshore wind turbines including effects of earthquake-induced landslides PRESENTER: Stefania Zimbalatti ABSTRACT. Onshore wind turbines are key components of green and sustainable energy infrastructure in many countries, which are built to exploit wind energy and turn it into electricity. To maximize input energy, onshore wind turbines are typically placed in open areas or on the crest of slopes in mountainous regions. Their location poses the need to analyze the geological hazards that affect the risk of these types of structures, such as earthquakes and landslides. Previous studies focused on the vulnerability of onshore wind turbines against wind and seismic ground motion only. Therefore, potential damaging effects of secondary earthquake events (such as landslides, soil fracture, and liquefaction) on wind turbines need to be investigated. This study presents a methodology for the development of fragility curves of wind turbines located on soil slopes subjected to earthquake-induced landslide hazard, accounting for damage due to slope instability on both underground electric power pipelines and the above-ground structure. Different damage states (DSs) are quantitatively defined through proper thresholds of engineering demand parameters that capture the structural response of the wind turbine at both local and global spatial scales. To that aim, a detailed finite element model of a benchmark wind turbine developed by the National Renewable Energy Laboratory was created in the OpenSees software. After that, a suite of ground motion records from strong earthquakes is selected, permanent slope displacement is predicted, and nonlinear dynamic response analysis of the wind turbine is carried out to calculate the seismic fragility of both the pipeline and the above-ground structure. Analysis results show a strong influence of slope geometry and soil properties on seismic fragility, demonstrating the impact that landslide hazard can have on the seismic risk of onshore wind turbines in addition to seismic ground shaking. |
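Fragility curves of the kind described are commonly parameterized as P(DS | IM) = Φ(ln(IM/θ)/β) and fitted by maximum likelihood from the binary damage outcomes of the dynamic analyses. A minimal sketch with invented analysis results (the specific fitting procedure in the paper may differ):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_fragility(im, damaged):
    """Fit a lognormal fragility curve P(DS | IM) = Phi(ln(IM/theta)/beta)
    by maximum likelihood from binary damage observations."""
    def nll(params):
        theta, beta = params
        p = np.clip(norm.cdf(np.log(im / theta) / beta), 1e-9, 1 - 1e-9)
        return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))
    res = minimize(nll, x0=[np.median(im), 0.5],
                   bounds=[(1e-3, None), (0.05, 3.0)])
    return res.x  # theta (median capacity), beta (dispersion)

# Hypothetical results: PGA levels (g) and observed exceedance of one DS.
im = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0])
damaged = np.array([0, 0, 0, 1, 0, 1, 1, 1])
theta, beta = fit_fragility(im, damaged)
print(f"median capacity = {theta:.2f} g, dispersion = {beta:.2f}")
```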
11:30 | Causal Intervention-Based GNNs for OOD Generalization in Fault Diagnosis of Wind Turbines PRESENTER: Xinming Li ABSTRACT. The safety and reliability of wind turbines are crucial in key industries. However, complex and dynamic operating environments often generate out-of-distribution (OOD) data, challenging traditional models that assume consistent distributions between training and test sets. This paper introduces a novel graph causal intervention method to tackle the challenge of OOD generalization under node-level distribution shifts. First, a hierarchical graph construction method is used to convert signal data into graph representations. The framework consists of two key modules: an environment estimator that infers pseudo-environment labels to reduce environmental impact, and a mixture-of-experts GNN predictor that dynamically selects appropriate graph neural networks to capture intricate node relationships, enhancing the model’s adaptability to changing environments. Using causal intervention techniques, the model emphasizes stable relationships between graph features and target node labels, effectively removing confounding bias and improving generalization to OOD data. Experimental results show the framework's superior performance in intelligent diagnostics and offer novel insights into fault detection and system reliability under OOD conditions. |
11:45 | Validation of Human Centred Bayesian Networks - Case Study on a Cable Cut of an Export Cable of an Offshore Wind farm PRESENTER: Babette Tecklenburg ABSTRACT. Bayesian networks (BN) are a commonly used method in the risk and reliability domain to assess the likelihood of certain scenarios or the resilience of an infrastructure. This study is conducted based on a literature review of Bayesian networks in the maritime domain by Animah et al. (Animah 2024). Even though about 78 journal papers on Bayesian networks in the maritime domain were published between 2018 and 2022, it is challenging to gather suitable data in the development process. Further, in some cases either a-priori or conditional probabilities are necessary, depending on the node. Acquiring conditional probabilities is especially challenging due to the requirement that a conditional probability needs to be defined for every state of the parent node(s). Following the guidelines of the German Research Foundation for Safeguarding Good Research Practice, validation must be carried out in order to assure quality (German Research Foundation 2019). In this work, we focus on the research question: how can Bayesian networks be validated? In order to answer this research question, as a first step, a literature review is performed to determine the status quo of current validation methods. In the second step, several validation methods such as benchmark exercises, formal walk-throughs, qualitative feature tests and sensitivity analysis are identified. As a final step, a few validation methods, such as the formal walk-through and the qualitative feature test, are applied in a case study. For this purpose, a BN is developed which describes a scenario where a dragging anchor cuts an export cable of an offshore wind farm. It is observed that, during the applied validation process, the studied scenario, the design of the BN and the implemented probabilities are all improved. Short references: Animah (2024): DOI: 10.1016/j.oceaneng.2024.117610. German Research Foundation (2019): https://www.dfg.de/resource/blob/174052/1a235cb138c77e353789263b8730b1df/kodex-gwp-en-data.pdf, last checked on 30.09.2024. |
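To show what a qualitative feature test can look like in practice, the sketch below builds a toy version of a dragging-anchor BN with pgmpy and checks that the cable-cut probability rises when an anchor drag is observed. The structure and all probabilities are placeholders, not the validated values from the study.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Illustrative structure: AnchorDrag -> CableCut <- CableBuried
model = BayesianNetwork([("AnchorDrag", "CableCut"),
                         ("CableBuried", "CableCut")])

# All probabilities below are invented placeholders.
cpd_drag = TabularCPD("AnchorDrag", 2, [[0.995], [0.005]])
cpd_buried = TabularCPD("CableBuried", 2, [[0.2], [0.8]])
cpd_cut = TabularCPD(
    "CableCut", 2,
    # Columns: (drag=0,buried=0), (0,1), (1,0), (1,1); rows: cut=no, cut=yes
    [[0.999, 0.9999, 0.60, 0.95],
     [0.001, 0.0001, 0.40, 0.05]],
    evidence=["AnchorDrag", "CableBuried"], evidence_card=[2, 2])

model.add_cpds(cpd_drag, cpd_buried, cpd_cut)
assert model.check_model()      # a first formal consistency check

# Qualitative feature test: P(cut) must rise when anchor drag is observed.
inf = VariableElimination(model)
print(inf.query(["CableCut"]).values)
print(inf.query(["CableCut"], evidence={"AnchorDrag": 1}).values)
```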
14:45 | Investigation of the Effect of Outliers using Sliced-Normal Maps for Stochastic Model Updating PRESENTER: Thomas Potthast ABSTRACT. Monitoring processes are important for assessing structural, dynamical systems over their lifetime. In this context, stochastic model updating techniques minimize the discrepancy between the computational model and measurement data by calibrating input model parameters. Herein, Sliced-Normal Maps (SNM) push the initial input distribution towards the updated one by reweighting the probability density functions (PDFs). The ratio used for this procedure is based on the initial output distribution and the unknown distribution of the response quantified by measurement data, where both are estimated using Sliced-Normal (SN) distributions. This estimation, performed through optimization, is affected by the supplied dataset. Measurement anomalies, i.e. outliers caused by faulty sensor behavior, can pollute this dataset, leading to faulty calibration and model predictions. This work investigates the effect of outliers on the SNM approximation of the updated model input distribution. Based on this, the SNM scheme is expanded to detect and eliminate outliers. For this, the estimation of SNs is reformulated to consider a tighter subset of data in the optimization procedure. The investigation is performed on a numerical example without measurement noise, where artificial outliers are inserted into the dataset. This is used to study the effect on the resulting updated input distribution using SNM and to compare both optimization schemes. |
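The reweighting ratio at the heart of this scheme can be illustrated with ordinary kernel density estimates standing in for the Sliced-Normal fits: input samples are weighted by the ratio of the measured-response density to the initial output density. A toy one-dimensional sketch with synthetic data; the SN estimation itself is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

def model(theta):
    """Cheap stand-in for the computational model response."""
    return theta + 0.1 * rng.standard_normal(theta.shape)

theta0 = rng.normal(0.0, 1.0, 5000)      # initial input samples
y0 = model(theta0)                        # corresponding model outputs
y_meas = rng.normal(0.8, 0.3, 200)        # measured responses (shifted)

# Reweight input samples by the output-density ratio q_meas(y) / q_init(y).
# Gaussian KDEs stand in for the Sliced-Normal estimates of the paper.
q_init = gaussian_kde(y0)
q_meas = gaussian_kde(y_meas)
w = q_meas(y0) / np.maximum(q_init(y0), 1e-12)
w /= w.sum()

print("initial input mean:", theta0.mean())
print("updated input mean:", np.sum(w * theta0))   # shifts towards ~0.8
```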
15:00 | Knowledge and uncertainty informed surrogate modelling via multi-tasks meta-learning PRESENTER: Yu Chen ABSTRACT. Uncertainties play a central role in typical computational pipelines that span data, model, and prediction, making calibration, validation and verification significant for many complex physical or engineering systems, especially for safety-critical engineering applications where detailed investigations into robustness are required for safe operation in extreme or uncertain environments. Key challenges include, for example, the availability of data, which can often be imprecise, sparse and expensive to obtain. Data-driven analytics (machine learning or deep learning) may be limited by a performance ceiling imposed by the quality and quantity of data. Domain knowledge of the physical processes is utilised in many ways as inductive biases in empirical, statistical or numerical modelling to compensate for the effects of data inefficiency. But the existence of epistemic uncertainty, due to limited knowledge of complex physical processes, again motivates the quantification of uncertainty for this source of information. An approach is therefore desired that is both uncertainty- and domain-knowledge-informed, capable of learning and making trustworthy predictions given scarce data. This study presents a novel approach which enables knowledge to be learnt across multiple tasks during training, and a distribution of predictor functions to be produced for prediction during extrapolation. Its superiority is best reflected in limited-data situations. |
15:15 | Verification of Bayesian Physics-Informed Neural Networks PRESENTER: Zhen Yang ABSTRACT. With the rapid advancement of machine learning technology, its applications are becoming increasingly vital across various critical systems and domains. However, the effectiveness of machine learning models heavily depends on high-quality data, which is often costly to obtain and affected by inherent uncertainty. To address this challenge, we propose a robust Bayesian physics-informed neural network (BPINN) that enables the analysis of limited datasets while incorporating uncertainty quantification, all while maintaining the physical interpretability of predictions. In this study, we develop a verification problem to systematically assess and verify the effectiveness and robustness of our approach, and we demonstrate its performance by predicting the fracture time of a steel alloy from a very limited dataset. |
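The essential construction, a data-fit loss joined to a physics-residual loss with an approximate posterior over the network, can be sketched compactly. The toy below uses Monte-Carlo dropout as a stand-in for full Bayesian inference and the ODE du/dt = -ku as stand-in physics; both are illustrative simplifications, not the authors' implementation:

```python
# A minimal PINN sketch with MC-dropout uncertainty on very few data points.
import torch

torch.manual_seed(0)
k = 1.0
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Dropout(0.1),
    torch.nn.Linear(32, 1),
)
t_data = torch.tensor([[0.0], [0.5], [1.0]])                 # "limited dataset"
u_data = torch.exp(-k * t_data) + 0.01 * torch.randn_like(t_data)
t_phys = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    opt.zero_grad()
    u = net(t_phys)
    du = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    loss_phys = ((du + k * u) ** 2).mean()                   # ODE residual
    loss_data = ((net(t_data) - u_data) ** 2).mean()         # data fit
    (loss_phys + loss_data).backward()
    opt.step()

# Keep dropout active at prediction time to sample an approximate posterior.
net.train()
samples = torch.stack([net(torch.tensor([[1.5]])) for _ in range(200)])
print("u(1.5):", samples.mean().item(), "+/-", samples.std().item())
```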
15:30 | Belief Reliability Modeling for Demand-Driven Uncertain Production Systems with Delay Effects PRESENTER: Waichon Lio ABSTRACT. This paper explores belief reliability modeling for demand-driven uncertain production systems with delay effects. In such systems, demand exhibits cyclical patterns, but real-world complexities introduce epistemic uncertainty, especially with limited data. Delay effects, where initial consumer responses differ significantly from later stages, further complicate demand dynamics. The study introduces a model incorporating delay effects to better predict inventory requirements and system capacity. By defining performance margin and belief reliability based on belief reliability theory, this paper provides a framework to assess system reliability, considering initial inventory, productivity, and demand fluctuations. The proposed model helps producers balance cost efficiency and reliability, ensuring adaptability to both initial demand surges and subsequent stabilization. Additionally, the paper derives analytical expressions for belief reliability and first hitting time of failure, offering practical tools for managing production risks and optimizing system performance under uncertainty. The results highlight the importance of delay effects in shaping demand patterns and system reliability, providing valuable insights for production planning and risk management. |
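For readers unfamiliar with the two central quantities, a standard formulation from belief reliability theory is sketched below; the concrete performance margin shown (initial inventory plus cumulative production minus cumulative demand) is an illustrative reading of the abstract, not necessarily the paper's exact model:

```latex
% Performance margin at time t: capacity minus cumulative demand,
% with initial inventory x_0, productivity p(s) and demand rate d(s).
m(t) = x_0 + \int_0^t p(s)\,\mathrm{d}s - \int_0^t d(s)\,\mathrm{d}s

% Belief reliability: the uncertain measure \mathcal{M} of a positive margin.
R_B(t) = \mathcal{M}\{\, m(t) > 0 \,\}

% First hitting time of failure.
\tau = \inf\{\, t \ge 0 : m(t) \le 0 \,\}
```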
15:45 | Enhanced Prediction of Remaining Useful Life with Uncertainty PRESENTER: Weijun Xu ABSTRACT. Accurately predicting the Remaining Useful Life (RUL) of industrial systems is crucial for maintaining smooth operations and ensuring safety. Although various prognostic models have been developed, significant challenges persist in their practical application. While many models achieve high accuracy on the training and test data used to develop them, they often do not adequately quantify the uncertainty associated with their predictions in the field; this uncertainty is fundamental to the confidence that can be placed in the prognostic outcomes that drive subsequent decision making. This paper presents the development of novel uncertainty-aware methods for RUL prediction. The contributions of this work include: (a) proposing a data-driven framework for RUL prediction that quantifies uncertainty and provides adaptive confidence intervals under single fault modes and operating conditions; (b) addressing epistemic and aleatoric uncertainties in scenarios involving multiple fault modes and operating conditions; (c) investigating RUL prediction and uncertainty quantification in scenarios lacking run-to-failure data and explicit RUL labels; and (d) exploring how uncertainty at the component level impacts system-level predictions, proposing methods to manage uncertainty propagation for a comprehensive assessment of system health. Case studies on turbofan engines and lithium-ion batteries are considered. |
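One common route to the adaptive confidence intervals mentioned in contribution (a) is quantile regression. The sketch below fits three quantile models on synthetic health-indicator data; it is an illustrative baseline, not the paper's framework:

```python
# A minimal sketch of RUL prediction intervals via quantile regression.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))                          # sensor health indicators
rul = 100 * (1 - X[:, 0]) + rng.normal(0, 5 + 10 * X[:, 1])   # heteroscedastic noise

models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, rul)
    for q in (0.05, 0.5, 0.95)
}
x_new = np.array([[0.3, 0.8, 0.5]])
lo, med, hi = (models[q].predict(x_new)[0] for q in (0.05, 0.5, 0.95))
print(f"RUL about {med:.1f} cycles, 90% interval [{lo:.1f}, {hi:.1f}]")
```

Because the noise scale depends on the inputs, the resulting interval widens where the data are noisier, which is the "adaptive" property the abstract refers to.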
14:45 | Empirical Evidence on the Role of Infrastructure Vulnerability on HILP Outcomes (for special session on Infrastructure and Community Resilience) PRESENTER: Arka Bhattacharyya ABSTRACT. Fueled by climate change and ever more interconnected systems, our societies are increasingly confronted with catastrophic events that have a low probability of occurrence. These high-impact low-probability (HILP) events are typically characterized by limited foresight, long recurrence periods, and an extreme potential for devastation. Because these events have historically been so rare, they have remained largely unexplored, and conventional risk management is not designed to account for such ‘outliers.’ Much research has been dedicated to understanding the vulnerabilities that drive disaster damage. However, the patterns or factors that turn hazards into disasters across different contexts – especially for low-probability events – are yet to be determined. To enhance resilience against HILP events, we conducted a stakeholder survey in which 104 experienced practitioners, who have managed different types of disasters in 180 countries, identified key determinants of how a HILP event unfolds. Vulnerable infrastructure was identified as a factor that significantly influences HILP outcomes. In this presentation, we will provide empirical evidence on how vulnerable infrastructure contributes to HILP outcomes based on historical data. We are gathering data on different HILP and non-HILP events worldwide and creating control and treatment sets based on infrastructure vulnerability. Propensity-score-based methods are used to balance confounding factors such as exposure, social vulnerability and existing coping capacities, which must be controlled for to establish a causal link between infrastructure vulnerability and HILP outcomes. Furthermore, we will present results quantifying how different levels of infrastructure vulnerability led to different HILP outcomes. These findings could help strengthen the case for enhancing infrastructure resilience in the face of accelerated climate change. |
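To make the balancing step concrete, the sketch below applies inverse-probability-of-treatment weighting with a logistic propensity model on synthetic event data; the estimator choice and all variables are illustrative assumptions, not the study's dataset:

```python
# A minimal sketch of propensity-score weighting to balance confounders
# between "vulnerable-infrastructure" (treatment) and control events.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
confounders = rng.normal(size=(n, 3))   # e.g. exposure, social vulnerability, coping capacity
treated = rng.binomial(1, 1 / (1 + np.exp(-confounders[:, 0])))   # vulnerability flag
outcome = 2.0 * treated + confounders[:, 0] + rng.normal(size=n)  # HILP impact score

# Propensity scores and inverse-probability-of-treatment weights.
ps = LogisticRegression().fit(confounders, treated).predict_proba(confounders)[:, 1]
w = treated / ps + (1 - treated) / (1 - ps)

ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print("weighted treatment-effect estimate:", round(ate, 2))
```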
15:00 | Beirut’s Port Explosion: a Case of Disaster Response and Recovery ABSTRACT. The explosion at the Port of Beirut on August 4, 2020, was one of the most severe non-nuclear blasts ever recorded, causing extensive destruction and exacerbating Lebanon’s ongoing political and economic crisis. This paper examines the systemic deficiencies that contributed to the disaster, with a particular focus on inadequate regulatory oversight and the absence of a comprehensive disaster management plan. The response was marked by governmental inaction and political instability, which significantly hindered recovery efforts and delayed aid distribution, further eroding public trust in state institutions. In contrast, community-led initiatives and non-governmental organizations played a pivotal role in providing immediate assistance, demonstrating the resilience of local populations in the absence of a coordinated reconstruction strategy. This paper analyzes the systemic failures that contributed to the Beirut Port explosion and discusses the broader implications for governance, urban resilience, and disaster risk management in politically and economically fragile states. These findings highlight the urgent need for governance reforms and transparent crisis management frameworks to enhance resilience and mitigate the risk of future catastrophes. |
15:15 | Preparing for the worst? Critical Infrastructures Stress Testing: A Systematic Review and Design Framework PRESENTER: Ali Cheshomi ABSTRACT. Our societies are increasingly facing High-Impact Low-Probability (HILP) events, such as Hurricane Katrina, the war in Ukraine, and Covid-19. Although rare and uncertain in occurrence, these events have significant impacts on society and infrastructure. HILP events can arise from a triggering hazard's magnitude or from compound and cascading effects that amplify one or more triggers. Additionally, the growing complexity and interdependency of Critical Infrastructures (CIs) has increased the risk of systemic failures during disruptive events, leading to greater societal losses than in the past. Stress Testing (ST) methodologies, initially developed for the financial and nuclear sectors, are widely used to understand how systems react to various levels of disruption. Given the increasing consequences of disruptions and the ever-increasing interdependencies of CIs, the application of ST has expanded to the realm of CIs, where it has recently emerged as a key field of study. Yet, it remains unclear to what extent HILP events and the interdependencies of CIs have been addressed by the ST methodologies developed so far. In this study, we systematically review academic research on ST methods applied across terrestrial CIs—including the chemical and nuclear sectors, transportation, water, food supply chains, information and communications technology, and energy—to identify the key shortcomings of current ST methodologies. The review reveals that conventional risk-based ST methods, which primarily focus on the vulnerabilities of individual components using probabilistic approaches, are inadequate for addressing HILP events. Instead, there is a need to move toward the concept of resilience, as it favours a more threat-agnostic approach by assessing the interdependencies and structure of the system rather than focusing on the vulnerabilities of its components. From this review, we propose a conceptual framework for developing ST methodologies for CIs in the context of HILP events, serving as a foundation for enhancing CI resilience. |
15:30 | A Computation-Free Method for Prioritizing Stress Tests for Resilience Assessment of Transportation Systems PRESENTER: Hossein Nasrazadani ABSTRACT. Transportation systems are crucial for economic stability and growth but are increasingly vulnerable to disruptions from extreme events, leading to significant socio-economic impacts. Stress testing has emerged as a valuable diagnostic approach for assessing the resilience of systems. A stress test, in the context of transport systems, is a set of one or more hypothetical scenarios designed to assess the adequacy of the provided service under various potentially disruptive conditions. To achieve a comprehensive resilience assessment, several stress tests need to be conducted. However, for complex systems, conducting all potential stress tests using simulations is computationally prohibitive. To address this challenge, this study proposes a novel method to prioritize stress tests without running additional simulations. Using an innovative implementation of importance sampling, the proposed method leverages results from an initial reference risk assessment to estimate the potential impact of stress tests. By selecting resampled subsets that mimic specific stress test conditions, it can identify the tests likely to have the greatest impact on system risks. The stress tests that show a higher potential for increasing risks are then prioritized for more detailed simulation. Applied to a Swiss road network facing extreme rainfall, flooding and landslide scenarios, this methodology enables infrastructure managers to efficiently screen and rank stress tests, focusing resources on those that are more likely to lead to a disproportionate increase in risks. By narrowing down the list of stress tests, this approach reduces computational demands while providing actionable insights into which parts of the system can have the greatest impact on improving its resilience. |
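The core reuse-without-resimulation idea can be pictured with a one-dimensional toy: samples and losses from a reference risk run are reweighted by a likelihood ratio to approximate a stress condition. The distributions and loss model below are illustrative assumptions, not the Swiss case study:

```python
# A minimal sketch: estimate risk under a hypothetical stress condition by
# reweighting pre-computed reference-run samples (no new simulations).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rain = rng.normal(50, 10, size=20000)          # rainfall sampled in the reference run
loss = np.maximum(0, rain - 60) ** 2           # pre-computed network loss per sample

ref_pdf = norm(50, 10).pdf(rain)
stress_pdf = norm(65, 10).pdf(rain)            # stress test: wetter climate scenario
w = stress_pdf / ref_pdf                       # importance weights

risk_ref = loss.mean()
risk_stress = np.average(loss, weights=w)      # self-normalised IS estimate
print(f"reference risk {risk_ref:.1f} -> stressed risk {risk_stress:.1f}")
```

Ranking candidate stress tests by the estimated risk increase then identifies which ones merit full simulation.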
15:45 | Resilience Analysis in the Wake of COVID-19: Insights from Bayesian Modeling PRESENTER: Aishvarya Kumar Jain ABSTRACT. The current world has experienced a profound shift from risk analysis to resilience analysis, a transition underscored by the recognition that resilience encompasses more than just a system's response to threats. It also provides critical insights into preparedness for future events and the recovery processes that follow. The recent COVID-19 pandemic has profoundly impacted global societies, illustrating the vulnerabilities within our systems and the need for enhanced resilience. For over three years, communities worldwide faced unprecedented challenges, highlighting the necessity to evaluate the socio-technical resilience of our societies. Understanding how resilient we are against such threats is essential, and ensuring a swift recovery post-event is equally critical. In this paper, we demonstrate the applicability of Bayesian networks in modeling resilience and its various phases with respect to the pandemic. Unlike deep learning methods, which often rely solely on large datasets, Bayesian networks offer the unique advantage of incorporating expert knowledge alongside empirical data. This dual approach allows for a more nuanced understanding of resilience dynamics. We present a data-driven multilevel hierarchical Bayesian network that not only estimates and compares the different phases of resilience but also identifies and analyzes the underlying factors that influence each phase. To assess socio-technical resilience effectively, we utilized a German dataset (INKAR), which contains vital socio-economic indicators, including population, employment, education, and gender, at a community-level geographical resolution. This research aims to contribute to the growing body of knowledge on resilience, providing valuable insights that can inform policy and practice. The results quantify the resilience of individual counties and show their coping capacity concerning the pandemic over the past years. |
14:45 | Enhancing Prognostics Essentials: Reliability, Robustness, and Feasibility. ABSTRACT. Prognostics play a pivotal role in predictive maintenance (PdM) by forecasting the future health and performance of assets based on their current condition and operational context. To effectively apply prognostics in decision-making for PdM strategies, the methodologies must meet three key criteria: feasibility, robustness and reliability. Feasibility refers to the ability of the prognostic methodology to function with limited degradation data, as obtaining extensive degradation data can be cost-prohibitive. Robustness requires prognostics to maintain reliable performance across a wide range of operational conditions, including those not encountered during training. Lastly, reliability is crucial due to the inherent uncertainties in prognostics, which stem from various factors such as manufacturing variations or future loading conditions. To that end, for Remaining Useful Life (RUL) prognostics, the RUL should be treated as a random variable. Ensuring the reliability of RUL predictions involves aligning the mean RUL estimates with actual RUL values and providing accurate uncertainty estimations that offer decision-makers actionable insights. Driven by these requirements, this work introduces a novel adaptive similarity-based prognostic methodology inspired by Markov models. This approach is compared and validated against state-of-the-art methods, such as Long Short-Term Memory (LSTM) models, using both simulated and real data from the aerospace sector. |
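A similarity-based scheme of the general family named above can be sketched briefly: an observed degradation history is matched against run-to-failure trajectories, and the RUL is read off the nearest matches as a distribution. Everything below (trajectory model, distance, neighbour count) is an illustrative assumption, simpler than the paper's Markov-inspired adaptive method:

```python
# A minimal sketch of similarity-based RUL prognostics with an RUL
# distribution formed from the nearest library trajectories.
import numpy as np

rng = np.random.default_rng(0)

# Library: run-to-failure degradation trajectories (failure when health >= 1).
library = []
for _ in range(50):
    rate = rng.uniform(0.01, 0.03)
    traj = np.cumsum(rng.normal(rate, 0.005, size=200))
    life = int(np.argmax(traj >= 1.0))
    library.append((traj[:life], life))

# Test unit observed for 40 cycles.
test = np.cumsum(rng.normal(0.02, 0.005, size=40))

# Distance of the observed window to each library trajectory at the same age.
ruls = []
for traj, life in library:
    if life <= len(test):
        continue
    d = np.mean((traj[:len(test)] - test) ** 2)
    ruls.append((d, life - len(test)))
ruls.sort(key=lambda t: t[0])
best = np.array([r for _, r in ruls[:10]])     # RULs of the 10 nearest neighbours
print(f"RUL estimate: {best.mean():.0f} cycles (std {best.std():.0f})")
```

Treating the neighbour RULs as samples, rather than averaging them away, is what makes the RUL a random variable as the abstract advocates.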
15:00 | Improving fault diagnosis efficiency by integrating FMEA and FTA in a compact fault signature matrix PRESENTER: Lincoln Josue Arellano Ortega ABSTRACT. Complex systems have an integrated architecture that leads to non-trivial interdependencies between components. Any fault in such a system can impact other components, reducing system performance. While most existing methods can detect process abnormalities and component faults, they often fail to identify root causes. In response, this study presents a fault diagnosis framework based on domain-specific knowledge. The framework enhances root cause identification by leveraging expert insights, maintenance logs, and/or other documented knowledge. The proposed method integrates this knowledge through a structured approach. First, a Failure Mode and Effects Analysis (FMEA) is conducted to determine the most critical failure modes and associated fault symptoms for each component. Second, a Fault Tree Analysis (FTA) is used to reveal dependencies between components. The resulting information is used to construct an improved Fault Signature Matrix (FSM) that captures individual failures and their system-level dependencies. In this way, both experts and non-experts can use the tool to investigate failure causes after a malfunction is detected. The proposed methodology is applied to a ship propulsion system, providing information on the parameters required to diagnose the state of the system. |
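The resulting Fault Signature Matrix admits a very small illustration: with failure modes as rows and symptoms as columns, diagnosis reduces to finding the rows whose signatures cover the observed symptom vector. The component names and matrix entries below are hypothetical, not from the ship-propulsion case:

```python
# A minimal sketch of FSM-based diagnosis. Rows: failure modes (from FMEA,
# with FTA-derived propagated effects); columns: observable symptoms.
import numpy as np

symptoms = ["high_vibration", "low_thrust", "oil_temp_high"]
faults = ["bearing_wear", "propeller_damage", "lube_pump_fault"]
fsm = np.array([
    [1, 0, 1],   # bearing_wear: vibration + oil temperature (FTA dependency)
    [1, 1, 0],   # propeller_damage
    [0, 0, 1],   # lube_pump_fault
])

observed = np.array([1, 0, 1])   # symptom vector from condition monitoring

# Candidate root causes: faults whose signature covers every observed symptom.
candidates = [f for f, row in zip(faults, fsm) if np.all(row >= observed)]
print("consistent root-cause candidates:", candidates)
```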
15:15 | Overview and Analysis of Publicly Available Degradation Data Sets for Tasks within Prognostics and Health Management PRESENTER: Luca Steinmann ABSTRACT. The effectiveness of Prognostic and Health Management (PHM) methods relies on degradation data that reflect the health of engineering systems over time. In particular, publicly available degradation data sets are of high value. Despite their importance, these data sets are rarely discussed comprehensively in the literature, with existing overviews often limited in scope and detail. As a result, the search process for suitable data sets is often very time-consuming for users of PHM methods. Therefore, this work provides a comprehensive overview of 98 publicly available degradation data sets and conducts a novel, detailed, PHM-specific analysis. In order to carry out the analysis, a taxonomy is developed to categorize and classify the data sets based on defined PHM-specific aspects. The resulting taxonomy classifies the data sets across 38 applications in 11 domains. The analysis provides a comprehensive overview and entry point for selecting and using the available data sets. It shows that almost half of the data sets are from the domains of electrical and mechanical components, with battery and bearing applications being the most common. However, the analysis also reveals that the number of data sets is limited for many applications. |
15:30 | Expressive Power of the Figaro Modeling Language – a Tool for Model Based Safety Assessment PRESENTER: Pavel Krcal ABSTRACT. Model-based safety assessment brings dependability modeling closer to system engineering. Specification of a dependability model focuses on the correct instantiation of pre-defined component types, correct linking and system configuration. The code for pre-defined components contains logic capturing failure behavior, failure effects and failure propagation through the system. These components can be hard-coded by a provider of the model-based safety assessment tool, or the platform can allow (expert) users to specify component behaviors in a general, domain-agnostic modeling language. Depending on the features of the modeling language, this approach opens the platform to a multitude of applications; the limits are set by the features and constructs of the language. In this work, we study the expressive power of the Figaro modeling language used in the model-based safety assessment platform RiskSpectrum ModelBuilder (KB3). The language includes numerical variables and arithmetical operations; from the theoretical computer science perspective, it is Turing-complete. We therefore look at the complexity of expressing certain operations or constructs. We are interested both in the length of the Figaro model (how long and complicated an encoding of a certain functionality will be) and in the calculation complexity of the model (how many steps the simulator or fault tree generator must perform when a certain encoding of a function is used). The exploration focuses on practical features that might be desirable for analysts from aerospace, nuclear safety, defense, energy, etc. |
15:45 | Holistic Simulation Model of the Temporal Degradation of Rolling Bearings PRESENTER: Fabian Mauthe ABSTRACT. Data-driven diagnostic and prognostic methods for engineering systems, especially those employing machine learning, have gained prominence due to their reliance on data rather than physical system understanding. However, industrial applications often face challenges like unbalanced data distributions or limited data availability, as acquiring data is costly and time-intensive. Although some synthetic data sets and simulation models are publicly available, they often do not represent industry-relevant scenarios. Therefore, this work introduces a simulation model for generating representative run-to-failure data, focusing on rolling bearings. The model comprises three modules: the first determines the bearing life and fault type; the second simulates the degradation progression up to the point of failure; the third generates vibration signals reflecting operating conditions and bearing degradation. Each module is designed as a random process and reproduces the inherent variation of, for example, the life under a given load. As a novelty, the model simulates the vibration signals over the entire life of bearings. Furthermore, it is publicly available and can be used to generate arbitrary data. An initial data set is also published and publicly available. |
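The three-module architecture described above maps naturally onto a short sketch: sample a life and fault type, generate a stochastic degradation path, then synthesise a vibration snapshot whose fault signature grows with degradation. All distributions, frequencies and constants are illustrative assumptions, not the published model's calibrated values:

```python
# A minimal sketch of a three-module run-to-failure generator for a bearing.
import numpy as np

rng = np.random.default_rng(0)

# Module 1: random life (hours) and fault type.
life = rng.weibull(2.0) * 5000
fault = rng.choice(["inner_race", "outer_race", "rolling_element"])

# Module 2: stochastic degradation path from 0 (new) to 1 (failed).
t = np.linspace(0, life, 500)
severity = np.clip((t / life) ** 3 + rng.normal(0, 0.02, t.size).cumsum() * 0.01, 0, 1)

# Module 3: vibration snapshot at a given age (30 Hz shaft, 120 Hz fault tone).
def vibration(age_idx, fs=12000, dur=0.1):
    tt = np.arange(0, dur, 1 / fs)
    base = 0.5 * np.sin(2 * np.pi * 30 * tt) + 0.05 * rng.normal(size=tt.size)
    impacts = severity[age_idx] * np.sin(2 * np.pi * 120 * tt) ** 9  # impulsive tone
    return base + impacts

signal_late = vibration(age_idx=480)
print(f"{fault} bearing, life {life:.0f} h, late-life RMS {np.sqrt((signal_late**2).mean()):.3f}")
```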
14:45 | Explainable Decision Based on Machine Learning Methods in Reliability Analysis PRESENTER: Elena Zaitseva ABSTRACT. Artificial Intelligence (AI) and Machine Learning (ML) methods have become an integral part of investigations in Reliability Engineering. However, special requirements are imposed on results based on AI and ML methods, in particular, the explainability of the results obtained. A complex ML model is a “black box” hiding the mechanism by which the result is obtained. Methods that assess the influence of input parameters on the final result are used to turn the “black box” into a “white” or “grey” one; this direction of research is known as explainable AI (XAI). In this study, a method for developing the mathematical model of a system for reliability analysis based on uncertain data is proposed. The method allows the processing of both aleatory and epistemic uncertain data. The background of this study is a decision-tree-based approach, which is one of the possible approaches used in XAI. Unlike typical applications, the decision tree in this study is developed using fuzzy logic, which in many cases is effective for different types of uncertainty (aleatory and epistemic). The proposed method has been applied and validated in the healthcare domain, in particular, for the evaluation of medical team actions. This study was supported by projects APVV-23-0033 and VEGA-1/0090/25. |
15:00 | The reliability evaluation and selective maintenance decision for lattice systems PRESENTER: Cong Lin ABSTRACT. A phased-array radar composed of T/R units arranged on a two-dimensional surface is a typical lattice system. Its failure criterion can be described as the number of failed units within a subarea exceeding a threshold. An exact reliability evaluation method for such a system is not easy to obtain; thus, we propose a reliability lower-bound evaluation method. We further provide an importance-based principle to identify the weakest area in which to conduct selective maintenance. Numerical examples are provided to show how to use the proposed method. |
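The failure criterion is easy to state in code. The brute-force Monte Carlo baseline below estimates the reliability that the paper bounds analytically; the grid size, window size, threshold and unit failure probability are arbitrary illustrative choices:

```python
# A minimal sketch: Monte Carlo reliability of a lattice system that fails
# if any k-by-k subarea contains more than `thresh` failed T/R units.
import numpy as np

rng = np.random.default_rng(0)
rows, cols, k, thresh, p_fail = 10, 10, 3, 2, 0.05

def system_survives(grid):
    # Slide a k x k window; the system fails if any window has > thresh failures.
    for i in range(rows - k + 1):
        for j in range(cols - k + 1):
            if grid[i:i + k, j:j + k].sum() > thresh:
                return False
    return True

trials = 20000
ok = sum(system_survives(rng.random((rows, cols)) < p_fail) for _ in range(trials))
print(f"estimated reliability: {ok / trials:.4f}")
```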
15:15 | The decision of the product’s performance margin and reliability: an evolutionary game perspective PRESENTER: Bo-Yuan Li ABSTRACT. In product trading, the buyer generally wants a product with higher reliability, which increases the seller’s cost. As a result, a buyer-seller game about the product’s reliability arises. Most existing studies introduce only a general notion of “reliability level” when modeling the buyer-seller game, without further exploring the meaning and origin of reliability. Therefore, they cannot effectively describe and forecast the seller’s actions and the buyer’s appeals that influence the product’s reliability. According to its basic definition, reliability evaluates whether the function achievement of a product can meet its requirement, which can be further quantified by specific performance. The product’s reliability thus derives from the product’s performance supplied by the seller and the buyer’s requirement, and can be calculated as the probability that the product’s performance margin is greater than 0. On this basis, this work proposes a buyer-seller game model for the product’s performance margin and reliability, aiming to predict and guide the buyer’s and seller’s behaviors. In this game, the buyer’s strategy is the performance threshold representing the requirement for the product, and the seller’s strategy is the product’s performance. In the game model, the performance threshold profit, the performance cost, and the failure risks are introduced to quantify the payoffs of both sides. Evolutionary game theory is adopted to model the dynamics of strategy evolution and to solve for the stable strategy. Finally, the phase transition of different game results with changes in payoffs is clarified. The case of a two-strategy game indicates that the proposed method can predict both sides’ actions under different payoffs; guidance can also be provided to promote buyer-seller cooperation. |
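For the two-strategy case, the evolutionary dynamics are typically written as replicator equations. The sketch below iterates them for a hypothetical payoff structure; all numbers are invented for illustration and do not come from the paper:

```python
# A minimal sketch of two-strategy replicator dynamics for a buyer-seller game.
# Seller strategies: high vs low performance; buyer: strict vs lenient threshold.
import numpy as np

A = np.array([[3.0, 4.0],    # seller payoff: high performance (costly, low failure risk)
              [1.0, 5.0]])   # seller payoff: low performance (cheap, penalised by strict buyers)
B = np.array([[4.0, 2.0],    # buyer payoff rows: strict vs lenient threshold
              [1.0, 3.0]])

x, y = 0.5, 0.5              # population shares playing (high performance, strict threshold)
dt = 0.01
for _ in range(20000):
    fx = np.array([y * A[0, 0] + (1 - y) * A[0, 1],     # seller strategy fitnesses
                   y * A[1, 0] + (1 - y) * A[1, 1]])
    fy = np.array([x * B[0, 0] + (1 - x) * B[1, 0],     # buyer strategy fitnesses
                   x * B[0, 1] + (1 - x) * B[1, 1]])
    # Replicator update: growth proportional to fitness above the population mean.
    x += dt * x * (fx[0] - (x * fx[0] + (1 - x) * fx[1]))
    y += dt * y * (fy[0] - (y * fy[0] + (1 - y) * fy[1]))

print(f"evolutionarily stable shares: seller-high={x:.2f}, buyer-strict={y:.2f}")
```

Sweeping the payoff entries and recording where the stable shares jump reproduces, in miniature, the phase-transition analysis the abstract describes.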
15:30 | Radiant cooktop reliability study – accelerated life test PRESENTER: Alberto Miele ABSTRACT. Cooktops are generally very difficult to characterise in terms of usage prediction, owing to the highly frequent real-time interaction between customer and appliance. Hobs usually have four zones with around ten power levels each and are operated on demand, resulting in a high variation of load profiles for the key components, such as the radiant heater. Connected, cutting-edge induction hobs make it possible to derive the usage of radiant hobs as well, representing the real usage of customers in the field across different regions. Operating-time profiles for radiant hobs and their sub-components were created with the help of these data. A radiant hob supplies electrical energy to a radiant heater, in which a ribbon wire converts the electrical energy into thermal energy; the generated thermal energy is proportional to the absorbed electrical energy, following Joule's law. Since the radiant heater is an ohmic load, an increased supply voltage allows more current to flow, causing the power to increase quadratically with voltage. The most common failure mechanism is burning of the ribbon, due to uneven watt-density distribution caused by degradation of the material. In experiments with increased voltage levels, the damage accumulation could be accelerated while following the same failure mechanism. The agile test response time and re-evaluation of the mission profile enhance qualitative reliability predictions and improvements in the development of radiant heaters in general. |
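Because the load is ohmic, raising the supply voltage raises power quadratically (P = V²/R), which is what makes voltage a convenient stress variable. A minimal arithmetic sketch, assuming an inverse-power-law life model with an invented exponent and invented voltages:

```python
# A minimal sketch of voltage-accelerated life test arithmetic under the
# inverse power law. Exponent n and both voltages are illustrative values.
V_use, V_test, n = 230.0, 260.0, 4.0        # field voltage, test voltage, IPL exponent

AF = (V_test / V_use) ** n                  # acceleration factor: life_use / life_test
print(f"power ratio (ohmic load): {(V_test / V_use) ** 2:.2f}x")
print(f"acceleration factor: {AF:.2f} -> 1000 test hours represent about {1000 * AF:.0f} field hours")
```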
15:45 | Extracting Reliability and Maintenance Knowledge from Maintenance Reports of Freight Transport Trains: a Methodology for Annotation based on Ontology and SpERT PRESENTER: Dario Valcamonico ABSTRACT. We consider the problem of extracting information from repositories of maintenance reports of freight transport trains, aiming to identify factors influencing malfunctions and failures, and to assess the effectiveness of maintenance activities. We propose a methodology for automatically annotating maintenance reports, which involves assigning semantic labels to the words of the reports and identifying the relations between them. The conciseness of the texts and the extensive use of technical language pose significant challenges, which are overcome by combining an industrial maintenance ontology with the Span-based Entity and Relation Transformer (SpERT) method. Specifically, SpERT is fine-tuned in two stages: initially on a large dataset of maintenance reports from other industrial sectors, and then on a limited number of manually annotated maintenance reports of electrical freight transport trains. The obtained results show that the proposed methodology successfully identifies entities and relations in maintenance reports of freight transport trains. |
14:45 | A Study of Maintenance as a Cause of Incidents in the Norwegian Petroleum Industry PRESENTER: Eirik Duesten ABSTRACT. Although much attention is given to revealing the causes of hazard and accident situations, the specific contribution of maintenance is not often investigated or reported. The purpose of this study was to investigate to what extent maintenance is considered to contribute, either directly or indirectly, to hazard and accident situations in the Norwegian petroleum industry. Our goal was not to quantify the extent to which maintenance contributes to hazard and accident situations, but to better understand how the stakeholders in the Norwegian petroleum industry regard maintenance as a cause of such situations. The study is based on a project undertaken for the Norwegian Ocean Industry Authority; the complete report is publicly available in Norwegian. To investigate the above, an assessment of incident investigation reports (including anonymous summaries from such reports) was carried out, and information was collected from operators of onshore and fixed offshore facilities in Norway. The information from the operator companies was collected by means of a question set and group discussions with personnel involved in maintenance management, as well as personnel from technical safety and HSE. Based on the study, we see potential for better systems and routines to investigate maintenance as a cause of hazard and accident situations. Since there are few systematic processes in the operator companies to investigate maintenance as a cause of hazard and accident situations, it is challenging for them to draw lessons that would improve maintenance management and reduce the number and severity of hazard and accident situations in the Norwegian petroleum industry. With a broader view of maintenance and maintenance-related activities, more hazard and accident situations were seen to have maintenance-related causes. |
15:00 | Developing a standard coding system for spare parts management to enhance operational efficiency and reliability PRESENTER: Leonardo Marrazzini ABSTRACT. In the present paper, we introduce a new classification system for spare parts within a medium to large-sized company operating in the packaging sector. This system was designed with two main objectives: to establish a well-structured product hierarchy with clearly defined technical attributes and to implement a "speaking code" system that simplifies identification. To develop this framework, we first analyzed historical Enterprise Resource Planning (ERP) data, an integrated system designed for managing business processes. We then refined the classification iteratively, ensuring that interchangeability was preserved through critical parameters. The process also incorporated multiple validation steps, gathering feedback from technical experts and international stakeholders to enhance accuracy. A key innovation of this approach is the introduction of the "speaking code," an alphanumeric string that systematically encodes both hierarchical and technical data. This eliminates reliance on free-text descriptions and ensures a structured, standardized method for classification. The code follows a three-tier structure - Category, Family, and Group - where each group is assigned essential technical characteristics, allowing for more precise classification. The first application of this system focused on the "Belt" family, demonstrating clear improvements in the organization of spare part data. The revised hierarchy introduced more detailed differentiation within groups, addressing inconsistencies found in the previous system. Looking ahead, future enhancements may include the integration of Artificial Intelligence (AI) and machine learning for automated classification, as well as efforts to standardize the system internationally to align with global operational needs. While this new classification system represents a significant improvement, its long-term success will depend on continuous optimization based on real-world feedback and evolving business requirements. |
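The three-tier "speaking code" concept can be illustrated with a tiny encoder/decoder; the segment names, attribute fields and the Belt example below are hypothetical stand-ins, not the company's actual scheme:

```python
# A minimal sketch of a Category-Family-Group "speaking code" that packs
# hierarchical and technical data into one parseable alphanumeric string.
CATEGORIES = {"ME": "Mechanical"}
FAMILIES = {"BE": "Belt"}
GROUPS = {"TB": "Timing belt"}

def make_code(category, family, group, width_mm, length_mm):
    return f"{category}-{family}-{group}-W{width_mm:03d}L{length_mm:04d}"

def parse_code(code):
    cat, fam, grp, attrs = code.split("-")
    return {
        "category": CATEGORIES[cat],
        "family": FAMILIES[fam],
        "group": GROUPS[grp],
        "width_mm": int(attrs[1:4]),     # technical attributes preserve
        "length_mm": int(attrs[5:9]),    # interchangeability criteria
    }

code = make_code("ME", "BE", "TB", width_mm=25, length_mm=1500)
print(code)              # ME-BE-TB-W025L1500
print(parse_code(code))  # structured data, no free-text description needed
```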
15:15 | Choosing the Optimal Maintenance Policy Considering Cost, Quality of Service and Environmental impact. Application to a bike and a fleet of bikes PRESENTER: Gérôme Moroncini ABSTRACT. Selecting the optimal maintenance policy requires balancing multiple criteria, such as cost, quality of service and environmental impact. This work applies a structured approach to determining the best maintenance strategy, focusing on a single bike and a fleet of bikes, using quantitative methods and decision-making tools. The approach consists of 8 steps: system definition, needs analysis, criteria selection, identification of maintenance options and their evaluation against the selected criteria, selection of the most appropriate multicriteria decision-aid method, application of the method, and discussion of the results. For the bike and the fleet of bikes, the model focuses on evaluating the quality of service and the environmental and economic impacts of several components of the bike: the tyres, the battery, the engine and the brakes. The reliability of these components is assumed to follow a Weibull distribution, while their environmental and economic impacts are evaluated in kgeqCO2 and euros (€), respectively. A Monte Carlo simulation is run in Python, evaluating the performance of 4 maintenance policies for a single bike and 24 for a fleet of bikes, generated from 4 parameters: the number of weeks between two maintenance sessions (2, 4 or 8 weeks), the proportion of the fleet maintained per maintenance session (one quarter, one eighth or one sixteenth), tyre repair by the user or not, and replacement of the bike every five years or not. The results obtained from the simulations are analysed with two multicriteria decision-aid methods, ELECTRE and AHP, permitting the comparison of an outranking and an aggregation method. This framework makes it possible to choose the optimal maintenance strategy considering quality of service and economic and environmental impacts for each considered case. The work also shows the value of simulation in generating non-trivial maintenance policies from a restricted number of parameters. |
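A stripped-down version of such a simulation for a single Weibull-distributed component is sketched below; the Weibull parameters, costs and CO2 figures are invented for illustration and do not come from the paper:

```python
# A minimal sketch of Monte Carlo evaluation of one periodic maintenance
# policy for a single bike component (the tyre), scoring cost, CO2 and
# quality of service simultaneously.
import numpy as np

rng = np.random.default_rng(0)
beta, eta = 2.5, 80.0            # Weibull shape, scale (weeks) for tyre life
horizon, interval = 260, 8       # 5 years in weeks, maintenance every 8 weeks
c_prev, c_corr = 10.0, 45.0      # preventive vs corrective cost (EUR)
co2_prev, co2_corr = 1.0, 5.0    # kgeqCO2 per intervention

def simulate():
    cost = co2 = downtime = 0.0
    age, life = 0.0, eta * rng.weibull(beta)
    for week in range(1, horizon + 1):
        age += 1
        if age >= life:                      # in-service failure
            cost += c_corr; co2 += co2_corr; downtime += 1
            age, life = 0.0, eta * rng.weibull(beta)
        elif week % interval == 0:           # scheduled preventive replacement
            cost += c_prev; co2 += co2_prev
            age, life = 0.0, eta * rng.weibull(beta)
    return cost, co2, 1 - downtime / horizon

results = np.array([simulate() for _ in range(5000)])
print("mean cost, CO2, service level:", results.mean(axis=0).round(2))
```

Running this over the full policy grid yields the criterion table that ELECTRE or AHP then ranks.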
15:30 | Developing a Quantitative Reliability-Centered Maintenance (QRCM) Model for High Voltage Circuit Breakers ABSTRACT. High voltage circuit breakers (HVCBs) are recognized as critical components in power systems due to their essential protective function. To extend the lifespan of these components and prevent failures, various maintenance strategies, such as time-based maintenance (TbM) and condition-based maintenance (CbM), have been implemented by power utilities. To optimize these maintenance strategies given the available resources, a quantitative modeling approach is crucial. This paper introduces a Quantitative Reliability-Centered Maintenance (QRCM) framework for HVCBs in air-insulated substations (AIS). The proposed model is developed using failure data from established sources, including CIGRE 510 and IEEE C37. It identifies the distribution of failures across key components and major failure modes. With this information, random and aging failure modes are quantified using the Weibull distribution, and the General Renewal Process (GRP) is applied to assess failure patterns under the chosen maintenance policies. The model is then simulated using a reliability block diagram (RBD), incorporating stochastic methods such as discrete event simulation (DES) and Monte Carlo simulation to provide a robust quantitative analysis. This framework offers a comprehensive approach to asset management, accounting for uncertainties within the power system and enabling more effective decision-making. |
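The GRP element can be made concrete with a Kijima type-I virtual-age simulation, one standard reading of the General Renewal Process; the Weibull parameters and repair-effectiveness factor below are illustrative assumptions, not CIGRE or IEEE figures:

```python
# A minimal sketch of a Generalized Renewal Process (Kijima type I): each
# repair removes a fraction (1-q) of the age accumulated since the last
# repair, interpolating between as-good-as-new (q=0) and as-bad-as-old (q=1).
import numpy as np

rng = np.random.default_rng(0)
beta, eta, q = 2.0, 12.0, 0.4    # Weibull shape, scale (years), repair effectiveness

def next_failure(v):
    """Sample time to next failure given virtual age v (inverse transform
    of the Weibull conditional on survival to v)."""
    u = rng.random()
    return (v ** beta - eta ** beta * np.log(u)) ** (1 / beta) - v

def simulate(horizon=40.0):
    t, v, failures = 0.0, 0.0, 0
    while True:
        x = next_failure(v)
        if t + x > horizon:
            return failures
        t += x
        v += q * x               # Kijima I virtual-age update
        failures += 1

counts = [simulate() for _ in range(5000)]
print("expected failures over 40 years:", np.mean(counts))
```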
14:45 | Benchmarking HRA Methods Based on Method-Neutral Qualitative Analysis PRESENTER: Luca Podofillini ABSTRACT. The scope and intended use of Human Reliability Analysis (HRA) methods need to be considered in comparing and selecting HRA methods, since most methods are tailored to specific application domains and the human performance contexts characteristic to these domains. Furthermore, HRA methods may differ in the scope of the tasks that they can address, from routine tasks to responses to abnormal and emergency conditions. One of the impacts of the scope and intended use is then the set of Performance Shaping Factors (PSFs) that are used in analyzing the human tasks and estimating the probabilities of the Human Failure Events (HFEs) of interest. In a generalized HRA analysis, the qualitative analysis characterizes the human task, defining the requirements of the task and identifying the relevant performance conditions that support or detract from performance reliability, sometimes referred to as the main performance drivers. In the subsequent quantitative analysis, the performance drivers are then transformed into PSF ratings with an associated quantitative effect on a nominal failure probability for human tasks or their failure modes. A major challenge for benchmarking is then how to deal with the differences among HRA methods in the overall set of PSFs, the scope of the individual PSFs, and the performance issues that are in focus in the guidance for the rating of PSFs. A related challenge is that HRA methods differ in the degree to which the qualitative analysis is integrated in the method. This paper presents the method-neutral qualitative analysis as an essential step in benchmarking HRA methods and demonstrates its advantages with examples taken from a recent benchmark. The examples contrast the relevant benchmarking steps for two of the assessed HRA methods: IDHEAS-ECA and SPAR-H. |
15:00 | HRA-Methodology Comparison on a Practical, Realistic-NPP Model Implementation: Sensitivity Analysis on a Plant-Level Risk Contribution PRESENTER: Dusko Kancev ABSTRACT. This paper addresses the performance of HRA as part of the development of a new plant-specific, full-scope, industrial-scale L1/L2 PSA model at the NPP Goesgen-Däniken (KKG), Switzerland. The main focus of the paper is a sensitivity analysis of the plant-level risk contributions (delta CDF, delta LERF) given two different HRA methods for modelling the cognitive part of selected operator actions (OA) – the THERP and the HCR/ORE method. KKG, together with their supplier Framatome GmbH, embarked on a substantial, thorough project – PSASPECTRUM – to migrate KKG's existing PSA model from the Riskman® to the RiskSpectrum® environment. Conducting an updated, plant-specific HRA using the RiskSpectrum® HRA Tool, as well as a relatively new RiskSpectrum® feature – the Conditional Quantification tool – is one constituent part of this project. The preferred HRA method, used for the internal-events analysis, is the THERP practical method of predicting human reliability – for both the cognitive and the execution part of the human error probability (HEP). Although this method is well established and applied worldwide, it has its strengths and limitations. In particular, the use of a simple, generic time-reliability correlation (TRC) for addressing diagnosis errors is, by itself, an over-simplification of the cognitive causes of and failure rates for diagnosis errors. The HCR/ORE method, on the other hand, would ideally use plant-specific TRCs based on simulator measurements, but may rely on expert judgement or generic data to derive the TRCs; the use of empirical data to support the HRA is hence a strength of this method. Once the relevant parameters have been identified, the derivation of the HEP using the TRC is straightforward and traceable. Selected post-initiator OAs are used as the basis for this comparative study. The results of the sensitivity analysis on the plant-level risk contributions are studied and discussed. |
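To show the kind of curve being compared, the sketch below evaluates a lognormal-type TRC of the form used in HCR/ORE-style analyses, mapping available diagnosis time to an HEP; the median response time and spread are invented illustrative values, not KKG or simulator data:

```python
# A minimal sketch of a lognormal time-reliability correlation: the HEP is
# the probability that the crew response time exceeds the available time.
from math import log
from statistics import NormalDist

T_half, sigma = 10.0, 0.6      # median response time (min) and lognormal spread

def hep_diagnosis(t_available):
    """P(crew response time > t_available) under a lognormal TRC."""
    z = log(t_available / T_half) / sigma
    return 1.0 - NormalDist().cdf(z)

for t in (10, 20, 30, 60):
    print(f"t = {t:3d} min -> diagnosis HEP = {hep_diagnosis(t):.2e}")
```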
15:15 | Modification of K-HRA Method for Fire Human Reliability Analysis PRESENTER: Sun Yeong Choi ABSTRACT. This study focuses on incorporating fire scenarios into the diagnosis error probability calculation of the existing K-HRA methodology, addressing the impact of the shift technical advisor's absence during fire incidents. Based on operator interviews, it was determined that the shift technical advisor was absent from the main control room for approximately 30 minutes to establish and stabilize the fire brigade. During this period, the joint human error probability doubles between 10 and 30 minutes, as determined by the NUREG/CR-1278 methodology. Accordingly, the nominal diagnosis error probability in K-HRA was adjusted to account for fire-related scenarios. In addition to modifying the nominal diagnosis error probability, the fire human reliability analysis incorporated fire-specific considerations into performance shaping factors. These include the simultaneous use of fire procedures with abnormal/emergency operation procedures, partial or complete human-system interface damage due to cable failures, and insufficient training related to reactor shutdown during fires. This research highlights the integration of fire conditions into the K-HRA framework, particularly addressing the shift technical advisor's absence. Future studies aim to compare the diagnosis error probability derived from the existing K-HRA and Fire human reliability analysis methodologies during the quantification of human failure events, such as operator manual actions. This work contributes to advancing fire-specific reliability assessments for nuclear power plant safety. |
15:30 | Dynamic Risk Assessment for Human-Robot Collaboration Using a Heuristics-Based Approach PRESENTER: Georgios Katranis ABSTRACT. Human-robot collaboration (HRC) has introduced a plethora of safety challenges, particularly in the context of protecting human operators working in close proximity to collaborative robots (cobots). Current ISO standards, such as ISO 15066:2016, emphasize risk assessment and hazard identification within HRC systems. However, the procedures described in the standards are insufficient for addressing the complexity of HRC environments, which involve a multitude of design factors and dynamic interactions. This paper introduces a method for dynamic risk assessment that extends beyond the scope of expert knowledge. The method employs a parametric, numerical classification and evaluation of risks in the context of HRC. To achieve this, several parameters are monitored, including the distance between human body parts and the collaborative robot, the robot's Cartesian velocity, and the forces exerted on the human operator. Furthermore, an anthropocentric parameter is introduced, with a specific emphasis on the relative position of the human head, a particularly critical part of the human body, within the collaborative workspace. The assessment considers the various modes of collaboration set forth in ISO 15066:2016, thus facilitating a risk analysis tailored to specific scenarios. The aforementioned parameters are transformed into numeric risk metrics through the application of heuristic functions, thereby ensuring a consistent and comparable risk classification and evaluation. Subsequently, the metrics are aggregated to provide a total risk estimate. Individual risk values, as well as the total risk, are then compared against pre-established thresholds. This yields robust, online safety feedback that is further refined by taking different parts of the human body into account. The proposed method is evaluated in a simulation environment, which presents collaborative workflow scenarios that include a manipulator robot and a human worker. |
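The parametric mapping from monitored quantities to an aggregated risk value can be sketched directly. Every heuristic function, weight and threshold below is an illustrative assumption, not a value from ISO 15066 or from the paper:

```python
# A minimal sketch of heuristic risk metrics for HRC: monitored parameters
# are mapped to [0, 1] scores, weighted (head region treated as critical),
# aggregated and compared against a threshold for online feedback.
import numpy as np

def risk_distance(d, d_safe=1.0):
    return float(np.clip(1 - d / d_safe, 0, 1))       # closer -> riskier

def risk_velocity(v, v_max=1.5):
    return float(np.clip(v / v_max, 0, 1))            # faster -> riskier

def risk_force(f, f_limit=140.0):
    return float(np.clip(f / f_limit, 0, 1))          # nearer the limit -> riskier

def total_risk(d_hand, d_head, v, f):
    r = {
        "hand_distance": risk_distance(d_hand),
        "head_distance": risk_distance(d_head, d_safe=1.5),  # stricter for the head
        "velocity": risk_velocity(v),
        "force": risk_force(f),
    }
    weights = {"hand_distance": 0.2, "head_distance": 0.4, "velocity": 0.2, "force": 0.2}
    return r, sum(weights[k] * r[k] for k in r)

metrics, total = total_risk(d_hand=0.4, d_head=0.9, v=0.8, f=60.0)
print(metrics, "-> total risk:", round(total, 2),
      "| slow down" if total > 0.6 else "| continue")
```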
14:45 | A dynamic scenario-based social vulnerability triage tool for disaster planning and response PRESENTER: Kati Orru ABSTRACT. Disaster response planners face a difficult task: not only do they need to foresee a multitude of complex future hazard scenarios but also make critical decisions concerning what kind of assistance should be provided to various vulnerable groups, how quickly, and in which order. To complement existing risk and vulnerability assessment methods, we developed a dynamic social vulnerability triage system that supports such evidence-based decision-making. The novel software solution can be used to first systematically map the intersecting sources of hazard in a real or future hazard scenario, including the direct impact of the hazardous event (e.g., flood, wildfire, terrorist attack), disruptions of vital services and support structures, and communication barriers that hamper accessing or understanding risk and crisis information. Subsequently, based on this mapping of the hazard situation, the system guides the user through identifying vulnerable groups and assessing the nature and urgency of the assistance they may need. The proposed triage tool innovatively addresses disaster vulnerability as a dynamic phenomenon and is founded on a scenario-based stakeholder dialogue including representatives of a diverse society. Scenarios effectively provide a contextually relevant platform for discussing such topics, making the questions on vulnerabilities and how to satisfy various support needs relatable and tangible to stakeholders. Throughout 2022-2024, the triage tool has been applied in table-top exercises following various multi-hazard scenarios, including industrial fire, flood, pandemic, long-term black-out, cyber-attack on social care information systems, and mass evacuation due to military attack, across various European countries. The triage results enable tailoring more nuanced crisis preparedness activities and communication efforts, and ensure efficient resource use in rescue operations. |
15:00 | UAV for “safe” NaTech disasters management and consequences evaluation in Major Hazard industrial plants PRESENTER: Alessandra Marino ABSTRACT. The impact of a natural disaster on a facility storing or processing dangerous substances can result in the release of hazardous materials, with possibly severe off-site consequences through toxic-release, fire or explosion scenarios. EU regulation, namely Directive 2012/18/EU, among its new elements explicitly requires the analysis of NaTech (natural hazards triggering technological disasters) hazards. The main issue with NaTech accidents is the simultaneous occurrence of a natural disaster (e.g. earthquakes, floods and landslides) and a technological accident, both of which require simultaneous response efforts in a situation in which lifelines needed for disaster mitigation are likely to be unavailable. In addition, hazardous-materials releases may be triggered from single or multiple sources in one installation, or at the same time from several hazardous installations in the natural disaster's impact area, requiring emergency-management resources occupied with responding to the natural disaster to be diverted. This paper proposes and evaluates the application of dedicated collision-tolerant UAV systems for NaTech accident emergency management. The collision-tolerant drone is designed for the inspection and exploration of the most inaccessible places, allowing flight in complex, cluttered or indoor spaces. By enabling remote visual inspection of complex indoor and confined-space environments, it removes the need for workers to enter hazardous places or face dangerous situations, while avoiding the risk of collisions and injuries. The drone is equipped with a collision-tolerant carbon-fibre protective frame. The integrated payload provides simultaneous full-HD and thermal imagery recording, with an adjustable tilt angle and LEDs for navigation and inspection in dark places. Fast connections allow real-time data processing and management of the situation. This methodology represents an effective approach to NaTech disaster management and consequence evaluation. |
15:15 | Designing a Systemic Risk and Robustness Assessment for Critical Value Chains PRESENTER: Stefan Schauer ABSTRACT. Recent incidents, such as natural disasters, political turbulence, and armed conflicts, have shown that critical value chains of goods and services can be severely affected by both global and local events. As these value chains represent the backbone of our everyday life, it is crucial to maintain their functionality and improve their resilience. This requires both the identification of critical goods and the analysis of complex relationships between upstream and downstream players from different industries. Therefore, general factors, such as production locations, preliminary and auxiliary products, refining steps, as well as orthogonal factors like packaging, know-how or transport routes need to be comprehensively considered. Alongside preventive measures to avoid potential bottlenecks, a structured approach is required to minimize the – potentially cascading – effects of any disturbances when they appear. In this paper, we describe a conceptual approach for a systemic monitoring framework which integrates both risk and resilience aspects by design and is tailored to the specific requirements and challenges of critical value chains, such as food, hygiene, and medication. This monitoring framework builds on a process model for analyzing vulnerabilities and disturbances within such a critical value chain. In this model, the significant stakeholders along with their interconnections and vulnerabilities are identified in a structured manner based on domain information and expert knowledge. Furthermore, the monitoring framework utilizes a cross-sectoral simulation model that builds upon an abstract representation of the supply chain and facilitates the analysis of cascading effects across organizational and sectoral borders. This model enables the assessment and visualization of the complex relations and dependencies within a supply chain in a general manner, particularly capturing orthogonal factors such as transport, maintenance, or legal aspects. |
15:30 | Emergency preparedness analysis: method for analysing and assessing emergency preparedness PRESENTER: Morten Sommer ABSTRACT. To mount an adequate response to emergencies, organisations must adapt their response resources to existing risks. Traditionally, emergency decision-makers depend mainly on their personal experience and subjective judgement when deciding whether the quantity, quality and type of response resources are fit for purpose and can meet the demands of emergencies [1]. In the Norwegian oil and gas industry, however, it is mandatory to use emergency preparedness analyses when dimensioning the emergency response arrangements for installations and operations [2]. This strong focus on emergency preparedness offshore has contributed to the low risk level for employees in the oil and gas industry and the absence of major accidents [3]. Lately, other sectors have introduced requirements for emergency preparedness analysis, but suitable methods for this remain to be developed. In this paper, we present a new method for analysing and assessing emergency preparedness. The method is based on the method for risk assessment in ISO 31000 [4], and focuses on identifying emergency situations, analysing emergency preparedness arrangements, and evaluating emergency preparedness solutions, in addition to deciding the context prior to the analysis and implementing the solutions afterwards. After describing the method and its theoretical foundation, we analyse a sample of emergency preparedness analyses carried out in different sectors, to examine whether recent emergency preparedness analyses are in accordance with newer research on risk and emergency preparedness. [1] Wenmao, Guangyu, and Lianfeng, ‘Emergency resources demand prediction using case-based reasoning’, Safety Science, vol. 50, pp. 530-534, 2012. [2] Standards Norway, “Risk and emergency preparedness assessment”, NORSOK STANDARD Z-013, Edition 3, October 2010. [3] Vinnem, “Evaluation of offshore emergency preparedness in view of rare accidents”, Safety Science, vol. 49, pp. 178-191, 2011. [4] Standard Norge, “Risikostyring – retningslinjer”, NS-ISO 31000:2018, 2018. |
15:45 | Supporting social capacity-building for flood risk management: the AQUASOC tool PRESENTER: Guadalupe Ortiz ABSTRACT. In the context of climate change, global institutions responsible for disaster risk reduction have proposed management models aimed at strengthening social capacities in order to build community resilience to the impacts of floods. However, this focus on social capacities has had limited reach within local risk management and flood disaster response agencies, and its materialization into concrete actions is currently very scarce and restricted to a discursive or rhetorical presence. This weak implementation can be explained by factors such as the strong technocratic tradition of management agencies and technicians, or the underdevelopment of tools and guidelines that practically enable the creation of social capacities among local communities. The AQUASOC project, funded by the Spanish Ministry of Science and Innovation, aims to generate an online tool that supports local risk managers in the task of self-assessing the social capacities needed to prepare for, confront and adapt to flood risks. To this end, this research has traced the causal linkages that connect the social impacts of floods with the social capacities needed to prevent, adapt to and recover from them. Fed with this information, the AQUASOC tool guides the user through a self-assessment process regarding the potential occurrence and relevance of social impacts in their municipalities, as well as the current level of implementation and development of social capacities. Once the impacts and capacities have been assessed, the tool generates a results report that provides information about priority areas for social capacity building, as well as about the strategic actions needed to promote and build the capacities necessary to address the previously evaluated social impacts. The tool represents a pioneering initiative for systematizing the identification of windows for action for social and community flood risk management. |
14:45 | Eyetracking as a Tool to Understand Motorcyclists’ Accident Susceptibility PRESENTER: Petter Bogfjellmo ABSTRACT. The thematic analysis of serious accidents involving ATVs, mopeds, and motorcycles from 2015 to 2020 (Iversen, T., Njå, O. 2022) indicates an increase in the number of registered medium and heavy motorcycles from 2015 to 2020, with a total of 2067 accidents involving such vehicles during the same period. Nord University, in collaboration with SINTEF and Trygg Trafikk, has conducted experiments using eye tracker technology, interviews, and video analysis to provide a better knowledge base regarding motorcycle accidents. A total of 62 motorcyclists were divided into three different groups: Riders with less than three years of riding experience who use the motorcycle for leisure and utility riding. Riders with more than three years of riding experience who regularly use the motorcycle for leisure and utility riding. Riders with extensive professional experience on motorcycles, such as police officers, driving test examiners at the Norwegian Public Roads Administration, driving instructors, and instructors at advanced driving courses for track/road. Using eye-tracking cameras (Tobii Eye Tracker), the distribution of gaze and attention during riding was recorded on a road section with various curves, intersections, and speed levels. Additionally, GPS coordinates and speed profiles were recorded using smart tool tracking. By using eye trackers during motorcycle riding, we collected data showing video of the road and traffic ahead, as well as tracking how the rider allocates attention using gaze points. Through qualitative in-depth interviews, we aim to understand the motorcyclist’s decision-making from the participant’s own perspective. The interviews focused on both intersection and curve riding. The study examined possible causal relationships in both multi-vehicle accidents at intersections and single-vehicle run-off-road accidents. |
15:00 | Motorcyclists' Preventive Riding and Visibility Through Intersections - A Qualitative Video Analysis PRESENTER: Petter Bogfjellmo ABSTRACT. Nord University, in collaboration with SINTEF Community, has conducted experiments using interviews to provide a better knowledge base regarding motorcycle accidents. By analyzing video recordings, the study has gained insights into how motorcyclists navigate complex traffic environments, identify potential hazards, and develop measures to improve traffic safety. The concept of driver competence in the curriculum for motorcycles and other driving license classes forms the basis for assessing driver skills or traffic behavior in this report. This assessment is made against the goals and content of the curriculum, as well as the laws and regulations for road driving. The knowledge, skills, and attitudes of motorcyclists are defined as individual competence, which shapes their behavior. Through video analysis, one can observe motorcyclists’ speed, positioning, and interaction with other road users. This provides valuable information on how motorcyclists react to various traffic conditions and the factors influencing their decision-making. Using eye trackers during motorcycle riding, we have collected data showing footage of the road and traffic ahead, as well as tracking how the rider allocates attention through gaze points. Video allows for the analysis of riding at different playback speeds and the observation of various sequences multiple times. It also enables comparisons of riding in similar and different situations to identify any behavioral tendencies or single events. 62 motorcyclists were divided into three groups: Riders with less than three years of riding experience who use the motorcycle for leisure and utility riding. Riders with more than three years of riding experience who regularly use the motorcycle for leisure and utility riding. Riders with extensive professional experience on motorcycles, such as police officers, driving test examiners at the Norwegian Public Roads Administration, driving instructors, and instructors at advanced driving courses for track/road. |
15:15 | Strategic and tactical choices as a basis for understanding motorcyclists PRESENTER: Simon Minsaas-Bromstad ABSTRACT. The thematic analysis of serious accidents involving ATVs, mopeds, and motorcycles from 2015 to 2020 (Iversen, T., & Njå, O., 2022) reveals a notable increase in the registration of medium and heavy motorcycles over this period. The analysis documents a total of 2067 accidents involving these vehicles within the same timeframe. Nord University has conducted experiments using interviews to provide a better knowledge base regarding motorcycle accidents. A total of 62 motorcyclists were divided into three groups: (1) riders with less than three years of riding experience who use the motorcycle for leisure and utility riding; (2) riders with more than three years of riding experience who regularly use the motorcycle for leisure and utility riding; and (3) riders with extensive professional experience on motorcycles, such as police officers, driving test examiners at the Norwegian Public Roads Administration, driving instructors, and instructors at advanced driving courses for track/road. A motorcyclist is highly susceptible to accidents in many traffic situations and must consider the risk factors that may arise when riding into and through various intersection situations. The study aims to elucidate: In what way will knowledge and understanding of multi-party accidents affect information gathering and behavior when approaching and passing through intersections? How does the motorcyclist prevent conflicts and risks when approaching intersections? How does the motorcyclist maintain readiness when in the danger zone with other road users? Motorcyclists often seek out roads with varying curvature. Factors such as speed limits that are often not restrictive, together with the experiences provided by curve riding, make such roads popular for leisure riding. A significant proportion of motorcycle accidents involve running off the road in a curve. The study also aims to elucidate: What speed choices do motorcyclists make before and through curves? What information gathering and tactical choices are emphasized when approaching and navigating through curve combinations?
15:30 | Continuous-state survival functions for reinforced concrete bridges based on physics-based degradation models and visual inspection PRESENTER: Francesca Marsili ABSTRACT. This study presents an approach to evaluate the Continuous-State Survival Functions (CSSF) of structural systems, considering the degradation of individual components and their arrangement within the system. System reliability is quantified using the Diagonally Approximated Signature (DAS), a framework that separates the system's topological configuration from the probabilistic behavior of its components, enabling efficient reliability computation. Although traditional survival signature methods assume binary states, this work extends the concept to accommodate continuously degrading components. Component reliability is evaluated through a physics-based degradation model, integrated with results from visual inspections of the structure. The proposed approach is demonstrated on a reinforced concrete girder bridge affected by corrosion. The system components - namely the bridge girders - are characterized by different deterioration processes. The DAS acts as a surrogate modeling approach and provides an efficient alternative to costly Monte Carlo simulation. The proposed procedure includes deriving CSSFs for structural elements and then propagating these through the DAS to quantify the CSSFs of the bridge. Thus, this paper constitutes a further stepping stone toward stochastically simulating large-scale systems for infrastructure network reliability analyses under various degradation dynamics.
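As background to the survival signature concept referenced above, the following minimal Python sketch computes binary-state system reliability from a survival signature; the 2-out-of-3 system and the exponential component lifetime are illustrative assumptions, and the DAS and its continuous-state extension described in the abstract are considerably more involved.

    from math import comb, exp

    def system_survival(phi, m, comp_surv):
        # P(system survives) = sum over l of phi(l) * C(m, l) * R^l * (1 - R)^(m - l),
        # where phi(l) is the survival signature and R the component survival probability
        return sum(
            phi[l] * comb(m, l) * comp_surv**l * (1 - comp_surv)**(m - l)
            for l in range(m + 1)
        )

    phi = [0.0, 0.0, 1.0, 1.0]          # survival signature of a 2-out-of-3 system
    for t in (0.0, 5.0, 10.0):
        r = exp(-0.05 * t)              # assumed exponential component survival
        print(f"t = {t:4.1f}  P(system survives) = {system_survival(phi, 3, r):.3f}")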
15:45 | Driving through the simulated night: A comparison between simulator-based and traditional night driving training PRESENTER: Catharina Lindheim ABSTRACT. The current use of simulators in driver training in Norway is very limited. In this study, we explore how a simulator-based night driving course compares to the current course used in Norwegian driver training. The course takes place very early in the licensing process and is conducted before the learner drivers are allowed to drive themselves. The goal of the courses is for the learner drivers to acquire knowledge about night driving. The effects are compared using multiple-choice tests on the night driving curriculum. In the experimental setup, all participants (n = 142) performed both types of training, and they were compared to a baseline group (n = 80). The simulator-based training led to larger improvements in test scores than the current training, both when used as the first and as the second training received by the participants.
14:45 | A preliminary review of risks in underground hydrogen storage PRESENTER: Nadezhda Gotcheva ABSTRACT. Underground hydrogen storage (UHS) is a promising solution for integrating hydrogen into the energy mix and ensuring a reliable and flexible energy supply. This work is part of the ongoing research project Hydrogen UnderGround (HUG), funded by Business Finland, which aims at building the basis for a large-scale hydrogen storage concept for the Finnish hydrogen business and technology ecosystem. The objective of this preliminary review is to gain a better understanding of the risks associated with UHS, especially concerning applicability to lined rock caverns (LRC). A lined rock cavern is a constructed underground storage facility, characterized by an impermeable steel, polymer, or concrete lining, designed to hold gases like hydrogen at high pressures. LRC is considered a feasible option for Finnish conditions, due to the lack of suitable natural geological formations. Other types of underground hydrogen storage vary significantly in their geological and storage set-up, including aquifers, depleted oil/gas reservoirs, rock/mining caves, and salt caverns. Our initial review comprised 80 articles published between 2005 and 2024, with most of the articles (54) published in the last four years (2020-2024). The main risks in the different types of storage were linked to technical factors, such as sealing and leakage, cavern integrity, and geochemical, geomechanical, microbiological, hydrodynamic, or surface facility risks. Only one recent study mentioned economic and environmental risks, including the social license to operate. Regarding LRC, a key non-technical risk identified was the lack of experience with these caverns. This is also in line with results from other studies, which indicate that social factors are missing or insufficiently addressed in sustainable development reviews. The results highlight the need for a holistic understanding of the different risks for UHS, specifically emphasizing the gap in understanding the non-technical risks in such facilities, including those related to human, organizational, and societal factors.
15:00 | Operational Insights into Safe Underground Hydrogen Storage System PRESENTER: Hanna Koskinen ABSTRACT. Today, our energy system is undergoing a transition to reduce greenhouse gas emissions, pursuing more sustainable and environmentally friendly energy solutions. New renewable energy solutions such as wind and solar power are entering the system with a larger share. However, they are also weather- and season-dependent, demanding greater operational flexibility in the energy system. Hydrogen is a promising solution for dealing with the fluctuations in the system (i.e., unbalanced supply and demand). Nonetheless, utilizing hydrogen for balancing purposes requires an efficient, large-scale storage solution. Storing hydrogen safely on a large scale brings about challenges related to technical, human, and social factors that still need further research to be addressed properly. After all, hydrogen is highly flammable and prone to ignition and explosion, so a well-maintained, specialized storage system is needed to detect and prevent hazardous escalations such as leakages. Human errors in design and operation are among the main causes of hydrogen incidents and accidents. Accordingly, human factors aspects should be better addressed to guarantee storage safety. In this paper, based on the Hydrogen UnderGround (HUG) research project, we focus on human and social factors in the operation of hydrogen storage. Specifically, we aim to shed light on the demands of underground hydrogen storage from the operator perspective. For this purpose, we conducted a work domain analysis, part of the cognitive work analysis methodology, to better understand the hydrogen storage as a sociotechnical system and its functional structure with respect to its purposes and the constraints on human actors. Creating this understanding is a prerequisite for further development of the system, that is, for setting appropriate requirements and creating an operational concept for the storage system. Our results indicate the need for a comprehensive operator-centered view in system design for managing the complexity of a large-scale underground hydrogen storage facility.
15:15 | From the lab to the industrial park: lessons for the energy transition from past technology policy failures PRESENTER: Sarah Maslen ABSTRACT. Production, distribution, and use of natural gas in commercial and domestic settings is a well-established industry with an excellent public safety record in Australia despite the inherent risks. The industry faces a significant challenge in maintaining this record through the escalating energy transition. Emerging technologies for hydrogen and other future fuels will move rapidly from bespoke, experimental, lab-based facilities to full-scale, manufactured process plant, with the necessary resources (both physical and human) stretched to the limit. A large engineering research effort is targeting solutions to the myriad technical issues that must be addressed, but too often we overlook the sociotechnical risks to public safety that must also be managed for the transition to be successful. This paper addresses such risks. Sociotechnical risks arise at all levels: from government policy, through regulation and organisations, to risks that arise from the capabilities, affordances, and constraints of the technology, risks that are epistemic in nature, and collective values, norms, and practices. We trace each of these sources of risk as they relate to the energy transition, drawing on past cases of emergent technologies and their failures for clues as to how such outcomes might be avoided.
15:30 | On the use of the precautionary principle in the context of hydrogen systems PRESENTER: Dikshya Bhandari ABSTRACT. The precautionary principle is a way to manage risk in situations where the risk could be characterized as high and uncertain. It suggests that precautionary measures should be taken, or the activity avoided, if the consequences of the activity could be severe and are subject to scientific uncertainties. Over the past decade, the precautionary principle has become a fundamental part of international environmental conventions and European Union (EU) laws. However, even though the precautionary principle is widely used, its formulation remains vague and controversial, with ongoing challenges in its practical application and polarized views on its effectiveness. These potential limitations should be addressed when considering the use of the principle in the management of hydrogen systems. There is a strong drive to develop such systems, as hydrogen represents a promising clean energy source, which presents both opportunities and uncertainties related to safety and infrastructure development. To better understand the complexity and controversies around the precautionary principle, we review various ways of understanding this principle and the rationale behind its use, and discuss its application in different situations. Based on the discussion, we provide some recommendations for managing risks associated with hydrogen systems.
15:45 | A Human and Organizational Perspective on Interoperability in the digitalization of safety-instrumented systems PRESENTER: Dorthea Mathilde Kristin Vatn ABSTRACT. To ensure a safe, effective, and reliable process energy sector, it is necessary to accelerate the digital transformation and simultaneously ensure that health, safety, and environment (HSE) considerations are being accounted for. While Industry 4.0 has a vision to facilitate the intelligent networking of machines and processes focusing on the technological challenges of interoperability, Industry 5.0 focuses on understanding how digitalization efforts affect human and organizational aspects. As the process energy sector is a domain where loss of safety can lead to severe accidents, it is critical to understand how implementation of new technologies influences workflows and human-technology interactions in all phases of the lifecycle. This aligns with the sociotechnical perspective, which states that in order to understand safety as an outcome of operations, both technical and social aspects should be considered. A thorough understanding of human and organizational aspects is crucial when working with complex technological challenges related to interoperability. The ongoing research project on the digital lifecycle management of interoperable safety systems (APOS 2.0) with stakeholders in the Norwegian process energy sector seeks to increase interoperability from design to operation of safety systems, while also considering human and organizational aspects. Interviews were performed with 11 informants with different roles representing vendors, engineering companies, and operators within the Norwegian oil and gas industry. The aim of the interviews was to explore challenges and opportunities arising from a sociotechnical perspective, covering both human and organizational dimensions. The interview notes were subject to a thematic analysis, and the results point towards several challenges and opportunities arising from a human, organizational, as well as a life-cycle perspective. We suggest that by paying attention to these aspects early in digitalization efforts, stakeholders both within the process energy sector and other related industries might be better equipped to maintain and improve the overall HSE. |
14:45 | Improving Safety in Hauling Operations: Predicting and Analyzing Collision Probabilities with Discrete Event Simulation PRESENTER: Malihe Goli ABSTRACT. Ensuring safety in off-highway raw material handling systems is critical, as the high risk of truck collisions poses a significant threat to both human lives and mining equipment, leading to costly damages. While various safety and risk assessment methods exist—such as probabilistic models (e.g., Fault Tree Analysis), reliability-based models (e.g., Failure Mode and Effects Analysis), simulation-based models (e.g., Agent-Based Models), and incident analysis frameworks (e.g., Tripod)—most struggle to capture the complexity of dynamic traffic interactions. These methods often lack the flexibility to accurately model real-world conditions and require substantial computational resources, making them impractical for real-time applications. This study proposes a discrete-event simulation (DES) approach, which provides a time-based simulation of discrete events and effectively manages randomness, process interactions, and resource constraints. DES can outperform static or probabilistic models in simulating truck traffic flow and conducting real-time accident analysis, offering a more practical solution for operational safety studies compared to high-level systemic or agent-based models. The proposed model simulates truck movements across a road network that reflects a realistic mine layout. The model then develops and evaluates various accident scenarios while capturing real-time truck interactions to assess collision probabilities throughout the entire road network. The simulation modeling was performed using OpenCLSim, an open-source library for rule-driven scheduling and comparison of cyclic logistics strategies. The results highlight areas and locations on the road network with high collision probability, particularly mid-road and junction locations. Based on these high-risk areas, different operational scenarios, along with dynamic shovel-truck allocation and scheduling, can help enhance safety and decrease the probability of collisions. Furthermore, additional enhancements, including signage, speed limits, adaptive traffic control, and automated vehicle-to-vehicle communication systems, are recommended to improve driver responses to changing road conditions and congestion, offering a flexible and computationally efficient approach to safety management.
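To make the DES idea concrete, here is a minimal sketch using SimPy (the OpenCLSim API used by the authors is not reproduced here); the fleet size, haul times, and the use of junction occupancy as a proxy for conflict exposure are all illustrative assumptions rather than the authors' model.

    import random
    import simpy

    random.seed(1)
    conflicts = 0   # arrivals at the junction while it is already occupied

    def truck(env, junction, haul_time):
        global conflicts
        while True:
            yield env.timeout(random.expovariate(1.0 / haul_time))  # travel leg
            if junction.count == junction.capacity:
                conflicts += 1           # another truck is already in the junction
            with junction.request() as req:
                yield req
                yield env.timeout(0.5)   # time needed to clear the junction

    env = simpy.Environment()
    junction = simpy.Resource(env, capacity=1)
    for _ in range(6):                   # assumed fleet of six trucks
        env.process(truck(env, junction, haul_time=10.0))
    env.run(until=1000)
    print("junction conflict events:", conflicts)

Sweeping the fleet size or haul times in such a loop is one cheap way to see how congestion drives conflict exposure at specific network locations.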
15:00 | X-Press Pearl Accident: Learning from one of the worst maritime disasters PRESENTER: Deshai Botheju ABSTRACT. The X-Press Pearl was a newly built container ship that was totally destroyed and sank due to an onboard fire and explosion event near the western coast of Sri Lanka in the Indian Ocean. This accident is regarded as one of the worst maritime pollution disasters anywhere in the world. The accident released vast amounts of hazardous and noxious substances, including various persistent pollutants. For example, the huge quantity of plastic nurdles released in this accident will remain in the ocean and the coastal environment for decades to come, contributing to significant micro-plastic pollution in the region. The event also had a catastrophic socio-economic outcome, primarily due to the sensitive and populated coastal area in the vicinity of the accident. This paper discusses the plausible causes and the series of mishaps that led to this accident, and the consequent disaster management failures which then magnified the initial event. The paper also presents the key lessons that must be extracted from this accident in order to prevent or reduce the likelihood of similar events anywhere in the world. The authors were directly involved in the advisory activities from the onset of the accident until the final investigations were completed. According to the authors, the X-Press Pearl disaster must lead to wider changes in the regulations applicable to global maritime trade, particularly regarding the transportation of hazardous and noxious substances on ordinary container ships. Further, management practices and decision-making procedures related to the handling of such accidents must be significantly improved.
15:15 | An investigation of ship steering gear direct and root causes PRESENTER: Spencer Dugan ABSTRACT. Ship steering failures limit maneuverability and have led to very serious groundings and allisions. However, many accident reports on steering failures do not identify a root cause. This absence prevents establishing trends or making concrete safety recommendations based on historical results. The objective of this paper is to analyze the frequency, influencing factors, and, where possible, the causes of steering failures. The paper uses accident reports from major flag states (US, UK, Germany, etc.) to extract relevant information, and ship registers to collect fleet data. Results are expected to include an assessment of the influence of ship characteristics (e.g., ship type, size, machinery configuration, and flag state, among others) on failure likelihood. The analysis will also investigate the influence of inspection deficiencies and detentions on failures. An overview of identified root causes will be presented, demonstrating that a large proportion are either unknown or impossible to determine. The discussion will focus on the trends in failures. We plan to devote a large section to the possible reasons for the failures of indeterminate origin, including but not limited to blame culture, underreporting, and steering system maintenance and testing. The results are useful for both shipping companies and coastal and port states for improving the safety of operations and maritime surveillance.
15:30 | Dynamic Risk Assessment of Maritime Encounters Using Sequential Behavior Analysis and Machine Learning PRESENTER: Yigit Altan ABSTRACT. Quantifying risky encounters in waterways is crucial for captains and decision-makers to ensure safer maritime transportation. Although AIS data and conventional methodologies have provided valuable insights, effectively capturing the sequential behavior of ship encounters remains a challenge. This study addresses this gap by analyzing variations in key encounter parameters from a captain’s perspective, enabling a more realistic and dynamic risk assessment. Two primary parameters are selected: the Closest Point of Approach (CPA), a widely used risk indicator, and a novel complexity metric defined as the covariance of relative ship velocities. CPA measures the minimum separation between vessels, while the complexity metric captures the intricacy of their maneuvering patterns, reflecting the difficulty of maintaining a safe distance. These parameters are tracked continuously throughout each encounter, allowing for the detection of rapid changes in behavior. An unsupervised DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is employed to analyze these sequential variations and geospatial data. DBSCAN is particularly effective for identifying anomalies, as it clusters data points based on density and labels outliers as “noise.” This approach helps reveal abnormal maneuvers that might not be evident from traditional static analyses. Encounters classified as noise by DBSCAN are flagged as potentially risky, even if individual parameters, such as the CPA, fall within safe ranges. The developed methodology is applied to the complex and congested Strait of Istanbul, where frequent maneuvers make detecting abnormal behaviors challenging. Results demonstrate that the method successfully distinguishes between risky and non-risky encounters, even in scenarios where CPA alone might suggest the absence of risk. This refined assessment provides a more nuanced understanding of navigational safety, aiding stakeholders in identifying hidden risks and improving maritime traffic management.
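A minimal sketch of the noise-flagging step, using scikit-learn's DBSCAN on hypothetical per-encounter features (minimum CPA and the velocity-covariance complexity metric); the feature values, eps, and min_samples are illustrative assumptions, not the study's settings.

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical features per encounter: [min CPA (nm), complexity metric]
    normal = rng.normal([1.0, 0.2], [0.15, 0.05], size=(200, 2))
    odd = np.array([[1.1, 0.9], [0.9, 1.1]])   # safe CPA but erratic manoeuvring
    X = StandardScaler().fit_transform(np.vstack([normal, odd]))

    labels = DBSCAN(eps=0.6, min_samples=10).fit_predict(X)
    flagged = np.where(labels == -1)[0]        # DBSCAN noise = potentially risky
    print("encounters flagged as potentially risky:", flagged)

Note how the two appended encounters have a safe CPA but unusual complexity, so they surface as noise even though a CPA threshold alone would pass them.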
14:45 | Leveraging Collision Avoidance Robustness to Establish Situational Awareness Requirements: A Closed Loop Simulator Approach PRESENTER: Henrik Stokland Berg ABSTRACT. The target of this paper is to use a maritime traffic simulator to analyze the performance of autonomous navigation functions, specifically Collision Avoidance (CA), and how this performance depends on the quality of input from the Situational Awareness (SA) system. To that end, a closed-loop simulator is developed that takes the SA “ground-truth” of a given maritime traffic situation, adds modeled noise and other perturbations to ship positioning and world-states, and feeds this as input to the CA system. For several scenarios, the quality (noise and error rate) of the input to the CA system is systematically varied, and performance is evaluated with respect to collision risk and compliance with a selected set of the International Regulations for Preventing Collisions at Sea (COLREG). This type of robustness assessment is crucial, as it directly governs performance requirements for the SA system and its physical sensors used for sensing the environment and monitoring the condition of the ship. This paper provides examples of how quantifying the robustness of CA systems can determine performance criteria for SA systems.
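The robustness-sweep logic can be illustrated with a small Monte Carlo sketch in Python: a straight-line CPA check stands in for the CA system, and Gaussian position noise stands in for SA degradation. The geometry, alarm threshold, and noise levels are illustrative assumptions, not the paper's scenarios.

    import numpy as np

    rng = np.random.default_rng(42)

    def cpa_distance(p_own, v_own, p_tgt, v_tgt):
        # Closest point of approach distance for straight-line motion
        dp, dv = p_tgt - p_own, v_tgt - v_own
        t = max(0.0, -(dp @ dv) / max(dv @ dv, 1e-9))
        return np.linalg.norm(dp + t * dv)

    p_own, v_own = np.array([0.0, 0.0]), np.array([0.0, 10.0])
    p_tgt, v_tgt = np.array([3000.0, 4000.0]), np.array([-6.0, -8.0])
    true_alarm = cpa_distance(p_own, v_own, p_tgt, v_tgt) < 500.0  # assumed threshold [m]

    for sigma in (10.0, 50.0, 200.0, 500.0):   # SA position error levels [m]
        flips = sum(
            (cpa_distance(p_own, v_own, p_tgt + rng.normal(0.0, sigma, 2), v_tgt) < 500.0)
            != true_alarm
            for _ in range(2000)
        )
        print(f"sigma = {sigma:6.1f} m  CA decision flip rate: {flips / 2000:.3f}")

The flip rate as a function of sigma is one way to back out a maximum tolerable SA error for a required CA decision reliability.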
15:00 | Mitigating Unsafe Control Actions in Autonomous Navigation Systems: A SysML-Based Analysis for Enhanced Safety PRESENTER: Raheleh Farokhi ABSTRACT. The maritime industry is increasingly adopting semi-autonomous systems for safer and more efficient operations, particularly in challenging environments like winter navigation. However, these advanced systems introduce new risks, such as ice detection failures and delayed human intervention, which need to be addressed to ensure operational safety. Therefore, it is crucial to identify and mitigate potential unsafe control actions (UCAs) in such systems. This study applies Systems Modeling Language (SysML) to analyze key system components and their interactions, focusing on the Ice Detection System, Navigation System, and Human Operator. Through Block Definition Diagrams (BDD) and State Machine Diagrams, the dynamic behavior of the system is modeled to identify UCAs. Mitigation strategies are proposed to enhance safety and ensure more reliable operations in winter navigation. Furthermore, the results show that using this process may help identify critical risks early in the design phase and provide practical strategies for mitigating unsafe control actions. This approach can contribute to enhancing the overall safety and reliability of semi-autonomous ship operations, particularly in hazardous winter navigation conditions. |
15:15 | Designing HMI for remote operation of urban autonomous ferries with CRIOP PRESENTER: Jooyoung Park ABSTRACT. Autonomous urban passenger ferries are emerging as an efficient solution for public transportation, allowing operators to supervise multiple vessels from a Remote Operation Centre (ROC). A recent milestone demonstrated the remote operation of MF Estelle, the world’s first commercial autonomous ferry, from 600 km away. As operations shift from onboard to remote environments, designing a new Human-Machine Interface (HMI) becomes critical. This paper reflects on three key phases of work informing the development of a robust ROC HMI. (i) Phase 1: The milliAmpere2 trial in 2022 marked the first public demonstration of the ferry’s autonomy system, revealing the need for an HMI that displays system status and decisions clearly. (ii) Phase 2: Building on lessons learned from the trial, a user-friendly HMI was designed for MF Estelle. However, pursuing a human-centred design approach was challenging due to the undefined operator role for autonomous ferries. Despite these difficulties, the HMI was successfully integrated on board Estelle. (iii) Phase 3: After over a year of operation, the need for a ROC became apparent. The transition from onboard to remote control presents significant challenges. To address this, a CRIOP workshop was conducted, identifying critical issues related to human factors, such as the necessity of comprehensive task analysis and the importance of situation awareness (SA) in supporting the ROC operator. The results emphasize the importance of automation transparency, reducing cognitive workload, and systematically integrating human factors. Achieving fully remote operations requires both a well-designed HMI and a supporting infrastructure. This paper consolidates years of work in identifying and addressing HMI design challenges, offering insights to support meaningful human control and ensure safe transitions from onboard to remote operations.
15:30 | From Uncertainty Representation to Safety Performance Monitoring for Operational Safety Assurance - A Systematic Approach PRESENTER: Nishanth Laxman ABSTRACT. Recent advancements in Automated Driving Systems (ADS), driven by substantial investments, have significantly enhanced ADS technologies. However, traditional methods for the design, development, verification, and validation of safety-critical automotive systems are inadequate for managing the increased complexity and operational uncertainties of ADS, making the assurance of their operational safety in dynamic environments an unresolved challenge. Current operational safety approaches incrementally challenge the validity of assurance cases in various ways but lack the integration of field data. The increasing availability of real-time vehicle data presents an opportunity to identify potential runtime uncertainties affecting safety assurance cases. By continuously refining and expanding assurance cases with field data, additional evidence or counter-evidence, and other relevant information through a DevSafeOps process, the safe operation of ADS can be assured. A crucial aspect of operational safety assurance is Safety Performance Monitoring (SPM) using Safety Performance Indicators (SPIs), which are essential for both operational safety and compliance with standards such as UL 4600 and BSI PAS 1881 for the deployment of ADS. SPIs quantify safety performance and can be used to monitor the validity of safety arguments during operation. SPIs at sufficiently detailed sub-claim levels can proactively identify potential violations of safety case claims in a 'leading' manner, before safety-critical events occur. Additionally, they can provide supplementary evidence to address residual uncertainties after deployment. This paper primarily addresses SPM for operational safety, presenting a novel systematic approach that spans from uncertainty representation in assurance cases using Dempster-Shafer theory to employing dialectics and argument defeaters, ultimately defining useful SPIs related to various claims in an assurance case. This approach aids in concretely identifying and defining SPIs based on an assurance case and facilitates the runtime, field-data-based validation of assurance cases, additionally aiding in standards conformance. The approach is demonstrated through a construction zone assist case study for ADS.
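For readers unfamiliar with Dempster-Shafer theory, the sketch below combines two mass functions over the frame {claim valid, claim invalid} using Dempster's rule; the two evidence sources and their mass values are illustrative assumptions, not values from the paper.

    def combine(m1, m2):
        # Dempster's rule over frame {'v', 'i'}; 'vi' denotes the full set (ignorance)
        frame = ('v', 'i', 'vi')
        inter = {('v', 'v'): 'v', ('v', 'vi'): 'v', ('vi', 'v'): 'v',
                 ('i', 'i'): 'i', ('i', 'vi'): 'i', ('vi', 'i'): 'i',
                 ('vi', 'vi'): 'vi', ('v', 'i'): None, ('i', 'v'): None}
        out, conflict = {'v': 0.0, 'i': 0.0, 'vi': 0.0}, 0.0
        for b in frame:
            for c in frame:
                a = inter[(b, c)]
                if a is None:
                    conflict += m1[b] * m2[c]   # contradictory evidence mass
                else:
                    out[a] += m1[b] * m2[c]
        return {a: v / (1.0 - conflict) for a, v in out.items()}

    design_evidence = {'v': 0.7, 'i': 0.1, 'vi': 0.2}   # assumed masses
    field_spi_data  = {'v': 0.5, 'i': 0.3, 'vi': 0.2}   # assumed masses
    print(combine(design_evidence, field_spi_data))

A growing mass on 'i' after combining design-time evidence with runtime SPI data is the kind of signal that would flag a weakening claim in the assurance case.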
14:45 | Safety Challenges in the Built Environment in The Netherlands ABSTRACT. This paper provides an overview of the impact that societal and demographic developments have on the built environment from various safety perspectives. Safety is important in the built environment. Wherever construction is planned, underway, or completed, safety risks arise for users, residents, traffic participants, construction site personnel, and passersby. This is the case, for example, with buildings along transport routes carrying hazardous materials, or with activities in, around, or within a structure during its lifecycle, such as rough or finishing work, maintenance of installations, façade cleaning, or demolition. People inevitably use the built environment — consisting of structures and their surroundings — which may also be under construction at times. Clear, applied, scientifically grounded frameworks, decision-making processes, and distributions of roles and responsibilities among the involved organizations are essential — or rather, should be essential — to ensure safety. This necessity becomes even more pressing with societal and demographic developments on the horizon. Societal developments include, among others, urban densification (e.g., building near transport hubs), the energy transition (e.g., the use of electric equipment and installation of solar panels), climate change (resulting in extreme weather conditions), renovation, replacement, and maintenance of assets (infrastructure and real estate), and the significant construction demand (e.g., 100,000 homes per year) in the Netherlands. Demographic developments primarily include the aging population, a structural shortage of skilled (safety) personnel, the employment of migrant workers (language and cultural barriers), and the rise of sole proprietorships. Without a scientifically grounded and well-thought-out strategy and clear frameworks for integral safety, these societal and demographic developments will affect the safety of the built environment and vice versa. This paper provides insights into these issues.
15:00 | Predictive Digital Twins for Critical Infrastructure Protection: Simulation of Hazardous Gas Transport PRESENTER: Jacopo Bonari ABSTRACT. The release of hazardous airborne substances in densely populated areas poses a significant threat to the urban population as well as to critical infrastructures. In emergency situations, predicting the dispersion of a gas contaminant in the atmosphere is of paramount importance for putting adequate counter-measures in place in a timely manner. To this end, an equivalent mathematical model describing the problem is formulated, where the incompressible Navier-Stokes equations are used to estimate the air flow in a built environment and the advection-diffusion equation models the pollutant's transport and dispersion. In order to promptly obtain valid results, the solution procedure is split into two phases. In a first, offline preparatory stage, computationally intensive wind flow simulations are carried out for various atmospheric conditions, leveraging model order reduction techniques. In a second, online stage, the precomputed wind field is used as an input to model the dispersion process. The simulation combines a physics-driven model, i.e., the advection-diffusion equation, with sensor data gathered from the physical environment, which also allows for the analysis of optimization strategies to improve the sensor locations. The final target of the workflow is its integration into a hybrid digital twin framework, a paradigm that has proven to be a valid tool in the context of crisis management and useful for fostering the resilience of critical infrastructures.
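A minimal sketch of the online dispersion stage, reduced to one dimension: an explicit upwind finite-difference solver for the advection-diffusion equation with a continuous point release. The wind speed (standing in for the offline stage's output), diffusivity, domain, and source strength are illustrative assumptions.

    import numpy as np

    # 1D advection-diffusion: dc/dt + u*dc/dx = D*d2c/dx2
    nx, L = 200, 1000.0
    dx = L / nx
    u, D = 2.0, 5.0                            # assumed wind speed and diffusivity
    dt = 0.4 * min(dx / u, dx**2 / (2 * D))    # CFL-limited time step

    c = np.zeros(nx)
    src = nx // 5                              # assumed release location
    for _ in range(2000):
        adv = -u * (c - np.roll(c, 1)) / dx                         # upwind advection
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2  # diffusion
        c += dt * (adv + dif)
        c[src] += dt * 1.0                     # continuous point release
        c[0] = c[-1] = 0.0                     # open boundaries
    print("peak concentration at x =", c.argmax() * dx, "m")

In the full workflow the precomputed 3D wind field replaces the scalar u, and sensor readings would be assimilated into c at each online step.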
15:15 | Dynamic Safety and Risk Assessment of Tunnel Construction: Leveraging Systems Engineering through Sigma Modeling PRESENTER: Mirza Muntasir Nishat ABSTRACT. Constructing a tunnel is a complex process that requires engineering expertise, close coordination, safety and risk management, and careful consideration of geological factors and structural stability. The complicated process of tunnel construction calls for dynamic and comprehensive safety and risk evaluations to ensure project success and safeguard stakeholders. By leveraging the power of systems engineering, a tunnel project can be modeled in a way that offers a comprehensive understanding of all subsystems, ensuring that the complex relationships between all components are well comprehended and controlled. This study investigates the applicability and compatibility of the Sigma modeling language and the WorldLab™ Workshop modeling environment for modeling a tunnel project from the systems engineering point of view, demonstrating the ability to integrate multiple activities in a single cohesive model. With its graphical user interface, the WorldLab™ Workshop enables the easy execution of compilation and simulation tasks and serves as a virtual laboratory for performing interactive and stochastic simulations. The use of systems engineering facilitates the visualization and analysis of the complex interrelationships among various project components, hence enabling the identification of possible risks and operational safety issues. In addition, complex scenarios can be tested through modeling of this kind, allowing project managers to assess the effects of different factors and, consequently, create an interactive knowledge-based system that clearly defines a framework for collaboration and risk mitigation throughout the project. The results indicate that integrating system dynamics and systems engineering with Sigma modeling enhances risk management strategies and improves safety and operational efficiency in tunnel projects. This novel investigative approach addresses the complex challenges of risk and safety management in tunnel engineering, offering practitioners fresh insights.
15:30 | Refining the safety design of rail tunnels in the EU using systems thinking PRESENTER: Jeroen Wiebes ABSTRACT. Tunnels have developed from basic infrastructure into complex systems with many interconnected components. When designing a railway tunnel, other factors, like human behavior and the technical systems of the trains, add to this complexity. Traditional safety analysis methods often fall short in addressing this complexity, highlighting the need for a new approach to railway tunnel fire safety. This article sets out to investigate whether methods incorporating systems thinking into railway tunnel safety design could improve tunnel safety. A framework combining STPA with more traditional engineering methods is used to analyze a prescriptive design as part of a case study. Based on ‘common’ fire scenarios for railway systems, results show that a prescriptive design provides inadequate control in protecting tunnel users from heat and smoke. The article reveals that while the current regulatory framework at the EU level aims to incorporate systems thinking into the design process, several critical gaps hinder its practical implementation. The integration of safety assessment methods based on systems thinking, combined with traditional risk analysis methods, holds significant potential for improving railway tunnel safety design. By combining methods like STPA with tools such as CFD, designers can better analyze complex socio-technical interactions and provide robust, cost-effective safety solutions.
16:30 | Importance Measures from Complex Reliability Simulations PRESENTER: Curtis L. Smith ABSTRACT. We consider the problem of estimating the importance of components and parameters in dynamic simulations of complex systems. We aim at a conceptualization that retains the meaning of traditional (static) importance measures in a dynamic setting. We also approach the problem by defining the importance measures in such a way that they draw from the rich output of a dynamic simulator, exploiting information that is typically hidden when focusing solely on probabilistic-level data (e.g., component failure probabilities, system-level failure probability). By incorporating detailed observable information such as failure times and component operational characteristics, additional dimensions of decision making become available to system designers and operators, allowing a focus on the margin to failure. The goal of our work is to create a scalable and flexible approach that enriches the insights from a time- and physics-informed simulation with explanations of the fundamental drivers of system behavior.
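One simple way to exploit simulated failure times, in the spirit of (but not identical to) the approach above, is a time-dependent Birnbaum-like importance estimated from Monte Carlo samples; the 3-component structure and exponential lifetimes below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000
    # Assumed system: works if (comp1 AND comp2) OR comp3 work; exponential lifetimes
    t1 = rng.exponential(100.0, N)
    t2 = rng.exponential(150.0, N)
    t3 = rng.exponential(400.0, N)
    t_sys = np.maximum(np.minimum(t1, t2), t3)   # sampled system failure times

    for t in (100.0, 300.0, 600.0):
        failed = t_sys <= t
        for name, ti in (("comp1", t1), ("comp2", t2), ("comp3", t3)):
            down = ti <= t
            # P(system failed by t | comp failed by t) - P(... | comp still up at t)
            b = failed[down].mean() - failed[~down].mean()
            print(f"t = {t:5.0f}  {name}  importance: {b:.3f}")

Unlike a static measure, the same simulation output also yields a margin to failure (e.g., t_sys - t) per sample, which is the kind of information the abstract notes is hidden at the purely probabilistic level.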
16:45 | Developing Measures for Node Importance in Critical Transportation Networks - An Illustration to the Analysis of Switches at Finnish Railway Stations PRESENTER: Leevi Olander ABSTRACT. Estimates on the importance of nodes in transportation networks provide guidance on the allocation of resources to preventive maintenance. As a rule, those nodes whose disruptions would most affect the level of service provided by the network are particularly important and, as a consequence, merit more attention in maintenance planning. Conventionally, measures of node importance have been generated based on structural properties (e.g. degree centrality, betweenness centrality). However, these properties do not account for the extent to which the network is capable of providing its intended level of service, such as enabling the planned traffic volume between terminal pairs that are defined by relevant origins and destinations in the network. In this setting, we extend well-known measures of probabilistic risk importance to prioritize nodes in support of preventive maintenance planning. Specifically, we adapt the risk achievement worth (RAW) measure to assess how much the planned traffic volume between terminal pairs of the network would be compromised due to disruptions at different nodes. To support the applicability of our methodological advances, we develop a systematic, data-driven, semi-automated approach that makes but modest assumptions about the quality of underlying data. We illustrate this approach with a case study on the prioritization of switches at a representative railway station in Finland, based on an analysis of the reliability of the connections that this station offers to the adjacent railway track segments. We also compare the proposed RAW measure with structural importance measures and elaborate on how these two kinds of measures can be jointly examined to produce complementary sources of information. |
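As a toy illustration of a service-based, RAW-style node measure (not the authors' data-driven pipeline), the sketch below removes each switch from a small networkx graph and reports the share of planned terminal-pair volume that can no longer be served; the network layout, demand, and node names are all assumptions.

    import networkx as nx

    # Toy rail network: terminals A and B, switches s1-s3 on track segments
    G = nx.Graph([("A", "s1"), ("s1", "s2"), ("s2", "B"), ("s1", "s3"), ("s3", "B")])
    demand = {("A", "B"): 100.0}     # planned traffic volume per terminal pair

    def served(g):
        # Volume that can still flow between connected terminal pairs
        return sum(v for (o, d), v in demand.items()
                   if o in g and d in g and nx.has_path(g, o, d))

    base = served(G)
    for n in ("s1", "s2", "s3"):
        g = G.copy()
        g.remove_node(n)             # switch unavailable due to disruption
        print(f"switch {n}: share of planned volume lost = {(base - served(g)) / base:.2f}")

Here s1 is a single point of failure (share lost 1.00), while s2 and s3 back each other up (0.00 each), which is exactly the kind of distinction that purely structural centrality measures can miss.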
17:00 | Addressing Uncertainty in software tools used in Risk Management PRESENTER: Petter Johnsen ABSTRACT. Presight holds the responsibility to highlight uncertainties and unknowns by clearly presenting gaps, limitations, and variability in data. Since uncertainty is inevitable and risks are constantly evolving, acknowledging and addressing these uncertainties can make solutions more dependable during decision-making. The key question is: how confidently can the model be trusted for making the right decisions? Presight specialises in software solutions for Barrier Management. This approach ensures that organisational, operational, and technical barriers are clearly defined and effectively managed to secure safe and stable operations. The Presight Barrier Management solution is used to visualise critical safety barriers, enhance decision-making, and improve safety outcomes. Acting as a comprehensive umbrella tool, it aggregates and visualises data from various third-party sources. Recognising that there is always a limit to how much can be known from the available data and information, the level of uncertainty is always a factor to consider in risk management. The solution is designed to visualise what can be known with certainty based on available data (the “known knowns”) and to highlight areas where the available information is limited. This goes beyond providing a simple green, yellow, and red status by bringing attention to key uncertainty factors. For instance, in barrier functions where human factors play a significant role, uncertainty levels may be higher than in those dominated by technical factors. In such cases, greater weight can be assigned to the human factor, emphasising its critical role in that particular barrier function. Additionally, dynamic weighting can be applied to reflect external shifts, such as changes in the geopolitical landscape, which may impact risk levels. When a function is marked as “grey” in the diagram, this indicates that a significant level of uncertainty has been identified for that specific function. By taking uncertainty into consideration, the software ensures it provides insight into some of the unknowns.
17:15 | Measurement uncertainty and remaining useful life prediction: A case study using testing data from shape memory alloy wires PRESENTER: Alicia Auer ABSTRACT. In a globalized and interconnected world, the importance of reliability, maintenance, and quality continues to grow. At the same time, the prediction of the remaining useful life (RUL) is essential for maintenance planning in the context of Prognostics and Health Management of technically complex products. Given the importance of maintenance measures for a reliable product life cycle, it is all the more important to achieve the most accurate prediction results possible. A relevant influencing factor here is the measurement uncertainty of the database used. This paper presents a case study on the impact of measurement uncertainty on RUL prediction. The study employs data from tests of cyclically stressed shape memory alloy wires; real data from long-term life tests conducted on a test rig are used. Firstly, the RUL of the shape memory alloy wires is predicted using linear regression models. The training data is fitted with a Gaussian least-squares regression model, and the RUL is estimated using forecasts generated by this model and an adaptive y-target value for the failure time. The second step is a comprehensive measurement uncertainty analysis, which determines and quantifies all relevant uncertainty components of the measurement process. The expanded measurement uncertainty is determined according to ISO 22514-7 and VDA 5. Thirdly, Monte Carlo simulations are conducted based on the original time series and the determined measurement uncertainty. Representative time series are generated, and the RUL is predicted for each of them. Subsequently, descriptive statistics are applied to the obtained set of simulated RULs, and the results are compared to the original time series and the true RUL. The paper concludes with a discussion of the results and an outlook on future work.
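The first and third steps can be sketched in a few lines of Python: fit a linear degradation trend, extrapolate to a failure threshold for the RUL, then propagate an expanded measurement uncertainty by Monte Carlo. The synthetic degradation series, threshold, and uncertainty value are illustrative assumptions, not the wire test data.

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(60.0)
    y = 0.8 * t + rng.normal(0.0, 2.0, t.size)   # assumed degradation series
    threshold = 100.0                            # fixed stand-in for the adaptive y-target

    def rul(tt, yy):
        slope, intercept = np.polyfit(tt, yy, 1)          # least-squares linear fit
        return (threshold - intercept) / slope - tt[-1]   # time left until threshold

    U = 1.5   # assumed expanded measurement uncertainty (coverage factor k = 2)
    ruls = [rul(t, y + rng.normal(0.0, U / 2.0, y.size)) for _ in range(1000)]
    print(f"RUL point estimate: {rul(t, y):.1f}")
    print(f"Monte Carlo RUL: mean {np.mean(ruls):.1f}, std {np.std(ruls):.1f}")

The spread of the simulated RULs relative to the point estimate is what quantifies how much the measurement process, rather than the degradation itself, drives prediction error.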
17:30 | Expanding the uncertainty toolkit for risk analysis ABSTRACT. Uncertainty is an inherent part of risk: where there is no uncertainty, there is no risk. However, most international standards for risk analysis provide minimal guidance on the consideration of uncertainty. In part, this is due to the lack of an agreed lexicon, and the absence of comprehensive methodologies for recognising, analysing and evaluating uncertainty. To address this issue, a framework for treating uncertainty in risk analysis is presented, applicable to decision-making processes and providing theoretical completeness. This framework elaborates a three-layered approach of first, second and third order uncertainty analysis. Somewhat surprisingly, first order uncertainty analysis is the risk assessment itself. The definition of risk, according to ISO 31000:2009, is “the effect of uncertainty on objectives”. By implication, risk assessment requires examination of that uncertainty. Typically, the uncertainty in question relates to the likelihood (probabilities) and consequences dimensions of risk. Second order uncertainty analysis within this context is thereby viewed as evaluation of the residual uncertainty that remains after the risk assessment. Familiar methods that count as second order uncertainty analysis include sensitivity analysis, probability bounding and worst-case scenario mapping. However, second order uncertainty analysis can also be extended to uncertainty associated with establishing the context, the risk assessment methodology and processes, risk evaluation criteria and proposed risk treatment measures. Finally, third order uncertainty analysis involves evaluation of higher order uncertainty, described here as metauncertainty, namely, uncertainty about uncertainty. It may encompass considerations such as: establishing the principles and methodologies for risk analysis, including a typology of uncertainty; deciding the acceptable levels of uncertainty; resolving undecidable concepts and foundational issues in risk analysis; and probing relationships between risk and related concepts such as safety, vulnerability and resilience. Incorporating metauncertainty into our toolkit can provide a more comprehensive consideration of uncertainty and its application to risk analysis.
16:30 | Risk Perception and Preventive Behaviours Regarding Occupational Exposure to Hazardous Substances: Findings from a Scoping Review PRESENTER: Eva Lindhout ABSTRACT. Workers' decisions and behaviours regarding the adoption of preventive measures are vital for mitigating health risks from occupational exposure to hazardous substances. These decisions are influenced by factors such as workers’ risk perception—including their values, attitudes, beliefs, feelings associated with, and understanding of the risks—and their beliefs about the effectiveness of safety measures. Despite its importance, workers’ perspectives are often not addressed in risk prevention strategies. To date, a comprehensive study providing an overview of (factors related to) risk perception and preventive behaviours regarding occupational exposure to hazardous substances is missing. This scoping review aims to address this gap by exploring the factors influencing workers' decision making and behaviours regarding occupational exposure to hazardous substances. The review seeks to: 1) synthesise existing evidence, 2) clarify key concepts, and 3) identify knowledge gaps. We conducted a search across five databases—MEDLINE, Embase, PsycINFO, Web of Science, and Scopus—for studies published from 2010 to the present. To aid in the review process, we used ASReview, a machine-learning-powered tool. This presentation will cover key findings from the scoping review and present a conceptual model of risk perception and preventive behaviours regarding occupational exposure to hazardous substances.
16:45 | Risk perception as a predictor of HPV vaccine uptake during the Dutch catch-up campaign in 2023 PRESENTER: Femke Hilverda ABSTRACT. Background: Human papillomavirus (HPV) infection may cause several forms of cancer, such as cervical cancer, but also penile or anal cancer. Vaccination is an effective measure to prevent cancer development. In 2023, the Dutch government started a vaccination catch-up campaign in which, particularly, young men between the ages of 19 and 26 were invited to receive the HPV vaccine free of charge. Aims: We used the Health Belief Model (HBM) to examine the relevance of risk perception, both perceived susceptibility and perceived severity, as well as the relevance of other determinants (perceived barriers and benefits), in predicting vaccine uptake among young men during the catch-up campaign. Method: In April 2023, a cross-sectional survey study was conducted among Dutch young men aged 19-26 years (n = 155). All determinants were measured on a 5-point Likert scale, while vaccine uptake was measured dichotomously (no vs yes). Confidence interval-based estimation of relevance (CIBER) was performed to uncover the relevance of each determinant for HPV vaccine uptake. Results: Since the start of the campaign, about 33% of the participants got vaccinated. Of the four determinants, perceived benefits and barriers were most strongly, perceived severity moderately, and perceived susceptibility not significantly (cor)related to vaccine uptake. Also, most young men perceived the consequences of HPV infections as moderately severe, while they perceived many benefits and few barriers. Conclusion: This study provides unique insights into the motives of young men to get vaccinated against HPV during the catch-up campaign. Our results imply that public health interventions should focus on increasing risk perception by showing the severity of the potential consequences of an HPV infection. In contrast, there seems to be little relevance in a focus on increasing perceived susceptibility. Besides the focus on perceived severity, simultaneously reinforcing beliefs about the many benefits and few barriers is desirable in health campaigns.
17:00 | Physicians' risk perception and risk-taking with AI advice PRESENTER: Hanqin Zhang ABSTRACT. The integration of Artificial Intelligence (AI) into clinical settings has the potential to assist and reshape physicians' decision-making. However, little is known about physicians' risk-taking and risk perception when confronted with advice from AI-based support systems. Therefore, an experimental study was conducted to better understand physicians' willingness to adopt AI advice under uncertainty. Physicians in this study were asked to give an initial diagnosis for a set of clinical vignettes developed for the study, and then to make a final decision with AI advice. Thus, this study also examines whether physicians' final decisions are influenced by confirmation bias. In total, 81 physicians (25 interns, 30 residents, and 26 associate chief physicians) were included in this study. Overall, the physicians' risk-taking is associated with their experience level. Young physicians were more willing to accept AI advice, while experienced physicians took more time to consider the capabilities of the AI advice. In addition, this study found that physicians' final decisions with AI advice were influenced by confirmation bias. Taken together, this study provides important insights into physicians' risk perception of AI advice and their cognitive biases when facing complicated clinical cases.
17:15 | Next steps for the risk perception-disaster preparedness behaviour nexus. Case studies from Romania PRESENTER: Iuliana Armaș ABSTRACT. The link between risk perception and preparedness behaviour has been studied for more than three decades, yielding a mix of consistent and inconsistent findings. Within this heterogeneous body of literature, a prominent research gap can be identified: the absence of research conducted in nuanced contexts, particularly in European countries with complex historical and cultural legacies (e.g., former communist states) that may shape both risk perception and disaster preparedness. This study aims to investigate the relationship between risk perception and the willingness to implement individual disaster preparedness behaviour, focusing on earthquake risk perception in Bucharest and flood risk perception in Galați City, Romania. These two hazards were selected due to their occurrence frequency and severe impact in the study areas, as they pose significant threats to the human communities in question. The theoretical underpinning lies in an extended version of the Theory of Planned Behaviour (TPB), modified to include risk perception as a predictor of disaster preparedness behaviour, as proposed by Ng (2022). The connections between the theory’s constructs were examined through a robust operational framework involving linear, nonlinear, and Bayesian regressions, as well as structural equation modelling. The data were collected through a standardised questionnaire applied to 300 participants in Bucharest and 200 participants in Galați City, between June and July 2024. The results add to research on risk perception and disaster preparedness, bringing to light new perspectives on what drives preparedness behaviour. These insights can guide the creation of targeted public awareness campaigns concerning earthquake and flood preparedness. In addition, they can inform new approaches to manage seismic and flood risk through greater community involvement. Such progress is needed in both cities, given their current low level of disaster preparedness at the individual level. |
17:30 | Assessing public valuation of coastal protection solutions for floods: a multi-country experimental study PRESENTER: Olivia Jensen ABSTRACT. Flooding is one of the most financially devastating natural hazards, affecting millions globally. In Asia, approximately 600 million people are at risk of coastal flooding, a number expected to rise as sea levels increase. Singapore and Japan, two low-lying coastal nations, face growing vulnerability to coastal flooding in the coming decades due to climate change. In response, both countries’ governments have been actively planning and investing in coastal protection solutions. Understanding public preferences and valuations of these solutions is critical for aligning development efforts with community interests and enhancing public support. This study, part of a broader project on coastal flood risk and adaptation in Asia, aims to provide insights to decision-makers by examining public preferences and trade-offs related to different attributes of coastal protection solutions in Singapore and Japan. Using a choice experiment, we will evaluate four key attributes: flood risk reduction, recreational opportunities, negative impacts on wildlife, and inconvenience, along with a cost attribute to estimate willingness to pay. Additionally, we will investigate the roles of flood risk perception and risk tolerance in influencing public decision-making. While risk perception has been widely studied, its effect on policy support has shown mixed results. In contrast, the role of risk tolerance in response to flood risk remains largely underexplored. Preliminary research and a pilot survey conducted in Singapore and Japan in 2023 (n = 500 for each country) provided insights into flood experience, risk attitudes, risk perceptions, and existing protection measures. These findings have informed the design of the ongoing choice experiments and surveys. The main surveys, currently being implemented, will explore (i) public valuation of long-term coastal protection solutions, (ii) the influence of risk perception and risk tolerance on decision-making, and (iii) the dynamics of preferences across different flood scenarios, subpopulations within each country, and cross-country comparisons.
16:30 | Root causes of Critical Infrastructure Failures in the 2023 Southeast Turkey Earthquake: A case study from Hatay PRESENTER: Nazli Yonca Aydin Harless ABSTRACT. The 2023 Turkey Earthquake caused widespread collapse and severe damage to Hatay’s critical infrastructure, including the airport, water pipelines, telecommunications, railroads, roads, hospitals, and the harbor. These damages also severely disrupted search and rescue operations, delaying emergency aid. Such outcomes highlight inadequacies in pre-disaster planning, which plays a critical role in the severity of such disasters. Furthermore, recovery policies are often formulated rapidly in response to urgent needs after disasters, which leads to “going back to the baseline state” and perpetuates existing vulnerabilities, since the root causes of these vulnerabilities are unknown. Such reactive measures can, over time, exacerbate social, economic, and environmental issues, transforming natural disasters into more severe crises (Ingram et al., 2006). A proactive approach incorporating pre-disaster assessments of vulnerabilities, including infrastructure vulnerabilities, is crucial for building long-term resilience. Hence, there is a significant research gap in understanding the root causes of disasters. This research fills that gap in the literature by investigating the root causes of critical infrastructure failures during a disaster. More specifically, this study explores the shortcomings in disaster management processes and critical infrastructure planning that contributed to the catastrophic outcomes in Hatay, Turkey. Through a combination of interviews with key stakeholders, in-depth event and situation research, and qualitative fault tree analysis, this research aims to identify the root causes of these failures, which remain unclear more than a year after the Southeast Turkey earthquake. Following this pre-disaster event understanding, the study then evaluates post-disaster recovery plans, ensuring that they address the identified vulnerabilities and contribute to more effective and resilient recovery efforts. Addressing these systemic issues is essential for improving disaster preparedness and response in Turkey and beyond.
16:45 | How critical is critical? Assessing the role of critical infrastructure-related sectors across the economy network PRESENTER: Tan Phan ABSTRACT. The concept of critical infrastructure (CI) has been extensively studied, with various frameworks analyzing interdependencies between infrastructures and their ties to economic activities. However, defining infrastructure criticality remains challenging, complicating damage assessment when failures occur. Many models assume that the failure of any infrastructure halts all business activities, leading to overly simplistic and sometimes unrealistic assessments. This research aims to refine this approach by classifying the criticality of four key infrastructure groups: energy, water, information and communication technologies (ICT), and transport. Using data from the OECD Input-Output (IO) table, the study analyzes twelve economic sectors related to these infrastructure groups, categorized based on the International Standard Industrial Classification (ISIC). It combines traditional IO analysis with complex network science. First, it examines the intermediate inputs from infrastructure-related sectors to others, identifying which sectors attract the most monetary flow. Network analysis then evaluates each sector’s role through measures of centrality. Simulations of disruptions to individual and combined infrastructure groups further explore their impact on network topology. The findings highlight the critical role of transportation and energy, which account for 70% of infrastructure-related expenses. The energy sector, being the most central in the economic network, shows that disruptions could reduce in-strength centrality by 12% across the economy. Transportation is crucial for manufacturing, while ICT is essential for services. The water sector, though less centralized, plays a significant but more dispersed role. Finally, we build a heatmap to rank the criticality of 12 CI sectors across 44 economic sectors. This approach provides a relative view of the role each CI sector plays for each corresponding economic sector, offering a more nuanced understanding of their interdependencies. |
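The disruption simulations described above can be made concrete with a few lines of network code. The following sketch (sector names, flows, and the removal rule are illustrative assumptions, not the paper's OECD data) computes weighted in-strength centrality on a toy input-output network and shows how removing the energy sector reduces it elsewhere:

```python
# Minimal sketch (not the authors' code): weighted in-strength centrality
# on a toy input-output network, and the effect of removing one sector.
import networkx as nx

# Hypothetical monetary flows (supplier -> user), e.g. in million USD
flows = [
    ("energy", "manufacturing", 120.0),
    ("energy", "services", 80.0),
    ("transport", "manufacturing", 95.0),
    ("ICT", "services", 60.0),
    ("water", "manufacturing", 15.0),
    ("manufacturing", "services", 40.0),
]

G = nx.DiGraph()
G.add_weighted_edges_from(flows)

def in_strength(g):
    """In-strength = sum of weights of incoming edges per node."""
    return dict(g.in_degree(weight="weight"))

base = in_strength(G)

# Simulate a disruption of the energy sector by removing its node
G_disrupted = G.copy()
G_disrupted.remove_node("energy")
after = in_strength(G_disrupted)

for node in after:
    drop = base[node] - after[node]
    if drop > 0:
        print(f"{node}: in-strength falls by {drop:.0f} ({drop / base[node]:.0%})")
```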
17:00 | Formalizing stakeholders’ perspectives to assess Systemic Resilience to Urban Flooding PRESENTER: Erica Arango ABSTRACT. Floods, intensified by climate change, pose a major threat to cities, especially in low-lying coastal areas. Managing flood risk is particularly complex in cities like Rotterdam and Chennai, where high population densities, aging infrastructure, and increasing hydrological extremes demand multi-perspective solutions. The Resilient Hydro Twin Project addresses these challenges by developing a framework that integrates diverse stakeholder perspectives into a comprehensive resilience assessment. Thus, the project tackles the 'knowing-doing gap,' where solutions exist but are not implemented effectively due to disjointed approaches. Stakeholder engagement is key to developing adaptable, user-friendly digital models that enhance decision-making and improve flood disaster management. While methods like participatory design incorporate stakeholder perspectives, there is a gap in integrating their input with quantitative resilience assessment. This paper presents a framework combining stakeholder participation and dynamic thresholds, translating qualitative insights into quantitative metrics via a digital twin. These thresholds are defined based on the functionalities of the city in relation to different flooding intensities, analysing how the required functionalities change under varying hazardous conditions. For example, during extreme events such as Hurricane Milton, cities are required to implement evacuation measures, whereas during milder events it is essential to ensure access to shelters. Aligning these thresholds with stakeholder perspectives provides a flexible yet precise way to assess resilience. This adaptability makes them particularly valuable in managing urban resilience, as they not only highlight where resources should be focused but also help prioritize investments and responses under different hazardous conditions. Through a designed process for stakeholder participation, the Resilient Hydro Twin Project seeks to bridge the gap between knowledge and action. The integration of dynamic thresholds allows for real-time adaptation to varying flood intensities, enhancing urban resilience and demonstrating the project's capacity to address the growing challenges posed by climate change. |
17:15 | Incorporating Equity and Fairness into a Mathematical Model for Power Restoration After Natural Disasters PRESENTER: Ignacio Sepulveda ABSTRACT. The power grid is a critical infrastructure of a community, yet the increasing frequency and intensity of extreme weather events have led to longer, more frequent power outages, disproportionately affecting low-income and minority groups as well as rural communities. To enhance recovery after such events, various restoration strategies have been proposed, focusing on experience, cost reduction, and critical infrastructure prioritization. However, these strategies often overlook people’s needs, highlighting the need to incorporate equity/fairness into power restoration planning. This study addresses the power restoration planning problem for transmission networks considering customer vulnerability as captured by the Social Vulnerability Index of the United States. In this problem, we employ a Mixed-Integer Linear Programming model to dispatch a set of homogeneous repair crews to fix damaged components, combining operational and power flow constraints for transmission networks with vehicle routing constraints, and testing different objective perspectives. Exact and heuristic methods are proposed to solve the model depending on its complexity. The model's solutions are analyzed in order to understand, first, the cost of equity/fairness, giving a picture of how expensive it is to apply these strategies; second, the reason for the disparity, whether it stems from an uneven restoration schedule or from an uneven distribution of power grid resilience; and last, the best strategy for each instance, depending on the power grid, the community, and the goals we want to reach. This research proposes a way to improve community resilience, not from a general perspective, but in a practical way by looking at the customers’ characteristics and needs. In general, the implementation of this strategy will help power companies to provide a service that benefits the community in a holistic way, considering both power grid operation and customers’ living conditions. |
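A heavily simplified sketch of the equity-weighted scheduling idea follows. It is not the paper's full MILP (power-flow and vehicle-routing constraints are omitted); the component names and SVI weights are invented, and a single crew chooses a repair order that minimizes the SVI-weighted completion slot:

```python
# Minimal scheduling sketch (a stand-in for the paper's richer MILP):
# one repair crew orders repairs of damaged components; the objective
# weights each component's restoration slot by a hypothetical Social
# Vulnerability Index (SVI) score so vulnerable areas are served earlier.
import pulp

damaged = ["line_A", "line_B", "line_C", "line_D"]
svi_weight = {"line_A": 0.9, "line_B": 0.3, "line_C": 0.7, "line_D": 0.5}
slots = range(1, len(damaged) + 1)

prob = pulp.LpProblem("equity_aware_restoration", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (damaged, slots), cat="Binary")

# Each component repaired exactly once; each slot holds one repair
for c in damaged:
    prob += pulp.lpSum(x[c][t] for t in slots) == 1
for t in slots:
    prob += pulp.lpSum(x[c][t] for c in damaged) == 1

# Minimize SVI-weighted completion slot
prob += pulp.lpSum(svi_weight[c] * t * x[c][t] for c in damaged for t in slots)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
order = sorted(damaged, key=lambda c: sum(t * x[c][t].value() for t in slots))
print("repair order:", order)
```

In this toy instance the optimal order simply sorts by SVI weight; the trade-offs the abstract analyzes only appear once grid topology, power flows, and travel times constrain the schedule.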
17:30 | Household Resilience to Power Disruptions: A Pan-India Analysis PRESENTER: Srijith Balakrishnan ABSTRACT. Access to a reliable power supply is critical for fostering community well-being and driving economic development. However, many countries, particularly in the Global South, face challenges due to inadequate and aging power infrastructure, leading to frequent and prolonged power disruptions. These problems are often compounded by extreme weather events, which further intensify power outages for households. While the importance of power supply for community resilience is well-established, little empirical evidence exists on how households cope with power outages and which factors shape their coping capacity. This study investigates the influence of power outages on household consumption and investment choices, as well as the factors that determine households' ability to cope with an unreliable electricity supply. To do this, we combine household power consumption and preference data from the 2020 India Residential Energy Survey (IRES) with open-source weather and socioeconomic datasets to explore the complex relationships between household characteristics, power dependence, and coping strategies. First, we test the hypothesis that household vulnerability to power disruptions follows a non-linear relationship with economic status. We expect lower-income households to be less affected due to their limited reliance on electric appliances, while higher-income households may be more resilient because they have better access to backup power sources. In contrast, middle-income households are potentially the most vulnerable, as they tend to rely more heavily on electric appliances but lack sufficient access to energy backup during outages. Additionally, we explore the spatial disparities in power reliability and weather-related outages, as well as their influence on household energy consumption patterns and preferences. This research provides insights into household-level resilience against power disruptions, offering a basis for targeted interventions and policy recommendations to enhance energy resilience in the Global South context. |
16:30 | A Study on the Use of Simulation Data for Data-Driven Fault Diagnosis of Various Rolling Bearings Using Transfer Learning PRESENTER: Marcel Braig ABSTRACT. Rolling bearings are key components of numerous engineering systems and are subject to wear due to mechanical contact. Consequently, bearing fault diagnosis is imperative for the reliability and efficiency of these systems, such as rotating machinery. This paper explores the utilization of simulation data for training data-driven fault diagnosis methods. To this end, a self-developed bearing simulation and self-collected measurement data from test rigs are employed, considering varied operating conditions and bearing types. The study evaluates the effectiveness of simulation data in improving the diagnosis performance for real bearing faults. In particular, transfer learning methods are examined, encompassing both inductive and transductive transfer learning approaches, implemented with three types of neural networks. The findings demonstrate the effectiveness of the developed simulation model in generating data that is conducive to fault diagnosis. Training with simulation data alone already indicates the potential benefits of incorporating simulation data. The study further demonstrates that inductive transfer learning exhibits superior performance in comparison to training with real measurement data alone. However, no improvements are achieved through transductive transfer learning. |
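To illustrate the inductive transfer-learning setup that performed best in the abstract, here is a minimal sketch, not the authors' networks: a small 1D CNN is pretrained on simulated vibration windows and then fine-tuned on scarce real data with the feature extractor frozen. Window length, class count, and the random stand-in data are assumptions:

```python
# Illustrative inductive transfer learning for bearing fault diagnosis:
# pretrain on simulated signals, fine-tune the classifier head on real ones.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(8 * 8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Hypothetical data: 1-channel vibration windows of 1024 samples
x_sim, y_sim = torch.randn(256, 1, 1024), torch.randint(0, 3, (256,))
x_real, y_real = torch.randn(32, 1, 1024), torch.randint(0, 3, (32,))

model = SmallCNN()
train(model, x_sim, y_sim, epochs=20, lr=1e-3)    # source domain: simulation
for p in model.features.parameters():             # freeze feature extractor
    p.requires_grad = False
train(model, x_real, y_real, epochs=10, lr=1e-4)  # target: few real samples
```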
16:45 | Resilient Humanitarian Logistics: Planning for Relief Distribution Amidst Damaged Infrastructure PRESENTER: Yasser Almoghathawi ABSTRACT. Planning the distribution system for humanitarian relief efforts following a disaster is crucial. An optimal distribution system can only function properly in the presence of robust infrastructure networks, such as transportation networks. However, disasters, whether natural or man-made, often cause severe damage or destruction to such infrastructure networks. In this work, we examine the planning of the distribution system, which involves transporting relief materials from storage facilities to distribution centers using trucks, then delivering them to the victims' demand nodes using unmanned aerial vehicles. Accordingly, we develop an optimization model using mathematical programming with the objective of enhancing the resilience of the system. Furthermore, the model takes into account the capacity of the roads within the transportation network that are utilized by trucks, considering their level of damage following the disaster and the time needed to restore them. We solve the developed optimization model for a transportation network under various scenarios considering different levels of road damage and different restoration durations for the damaged roads. Moreover, due to the difficulty of obtaining optimal solutions with mathematical programming for large-scale problems, we propose a simple local search algorithm with a destroy-and-repair operator to obtain a solution within a reasonable computational time. |
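The destroy-and-repair local search mentioned at the end of the abstract follows a generic pattern; the skeleton below sketches it, with the cost, destroy, and repair functions left as assumed problem-specific callables:

```python
# Generic destroy-and-repair local search skeleton (illustrative only):
# `destroy` removes part of a solution, `repair` greedily rebuilds it,
# and only improving candidates are accepted.
import random

def local_search(initial, cost, destroy, repair, iters=1000, seed=0):
    rng = random.Random(seed)
    best = current = initial
    for _ in range(iters):
        partial = destroy(current, rng)      # e.g. drop some truck/UAV legs
        candidate = repair(partial, rng)     # e.g. greedy feasible reinsertion
        if cost(candidate) < cost(current):  # accept improvements only
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best
```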
17:00 | Fully Unsupervised Image Anomaly Detection with Unknown Data Contamination PRESENTER: Matthias Wüest ABSTRACT. Image Anomaly Detection (IAD) is a highly active research area, driven by its critical role in applications such as industrial inspection, medical diagnostics, and security. Although many unsupervised IAD methods have demonstrated promising results, they typically assume that training data consists solely of normal, anomaly-free samples – an assumption often unmet in practical settings. To address this, researchers have proposed various strategies. Among these, image-level refinement frameworks stand out for their method-agnostic approach, allowing them to remain effective as IAD techniques continue to evolve. However, these frameworks typically assume that the anomaly ratio of the training data is known, even though a reliable estimate of this value is rarely available in real-world scenarios. In this paper, we investigate the effect of contaminated training data with known and unknown contamination ratios. We introduce Cross-Split Data Refinement (CSDR), a fully unsupervised, model-agnostic image-level refinement framework designed specifically to handle unknown anomaly ratios in the training data. CSDR also addresses key limitations of previous semi-supervised approaches, such as inefficient data use and the risk of bias from training-testing overlap. We evaluate CSDR in combination with established IAD methods on public datasets of industrial images under various scenarios, with and without prior information about the contamination ratio. Our results show that CSDR mitigates performance degradation effectively under non-zero contamination ratios, while having only minimal impact when applied to training data with no contamination – a highly relevant scenario often overlooked in prior research. In both cases, CSDR outperforms existing image-level refinement frameworks. Overall, our findings represent a significant step towards a fully unsupervised, effective, and widely applicable framework that minimizes the impact of contamination in IAD. |
17:15 | Advanced Multidimensional Vibration Signal Processing for Gearbox Pitting Fault Classification using IMPE, STFT, and a CNN-Driven Deep Learning Approach PRESENTER: Manish Pandit ABSTRACT. Gearbox fault diagnosis is crucial for ensuring the reliability and efficiency of industrial machinery. This study proposes a novel approach by analyzing multidimensional vibration signals under varying load conditions (0Nm to 30Nm) to enhance pitting fault classification accuracy. The vibration signals were decomposed into Multidimensional Intrinsic Mode Functions (IMFs) using Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD), allowing for a more detailed representation of fault-induced vibrations. To select the most informative IMFs, Improved Multiscale Permutation Entropy (IMPE) with a standard deviation-based thresholding method was applied, ensuring the retention of relevant features. For time-frequency analysis, the Short-Time Fourier Transform (STFT) was used to generate heat maps, providing insights into the transient behaviour of faults. From the Time-Frequency Representation (TFR), the Z-axis was identified as the most sensitive to fault-related vibrations, making it the optimal direction for classification. A deep learning-based classification framework was then developed to distinguish between healthy and faulty gearbox conditions, leveraging Convolutional Neural Networks (CNNs) for automated feature extraction and classification. Furthermore, the proposed method was benchmarked against established deep learning architectures, VGG16 and ResNet-50, to evaluate its performance. By integrating multidimensional vibration analysis, entropy-based feature selection, and deep learning, this research establishes a robust and efficient fault diagnosis framework. The findings highlight the importance of multidimensional signal processing in predictive maintenance, providing a foundation for more reliable gearbox condition monitoring in industrial applications. |
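The STFT heat-map step in the pipeline above is straightforward to reproduce; the sketch below uses a synthetic stand-in for the Z-axis channel (the sampling rate, tone frequencies, and window length are assumed, not taken from the paper):

```python
# Minimal sketch of the STFT-based time-frequency heat map, using a
# synthetic signal that mimics a shaft tone plus periodic pitting impacts.
import numpy as np
from scipy.signal import stft
import matplotlib.pyplot as plt

fs = 20_000                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 500 * t)          # shaft-related tone
signal += (np.sin(2 * np.pi * 120 * t) > 0.99) * np.sin(2 * np.pi * 3_000 * t)

f, tt, Zxx = stft(signal, fs=fs, nperseg=512)
plt.pcolormesh(tt, f, np.abs(Zxx), shading="gouraud")
plt.xlabel("time [s]"); plt.ylabel("frequency [Hz]")
plt.title("STFT heat map (synthetic Z-axis channel)")
plt.show()
```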
16:30 | Degradation and Reliability Evaluation for Passive RC Filters Based on Kirchhoff’s Circuit Laws PRESENTER: Wen-Bin Chen ABSTRACT. Passive RC filters are usually required to have stable performance and high reliability in order to perform well in their applications in signal processing, noise suppression, and frequency selection. However, current methods cannot construct the physical relationship between the performance degradation and reliability of filters and their components. To address this problem, we take passive low-pass RC filters as the research object and propose a degradation and reliability evaluation method based on Kirchhoff’s circuit laws. Firstly, the performance modeling of passive RC filters is conducted based on Kirchhoff’s circuit laws. Then, considering the degradation of filter components, including resistors and capacitors, the degradation model of passive RC filters is constructed. Next, combined with the function requirements of passive RC filters, the margin model is constructed, and finally, the reliability model is established. A practical case of passive RC filters is applied to illustrate the practicability and advantages of the proposed method. |
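For a first-order low-pass RC filter, the Kirchhoff-derived performance model reduces to the cutoff frequency fc = 1/(2*pi*R*C), so the modeling chain in the abstract (performance model, component degradation, margin) can be sketched in a few lines. The nominal values, drift rates, and requirement threshold below are invented for illustration:

```python
# Sketch: Kirchhoff-derived cutoff frequency under assumed component drift,
# compared against a functional requirement to obtain a margin over time.
import numpy as np

R0, C0 = 1_000.0, 100e-9        # nominal: 1 kOhm, 100 nF -> fc ~ 1.59 kHz
fc_req_low = 1_400.0            # assumed requirement: fc must stay >= 1.4 kHz

years = np.arange(0, 21)
R_t = R0 * (1 + 0.005 * years)  # assumed resistor drift: +0.5 %/year
C_t = C0 * (1 + 0.003 * years)  # assumed capacitance growth: +0.3 %/year

fc_t = 1.0 / (2 * np.pi * R_t * C_t)   # performance model (cutoff frequency)
margin = fc_t - fc_req_low             # margin model: drifted fc vs requirement

for y in (0, 10, 20):
    print(f"year {y:2d}: fc = {fc_t[y]:7.1f} Hz, margin = {margin[y]:+7.1f} Hz")
```

With these assumed drift rates the margin turns negative within the 20-year horizon, which is exactly the kind of crossing a margin-based reliability model quantifies.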
16:45 | The Comparison of Mathematical Models of Binary-State System and Multi-State System Based on Reliability Assessment of UAV Swarm PRESENTER: Elena Zaitseva ABSTRACT. Traditionally, two mathematical models are used for the reliability assessment of complex systems: the Binary-State System (BSS) and the Multi-State System (MSS). The BSS considers only two states, operational and faulty, for a system and its components. This mathematical model is often used in reliability analysis, and there are many effective methods for a system's reliability assessment based on it. The MSS allows more than two states to be considered for a system and its components, and is employed for detailed analysis by considering multiple states beyond just operational and faulty. However, methods for the reliability evaluation of a system based on the MSS have higher computational complexity, and there are no clear recommendations on when to apply each of these models. The comparison of BSS and MSS is considered in this study for similar structures of UAV swarms. The study includes both homogeneous and heterogeneous UAV swarms, which can be either irredundant or redundant hot-standby systems. UAV swarms are chosen for the comparison because their topologies can be represented by series, parallel, and k-out-of-n structures, in other words by all typical structures used in reliability analysis. The comparison is implemented for the availability of UAV swarms using both BSS and MSS models. The structure function has been used for the representation of BSS and MSS. The comparative analysis shows that the evaluation of UAV swarm failure should be based on the BSS, while the analysis of operational states should be based on the probabilities of performance levels rather than on swarm availability. These results are confirmed by quantitative and statistical examinations of UAV swarms of different types based on both BSS and MSS. The number of UAVs is varied from 2 to 20 in these examinations. This study was supported by projects APVV-23-0033 and VEGA 1/0331/25. |
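The contrast between the two models can be illustrated for a k-out-of-n swarm. The sketch below (numbers of UAVs, state probabilities, and performance levels are assumed, not the paper's data) computes BSS availability alongside an MSS-style expected performance with a three-state UAV model:

```python
# BSS vs MSS on a k-out-of-n UAV swarm (illustrative, i.i.d. components).
from math import comb

def bss_availability(n, k, p):
    """P(at least k of n UAVs operational), each up with probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def mss_expected_performance(n, state_probs, state_levels):
    """Expected total performance of n i.i.d. multi-state UAVs, where each
    state delivers a performance level (e.g. share of full coverage)."""
    per_uav = sum(p * g for p, g in zip(state_probs, state_levels))
    return n * per_uav

n, k, p = 10, 7, 0.95
print("BSS availability:", round(bss_availability(n, k, p), 4))

# Assumed MSS states per UAV: failed / degraded / fully operational
probs, levels = [0.02, 0.08, 0.90], [0.0, 0.5, 1.0]
print("MSS expected performance:",
      mss_expected_performance(n, probs, levels), "UAV-equivalents")
```

The BSS number answers "is the swarm up?", while the MSS number grades how much of the mission capability is delivered, which is the distinction the comparative analysis above turns on.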
17:00 | A Novel Random Vibration Accelerated Life Test Model under Non-stationary Non-Gaussian Excitation PRESENTER: Wuyang Lei ABSTRACT. In the field of engineering, accurate reliability assessment and optimization are of paramount importance for ensuring the safety and longevity of structures and components. Traditional Gaussian-based accelerated life test (ALT) models have been widely used but often face limitations in dealing with complex vibration scenarios. To overcome these challenges, this study proposes a novel accelerated life test (ALT) model for random vibrations under non-stationary non-Gaussian excitation. Building upon Gaussian random vibration ALT models and incorporating the previously developed kurtosis transmission model for non-stationary non-Gaussian processes, an acceleration factor for structural fatigue life is derived. The proposed model significantly enhances the acceleration of structural fatigue failure and accurately predicts structural fatigue life. By effectively leveraging the high kurtosis characteristics of non-stationary non-Gaussian excitation, this model addresses the limitations of traditional Gaussian-based methods, offering a novel framework for reliability evaluation and optimization in engineering applications. |
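For orientation, the classical relation commonly used in Gaussian random-vibration ALT, which the abstract builds upon (Miner's linear damage rule combined with a Basquin-type S-N curve of exponent m, as in MIL-STD-810-style test tailoring), can be written as below; the paper's kurtosis-based correction for non-stationary non-Gaussian excitation extends this factor and is not reproduced here:

```latex
% Classical Gaussian random-vibration acceleration factor:
% W denotes the excitation PSD level, m the S-N (Basquin) exponent.
AF = \frac{t_{\mathrm{field}}}{t_{\mathrm{test}}}
   = \left(\frac{W_{\mathrm{test}}}{W_{\mathrm{field}}}\right)^{m/2}
```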
17:15 | Challenges for safety and reliability of the IFMIF-DONES neutron source PRESENTER: Karol Kowal ABSTRACT. Many efforts have been made so far to demonstrate the technological and economic feasibility of nuclear fusion, but today tokamaks and other fusion-oriented facilities operate experimentally with very low availability. New facilities like the DEMO power plant and the IFMIF-DONES neutron source face major technological and engineering challenges, as they are expected to function with much higher availability and load factors. To ensure their success, safety and reliability studies need to be incorporated at the early design stages. This paper highlights three major challenges based on the Authors' experience with the IFMIF-DONES facility being designed under the EUROfusion program: (1) integrating heterogeneous reliability data for fusion-specific components, (2) modelling life-cycle availability considering ageing and imperfect maintenance, and (3) conducting seismic probabilistic risk assessments (PRA). A new method has been developed to integrate reliability data from various sources for fusion-specific components. The proposed approach involves four main steps: (1) identifying relevant data from different sources for a given component, (2) determining Probability Distribution Functions (PDFs) of failure rates for each source, (3) using Monte Carlo sampling from these PDFs, and (4) estimating parameters for the final distribution. Modelling the life-cycle availability of these new facilities requires accounting for component ageing and imperfect maintenance. The authors introduced a novel analytical method that models time-dependent failure rates by analysing statistical data from various facilities operating both new and old components. For seismic PRA, comprehensive research on component fragility is essential. However, in new facilities like IFMIF-DONES, fragility data from sources such as EPRI, FEMA P-58, academic articles, and technical handbooks are often inconsistent or unavailable. To address this, a dedicated fragility component database should be created, using a method similar to that used for integrating heterogeneous reliability data. |
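The four-step data-integration method lends itself to a compact numerical sketch. In the version below, each source's failure-rate PDF is assumed lognormal and summarized by a (median, error factor) pair; the sources, equal weighting, and sample sizes are illustrative only:

```python
# Sketch of the four-step integration: per-source failure-rate PDFs,
# Monte Carlo sampling, pooling, and fitting a final distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Steps 1-2: hypothetical sources for one component, each summarized as a
# lognormal over the failure rate (per hour): (median, error factor)
sources = [(1e-6, 3.0), (4e-6, 5.0), (2e-6, 2.0)]

# Step 3: Monte Carlo sampling from each source's PDF (equal weights assumed)
samples = []
for median, ef in sources:
    sigma = np.log(ef) / 1.645            # EF = 95th / 50th percentile
    samples.append(rng.lognormal(np.log(median), sigma, size=10_000))
pooled = np.concatenate(samples)

# Step 4: estimate parameters of the final distribution from pooled samples
shape, loc, scale = stats.lognorm.fit(pooled, floc=0)
print(f"final lognormal: median = {scale:.2e}/h, EF = {np.exp(1.645 * shape):.2f}")
```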
17:30 | Enhancing Human Reliability in Military Resilience Training: A Fuzzy DEMATEL-Based Reductionist Approach to Critical Competencies ABSTRACT. Building resilience is a critical component of military training, aimed at enhancing soldiers' ability to adapt to combat stress and adverse conditions. The urgency of geopolitical conflicts in Europe necessitates a streamlined approach to resilience training, given accelerated pre-deployment timelines. This study focuses on identifying and prioritizing the most impactful resilience competencies to ensure soldiers are prepared for the psychological demands of combat without compromising operational efficiency. Resilience, rooted in positive psychology, is defined as the ability to adapt positively under high adversity. It mitigates the risk of mental health disorders, such as post-traumatic stress, while reinforcing long-term commitment to military units. Although programs like the U.S. Army’s Master Resilience Training (MRT) provide a foundation for resilience training, evidence suggests that the effectiveness of individual components varies across contexts, indicating potential redundancies. This study addresses these challenges by employing the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method to analyze competencies identified through expert input. The fuzzy DEMATEL approach allowed the study to uncover interdependencies among resilience competencies and prioritize those with the greatest impact. Insights gained from expert surveys involving military professionals in Lithuania and Ukraine informed the analysis, ensuring relevance to contemporary military challenges. Key findings of this study include the identification of core resilience competencies, strategies for optimizing training under compressed timelines, and practical recommendations for balancing short-term and long-term resilience. By focusing on high-impact capabilities, the study proposes a targeted training model that enhances psychological readiness efficiently. This modernized approach aligns with the unique challenges of contemporary military operations, fostering faster pre-deployment readiness while maintaining soldiers’ long-term well-being. These contributions offer valuable insights for policymakers, trainers, and psychologists aiming to refine resilience education in high-stakes environments, advancing the broader goal of safety and operational reliability. |
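The DEMATEL core that follows the fuzzy aggregation step is compact enough to sketch. Below, an already-defuzzified direct-influence matrix A (invented values on a 0-4 expert scale) yields the total-relation matrix T = N(I - N)^(-1), from which prominence (D+R) and cause/effect roles (D-R) are read off:

```python
# Crisp DEMATEL core on a toy matrix (the fuzzy aggregation and
# defuzzification of expert judgments are assumed to have happened already).
import numpy as np

# Hypothetical defuzzified direct influences among 4 competencies
A = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

# Normalize by the largest row sum, then compute the total-relation matrix
N = A / A.sum(axis=1).max()
T = N @ np.linalg.inv(np.eye(4) - N)

D, R = T.sum(axis=1), T.sum(axis=0)   # dispatched / received influence
for i, (prom, rel) in enumerate(zip(D + R, D - R)):
    role = "cause" if rel > 0 else "effect"
    print(f"competency {i}: prominence={prom:.2f}, relation={rel:+.2f} ({role})")
```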
16:30 | High Performance Team Development and its impact on performance within the South African Railway Maintenance Industry PRESENTER: Rina Peach ABSTRACT. The effective and efficient planning and execution of maintenance within the South African railway industry relies on infrastructure teams. Despite the critical role of these teams, there is a significant gap in understanding how to develop high-performance teams (HPTs) within this sector. Basic teams need to transition towards integrating the concept of HPTs. However, industries have been slow to adopt structured HPT development, lacking awareness of its potential to significantly enhance productivity and performance at the maintenance execution level. This research addresses this gap by theoretically identifying the key characteristics that define HPTs. The study’s originality lies in its focus on assessing team morale, evaluating employees’ perceived understanding of HPTs, and proposing tailored development strategies for the South African railway maintenance industry. The research identifies gaps between existing and ideal team morale and HPT development, proposing targeted focus areas and structured methods for achieving HPTs. The proposed methods and recommendations offer a strategic framework for senior management, human resources, and training divisions to systematically close these gaps, facilitating the transformation of standard maintenance teams into HPTs. By doing so, the research provides a novel approach to improving maintenance execution performance, which is expected to yield enhanced team morale, reduced railway incidents, and increased customer satisfaction. |
16:45 | The effect of climate conditions on the CO2 emissions of maintenance activities: a railway case study PRESENTER: Narges Mahdavinasab ABSTRACT. Climate conditions significantly impact the working conditions of railway assets, which may lead to increased maintenance activities to ensure their survival. This rise in maintenance activities results in additional carbon dioxide (CO2) emissions due to the use of machinery, material transportation, and energy consumption. The aim of this paper is to study the effect of climate conditions on the CO2 emissions associated with maintenance activities on railway assets. To explore this, CO2 emissions are estimated under various climate conditions. A proportional repair model is used to assess the dynamic effects of climate change on asset repair rates, from which the CO2 emissions are calculated. A case study from a railway in northern Sweden is analyzed to illustrate the proposed approach, providing comparative insights into how environmental conditions affect CO2 emissions from maintenance activities. The results of the analysis can be used in maintenance planning to minimize CO2 emissions from maintenance activities. |
17:00 | Research on Visual Assessment Method for Maintenance in Virtual Environment Driven by Ontology PRESENTER: Yan Wang ABSTRACT. Industry 5.0 has brought about explosive growth in maintenance-related data, especially in the field of virtual maintenance. Such large-scale data growth can hardly avoid the problems of data fragmentation and heterogeneity, which bring new challenges to data-driven maintenance evaluation. As a knowledge management tool, an ontology can standardize the definition of concepts and the relationships between them. By applying an ontology to standardize the expression of maintenance-visibility data in virtual environments, a virtual maintenance visual accessibility evaluation method based on ontology is proposed. This method uses a unified framework to standardize the semantic information related to maintenance visibility in virtual environments, solving the problems of knowledge expression errors and low communication efficiency caused by heterogeneity. It achieves innovation in virtual maintenance analysis and evaluation at the knowledge level and also serves as an effective application and validation of existing domain ontologies. |
16:30 | Enhancing Railways with Industry 4.0: AI-Driven Human-Machine Collaboration and Risk Management PRESENTER: Alberto Donini ABSTRACT. SPECIAL SESSION: Human-Machine Interaction in Industry 4.0: Ergonomics, Security, and Regulatory Challenges: Industry 4.0, based on Human-Machine Interaction (HMI), represents the evolving collaboration between humans and intelligent systems within advanced industrial environments. In recent years, new technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and machine learning, aided by the spread of robotics, have enhanced productivity and enabled smart decision-making. These new technologies have improved safety and efficiency but have, on the other hand, raised questions about security and ergonomics. In this paper the authors examine Industry 4.0 technologies in the railway world. It is interesting to focus on applications of artificial intelligence that can bring productivity improvements. Nowadays this is primarily accomplished by the application of IAI to the technical rail system's operation, traffic control, diagnostics, upkeep, and modification. These productivity gains are only possible if tasks are completed correctly or more effectively in compliance with current laws. IAI can therefore alter the conventional evolutionary management of railway laws, which tends to grow gradually in response to occurrences, accidents, and dangers encountered. Furthermore, IAI can assist with management. Some layers of IAI are used in this paper's integrated enterprise risk management framework and methodology for the future railway, which promotes organisational learning and continual improvement. A survey of the literature found in databases for regulations, standards, and scientific papers serves as the deductive foundation for the applied approach. |
16:45 | Condition-Based Maintenance (CBM) of Railway Safety-Related Systems PRESENTER: Tomas Kertis ABSTRACT. Railway accidents often result from a combination of factors, including human errors, inadequate maintenance, and system design flaws, leading to the realization of risks inherent in identified hazards. This paper investigates selected accidents in Europe, identifying the underlying causes, whether they stem from human factors, technical deficiencies, or inadequate design. To enhance proactive safety management, we propose a systematic approach to risk mitigation starting from the system design phase. This approach combines advanced safety management techniques, including hazard identification, condition monitoring, and failure prediction, with accident analysis to design more resilient systems. Condition-Based Maintenance (CBM) is a key part of this approach, not only as a cost-reduction tool but also as a critical method for detecting potential failures early, allowing timely interventions to prevent hazards from escalating into accidents. To support the approach, we build on the results of the PRAMS group within the System Pillar project by Europe’s Rail. By analyzing accidents and categorizing them as realized hazards, we establish a framework that integrates CBM and hazard identification tools to propose common design measures, including strategies to mitigate human factors. These measures aim to reduce the likelihood of accidents by addressing risks before they can evolve into dangerous situations, ultimately contributing to the development of safer, more resilient railway systems. |
17:00 | Optimizing Maintenance Strategies in Railway Systems: The Role of Human-Machine Collaboration PRESENTER: Mario Di Nardo ABSTRACT. Special session: Human-Machine Interaction in Industry 4.0: Ergonomics, Security, and Regulatory Challenges. The advent of Industry 4.0 has revolutionized railway maintenance processes through the integration of artificial intelligence, robotics, and autonomous systems, providing large volumes of real-time data, predictive analytics, and automated processes. This integration has raised new problems concerning human-machine interaction, workplace safety, and the physical and mental health of workers in environments where humans and intelligent machines collaborate with each other. The paper focuses on the analysis of different methodologies for monitoring psychological stress and on the introduction of a mathematical model, developed in Python, which establishes how good maintenance planning can improve worker safety and health. The main purpose, in addition to the one already mentioned, is to reduce risks and accidents for operators in the workplace. |
17:15 | Using technology management to guide the journey to Reliability 4.0 in the manufacturing industry: A South African context using a case study of a Food and Beverage company PRESENTER: Natasha Ramkirpal ABSTRACT. An effective asset care strategy is considered to be a knowledge asset for manufacturing organisations. This knowledge asset can be exploited to achieve the financial and operational goals of an organisation, by reducing process variability and enabling a synchronous supply chain. Industry 4.0 has caused the technology landscape of asset-intensive organisations to evolve. However, there exists a chasm in the body of knowledge that relates asset care systems to Industry 4.0 technologies and the subsequent opportunities that exist to leverage the results of asset reliability. Reliability 4.0 is the intended output of a Maintenance 4.0 system, i.e. maintenance-related activities that are derived from the application of Industry 4.0 technologies. A systems-thinking approach was applied to analyse the current asset care systems within a prominent food and beverage manufacturing company. The output of this process was a conceptual framework that applies technology management tools to support the development of an asset care strategy. The key characteristics of Industry 4.0, data and process integration, are highlighted. This framework shows how internal and external factors should be considered to create an asset care roadmap to guide an organisation’s journey to Reliability 4.0. |
17:30 | Human-Robot Collaboration for Industry 4.0: Managing Risk and Enhancing Performance Through Mutual Adaptation PRESENTER: Valentina Popolo ABSTRACT. Special Session: Human-Machine Interaction in Industry 4.0: Ergonomics, Security, and Regulatory Challenges. Technological progress in the industrial context has encouraged the use of robots and autonomous systems within industries, introducing new problems related to human-machine interaction. The proposed analysis focuses on the introduction of solutions to monitor the safety and risks related to human-machine collaboration, using a mathematical model implemented in Python which, starting from a systematic analysis of traditional risk models, analyzes the mutual interaction between robots and humans. It evaluates the impact of this collaboration by quantifying mutual learning, considering human feedback to improve the machine's adaptation to the working environment, and managing human unpredictability, which is responsible for new risk factors. The goal is to improve performance, reduce operator stress and the risk of accidents in the workplace, and consequently increase safety. |
16:30 | Navigating Complexity and Hybrid Threats: The Relevance of Resilience Perspectives in a Digitalized Water Sector PRESENTER: Jannicke Thinn Fiskvik ABSTRACT. Managing urban water supply systems is challenged by several factors, including increased pressure on water resources with respect to both quantity and quality. In addition, digitalization of the water sector has increased system complexity, including interdependencies between physical and digital systems. Considering a new geopolitical reality and a serious threat landscape, we argue that the water sector, as a critical infrastructure faced with increased complexity and new risks (e.g., cyber and physical hybrid threats), can benefit from a more holistic resilience thinking in security practices. The paper explores the relevance of resilience thinking in a Norwegian water utility, engaging perspectives from Resilience Engineering. The paper builds on interviews with employees who work with the digital and physical security of water distribution systems, supplemented by a document analysis. The study finds that there is resonance with resilience principles in ongoing work and current ways of thinking. Simultaneously, we suggest that there is value in increased awareness of avoiding becoming ‘robust yet fragile’ and highlight the importance of adaptive capabilities. In turn, this can contribute to the water sector being better prepared to handle future challenges and the growing complexity of the multifaceted socio-technical water supply system. |
16:45 | Community Risk Assessment: Connecting Scientific Knowledge with Local and Indigenous Knowledge PRESENTER: Nader Naderpajouh ABSTRACT. There are two fundamentally different approaches to community risk assessment. Bottom-up approaches incorporate time-tested local and indigenous knowledge to address recent and present vulnerabilities. Top-down approaches, on the other hand, rely primarily on scientific and technical knowledge to assess uncertainty in future states. There is a growing call to integrate these two assessments in community risk assessments. However, community risk assessments remain largely dominated by top-down approaches. Here we identify pathways for integrating top-down and bottom-up approaches to strengthen the connection between their associated knowledge bases. Our analysis draws on semi-structured interviews with 29 local and international professionals in community risk assessment, exploring the opportunities and challenges of such integration. We propose a set of guiding principles for developing integrated community risk assessment models that leverage the complementary capabilities of both approaches. These guiding principles are then complemented by alternatives derived from a systematic literature review, which provides a range of common definitions, measurements, and practices. The guiding principles from the empirical study and the proposed synthesis can inform the development of community risk assessment models. Our findings highlight the importance of a bidirectional and context-specific integration of local and scientific knowledge, with attention to the required level of resource alignment and customization based on the unique needs of the context. |
17:00 | A Multi-Attribute Decision Model For Supporting Emergency Shelter Location Against Urban Flood Risks PRESENTER: Nicolas De Albuquerque ABSTRACT. The occurrence of natural disasters in recent years has made emergency disaster management one of the main challenges for society, given the need for action planning as soon as a disaster occurs. In this context, we propose a multidimensional model for assessing the risk of floods in the urban environment. The potential benefits of its application in management, control, mitigation, and emergency actions during natural disasters illustrate its innovative character in enhancing the efficiency of urban management. The decision model considers four criteria, based on the concepts of queueing theory, that assess the alternatives' performance in a probabilistic fashion with respect to evacuation routes in areas affected by urban flood catastrophes. It gives the decision-maker (DM) the ability to rank possible sites for erecting makeshift shelters and organizing emergency preparation in high-risk zones according to his or her preferences. Thus, the use of the model aids decision-makers in dealing with risks arising from extreme events caused by urban climate change, since it will help them to develop and implement an emergency plan consisting of measures to mitigate climate risk and to adapt before, during, and after floods. The multi-criteria decision model can be replicated in any urban setting and might facilitate creating and disseminating crisis protocols that mitigate the effects of flooding and prepare vulnerable communities for climate change. The model allows the DM to explore graphical and tabular visualization tools, together with sensitivity analysis, based on a numerical application of the suggested model, to strategically determine how to successfully plan and execute emergency shelters to lower the risk from flooding in metropolitan areas. |
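As a minimal illustration of the queueing-theoretic criteria, the sketch below evaluates the mean waiting time at a candidate shelter modeled as an M/M/c queue (Erlang C); the arrival rate, service rate, and number of intake points are assumed values, not the paper's case data:

```python
# Mean waiting time W_q at a candidate shelter modeled as an M/M/c queue.
from math import factorial

def mmc_wait(lam, mu, c):
    """Mean waiting time in queue for an M/M/c queue (requires lam < c*mu)."""
    rho = lam / (c * mu)
    a = lam / mu
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    erlang_c = (a**c / (factorial(c) * (1 - rho))) * p0   # P(an arrival waits)
    return erlang_c / (c * mu - lam)

# Hypothetical site: 120 evacuees/h arriving, 4 intake points,
# each processing 40 evacuees/h
print(f"W_q = {60 * mmc_wait(lam=120, mu=40, c=4):.1f} minutes")
```

Computing such waiting times per candidate site gives one probabilistic criterion of the kind the model ranks alternatives on.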
17:15 | The Visualisations of the Intersection between Risks and Social Vulnerability Using Interactive Dashboards PRESENTER: Paulina Budryte ABSTRACT. There is a growing importance of visual tools in risk sociology to communicate scientific concepts and data to the public and decision makers. Here, we could observe an increasing amount of content presented by diverse tools – risk matrices, infographics, dashboards, video lessons, etc. (Chishtie et al., 2022; Goerlandt & Reniers, 2016). Project Serenity aims to establish a dashboard where different risks and social vulnerability characteristics are presented for the Lithuanian case based on objective data on risk levels and population census data. In the article, we explore the differences in the urban, peri-urban and rural areas from the perspective of the spatial intersection between risks and social vulnerabilities. Existing research shows that urban areas are presented with a complex mix of social relations, environmental hazards and commodification (Endo & Shibuya, 2017). Meanwhile, rural areas are exposed to a variety of natural risks, usually due to limited resources (Jamshed et al., 2020). Yet, the most dynamic situation is in peri-urban areas, where risks from both urban and rural are interacting (Schulz & Siriwardane, 2016). Although peri-urban areas are associated with the American dream and white-picket fence, research points out that these areas are facing the most challenging context regarding social vulnerabilities due to their morphology and functional dynamics (Schulz & Siriwardane, 2016). Lithuania is an interesting case from this perspective since the phenomenon of peri-urban is quite recent. That is why the spatial interaction between risks and social vulnerabilities is very dynamic. So, by using the dashboard tool, these areas are investigated – final visualisations hint at the possible pathways to create more resilient local communities. Acknowledgements: This presentation is based on the project “Socio-spatial determinants of societal vulnerability and resilience to crises and strengthening the crisis response potential of communities“ (SERENITY), funded by Research Council of Lithuania, no. S-VIS-23-21. |
17:30 | Contested images of residents and evacuation between disaster risk reduction and nuclear emergency management ABSTRACT. In recent years, there has been an international trend toward a more comprehensive approach to analyze and manage various hazards and risks beyond the existing academic disciplines and organizational silos. In the fields of risk science and disaster studies, many studies have been conducted based on such perspectives including, to name a few, all-hazards approach, Natech (Natural-hazard-triggered technological) accidents, and societal safety. As a disaster-prone country and having experienced the Fukushima nuclear disaster, Japan has also sought for the appropriate integration of disaster risk reduction (DRR) and nuclear emergency management. However, several differences between the management of natural-hazard-triggered disaster and nuclear accident have hindered their effective integration. This study critically examines how the institutional arrangements of natural and nuclear DRR in Japan perceive the residents and their evacuation behaviors differently. In case of natural hazards DRR, residents are encouraged to proactively take preventive actions according to their surroundings, while taking into account disaster information from the governments and experts. In contrast, in the nuclear emergency management framework, residents are assumed to be passive entities that behave according to evacuation instructions from the government, and voluntary evacuation actions are treated as deviations. Thus, while identifying fundamental differences in the image of residents and evacuation between the two disaster frameworks, this study will discuss how to overcome these differences and build a more integrated DRR. |
16:30 | Risk Assessment on Flight Test of High Altitude Paratroopers Airdrop Missions PRESENTER: Moacyr Machado Cardoso Júnior ABSTRACT. This paper presents a comprehensive risk assessment for a flight test aimed at validating a new cargo aircraft for high-altitude paratrooper airdrop missions. These operations, conducted at altitudes ranging between 35,000 and 40,000 feet, expose crew and paratroopers to extreme environmental conditions, such as hypoxia, decompression sickness, barotrauma, and frostbite. The study begins with a theoretical introduction to the physiological challenges posed by high-altitude, unpressurized flight. It then details the principles of risk assessment using a risk matrix and explores the application of fuzzy inference to refine the evaluation process. The assessment identified eight key risk factors, categorized by probability and severity, and quantified the risks through fuzzy logic and defuzzification. Mitigation strategies were proposed, including phased testing, medical support, enhanced safety equipment, and hypoxia recognition training. These measures effectively reduced the overall risk level from high to medium, ensuring safer conditions for flight operations. The analysis highlights the importance of combining quantitative and qualitative methods to reduce subjectivity in risk assessments. Future studies will validate the proposed mitigation measures and expand the framework to include cargo drops in similar conditions. |
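The fuzzy-inference-plus-defuzzification step described above can be sketched compactly. The membership functions, rule base, and hazard scores below are invented for illustration; the paper's actual risk matrix and rules are richer:

```python
# Mamdani-style fuzzy risk assessment: triangular memberships for
# probability and severity, min-max inference, centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

risk_axis = np.linspace(0.0, 10.0, 201)
risk_sets = {"low": tri(risk_axis, -10, 0, 10),
             "medium": tri(risk_axis, 0, 5, 10),
             "high": tri(risk_axis, 0, 10, 20)}

def assess(prob, sev):
    """prob, sev scored on [0, 10]; returns crisp risk via centroid."""
    p = {"low": tri(prob, -10, 0, 10), "high": tri(prob, 0, 10, 20)}
    s = {"low": tri(sev, -10, 0, 10), "high": tri(sev, 0, 10, 20)}
    rules = [(min(p["low"], s["low"]), "low"),
             (min(p["low"], s["high"]), "medium"),
             (min(p["high"], s["low"]), "medium"),
             (min(p["high"], s["high"]), "high")]
    agg = np.zeros_like(risk_axis)
    for strength, label in rules:            # min implication, max aggregation
        agg = np.maximum(agg, np.minimum(strength, risk_sets[label]))
    return float((risk_axis * agg).sum() / agg.sum())

# Hypoxia-type hazard: moderate probability, high severity (assumed scores)
print(f"crisp risk = {assess(prob=4.0, sev=8.5):.2f} / 10")
```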
16:45 | Safety considerations about hypersonic vehicles integration into ATM/HA PRESENTER: Angela Errico ABSTRACT. In recent years, the development of new concepts of suborbital space vehicles for transporting passengers and/or goods, transiting through ATM and operating at high altitudes, has led to increasingly significant progress in technologies to ensure efficient and safe integration of hypersonic flights into controlled and non-controlled airspace. Airspace segregation ensures safety but is not sustainable for long-term operations. Investigations are needed on regulating hypersonic vehicles for integration into managed and regulated airspace and at higher altitudes. Moreover, ATM must develop strategies for controlling hypersonic sub-orbital flights and enhancing systems to ensure ATCOs can handle them effectively. The technological keystone of the European Commission’s Single European Sky initiative, which aims to integrate new entrants into an innovative ATM, is the Single European Sky ATM Research (SESAR) project. From the ATM perspective, the trajectory-based operations requirements developed in SESAR can facilitate a robust ATM integration of such types of operations. In this direction, these new entrants can be treated like conventional traffic in ATM scenarios. However, it remains to be analysed how the potential increase in traffic due to the integration of hypersonic operations, with their different speed performances, characteristics, and constraints, can impact the current mandatory level of safety. This paper aims to provide an overview of potential ATM scenarios including suborbital vehicles, investigating the potential effects that these new operations can have in terms of safety on the risk models currently applied by the SESAR safety methodology. Based on the identified safety impacts of hypersonic operations on future ATM developments, a gap analysis has been carried out to identify potential additional requirements, contributing to the quantification of the new safety criteria related to the hazard analysis of the integrated risk picture (IRP) for ATM in causing or preventing accidents. |
17:00 | Resilience to Cyber Threats in Civil Aviation: Key Challenges, Regulations, and Strategies ABSTRACT. Building resilience against cyber threats requires a synergy of regulatory, operational, and educational efforts to effectively safeguard the aviation sector. The aim of the article is to analyze the rapidly evolving digital threats in the aviation industry and the ways to counteract them. It will include the identification of risks in civil aviation, particularly as the digitization of infrastructure and operations increases the risk of cyberattacks that may disrupt navigational systems, reservation systems, and airport infrastructure. The main international and European regulations, as well as strategies to counter these threats, will be discussed—both those implemented by intergovernmental organizations such as ICAO, EASA, and Eurocontrol, and those by non-governmental industry organizations like IATA. Key challenges and the need to develop a culture of digital security will also be analyzed. |
16:30 | Trojan-Free FPGA Hardware? The Challenge of End-User Verification PRESENTER: André Waltoft-Olsen ABSTRACT. Ensuring the trustworthiness of Field Programmable Gate Arrays (FPGAs) is essential for critical infrastructures' safe and reliable operation. However, the globalized nature of IC manufacturing introduces hardware trojan (HT) risks, creating challenges for end-users tasked with ensuring hardware trustworthiness. This work explores the nature of the FPGA HT threat across the IC supply chain, exemplifying their implications through illustrative scenarios in the power system. Detection and prevention methods are evaluated, focusing on their feasibility for end-users. Although current research offers effective approaches, these methods are infeasible for end-users. Our findings highlight a disconnect between vendor assurances, certification practices, and the actual methods for verifying hardware trustworthiness. We emphasize the need to bridge these gaps by developing end-user-feasible solutions to help verify the trustworthiness of FPGA-based systems. |
16:45 | Development of a hybrid metric for physical security assessment PRESENTER: Thomas Termin ABSTRACT. In industrial practice, physical security assessments are increasingly performed using scoring methods. However, since scoring methods involve uncertainty, users face challenges in evaluating investment alternatives. While quantitative metrics have the advantage over scorings that precise calculations can be made, their application is not as simple and intuitive as plain scorings. From a user's perspective, the key question is how to merge the strengths of both metrics to facilitate risk assessment without compromising accuracy. The goal of this paper is to formulate requirements for a hybrid metric for assessing physical vulnerability and to demonstrate the applicability of the outlined solution approach using a specific use case. In a first step, the problems of scoring are explained using the Harnser metric as an example. In a second step, the quantitative metric used to measure the quality of the scoring is defined. The third step explains how the scoring under consideration can be extended and modified to replicate the quantitatively calculated results. In a final step, the proposed adjustment approach is demonstrated. Finally, the results are summarized and starting points for further research are identified. |
17:00 | Virtual planning and support for large-scale events PRESENTER: Corinna Koepke ABSTRACT. Human behaviour, technical faults or natural causes can cause hazardous situations at large-scale events. Simulations can support the process of planning, approval, and implementation of such events, i.e., the assessment of the safety infrastructure, possible hazards, and countermeasures. The protection of personal rights is of particular importance here. This paper presents a modular simulation toolkit designed to assist in the planning and management of large-scale outdoor events like festivals or Christmas markets. The toolkit integrates four key modules: 3D scene reconstruction, visibility/audibility computation, agent-based modelling (ABM) of pedestrian flows, and smart video analysis. Data for the framework is obtained from various sources, including drone LiDAR scans, drone videos, ground photos, and fixed video cameras. Our methodology follows a structured process chain: starting with 3D reconstruction to create a detailed virtual environment, followed by visibility computation to assess the perceptibility of information. The ABM module simulates the behaviour of event participants, allowing for dynamic interaction of individual agents with and within the environment in different scenarios. Finally, simulation results are validated using insights from smart video analysis, which delivers person counts and density estimates. This ensures the accuracy and reliability of the outcomes. We demonstrate the use and flexibility of our modular simulation toolkit with its application to JuicyBeats, a popular electronic music festival held annually in Germany. The illustrated modular approach not only enhances decision-making for event organizers but also promotes safety and efficiency in crowd management, paving the way for more effective large-scale event planning. |
16:30 | A Surrogate Ship Trajectory Construction Method for Efficient Similarity Measurement in AIS Data Clustering Analysis PRESENTER: Shaoqing Guo ABSTRACT. Since the advent of the Automatic Identification System (AIS) opened opportunities for shipping data to be disseminated worldwide, trajectory clustering has seen increasing applications in maritime traffic pattern recognition, trajectory prediction, anomaly detection, and route planning. Trajectory similarity measurement is a central concept in ship trajectory clustering, where the majority of computational time is spent on similarity calculations. However, the exponentially growing volume of AIS messages poses significant challenges to efficient processing, with popular trajectory simplification methods such as the Douglas-Peucker (DP) algorithm showing limited effectiveness in improving trajectory similarity calculations. In this study, we propose a novel surrogate ship trajectory construction (SurTraC) method to reduce the complexity of similarity calculations, in which the Geohash gridding technique is employed to aggregate spatially adjacent points. The method can generate an alternative sparse trajectory that uniformly and precisely represents the original one. A case study using one week of AIS data from the Gulf of Finland indicated that SurTraC can effectively simplify the trajectory dataset while maintaining the entirety of its features. Compared to the DP-based methods proposed in previous research, a discussion from the perspectives of trajectory simplification, similarity measurement, and clustering demonstrated that SurTraC can significantly accelerate similarity measurement without compromising clustering performance. |
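The core gridding idea behind SurTraC can be illustrated in a few lines. For simplicity, a plain lat/lon rounding grid stands in for the Geohash encoding used in the paper, the toy track is invented, and each occupied cell is replaced by one surrogate (centroid) point while visiting order is preserved:

```python
# Grid-aggregation sketch: pool spatially adjacent AIS points per cell
# and replace each cell's points by a single surrogate point.
import pandas as pd

# Hypothetical AIS track (lat, lon ordered in time)
track = pd.DataFrame({
    "lat": [60.101, 60.102, 60.103, 60.151, 60.152, 60.201],
    "lon": [24.901, 24.902, 24.901, 24.951, 24.952, 25.001],
})

precision = 2  # decimal degrees kept; coarser -> sparser surrogate trajectory
track["cell"] = (track["lat"].round(precision).astype(str) + "_"
                 + track["lon"].round(precision).astype(str))

# One surrogate point per cell (centroid), preserving visiting order
surrogate = (track.groupby("cell", sort=False)[["lat", "lon"]]
             .mean()
             .reset_index(drop=True))
print(f"{len(track)} raw points -> {len(surrogate)} surrogate points")
print(surrogate)
```

Because every downstream similarity computation then runs on the shorter surrogate trajectory, the pairwise-distance step of clustering scales down accordingly.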
16:45 | Analysis of TSS as a potential risk-reducing measure for ship allision for a proposed floating bridge across the Nordfjord on the west coast of Norway ABSTRACT. This paper looks at the ship allision risk for a proposed fixed crossing of the Nordfjord on the west coast of Norway and the effect of different risk-reducing measures. The proposed project is made up of a floating bridge with relatively low clearance above the sea and a separate bascule bridge to allow for the passage of larger vessels with a height of more than 18 m. The fjord has a relatively low amount of ship traffic, but during the summer half of the year it is frequented by some of the largest cruise ships in the world, with the potential to cause full collapse of the floating bridge. A list of potential risk-reducing measures is discussed to find the measures with potentially large benefits. An alternative with a traffic separation scheme (TSS) to reduce the risk to the bridge is investigated. |
17:00 | How to assess the resilience of the European container shipping network from a national perspective: A data-driven cascading failure model PRESENTER: Yuhao Cao ABSTRACT. The European Container Shipping Network (ECSN), a key component of Europe's supply chain and logistics system, is highly interconnected and vulnerable to risks. Given the complexity and diversity of Europe's geographical and trade environments, this interconnectedness exposes the network to resilience challenges, particularly cascading failures triggered by extreme events such as the COVID-19 pandemic and regional conflicts. A fundamental step in mitigating these failures is simulating load redistribution, yet a robust modelling approach tailored to Europe's specific needs remains undeveloped. To fill this gap, this study develops an innovative framework for resilience analysis against cascading failures, designed to rigorously assess the impact of port disruptions on the resilience of individual countries within the ECSN. The proposed framework integrates a port importance assessment model, a multi-target cascading modelling approach, and three resilience metrics from a national perspective. Detailed analysis and case studies across 172 European ports reveal that disruptions at the Port of Rotterdam could significantly compromise the network's resilience. To enhance the ECSN's resilience, this study recommends two primary strategies: expanding interregional strategic cooperation and maintaining adequate reserve capacity at critical ports. This research provides valuable insights for port and logistics stakeholders in managing unforeseen risks and in the planning and development of port infrastructure. |
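The load-redistribution step at the heart of such cascading-failure models can be illustrated with a short sketch; betweenness centrality as the load proxy, the capacity margin alpha, and the even redistribution rule are assumptions for illustration, not the authors' calibrated ECSN model.

```python
# Illustrative cascading-failure sketch with load redistribution.
import networkx as nx

def cascade(G, seed_node, alpha=0.2):
    load = nx.betweenness_centrality(G)          # proxy for container load
    cap = {n: (1 + alpha) * load[n] for n in G}  # capacity = load + margin
    failed, queue = set(), [seed_node]
    while queue:
        n = queue.pop()
        if n in failed:
            continue
        failed.add(n)
        nbrs = [m for m in G[n] if m not in failed]
        for m in nbrs:                           # redistribute load evenly
            load[m] += load[n] / len(nbrs)
            if load[m] > cap[m]:                 # overload triggers next failure
                queue.append(m)
        load[n] = 0.0
    return failed

G = nx.barabasi_albert_graph(172, 2, seed=1)     # stand-in for the ECSN topology
print(len(cascade(G, seed_node=0)))              # ports lost after one disruption
```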
17:15 | An Assessment of Alternative Fuels for Oceangoing Vessels PRESENTER: Sean Loughney ABSTRACT. During the 21st Climate Change Summit in Paris in 2015, the International Maritime Organization (IMO) pledged to adopt the measures necessary to reduce Green House Gas (GHG) emissions from shipping. Several research studies and maritime classification society outlooks argue that effective decarbonization of the shipping industry can only be achieved by adopting low-carbon or zero-carbon alternative fuel sources. This research systematically analyzes the three main deep-sea alternative fuel options, hydrogen, ammonia, and methanol, which can potentially achieve the IMO's 2050 ambitions. Each of these fuel alternatives was assessed against Technical, Environmental, Economic, and Social attributes. The systematic assessment was carried out through a hybrid Multi-Criteria Decision Making (MCDM) method built around the Analytical Hierarchy Process (AHP). Primary data were collected through an online survey of 57 experts in the maritime industry to compute criteria weights. The results of the AHP pairwise comparison indicated that the Environmental attributes were the most preferred criterion for the assessment of alternative marine fuels, followed by the Technical, Economic, and Social attributes. The findings of this research can assist the maritime sector's decision-makers in making an informed choice of the most suitable alternative fuel option for their deep-sea fleet, capable of achieving the global GHG emission targets of 2050 and beyond. |
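The AHP weighting step described in the abstract can be sketched as follows: criteria weights come from the principal eigenvector of a pairwise comparison matrix, checked with Saaty's consistency ratio. The 4x4 matrix below is invented for illustration and does not reproduce the survey data.

```python
# AHP priority weights from a pairwise comparison matrix (illustrative).
import numpy as np

# Rows/columns: Environmental, Technical, Economic, Social (invented judgments).
A = np.array([[1,   3,   4,   5],
              [1/3, 1,   2,   3],
              [1/4, 1/2, 1,   2],
              [1/5, 1/3, 1/2, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized criteria weights

lam_max = eigvals[k].real
n = A.shape[0]
CI = (lam_max - n) / (n - 1)          # consistency index
CR = CI / 0.90                        # Saaty's random index is 0.90 for n = 4
print(w, CR)                          # CR < 0.1 => acceptably consistent judgments
```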
16:30 | Human-Autonomy Teams: The Case of Remote Operators and Automated Driving Systems PRESENTER: Camila Correa-Jullian ABSTRACT. Autonomous functions, systems, and operations are expected to play a significant role in a number of industries, including energy, process, and transportation. In these, human operator teams frequently monitor, supervise, and intervene in the system's operations, acting as a safety barrier in the event of emergencies. As the Level of Automation of these systems increases, the need to study the performance of Human-Autonomy Teams (HATs) becomes fundamental, in particular when they are deployed in public spaces. Recent developments in highly Automated Driving Systems (ADS) deployed for passenger transport have highlighted the need to revisit assumptions about the role of remote operators performing driving assistance and emergency management tasks. While human factors research has explored the implications and requirements of human-system interactions in ADS contexts for drivers, the focus on HAT dynamics is still incipient, particularly in remote operations. This research draws from remote control operations in the nuclear, oil & gas, and maritime industries, aiming to model fundamental aspects of HATs in remote ADS operations. Thus, rather than considering only human-system interaction schemes, team performance models such as the Information, Decision and Action in Crew (IDAC) context [1] can be applied to study the developing HAT dynamics [2]. This work explores the applicability of Performance Shaping Factors (PSFs) used in Human Reliability Analysis (HRA) models, identifying potential factors influencing the performance of both human and automated agents in remote ADS operations and focusing on the relationships, tasks, and challenges remote operators face when interacting with vehicles equipped with advanced ADS. [1] Y. H. J. Chang and A. Mosleh (2007), "Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents. Part 1: Overview of the IDAC Model". [2] C. Correa-Jullian et al. (2024), "Exploring Human-Autonomy Teams in Automated Driving System Operations". |
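As a rough illustration of how PSFs enter HRA quantification, the following SPAR-H-style sketch multiplies a nominal human error probability by PSF multipliers; the factor names and values are invented, and the paper's IDAC-based HAT modelling is considerably richer.

```python
# SPAR-H-style PSF adjustment of a nominal human error probability (HEP).
# All multipliers below are illustrative assumptions for a remote-operator task.
NOMINAL_HEP = 1e-3                        # assumed nominal diagnosis error probability

psf_multipliers = {
    "time_pressure": 10.0,                # barely adequate time to intervene
    "remote_interface_quality": 5.0,      # degraded situational awareness
    "training": 0.5,                      # well-trained remote operators
}

hep = NOMINAL_HEP
for m in psf_multipliers.values():
    hep *= m
hep = min(hep, 1.0)                       # a probability cannot exceed 1
print(f"adjusted HEP: {hep:.2e}")         # 2.50e-02 under these assumptions
```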
16:45 | Towards the Risk Assessment Framework Conceptualization Using Fuzzy Logic for Last-Mile Cargo Drones PRESENTER: Ievgen Medvediev ABSTRACT. The use of autonomous and remotely controlled drones to deliver small cargo over the last mile is reaching a new level. This is due to improvements in the technical components of these systems, the development of artificial intelligence within them, and an increase in the maximum permissible payload. The use of drones to enhance public services and fulfill transportation needs in today's world is an undeniable prospect. However, despite the clear benefits of using drones for last-mile delivery, there are potential risks, both known and new, that could lead to disruptions in goods delivery. Hence, there is a need for a method that considers the impact of various risks on the sustainability of drone delivery to a significantly improved standard. Given the diverse nature of the hazards associated with operating such systems, addressing these threats and ensuring sustainable delivery calls for a flexible model built on suitable approaches such as fuzzy logic or neural network modeling. Unlike probabilistic models, fuzzy logic does not require extensive data, and an initial data sample on drone delivery risks can be gathered through expert opinions. This research marks the initial phase in designing a fuzzy model for assessing risks in delivery systems using both autonomous and remotely controlled drones. Through an expert survey, the study identified the key risks that primarily affect the sustainability of drone deliveries. The risks were categorized into four groups: human factors; technical hardware issues; software failures; and weather conditions. Participants were asked to identify the two most significant risk factors in each group, and this information was used to determine the inputs of the logical-linguistic model. The fuzzy model defines five evaluation levels of supply sustainability. Its key benefit is recommendations on the feasibility of drone use for last-mile delivery under specific conditions. |
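A toy version of the fuzzy-evaluation idea, with triangular membership functions and one Mamdani-style rule combining two of the surveyed risk groups, might look like this; the breakpoints and the rule are illustrative assumptions rather than the paper's calibrated logical-linguistic model.

```python
# Toy fuzzy evaluation of drone-delivery sustainability (illustrative only).
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def delivery_sustainability(wind_ms, battery_fault_rate):
    # Fuzzify two inputs (breakpoints are invented for illustration).
    high_wind = tri(wind_ms, 6, 12, 18)                    # "weather risk is high"
    high_fault = tri(battery_fault_rate, 0.02, 0.08, 0.15) # "hardware risk is high"
    # Rule: IF wind is high AND fault rate is high THEN sustainability is low.
    low_sustainability = min(high_wind, high_fault)
    return 1.0 - low_sustainability                        # crude crisp score in [0, 1]

print(delivery_sustainability(wind_ms=10, battery_fault_rate=0.06))  # ~0.33
```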
17:00 | Enhanced Underwater AIS for Communication-Based Collision Avoidance in Autonomous Underwater Vehicles PRESENTER: Thale Eliassen Fink ABSTRACT. The increasing complexity of underwater environments due to expanding marine research, exploration, and industrial activities has elevated the collision risk for autonomous underwater vehicles (AUVs). Traditional sensor-based collision avoidance (COLAV) systems can be constrained by environmental factors such as acoustic noise and low visibility, prompting the search for more robust solutions. One promising approach is data sharing via the JANUS-based Underwater AIS (UAIS) protocol. UAIS could also inform decision-making in the design of underwater COLREG-compliant systems. This paper proposes several UAIS enhancements - including fields for uncertainty and manoeuvrability, dynamic data transmission, and hybrid acoustic-optical communication - and addresses the associated security needs using encryption and authentication measures. To quantify UAIS's potential impact, two Bayesian Networks (BNs) estimate how UAIS data can reduce an AUV's risk of losing navigational control. Results suggest a notable drop in collision risk when UAIS is integrated. Nonetheless, challenges remain regarding cost, standardization, and the possibility of overreliance on AIS data. The proposed system marks a promising step toward safer and more efficient underwater navigation through communication-based COLAV solutions. |
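The flavour of the BN-based risk estimate can be conveyed with a tiny hand-enumerated network; the structure and conditional probability values below are invented purely for illustration.

```python
# Tiny Bayesian-network sketch: P(loss of navigational control) given
# acoustic noise, with and without UAIS data sharing (invented CPT values).
P_noise = 0.3                         # P(high acoustic noise), assumed

# P(loss of control | noise, uais) -- illustrative conditional probabilities.
cpt = {
    (True,  True):  0.05, (True,  False): 0.20,
    (False, True):  0.01, (False, False): 0.04,
}

def p_loss(uais: bool) -> float:
    # Marginalize over the noise state.
    return P_noise * cpt[(True, uais)] + (1 - P_noise) * cpt[(False, uais)]

print(f"without UAIS: {p_loss(False):.3f}")   # 0.088
print(f"with UAIS:    {p_loss(True):.3f}")    # 0.022
```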
16:30 | Formulation, calibration and testing of a coarse-grained heating model for urban resilience assessment PRESENTER: Benjamin Lickert ABSTRACT. Climate change causes a multitude of challenges, not only for nature but also for human lifestyles. One significant consequence of climate change, expected to increasingly threaten urban resilience, is the accumulation of heat in cities during summer heat waves. Although small-scale countermeasures, such as cooling rooms for the elderly and other vulnerable people, help to dampen the worst consequences of heat waves, the whole architecture of cities needs to change to preserve the general quality of life. Increasing urban vegetation or installing large water basins are typical interventions at this point. Given that implementing such interventions costs time and money, simulation tools are needed to predict and compare the benefits of different implementation strategies in order to enable an effective increase of urban resilience to heat. However, established simulation frameworks, such as PALM and ENVI-met, need elaborate input and are computationally demanding, making it complicated and tedious to apply them to whole cities and to multiple different scenarios. Addressing this complexity problem, this contribution presents a model framework which predicts the evolution of the air temperature in a coarse-grained manner based on large-scale factors such as the atmospheric temperature, solar radiation, and local heat diffusion. The framework can be run on conventional laptops/PCs, predicting a few days of temperature evolution within seconds to minutes. It will be shown how Copernicus satellite data are used to transfer local characteristics of the (urban) environment under study into a model grid of numerous small tiles. Types such as “built-up”, “vegetation” or “water” are assigned to the tiles, and each type is characterized by a unique parameter set representing its response to incoming heat. Using an exemplary model of Basel, Switzerland, it will be shown how those parameters are calibrated and how the performance of the model is assessed. |
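A coarse-grained tile model of the kind described can be sketched in a few lines: each tile type gets its own absorption coefficient, solar forcing is added, and heat diffuses to neighbouring tiles. All parameter values are assumptions, not the calibrated Basel model.

```python
# Coarse-grained tile-diffusion sketch for urban heat (illustrative parameters).
import numpy as np

ny, nx = 60, 60
rng = np.random.default_rng(2)
types = rng.integers(0, 3, size=(ny, nx))   # 0 built-up, 1 vegetation, 2 water
absorb = np.array([1.0, 0.5, 0.2])[types]   # per-type solar absorption (assumed)
T = np.full((ny, nx), 20.0)                 # initial air temperature, deg C

def step(T, solar=0.3, kappa=0.15, T_atm=22.0, couple=0.01):
    # Discrete Laplacian with periodic wrap: heat exchange with neighbour tiles.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T)
    # Diffusion + solar forcing weighted by tile type + coupling to atmosphere.
    return T + kappa * lap + solar * absorb + couple * (T_atm - T)

for _ in range(48):                         # e.g. 48 half-hour steps
    T = step(T)
```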
16:45 | The Effect of Random Spatial Variability in Masonry Brick Unit Material Properties on the Structural Performance of Unreinforced Masonry Walls PRESENTER: Abayomi Owoeye ABSTRACT. This paper presents numerical analyses aimed at investigating the effect of correlated random field properties - such as elastic modulus, compression resistance, and deformation at peak compression - of masonry brick units on the in-plane behavior of Unreinforced Masonry (URM) walls using a finite element model (FEM). Non-linear analyses are performed by applying an in-plane horizontal displacement load and iteratively changing the correlated random field material properties of the wall's masonry brick units while keeping the vertical compression stress applied to the wall constant. First, the reliability of the random field model is verified by comparing it with an established deterministic model and a documented experimental test result from the literature. The same model is then used to investigate the influence of uncertainties in the material properties of brick units on the global behavior of the URM wall. The results of the random field numerical analyses show the considerable effect of the spatially varying material properties of masonry brick units on the macro-scale performance of the masonry wall in terms of ultimate strength and failure modes. The results also demonstrate the influence of the cross-correlation between the random fields of masonry brick unit elastic modulus, compression resistance, and tensile resistance on the global behavior of the wall. |
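Generating a spatially correlated random field for a brick-unit property is commonly done via Cholesky factorisation of a correlation matrix; the sketch below assumes an exponential correlation model and lognormal marginals with invented statistics, not the paper's calibrated fields.

```python
# Correlated lognormal random field over brick-unit centroids (illustrative).
import numpy as np

rng = np.random.default_rng(3)
# 20 x 10 wall of bricks with assumed 0.25 m x 0.06 m spacing.
xy = np.array([(i * 0.25, j * 0.06) for j in range(10) for i in range(20)])
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
L_corr = 0.5                                    # assumed correlation length, m
C = np.exp(-d / L_corr)                         # exponential correlation model
Lchol = np.linalg.cholesky(C + 1e-10 * np.eye(len(xy)))  # jitter for stability

z = Lchol @ rng.standard_normal(len(xy))        # correlated standard normals
mu, cov = 5.0e3, 0.2                            # assumed mean E (MPa) and CoV
sigma_ln = np.sqrt(np.log(1 + cov**2))
E = mu / np.sqrt(1 + cov**2) * np.exp(sigma_ln * z)  # lognormal field per brick
```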
17:00 | Reliability assessment of buried gas pipelines under ground subsidence: A 3D FE-based meta-model and Monte Carlo analysis PRESENTER: Mariam Joundi ABSTRACT. In its pursuit of climate neutrality by 2050, the European Union prioritizes green hydrogen as a key energy vector, with plans to develop over 11,000 kilometers of hydrogen pipelines by 2030. Countries such as France, Belgium, and Germany aim to repurpose existing gas infrastructure for hydrogen transport, as outlined in European and national strategies (GRTgaz, 2019; European Hydrogen Backbone, 2020). However, replacing natural gas with hydrogen poses challenges, particularly regarding the mechanical integrity of steel pipelines (Boots et al., 2021). Research shows that hydrogen exposure at 100 bar can reduce pipeline steel ductility by up to 40%, increasing vulnerability to external forces, including ground subsidence. This paper addresses these challenges by introducing a new meta-model based on 3D finite element numerical simulations to assess the impact of ground subsidence on buried pipelines. The meta-model evaluates the subsidence transmission ratio from the soil to the pipeline based on their respective characteristics. This approach offers greater reliability compared to existing meta-models in the literature, which are typically based on analytical or 2D numerical models (Joundi et al., 2023). To achieve this, a parametric study was conducted using 800 3D numerical simulations, covering a range of subsidence profiles, pipeline dimensions, burial depths, and soil properties, all reflecting typical European conditions. Building on the fitted meta-model, a probabilistic approach has been followed to enable uncertainty propagation. The parameters defining the system are treated as random variables, following either normal or lognormal distributions. A Monte Carlo analysis is applied to quantify the probability of reaching specific damage levels by incorporating multiple pipeline failure criteria aligned with Eurocode standards. The results are illustrated using vulnerability curves, intended to support decision-makers in evaluating which pipeline sections are suitable for direct repurposing for hydrogen transport and which require further adaptation, based on pipeline characteristics and surrounding soil conditions. |
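The uncertainty-propagation step can be sketched as a plain Monte Carlo loop over lognormal inputs; the transmission-ratio meta-model below is a placeholder functional form, and the damage threshold is hypothetical rather than a Eurocode criterion.

```python
# Monte Carlo sketch of uncertainty propagation through a meta-model.
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

def lognormal(mean, cov, size):
    # Convert mean/CoV of the variable to parameters of the underlying normal.
    s2 = np.log(1 + cov**2)
    return rng.lognormal(np.log(mean) - s2 / 2, np.sqrt(s2), size)

E_soil = lognormal(50.0, 0.30, N)      # soil stiffness, MPa (assumed statistics)
depth = lognormal(1.2, 0.10, N)        # burial depth, m
subsid = lognormal(0.10, 0.25, N)      # free-field subsidence, m

# Placeholder transmission-ratio meta-model (invented functional form).
ratio = 1.0 / (1.0 + 0.02 * E_soil * depth)
strain_proxy = ratio * subsid          # proxy for pipeline demand

pf = np.mean(strain_proxy > 0.08)      # hypothetical damage threshold
print(f"P(damage level exceeded) ~ {pf:.4f}")
```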