ESREL2023: EUROPEAN SAFETY AND RELIABILITY CONFERENCE 2023
PROGRAM FOR TUESDAY, SEPTEMBER 5TH

09:00-09:40 Session 8: Plenary session - Professor Iunio Iervolino - Seismic risk and resilience of civil infrastructure: Towards the reconciliation of time and space

Iunio Iervolino is full professor of Structural Engineering at the University of Naples Federico II and at Istituto Universitario di Studi Superiori, Pavia. He holds a master's degree in Management Engineering and a master's degree and a Ph.D. in Seismic Risk. He worked for a long time under the supervision of C. Allin Cornell. He is an editorial board member or associate editor of several scientific journals, such as Earthquake Engineering and Structural Dynamics, Soil Dynamics and Earthquake Engineering, Computer Aided Civil and Infrastructure Engineering, and Sustainable and Resilient Infrastructure.

09:45-11:00 Session 9A: Human Factors and Human Reliability IV

09:45
The Procedure Performance Predictor (P3): Application of the HUNTER Dynamic Human Reliability Analysis Software to Inform the Development of New Procedures
PRESENTER: Ronald Boring

ABSTRACT. The Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) software has been designed to provide a simplified framework for modelling dynamic human reliability analysis (HRA). HUNTER essentially creates a virtual operator (i.e., a digital human twin) that controls and responds to a virtual power plant (i.e., a digital twin or full-scope simulator) according to a procedural script. HUNTER has successfully modelled control room operator performance for nuclear power plant incidents, producing realistic human error probabilities, courses of action, and time durations. Recent engagement with U.S. nuclear industry stakeholders has identified uses for HUNTER and dynamic HRA beyond traditional probabilistic safety assessment. As nuclear power plants upgrade to new digital control rooms, or as control rooms are built for advanced reactors like small modular reactors, there emerges a unique situation for the operating procedures at plants. Existing procedures for legacy plants have been vetted and validated across numerous iterations. Yet, as new technologies emerge in control rooms, there is often little operating experience to inform the development of the new procedures. A Revision Null operating procedure is of concern for both procedure writers and plant safety personnel. Building on HUNTER’s handling of procedures, a special variant of HUNTER is being developed, called the Procedure Performance Predictor (P3). HUNTER-P3 allows procedure writers to script a novel procedure to simulate operator and plant performance in the use of that procedure. HUNTER-P3 identifies potential error traps with the novel procedure, thereby creating a way to screen procedures for suitability and safety. HUNTER-P3 also includes consideration for deviations from the procedures to flag potential disparities between work as imagined vs. work as done.

10:00
Designing Future Control Environments: A Feasibility Study Approach Developed at IFE FutureLab
PRESENTER: Lars Hurlen

ABSTRACT. In this paper we present lessons learned from 10 years of research into future control environments performed at the IFE FutureLab. The paper focuses on a research-based feasibility study approach that we have developed, based on a perceived need to more effectively perform future-oriented research in this area. The motivation and rationale behind it are discussed, inspired in particular by ideas on “wicked problems” and “design thinking” approaches. Practical insights from utilizing this type of study in two concrete project cases are presented.

10:15
Benchmark Exercise on Safety Engineering Practices: Management Plan Concept
PRESENTER: Essi Immonen

ABSTRACT. This paper continues to describe the midterm outcomes of the EU research project Benchmark Exercise on Safety Engineering Practices (BESEP). To further support the planning, controlling and conducting of a fully integrated safety engineering effort, the authors propose a Safety Engineering Management Plan (SaEMP), which is a document that addresses the overall safety engineering management approach. This is another step towards a more efficient and integrated safety engineering process within the scope of the BESEP project, following the possibilities offered by systems engineering (SE). As an example of the topics covered by the Safety Engineering Management Plan, this paper further focuses on the flow of information between different safety analysis disciplines, namely probabilistic safety analysis and human factors engineering.

10:30
The Spatial Dimension in Human Reliability Analysis

ABSTRACT. Traditional static human reliability analysis (HRA) methods focus on producing human error probabilities based on qualitative insights derived from operating context such as performance shaping factors (PSF). Especially for field operations outside the control room, travel time between two locations largely determines how long it takes to complete tasks, which in turn affects the success likelihood of the task. While most HRA methods consider required or available time as a PSF, they do not adequately account for spatial dimensions that influence time. This paper outlines the importance of the spatial dimension for HRA. Location affects the availability of tools, the workload of the operator, and the complexity of the task. The need to travel from one location to another can considerably change the context of the task and even has implications for error dependency. This paper outlines considerations for location and movement and presents use cases to explore how spatial HRA could be treated for balance-of-plant and main control room tasks. The spatial dimension complements recent developments in dynamic HRA. Dynamic HRA, which uses simulation techniques to model human performance, implies primarily a temporal dimension. Dynamic HRA captures the evolution of an event over time; however, dynamic HRA is incomplete without consideration of location. Spatial HRA is part of a broader approach joined with dynamic HRA that is called computation-based HRA (CoBHRA).

10:45
The impact of seismic events on human reliability and Phoenix HRA methodology

ABSTRACT. Risk assessment of Nuclear Power Plants (NPPs) through Probabilistic Risk Assessment (PRA) and Human Reliability Analysis (HRA) is relatively mature for internal events – those that originate inside the plant itself. Advances have also been made for external events, such as earthquakes and floods. Yet, particularly for HRA, more research is needed to understand the impact of seismic events on human performance and the adequacy of currently used HRA methods. According to a report from the International Atomic Energy Agency, around 20 percent of nuclear reactors worldwide operate in areas vulnerable to earthquakes, so understanding seismic impacts on HRA is important for preventing human errors and for realistic Human Error Probability (HEP) assessment. Seismic events may add failure modes and human and organizational factors that are not fully addressed by internal event models. This paper will investigate the unique factors associated with seismic events, and how they can be incorporated into the Phoenix HRA Methodology.

09:45-11:00 Session 9B: Energy Transition to Net-Zero Workshop on Reliability, Risk and Resilience - Part IV
09:45
Risk Assessment and Reliability in the implementation of Urban Electric Mobility Projects

ABSTRACT. The study's objective is to conduct a risk assessment to identify opportunities, risks, and impacts in implementing urban electric mobility projects and to evaluate how state and government secretaries, mayors, manufacturers, service providers, and other stakeholders perceive the risks. As part of the study, it was necessary to identify technical and/or financial barriers worldwide, define how to take advantage of renewable energy technologies to bring more reliability to the project, and categorize the risks considering the hardware involved (buses, batteries, domestic vs. imported materials, chargers, electrical centers).

Current conditions include rising pollution, the emergence of the COVID-19 pandemic, opportunities arising from new technologies, the ever-increasing application of recycling and reuse concepts, the need to reduce CO2 emissions, the need to optimize costs and resources, demands for increased reliability, and the need for resilience and integration. This scenario shows that it is necessary to manage the risks of implementing a project that uses renewable technologies for mobility in urban areas, such as electric buses supplied with energy from renewable sources. The study focuses on the identification of these risks.

As a methodological approach, qualitative and quantitative data will be obtained from an in-depth literature review on the topic and from stakeholders such as state and government secretaries, mayors, manufacturers, and service providers. FMEA will be used to identify risks and barriers. The analysis will consider the vision of the main stakeholders in the Brazilian market, and it can be expanded to the world scenario.

As a result, the authors propose a matrix with opportunities, risks, and impacts and a model that can be implemented to meet current regulations. The model brings the whole concept of sustainability to the center of the solution.

The conclusion is that, by working proactively on risks to meet regulations, digitalization concepts associated with renewable energy sources and decarbonization solutions can be effective, and cities can be transformed to offer a promising future for coming generations.

As a contribution, the proposed analysis demonstrates the existing risks and some of the best responses and possibilities to make this transition contribute to decarbonization and transform the planet for the better. The present study augments the knowledge of the engineers, managers, and professionals involved with urban electric mobility projects. Although conducted in Brazil, it can be generalized to other projects whose safety is affected by a lack of risk assessment. The study can change the practice and thinking of professionals dealing with mobility projects.

10:00
Perception of threats in Offshore Windfarms and possible countermeasures

ABSTRACT. With the legislative amendment of German Critical Infrastructure (CI) regulations in 2021, the threshold value for energy production infrastructures falling under the CI regulation has been lowered from 420 MW to 104 MW (§ 2 Abs. 6 Nr. 2 BSI-KritisV). Consequently, nearly all offshore wind farms (OWFs) are considered critical infrastructures, which leads to increased requirements for operators with regard to the security of the facilities and the provision of their core services (Internationales Wirtschaftsforum Regenerative Energien 2021). The legislator, for example, requires measures against the failure of the process or to prevent damages (§13a Abs. 2 LKatSG M-V). This paper aims to determine how the implemented measures are perceived in comparison to the perceived threats. Furthermore, it should be determined how widespread the use of national, international or company-internal standards is. For this purpose, a total of 19 guideline-based interviews within the German offshore wind industry have been performed. The interview partners include operators, owners, authorities and service providers active in German OWFs. First results from the analysis of the interviews suggest that frequently mentioned threat and risk scenarios for OWFs are collisions with ships, severe weather, and occupational safety threats such as electrical hazards. Interestingly, several of the threats mentioned in the guideline-based interviews have also been identified in previous work (Gabriel, Tecklenburg and Sill Torres 2022). The interview results indicate that the security measures used to date primarily comprise passive measures such as automated alarms or control rooms. These passive measures focus primarily on detecting threats rather than mitigating them or initiating countermeasures. Active security measures, such as the (automated) initiation of countermeasures, e.g., (partial) shutdowns, on the other hand seem to play a minor role in CI protection to date. For the countermeasures identified in the interviews, the mechanism of action was determined and compared with the threats mentioned. Most interviewees state that they have company-owned standards for risk management or the implementation of security measures. One possible explanation for the widespread use of company-internal standards is the lack of uniform national standardization, at least in Germany. By having their own standards, companies active in the market can draw on international best practices from the industry. This work draws a link between the threats and the countermeasures. At the same time, the importance of internal company standards is examined.

Bibliography GABRIEL, Alexander, TECKLENBURG, Babette and SILL TORRES, Frank (2022): Threat and Risk Scenarios for Offshore Wind Farms and an Approach to their Assessment. In: R. GRACE und H. BAHARMAND, Hg. ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management, p. 162-173. INTERNATIONALES WIRTSCHAFTSFORUM REGENERATIVE ENERGIEN (2021): Windparks in Deutschland. Available from: https://www.offshore-windindustrie.de/windparks/deutschland, viewed 06.01.2023.

10:15
Practical Barriers in Implementing Intrusion Detection Systems in Control Systems in Electric Utilities
PRESENTER: Jon-Martin Storm

ABSTRACT. The last ten years have seen increased cyber-risk against Industrial Control Systems (ICS). ICS are paramount to everything in our lives, from industrial manufacturing to controlling critical infrastructure. While many cybersecurity controls are adjusted to work in these systems, some essential measures have yet to see broad implementation. One is Intrusion Detection Systems (IDS), which detect cyberattacks and incidents that preventive controls have not stopped. We have conducted a case study based on audit reports and interviews with five security experts in Norwegian electric utilities to explore barriers to implementing IDS. We have found that detection control is more commonly applied at an ICS's perimeter than through an IDS. The study implies that security experts in the utilities consider human resources the main barrier to implementing IDS. There are also differences between experts working at utilities and those working for CERTs in how they value the benefits of IDS.

10:30
Meeting new electrification needs: How a Norwegian grid operator is seeking to improve coordination by building a public capacity simulation service
PRESENTER: Lars Hurlen

ABSTRACT. Electrification is considered key to ensuring access to affordable and sustainable energy. The number of electrification projects is currently increasing and will probably continue to do so in the years to come. As a result, power grid operators are experiencing a growing volume of connection requests, and in many areas the demand is substantially higher than the grid capacity.

Currently there are few arenas for useful coordination between the different actors in this area. From the supply perspective, grid operators need to justify and prioritize their efforts to increase grid capacity. To do this effectively and fairly they rely on timely access to quality information about electrification plans and projects, as well as their level of maturity/realism. Today such information is provided on a largely unstructured, case-by-case basis, demanding considerable time and resources from the grid operator. This leads to a growing queue of applications and longer handling times. From the perspective of the public – industrial actors as well as those involved in regional planning – this is a novel situation: in the past, access to desired electric power has largely been regarded as a given, so there is low awareness of the capacity problem. Once the problem is acknowledged, there is a lack of useful information about current and planned grid capacity, and about available alternative options.

This paper describes the initial phase of the design of a front-end web solution for gathering information and coordinating electrification needs in southern Norway. The design work is part of the development of a simulation solution that aims to make electrification more socio-economically profitable as well as shortening the processing time for the customer's connection to the power grid.

Realising that electrification is intrinsically complex and multifaceted, involving capacity calculations, state regulation and legislation as well as economic and environmental considerations, the design work needs to embrace a wide range of inputs and actors. Consequently, the project has used an iterative method approach involving many disciplines of expertise and a wide selection of users. To ensure that the simulation solution meets the grid companies' needs, the scope of the service is set to 2-20 MW.

In the period August to November 2022, 20 semi-structured interviews and simple user tests of early prototypes were carried out with representatives of 6 identified user groups. The paper presents and discusses how the design will gather knowledge about the different electrification needs, identify promising measures for improved information and coordination, and specifically how to develop a service that provides easy-to-use grid capacity simulations and forecasts to the public. Further, the discussion considers input to effective ways of improving coordination and exploring alternatives (connection location, size and type).

The project is funded by the Research Council of Norway, running from 2022 to 2024, and is led by Glitre Energi Nett with IFE as the research partner.

10:45
Comparative Risk Assessment of Wind Turbine Accidents from a Societal Perspective
PRESENTER: Peter Burgherr

ABSTRACT. In 2021, solar and wind power for the first time provided more than 10% of the world’s electricity [1]. This makes wind a major and strategic part of the mix to achieve the energy transition and a green economy. Despite broad public support for renewables in general, challenges in social acceptance for wind continue to occur regionally and locally. The opposition usually focuses on aspects such as wildlife safety, biodiversity protection, noise, visibility and landscape impacts, and loss in property values [2, 3]. In contrast, risk assessment of wind turbine accidents and failures has received limited interest from stakeholders and the public. Therefore, this study presents a first comparative risk assessment for wind power, considering a public acceptance and societal perspective. A public dataset that covers more than 3000 events worldwide from 1980 to 2022 has been used for the assessment. First, data are cleaned and harmonized, and then, based on the accident descriptions, additional attributes are created and stored in new data fields, following the established framework used for PSI’s Energy-Related Severe Accident Database (ENSAD) [4]. For example, these include location, trigger (e.g., natural hazard, technical failure, human), life cycle phase (e.g., construction, operation, maintenance, transportation), affected components (e.g., tower, blade, nacelle), and consequences (fatality, injury, asset damage, environmental). Second, an exploratory analysis is carried out to relate accidents to different attributes. Third, selected risk indicators are calculated for human health impacts such as fatalities and injuries, considering frequencies, average (expected) risk and possible maximum consequences. Last, risk levels of wind power are compared to other new renewables (e.g., solar photovoltaic, geothermal), as well as previously published values for hydropower and fossil energy carriers [5]. In summary, the contribution of this study is threefold, namely (1) it provides useful insights and a better understanding of safety risks for policy makers, authorities, and insurance companies; (2) it complements the industry’s focus on occupational risk with a societal perspective; and (3) it serves as an objective and data-driven input for the discussion of risk perception and acceptance led by NGOs, media and exponents of the public.

References 1. REN21: Renewables 2022 Global Status Report. REN21, Paris, France (2022). 2. Caporale, D., Sangiorgio, V., Amodio, A., De Lucia, C.: Multi-criteria and focus group analysis for social acceptance of wind energy. Energy Policy. 140, 111387 (2020). https://doi.org/10.1016/j.enpol.2020.111387. 3. McKenna, R., Mulalic, I., Soutar, I., Weinand, J.M., Price, J., Petrović, S., Mainzer, K.: Exploring trade-offs between landscape impact, land use and resource quality for onshore variable renewable energy: an application to Great Britain. Energy. 250, (2022). https://doi.org/10.1016/j.energy.2022.123754. 4. Kim, W., Burgherr, P., Spada, M., Lustenberger, P., Kalinina, A., Hirschberg, S.: Energy-related Severe Accident Database (ENSAD): cloud-based geospatial platform. Big Earth Data. 2, 368–394 (2018). https://doi.org/10.1080/20964471.2019.1586276. 5. Burgherr, P., Spada, M., Kalinina, A., Vandepaer, L., Lustenberger, P., Kim, W.: Comparative risk assessment of accidents in the energy sector within different long-term scenarios and marginal electricity supply mixes. In: Proceedings of the 29th European Safety and Reliability Conference, ESREL 2019. pp. 1525–1532 (2019). https://doi.org/10.3850/978-981-11-2724-3_0674-cd.
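To make the third step more concrete, the following is a minimal sketch with invented numbers, not the ENSAD data or the authors' code: it computes an accident frequency, the average (expected) fatalities per accident, and the maximum observed consequence from a small, hypothetical accident table. Field names and values are illustrative assumptions only.

```python
import pandas as pd

# Hypothetical, cleaned accident records (stand-ins for the harmonized dataset).
accidents = pd.DataFrame({
    "year":       [1998, 2005, 2013, 2019, 2021],
    "phase":      ["construction", "operation", "maintenance", "operation", "operation"],
    "fatalities": [1, 0, 2, 0, 1],
})

years_covered = 2022 - 1980 + 1                        # observation period, 1980-2022
frequency = len(accidents) / years_covered             # accidents per year
expected_fatalities = accidents["fatalities"].mean()   # average (expected) consequence per accident
max_consequence = accidents["fatalities"].max()        # possible maximum observed consequence

print(f"frequency: {frequency:.2f} accidents/year")
print(f"expected fatalities per accident: {expected_fatalities:.2f}")
print(f"maximum fatalities in a single accident: {max_consequence}")
```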

09:45-11:00 Session 9C: Accident and Incident Modelling III
Location: Room 2A/2065
09:45
Fatal series of domestic gas cylinder explosions in Sri Lanka: a tragedy rooted in failed process safety management

ABSTRACT. A series of explosions and gas leak accidents related to domestic LP gas cylinders created an environment of fear, anger, and social unrest throughout Sri Lanka. More than 400 explosions and gas leak incidents were reported during the first week of December 2021. In addition, a large number of observations were made with respect to slowly leaking gas cylinder valves. The reported accidents and incidents can be divided into four major categories: (a) sudden gas explosions inside houses and buildings, (b) exploding gas cookers, (c) major gas leaks and resulting damage associated with the pressure regulator and the hoses, and (d) minor gas leaks from the cylinder valve, regulator, or hoses.

The number of accidents reported during a single week far exceeded the number of gas-related accidents typically occurring within a year in the country. Multiple fatalities were associated with this series of events, many more people are likely to have received severe injuries, and there was considerable associated property damage. This was perceived as a “distributed major accident” scenario shrouded in mystery, as there was an apparent attempt to mislead or conceal the true nature of the tragedy. This article explores and analyses the surrounding facts and potential root causes, with added attention to the prevailing process safety culture in the country. Lessons extracted from this fatal series of events can help improve process safety cultures in many other parts of the world.

10:00
Evaluating differences between maritime accident databases
PRESENTER: Spencer Dugan

ABSTRACT. Maritime accident statistics are used as a key part of the IMO’s formal safety assessment (FSA), a risk assessment methodology to guide policy decisions in the maritime industry. Under-reporting of maritime accidents can inhibit the accuracy of results derived from the FSA, therefore having a direct influence on maritime policy. The objective of this work is to perform comparisons between accident databases, and to investigate the degree to which underreporting is biased by factors including the type of accident, degree of severity, and ship type. This study analyzes databases of reported maritime casualties from 1) IMO GISIS, 2) IHS Fairplay, and 3) the United States Coast Guard CGMIX. The databases are subset to an eight-year period and for commercial ships greater than 100 gross tonnage (GT) to enable a direct comparison. The reporting rates for the GISIS and IHS databases are calculated for accident type, accident severity, and ship type. Results indicate that the GISIS and IHS databases contain significantly fewer non-serious accidents than serious accidents. Further biases were observed by accident and ship types. Founderings, fires / explosions, and strandings are more likely to be reported than other accident modes. Hull / machinery damage is the accident mode with the lowest reporting rate.
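A minimal, hypothetical sketch of the kind of reporting-rate comparison described above. It assumes accident records from two databases can be matched on a common key (here an invented `event_id`) and measures each database's reporting rate as the share of the pooled set of events it contains, broken down by severity; the matching scheme, field names, and values are illustrative assumptions, not the authors' procedure.

```python
import pandas as pd

# Toy stand-ins for two accident databases; real records carry many more fields.
gisis = pd.DataFrame({
    "event_id": [1, 2, 3, 5],
    "severity": ["serious", "serious", "non-serious", "serious"],
})
ihs = pd.DataFrame({
    "event_id": [1, 2, 4, 5],
    "severity": ["serious", "serious", "non-serious", "serious"],
})

# Pool the two sources into a reference set of distinct events.
pooled = (pd.concat([gisis, ihs])
            .drop_duplicates("event_id")
            .set_index("event_id"))

# Reporting rate per severity class: share of pooled events present in each database.
for name, db in [("GISIS", gisis), ("IHS", ihs)]:
    covered = pooled.index.isin(db["event_id"])
    rates = pooled.assign(covered=covered).groupby("severity")["covered"].mean()
    print(name, rates.to_dict())
```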

10:15
Performance analysis in accident investigations in the oil and gas industry
PRESENTER: Francisco Silva

ABSTRACT. The process of analyzing and investigating accidents, incidents and critical deviations is part of the routine of most High Reliability Organizations (HROs), which normally have a governance system defined for this purpose, that is, written procedures, forms, and matrices that define the investigation methodology to be used, the internal and external communication flow, the minimum internal audience required for each event, and other elements. Operating units, even those belonging to the same company, can have cultural factors that differ considerably from one another. Some HROs need to carry out numerous event investigations, which often conflicts with daily demands. The problem identified is that the quality of the investigation is affected by behavioral factors related to the engagement of the leaders involved and by insufficient data collection, which can interfere with the results of the investigation. The proposed methodology is a quantitative assessment of the engagement of all those involved and of the phases of the investigation, aiming to draw a profile and obtain an assessment of each investigation and, consequently, to identify patterns across the company's different operational units. This work aims to correlate the investigated events and their recurrence with how adequate engagement can generate resilience and impact the results of the investigations. This article is the result of a study in one of the largest industries in the oil and gas sector in Brazil, which works in the bottling and distribution of LPG nationwide. The study covers more than 100 investigations carried out across Brazil, from north to south, over a period of two years, spanning a wide geographic and cultural distribution of 16 different states.

References

Dekker, S. (2006). The field guide to understanding human error. Hampshire: Ashgate.
Hale, A. & Heijer, T. (2006). "Defining Resilience", in Hollnagel, E., Woods, D. D. & Leveson, N. (eds.), Resilience Engineering: Concepts and Precepts. Aldershot: Ashgate.
Hollnagel, E. (2004). Barrier analysis and accident prevention. Aldershot, UK: Ashgate.
Hollnagel, E., Woods, D. D. & Leveson, N. (2006). Resilience Engineering: Concepts and Precepts. Ashgate Publishing.
MacLean, C. L. (2022). Cognitive bias in workplace investigation: Problems, perspectives and proposed solutions. Journal of Applied Ergonomics.
Perrow, C. (1999). Normal accidents: Living with high-risk technologies. New Jersey: Princeton University Press.
Rasmussen, J. (1997). "Risk management in a dynamic society: A modelling problem", Safety Science, 27, 183-213.
Reason, J. (1997). Managing the risks of organizational accidents. Aldershot: Ashgate.

10:30
A Guide for Identifying Human Factors in Accident Investigations
PRESENTER: Stig Winge

ABSTRACT. The paper describes a guide for identifying and analysing Human Factors in accident investigations that has been developed for the Petroleum Safety Authority in Norway. The objective of the guide is to help investigators understand why humans involved in accidents acted as they did, and to identify important performance influencing factors, learning opportunities and recommendations.

The method is based on the exploration of Man, Technology and Organisation (MTO) issues, including cognitive aspects of Human Factors, to understand the Situational Awareness of the involved actors as the accident evolved. The guide has been developed based on current best practices of accident investigation methods, to fill the gaps in existing accident investigations where the human perspective has been missing. It is based on a number of other methods and approaches, primarily from the Accident Investigation Board Norway (AIBN), the Chartered Institute of Ergonomics and Human Factors (CIEHF, 2020), Bridger (2021), Endsley (1995), the Human Factors Analysis and Classification System (HFACS) (Shappell & Wiegmann, 2000), and methods used in the oil and gas industry.

Exploration of accidents involving new technology indicated that there were gaps in understanding and learning from the accidents. These gaps were due to insufficient focus on human factors issues (not considering human limitations and strengths), insufficient focus on underlying complex design, and too strong a focus on rules instead of trying to understand how and why there was a gap between work as done and work as imagined. Learning from incidents and accidents has often missed the science of human factors and the root causes stemming from poor design, and has not minded the gap between procedures and rules (work as imagined) and work as actually done.

The guide has been designed to be integrated into, and to follow the same stages as, an overall generic accident investigation method. The guide describes (1) a model for investigating Human Factors, (2) acts that contributed to the accident, (3) situational awareness, (4) performance influencing factors divided into six sub-categories, (5) analysis, and (6) recommendations for improving Human Factors.

References: Bridger, R.S. (2021). Introduction to Human Factors in Accident Investigation. CIEHF (Chartered Institute of Ergonomics & Human Factors) (2020). Learning from adverse events (White Paper). Endsley, M. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors 37(1), 32-64. Shappell, S. A. & Wiegmann, D. A. (2000). The Human Factors Analysis and Classification System (HFACS).

10:45
Accidents at level crossings

ABSTRACT. The critical points of railways are the places where they cross roads, i.e. level crossings. Crossings as such have existed on the railway since its inception, but under completely different conditions: there were significantly fewer crossings, and many were protected only by warning crosses, previously referred to as "unprotected". At some crossings, the only protection consisted of barriers operated by an attendant stationed at the crossing. There are several causes of accidents at railway crossings. The presented article lists the main causes that lead to serious accidents at railway crossings in the Czech Republic. These include: poor meteorological conditions; inappropriate location of the crossing; alcohol behind the wheel; driver inattention; using mobile phones while driving; ignorance of the law; and the impact of changes in society. Driver inattention can be caused by many phenomena. According to the survey, the most common cause of inattention is using or calling on a mobile phone while driving without a hands-free kit. The article then presents options for preventing accidents. Finally, it proposes measures to reduce the number of accidents at crossings, which need to be incorporated into legislation.

11:00
ENHANCING PROACTIVENESS AND MITIGATION CONCERNING MARINE OIL SPILL ACCIDENTS WITHIN THE HELLENIC SEAS

ABSTRACT. Marine oil spill accidents pose a serious threat to various countries within the wider European region. Research so far indicates that certain countries such as Greece, the UK, Germany, and the Netherlands are considerably susceptible to accidents of this nature. Extended oil spill pollution can harm marine wildlife and destroy vital habitats. This damage can have long-lasting and far-reaching consequences for the surrounding environment. Protecting the Hellenic Seas from oil spills is not only a matter of economic and social importance but also an environmental and ecological imperative. The Greek seas constitute a vital natural resource and play a crucial role in the country's economy, culture, and overall well-being. The economy of Greece relies significantly on the country's natural resources, including its marine environment. An oil spill could damage beaches, coastal towns, and marine ecosystems, leading to a decline in tourism, job losses, and reduced revenue for the industry. To successfully mitigate the risk of a potential hydrocarbon leakage within the Hellenic seas, proactive efforts must be made on two levels: investing in preventive measures that minimize the probability of such accidents, and refining emergency response plans to contain and remediate a potential marine oil spill. For the above to be achieved, a meticulous analysis of all the factors that can lead to such an accident, combined with its potential consequences, is necessary. In the current study, focus is placed on the different locations where shipping accidents involving collision and grounding have occurred throughout the Hellenic seas. These two types of accidents were selected as they have been linked to severe oil spill pollution. The ship's age, type and flag are studied alongside the location of the accident. The findings of the current analysis can serve as valuable input for subsequent research efforts that aim to safeguard the Hellenic seawaters from major spill accidents.

09:45-11:00 Session 9D: S.24: Mitigating and adapting to climate change disasters in the Arctic

This session focuses on the challenges facing Arctic organisations, companies and communities that threaten their ability to be resilient to potentially disastrous events. The purpose of this special session is to present ongoing research on the impact of these hazards and to discuss the conceptual implications the findings have for social systems' future mitigation and adaptation. During the discussions, we will explore conceptual commonalities across the different cases and identify subjects for further research and collaboration.

Location: Room 100/4013
09:45
Mitigation of Climate Change. The need for increased consideration of risk and uncertainty.

ABSTRACT. To achieve a drastic reduction of emissions and a significant increase in carbon uptake from the atmosphere, the Intergovernmental Panel on Climate Change (IPCC) in 2022 recommended a considerable number of mitigation options whose feasibility is yet to be examined in each context. The IPCC also proposed an approach to assess the feasibility of mitigation options. By analysing this approach, we discuss some issues that reflect the need for increased consideration of the risk and uncertainty linked to mitigation options. For example, more guidance could be provided for assessing or specifying mitigation options based on the different degrees of uncertainty a mitigation option might involve. Concerns are also raised about whether the assumptions involved in specifying mitigation options are systematically assessed. These and other issues indicate that the IPCC has not taken advantage of existing risk frameworks that clarify these aspects of risk and mitigation. It follows that risk and uncertainty communication, and the achievement of mitigation, are potentially compromised.

References Aven, T. (2019). The science of risk analysis: Foundation and practice. Routledge, London. Aven, T. (2020). Climate change risk–what is it and how should it be expressed? Journal of Risk Research, 23(11), 1387-1404. IPCC (2021). Climate change 2021. The Physical science basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK. IPCC (2022). Climate Change 2022. Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK. Moreno, J., Van de Ven, D. J., Sampedro, J., Gambhir, A., Woods, J., & Gonzalez-Eguino, M. (2023). Assessing synergies and trade-offs of diverging Paris-compliant mitigation strategies with long-term SDG objectives. Global Environmental Change, 78, 102624. Stern, P. C., Wolske, K. S., & Dietz, T. (2021). Design principles for climate change decisions. Current Opinion in Environmental Sustainability, 52, 9-18. Stern, P. C., Dietz, T., & Vandenbergh, M. P. (2022). The science of mitigation: Closing the gap between potential and actual reduction of environmental threats. Energy Research & Social Science, 91, 102735. van der Linden, S. (2015). The social-psychological determinants of climate change risk perceptions: Towards a comprehensive model. Journal of Environmental Psychology, 41, 112-124. Wang, C., Geng, L., & Rodríguez-Casallas, J. D. (2021). How and when higher climate change risk perception promotes less climate change inaction. Journal of Cleaner Production, 321, 128952.

09:57
ENHANCING COMMUNITY-BASED SEARCH & RESCUE IN THE CANADIAN ARCTIC THROUGH RISK ANALYSIS

ABSTRACT. The Canadian territory of Nunavut’s size, scarcity of government resources, and harsh environmental conditions, coupled with the depth of traditional skills and knowledge held within local communities, mean that volunteer community responders form a vital part of the region’s search and rescue (SAR) capacity (Kikkert & Lackenbauer, 2021). Climate change has created many challenges for SAR, including unpredictable and dangerous ice conditions, later freeze-up and earlier thaw times, and increased marine traffic within the Northwest Passage (Ford & Clark, 2019). These changes not only affect activities on land, ice, and sea, increasing the risk of SAR incidents, but also impede operations. Nunavut SAR is a complex socio-technical system involving a range of interdependent factors that affect outcomes. Climate change impacts potentially exacerbate many factors, including the erosion of traditional knowledge, volunteer burnout, training gaps, and technological limitations. The NSAR project is a collaborative partnership which aims to strengthen SAR prevention, preparedness, and response in the territory, leading to enhanced community resilience and support for traditional Inuit ways of life. Three roundtables held across Nunavut allowed SAR responders to highlight challenges and share experiences and best practice. Using data from the roundtables, the project is creating a systems-based risk methodology informed by contemporary thinking (e.g., Aven, 2021) to understand and analyze potential uncertain events and their systemic impacts to inform strategic planning, asset deployment, and capital investment. A scenario-based approach is adopted to elucidate unknown futures and deep uncertainties (e.g., Bourgeois et al., 2017).

References 1. P. Kikkert and P.W. Lackenbauer, “A great investment in our communities”: Strengthening Nunavut’s whole-of-society search and rescue capabilities’, Arctic, 74(3), 258 (2021). DOI: 10.14430/arctic73099 2. J. Ford and D. Clark, Preparing for the impacts of climate change along Canada's Arctic coast: The importance of search and rescue, Marine Policy, 108, 103662 (2019). DOI: 10.1016/j.marpol.2019.103662 3. T. Aven, On some foundational issues concerning the relationship between risk and resilience: Risk Analysis, 2021, DOI: 10.1111/risa.13848. 4. R. Bourgeois, E. Penunia, S. Bisht and D. Boruk, Foresight for all: Co-elaborative scenario building and empowerment, Technological Forecasting & Social Change, 124, 178 (2017). DOI: 10.1016/j.techfore.2017.04.01

10:09
Complexity of tourist safety in the Arctic: stakeholders' knowledge co-production

ABSTRACT. In light of the growing demand for tourism in the Arctic, it is imperative to strengthen the knowledge base, skill set, and competencies of the tourism labor force by enhancing guide professionalism and safety in the region. Complex logistics, rapidly changing weather, and remoteness, as well as the effects of climate change, play a significant role in field practices, especially for tour operators. With growing interest in the region, the likelihood of accidents increases, putting stress on limited local emergency services. Recent findings show that local knowledge, experience and training are recognized as essential in ensuring safety, while there is limited data on knowledge exchange between local stakeholders. Knowledge of the relationships and interaction between stakeholders such as tourism boards, rescue services, academia, tour guiding schools and guiding companies is essential in the co-production of knowledge on ensuring tourist safety in the Arctic. By exploring stakeholders' capacities and standpoints related to safe tourist operations, we seek to explore possible ways to collaborate in knowledge co-production. Hence, our study addresses literature gaps by examining: 1) What are the safety concerns related to extreme weather events for stakeholders operating in the Arctic environment? 2) What is the current state of safety-related knowledge exchange between rescue services and tourist companies? 3) What strategies and resources are needed to establish collaboration between rescue services, tourist companies, and guiding schools in the Arctic? To understand the process of potential collaboration on tourist safety, we plan to organize workshops to collect the data. Workshops organized by the Arctic Guides Safety Education collaboration will take place in winter, spring and fall of 2023 in Iceland, Svalbard and Greenland. The Arctic Guide Safety Education collaboration is a project focused on enhancing knowledge sharing in the development of guide education in the Arctic environment, bringing attention to increased involvement and integration of research on tourist safety in the Arctic. Working together across educational levels creates an opportunity to link and transfer knowledge and experiences between different tourism educators in the Arctic, thus preparing the ground to produce materials for teaching development and to continue transnational collaboration in the field. Data collected during the workshops will be recorded, transcribed, and organized into themes. We aim to analyze the findings using complexity theory and collaborative research theories. The research contributes to knowledge in tourism management and the safety field, giving insights into a process of potential collaboration in the Arctic, building resilient infrastructure, promoting knowledge sharing and enhancing safety practices, while addressing the importance of cooperation between various stakeholders, including researchers and local communities, in tourism destination development in the polar regions.

10:21
The limits of organisational resilience

ABSTRACT. This paper explores an alternative to contemporary organisational resilience research through the ‘temporary adaptive capacity’ concept. Such an approach shows how loosely coupled socio-technical systems can unite under a joint governance structure to increase their combined capacity to protect themselves against a time-limited common threat. This conceptual framework differs from contemporary approaches to organisational resilience by utilising networks of systems, contrary to what has traditionally been an organisation-centric understanding of how to build organisational resilience. The conceptual shift lies in the ability of otherwise unrelated socio-technical systems to combine their resources, management, and governance systems to increase their overall capacity to identify, manage, and recover from what would otherwise be a disastrous event for the individual organisation. The proposition is that such a system maintains the initiative during an event, as it can adapt when norms and practices no longer produce the agreed outputs. It develops innovative solutions and workarounds using existing organisational resources to achieve its desired output. The socio-technical systems can work independently without compromising the overall network goals and can adapt by making changes within the technical and organisational domains. To illustrate the utility of such a network approach, an example is drawn from Greenland, where six communities face a possible catastrophic landslide and tsunami event.

10:33
Climate adaptation: How can leaders maintain and improve their trust while managing a creeping climate change crisis?

ABSTRACT. The creeping climate crisis has become one of the most challenging crises to affect humanity, as the associated impacts increase over time. This requires that people learn to adapt to climate change. It may be an asset for adaptation if the leaders in charge are perceived as trustworthy. This qualitative study draws on five semi-structured interviews conducted in Longyearbyen, Svalbard, to explore the research question: How can local leaders maintain and improve their trust while managing the creeping climate crisis? The study shows that people perceive local leaders in Longyearbyen as trustworthy in managing the creeping climate crisis. The local leaders maintain and improve their trust among the population in ways such as frequently sharing climate information, taking effective and efficient action, and involving diverse stakeholders. However, the study shows that leaders may need to pay attention to some socio-political policies, such as voting rights, as such policies may influence their trust negatively in the long run. The study therefore suggests a perspective on how leaders may maintain and improve their trust while managing the creeping climate crisis.

10:45
Identifying and managing uncertainty in governance of climate-related risk: Lessons from an Arctic society
PRESENTER: Siri Holen

ABSTRACT. The Arctic experiences climate change at a much higher pace than the rest of the world, which also impacts societal safety in the settlement of Longyearbyen. To deal with natural hazards in the age of climate change, a range of measures has been implemented. Uncertainty, i.e. a state, even partial, of deficiency of information related to, or understanding or knowledge of, an event, its consequence, or likelihood (ISO 31000), is predominant in dealing with natural hazards and societal safety in the age of climate change. Climate prognoses are uncertain due to their inherent variability and the assumed level of future greenhouse gas emissions. Other uncertainties are related to a lack of knowledge and experience regarding types of natural hazards and a lack of knowledge of the effects of risk-reducing measures. The paper 1) provides a categorization of sources of uncertainty in the different steps of risk governance, both for short-term climate disaster handling and for long-term climate adaptation, and 2) discusses approaches to manage the identified sources of uncertainty. A framework for risk governance is used in the categorization. Sources of uncertainty, and their handling, are found in all parts of risk governance: framing, assessment, decision-making and communication. The paper is based on a three-day workshop about uncertainty in the risk governance of climate-related risk in Longyearbyen, on the Norwegian archipelago Svalbard at 78 degrees North.

09:45-11:00 Session 9E: New Foundational Issues in Risk Assessment and Management
Location: Room 100/5017
09:45
Interfacing risk logic, riskification, and risk governance: some research implications

ABSTRACT. There is increasingly widespread scientific recognition that there are systemic risks, for example climate change, which are studied as security issues. This paper addresses possible interfaces between the constructivist approach of riskification and the realist approach of risk governance by proposing three analytical categories for exploring security topics like climate change: how the two approaches understand 1) risks, 2) actors, and 3) tools and practices. Riskification builds on securitization theory and argues that securitization has not been able to clarify what distinguishes threats from risks. Risk governance combines normative political theories with risk science and promotes a realist perspective on risks. Riskification is supported by a risk logic, which posits that the identification and management of risks can be governed through the purposive guidance of the public towards a particular way of thinking and acting. Bringing together these two perspectives improves the analysis of contemporary risk and threat phenomena, such as climate change or pandemics, and expands the range of explanations and understanding. Furthermore, this paper promotes linkages between riskification and risk governance to increase knowledge of which risks are prioritized and which actor constellations deal with these risks, in order to develop proper policies and planning.

10:00
REDEFINITION OF RISK IN NORWEGIAN PETROLEUM: RISK MANAGEMENT CONSEQUENCES

ABSTRACT. In 2015, the Norwegian Petroleum Safety Authority introduced a new definition of risk, formulated as ‘…the consequences of activities with associated uncertainty.’ This paper explores how the new definition is understood and what changes in risk management it has led to in the Norwegian petroleum industry. Semi-structured interviews with seven operational decision-makers from operator companies in Norway were conducted to gain insight into their understanding of the new concept and its implementation. Decisions were primarily based on available knowledge, including tools, analyses, industry experience, and joint assessments with experts. Sources of uncertainty highlighted by the informants included new technology, a lack of competence and knowledge, and external factors such as pandemics and global conflicts. Implementing the new concept of risk has increased awareness of the uncertainty dimension of risk and has led to the adoption of new tools and assessments. However, there is a need for further development of tools and methods to address the uncertainty dimension of risk across companies and actors. The study highlights the importance of communication and discussion between decision-makers and knowledgeable personnel for dynamic and effective risk management; a shared understanding of risk is necessary for managing it effectively.

10:15
Challenges of Hazard Identification

ABSTRACT. In the FLHYSAFE research project, our responsibility was to assess the risk of a fuel cell system in its early stages of development. The fuel cell will be used in an Emergency Power Unit (EPU) and will someday be able to replace the Ram Air Turbine (RAT). ARP4754 requires that risk be considered for the EPU as well, because the RAT is a safety-critical system.

Studies such as [al01] and [da99] have looked at different challenges of hazard identification and assessment. From the standpoint of a distributed project like ours, the identified challenges only cover a portion of our experience. This could be attributed to distributed development together with the development of the state of the art.

The purpose of this work is to investigate the various aspects and characteristics of hazard identification in a distributed project environment. The focus will be on the early stages of development, as well as the process and methods for hazard identification. The results will be presented and discussed using the EPU as an example. In addition, recommendations for pragmatic hazard identification will be provided.

10:30
Manageability of risk - a literature study
PRESENTER: Torgrim Huseby

ABSTRACT. In current practice for analyzing risk, the concept of manageability is frequently used to characterize how difficult it is to treat risk. The manageability concept is intuitively easy to understand. In practice, however, we find that there is a lack of consistency in how the concept is understood and used. In risk assessment applications where the manageability concept is included, a definition is often lacking. This may result in different understandings of what manageability is, for example among participants in risk workshops. The result may be poor assessments and poor decision-making. In this paper, we elaborate on the manageability concept and study how it is used in the research literature. In particular, we examine to what extent the manageability concept is defined in the research literature and how it is defined, including variations.

09:45-11:00 Session 9F: Aeronautics and Aerospace I
09:45
Application of Framework for Risk Assessment in Ultrasonic Testing (UT) of Critical Parts – A Case Study

ABSTRACT. This paper presents the application of a framework for identifying risks in the Ultrasonic Testing (UT) of critical parts. This topic is significant because failing to correctly inspect critical parts with UT in the industry may lead to operational failure. The correct selection and use of an acceptable method are critical to the success of the inspection. If risks are not identified and proper responses are not provided, catastrophic accidents can happen. At the European Safety and Reliability Conference (ESREL) held in Dublin in 2022, a framework proposal was presented based on the Analytic Hierarchy Process (AHP) and Bayesian Belief Networks (BBN). This study complements the one presented at ESREL 2022, focusing on demonstrating the application of the framework and defining response actions for the risks identified. As a methodological approach, a survey was prepared to elicit experts' probabilities. These were uploaded into Bayesian network software to combine the risk factors contributing to an inspection failure. AHP was used to prioritize the impact of the risk categories. The combination of probability and impact identified the most significant risk categories. As a result, the method revealed the most significant risk factors in the Ultrasonic Testing of critical parts, and actions are proposed to respond to the risks. The conclusion is that the model proved adequate to significantly reduce the risk of hardware failure. As a contribution, the proposed method is an invaluable source of information for safety engineers and decision-makers in companies. It can augment their knowledge and help identify risks in the UT of critical hardware, implement actions to avoid critical part failures, and improve safety in inspecting these parts. Although conducted in a specific repair station facility, the study can be generalized to other industries and fields of work whose safety in UT is affected by risk issues resulting in waste, rework, and unnecessary energy consumption. The study can change the practice and thinking of professionals dealing with UT in companies' operations.
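A minimal sketch, not the authors' model, of the prioritisation step the abstract describes: AHP pairwise comparisons yield impact weights for the risk categories, which are combined with per-category failure probabilities (stand-ins here for the expert-elicited Bayesian-network outputs) to rank the categories. The category names, comparison matrix, and probabilities are illustrative assumptions.

```python
import numpy as np

categories = ["Personnel", "Equipment", "Procedure"]  # hypothetical risk categories

# AHP pairwise comparison matrix: A[i, j] = how much more important category i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Impact weights: normalised principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
impact = w / w.sum()

# Probabilities of each category contributing to an inspection failure
# (placeholders for the Bayesian-network results).
probability = np.array([0.12, 0.05, 0.20])

# Risk score = probability x impact; higher scores indicate the categories to address first.
risk_score = probability * impact
for name, score in sorted(zip(categories, risk_score), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```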

10:00
Analysis of Shrinkage Estimators and Bayesian Decision Rules for Bioburden Density Estimation in Planetary Protection Probabilistic Risk Assessment
PRESENTER: Andrei Gribok

ABSTRACT. The planetary protection (PP) discipline aims to minimize microbial contamination on spacecraft in order to prevent inadvertent contamination of other planetary bodies. Understanding the number of microorganisms, or bioburden, launched with the spacecraft is fundamental to achieving this, and is calculated using estimates of the bioburden density (bioburden per unit area or volume) across the spacecraft. Due to economic and engineering considerations, about 10% of a spacecraft’s surface is directly sampled to estimate bioburden densities. To generate the bioburden density estimates for components not directly verifiable through sampling, a NASA-defined bioburden estimate is applied, based on the components’ manufacturing and/or assembly environment. This approach utilizes a prespecified bioburden density estimation that applies a maximum value across the total surface area of the specified component. For hardware components that underwent similar assembly processes, an implied bioburden is adopted for all components, based on direct verification of a representative component within the same lot. These NASA-specified and implied bioburden density estimators can be considered as an extreme class of shrinkage estimators: deterministic estimators, i.e., not dependent on the sampling data of the components being estimated. However, similar to any other statistical estimators, the risk of deterministic estimators can be estimated. The risk of an estimator quantifies both accuracy (bias) and precision (variance) of the estimate. The objective of this paper is to analyze the risks of these prespecified estimators to gain insight into sampling procedures and volumes of sampled data to optimize the amount of data collection for future missions. Mathematically, the problem of bioburden density estimation from sampled data can be formulated as simultaneous estimation of means from independent Poisson distributions. While extremely simple, the deterministic estimators based on NASA-specified and implied bioburden densities may, under certain conditions, have quadratic risk, i.e., the sum of the estimator’s bias squared and variance, lower than data-driven estimators, with no data-driven estimators being uniformly better (i.e., the estimators are admissible). By comparing risks of deterministic estimators and data-driven estimators, different sampling schedules and sampling volumes can be analyzed to optimize the performance of data-driven estimators as well as deterministic estimators. This paper compares and contrasts two approaches used for bioburden calculations, a frequentist approach and a Bayesian approach, and evaluates their performance using data collected from the InSight mission. Specifically, we calculate quadratic risks of different types of shrinkage estimators and compare the risks with the Bayesian approach. Analysis is performed for different regions of parameter space to find estimators having the lowest risks for bioburden values most frequently occurring in practice.
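A minimal Monte Carlo sketch, not the paper's analysis, of the quadratic-risk comparison described above: a prespecified (deterministic) bioburden density estimate versus the data-driven sample mean for a Poisson mean. The true density, the prespecified value, and the sample size are invented for illustration; with these particular numbers the deterministic estimator happens to have the lower risk, matching the qualitative claim that it can win under certain conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

lam_true = 0.4      # true bioburden density (per unit area), assumed
lam_spec = 0.5      # prespecified NASA-style maximum estimate, assumed
n_samples = 25      # number of samples per component, assumed
n_trials = 100_000  # Monte Carlo replications

counts = rng.poisson(lam_true, size=(n_trials, n_samples))
mle = counts.mean(axis=1)            # data-driven estimator (sample mean)
det = np.full(n_trials, lam_spec)    # deterministic, data-independent estimator

def quadratic_risk(est, truth):
    """Quadratic risk = bias squared plus variance, estimated by Monte Carlo."""
    bias = est.mean() - truth
    return bias**2 + est.var()

print("risk of sample mean :", quadratic_risk(mle, lam_true))  # ~ lam_true / n_samples
print("risk of prespecified:", quadratic_risk(det, lam_true))  # (lam_spec - lam_true)**2
```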

10:15
Acceptance of residual risk in a Brazilian military aeronautical project through the application of a method

ABSTRACT. Military aeronautical projects in Brazil need to demonstrate compliance with mission-related requirements during the certification phase. When designing a system, the System Safety Analysis (SSA) is an essential part of the engineering activities related to initial airworthiness. Such an analysis requires a process to determine whether the system is safe enough and to identify an acceptable balance between safety, cost, and military capability. The concern with military capability is sometimes equal to or greater than the concern with the safety of the operation, as such projects aim at a specific and innovative objective. Ensuring the safety of systems and products used in fulfilling their missions rests with the competent authorities, who need to conduct a reasoned assessment of the risk of specific operations. To fill this gap, it is necessary to develop a guide that serves as a support tool for risk assessment decisions in a given operation. In this article, the authors draw on their experience in certifying military aeronautical products to (1) briefly review the concept of risk acceptance applied to aeronautical design; (2) present the risk analysis method applied in military aeronautical projects in Brazil; (3) present a method that substantiates the acceptance of residual risk for military aeronautical projects in Brazil; and (4) demonstrate the use of this method through an application to the operation of uncrewed aircraft.

10:30
The Functional Resonance Analysis Method (FRAM) on Aviation: A Systematic Review

ABSTRACT. The development of the Functional Resonance Analysis Method (FRAM) has been motivated by the perceived limitations of fundamentally deterministic and probabilistic approaches to understanding complex systems' behavior. Congruent with the principles of Resilience Engineering, over recent years the FRAM has been progressively developed in scientific terms and increasingly adopted in industrial environments with reportedly successful results. This paper aims to summarize documents published between 2017 and 2022 about FRAM in aviation through a Systematic Literature Review (SLR). Approximately 15 articles were reviewed, disclosing characteristics of FRAM research regarding the method's application in aviation as well as proposing potential future research directions.

FRAM is a method-sine-model, whose purpose is to build a model of how things happen rather than to interpret what happens in the terms of a model. In the aviation domain, the FRAM is being applied for retrospective analyses, like the SAS flight 751 crash at Gottröra in 1991, and prospective analyses, mainly in Air Traffic Management (ATM). The SLR also reports proposals to enhance the FRAM, such as the development of (semi‐)automatic data collection approaches, including function identification and aspect specification, as well as method(s) for quantifying variability.

The quantification of variability is an interesting approach for the aviation domain, especially regarding cockpit and flight operations. FRAM focuses on everyday performance, which may be explored through the analysis of flight data. Flight data is the information coming from aircraft sensors, onboard computers, and other instruments that is recorded into a crash-survivable Flight Data Recorder (FDR) and occasionally also into an easily accessible Quick Access Recorder (QAR). It is already used in Flight Data Monitoring (FDM) programs, which are designed to enhance safety by identifying airlines' operational safety risks. FDM is based on the routine analysis of flight data during revenue flights. The combination of FRAM and FDM techniques seems promising.

10:45
On how to capture uncertainty in drone design verification: A critical look at the EASA requirements
PRESENTER: Selcuk Yilmaz

ABSTRACT. Drones (Unmanned Aerial Vehicles) are widely used in the industrial sector and bring a range of benefits. However, such systems are complex and prone to failures. To ensure acceptable quality before operations, design verification and EASA (European Union Aviation Safety Agency) certification are required, with acceptable risk performance being one of the aspects addressed. For drone operators considering applying for certification, EASA has developed a document (SC Light-UAS 01) giving descriptions and requirements for process and technical specifications. A premise in this is a strong foundation for the assessments. Hence, information about significant uncertainty, or weak strength of knowledge, may influence the validation process and decision-making. In this paper, the aim is to clarify the handling of uncertainty in the verification and, in particular, to discuss how strength of knowledge is and should be expressed with respect to the EASA requirements. For this discussion, we compare the standards defined by EASA with key standards used in the aviation industry, including ASTM (American Society for Testing and Materials) and ISO (International Organization for Standardization) documents. The results from the comparison are discussed, also considering experience data collected by a Norwegian drone operator. Based on the findings, some suggestions are given on how to strengthen design verification for drones.

09:45-11:00 Session 9G: S.16: Digitalisation and AI for managing risk in construction projects
09:45
Using NLP for Automated Contract Review and Risk Assessment
PRESENTER: Irem Dikmen

ABSTRACT. Contracts are legal documents that define the responsibilities of parties and allocate risks. Contractors conduct tedious contract review processes to identify the risks retained by them so that a proper risk management plan can be prepared. In this research, it is proposed that Natural Language Processing (NLP) can be used to automate contract review and facilitate the risk assessment process. To demonstrate its applicability, the FIDIC standard form of contract was selected, and all sentences were labelled with sentence type and risk ownership in order to create a training dataset for machine learning (ML) applications. The test dataset was created using a real contract of a construction project. The performance of 12 machine learning models, created by combining 5 machine learning algorithms with 6 NLP-based text vectorization techniques, was compared. The selected machine learning algorithms are logistic regression, decision tree, support vector machine, recurrent neural network, and bidirectional encoder representations from transformers (BERT). As a result, an 87% accuracy score for sentence type and an 80% accuracy score for risk ownership were achieved with the BERT model. Accuracy scores increased to 89% for sentence type classification and 83% for risk ownership classification after the implementation of the competitive voting method. The findings demonstrate that NLP- and ML-based text analysis can help contractors review construction contracts and identify risks in less time, with reduced employee workload and high accuracy. On the other hand, there are some challenges in using NLP for contract review. Developing a training dataset that can be used for all types of construction contracts is a major challenge. Ambiguity of contract clauses in non-standard contracts may cause a bottleneck in the labelling process. Creating labelled datasets requires expert knowledge of construction contracts and risk ownership. More research is necessary to expand the dataset with different types of standard forms of contract. Lastly, it is acknowledged that integrating a rule-based approach into the proposed model has the potential to increase classification performance, which can be explored in future studies.
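
A minimal sketch of contract-sentence classification with a voting ensemble is shown below, in the spirit of the approach described in the abstract. The tiny inline sentences and labels are invented placeholders, not the FIDIC training data, and the ensemble here uses simpler vectorization than the BERT model reported by the authors.

```python
# Minimal sketch: TF-IDF sentence classification with a majority-vote ensemble,
# loosely analogous to the "competitive voting" step described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import VotingClassifier

sentences = [
    "The Contractor shall bear the cost of remedying defects.",
    "The Employer shall provide access to the site.",
    "Payment shall be made within 56 days of the statement.",
    "The Contractor shall insure the works against loss or damage.",
]
# Hypothetical risk ownership labels: 1 = contractor-owned risk, 0 = otherwise
labels = [1, 0, 0, 1]

ensemble = VotingClassifier(
    estimators=[
        ("logreg", make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))),
        ("svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
        ("tree", make_pipeline(TfidfVectorizer(), DecisionTreeClassifier())),
    ],
    voting="hard",  # majority vote across models
)
ensemble.fit(sentences, labels)
print(ensemble.predict(["The Contractor shall rectify any damage at his own cost."]))
```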

10:00
A BIM-Based Risk Management Model for Construction Waste Reduction and Control
PRESENTER: Huseyin Erol

ABSTRACT. The construction industry has long been criticized for being responsible for a significant portion of waste generation worldwide. Although different approaches have been developed over the years for waste reduction and control, they are not sufficiently implemented in practice, and waste generation in the construction industry still shows an increasing trend. As a digital technology platform, Building Information Modelling (BIM) offers promising solutions for the broader adoption of waste management practices. It has the potential to be utilized as a supporting technology to manage the risk factors affecting waste generation. It can also serve as a knowledge management platform to facilitate decision-making processes in waste management. Using BIM in this way may help reduce uncertainties about waste prediction and management in construction projects. From this point of view, this research aims to develop a novel model that benefits from BIM to manage risks and knowledge related to construction waste. Accordingly, a process model was proposed, which integrates the steps of estimating material waste, performing risk assessment, analyzing the impact of waste, developing waste reduction strategies, and monitoring & controlling the waste management plan.

10:15
Using Web Crawling for Automated Country Risk Assessment
PRESENTER: Irem Dikmen

ABSTRACT. The latest and most accurate host country information is crucial for international market selection and bidding decisions. In today's digital world, this information can be gathered in a timely manner from websites on the Internet, the globally connected network system facilitating worldwide access to a wide range of information resources through a massive collection of private, public, academic, and government networks. Within construction companies, the traditional risk assessment process is generally based on searching for country information on the web and necessitates human input, which is subjective by nature and prone to misinterpretation and errors. With the help of automation, the traditional way of identifying and assessing country risks can change for the better. Web crawling algorithms, which are used for automated browsing, may enable better, more current, and more accurate data collection for country risk assessment with less human involvement, minimizing human errors and saving effort and time. Studies exploring the automated collection of information from the Internet for country risk assessment in construction are limited in the literature. In this paper, we assert that the web crawling technique can help search for data on the web and can be used to automatically develop country risk registers. Within this context, first, the web crawling algorithms will be explained, and how they can be used to automate the risk assessment process will be discussed. Then, a demonstrative example of an international construction company carrying out a country risk assessment process while preparing a bid for a highway project in Serbia will be depicted. The benefits of web crawling for automated country risk assessment will be discussed, as well as its challenges.
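
A minimal crawler sketch is given below to illustrate the idea of populating a country risk register automatically. The seed URL and keyword list are placeholders; a production crawler would need politeness rules, robots.txt handling, deduplication, and far richer text analysis than keyword matching.

```python
# Minimal sketch of a crawler that scans country-related pages for risk
# keywords and appends hits to a simple risk register.
import requests
from bs4 import BeautifulSoup

SEED_URLS = ["https://example.org/country-report"]  # placeholder seed URL
RISK_KEYWORDS = {"inflation", "sanctions", "strike", "currency", "instability"}

def crawl_for_risks(urls):
    register = []
    for url in urls:
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip unreachable pages
        text = BeautifulSoup(html, "html.parser").get_text(" ").lower()
        hits = sorted(k for k in RISK_KEYWORDS if k in text)
        if hits:
            register.append({"source": url, "keywords": hits})
    return register

print(crawl_for_risks(SEED_URLS))
```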

10:30
The Applications of Smart Wearable Sensors to Improve Safety in Construction Projects
PRESENTER: M.K.S. Al-Mhdawi

ABSTRACT. The construction industry faces significant occupational risks and has higher rates of worker illnesses, injuries, fatalities, and near-misses compared to other sectors. The advent of smart wearable sensors offers promising opportunities for real-time collection and analysis of safety data for construction workers. This literature review aims to identify research gaps in the current applications of smart sensors in construction projects. Electronic databases, including Google Scholar, ScienceDirect, and IEEE Xplore, were searched for relevant articles published in English between 2018 and 2023. Selected articles were evaluated based on their relevance, research quality, and use of smart sensors in construction site safety. By identifying these research gaps, valuable insights can be gained regarding the effective application of wearable sensor devices to enhance construction workers' safety. This knowledge can contribute to evidence-based practices and inform decision-making in construction safety. Bridging these gaps would promote the adoption of new innovative technologies and their integration into construction work environments, ultimately leading to the creation of safer conditions for construction workers.

09:45-11:00 Session 9H: System Reliability I
09:45
Preliminary reliability analysis of autonomous underwater vehicle in the polar environment based on failure mode and effects analysis and fault tree analysis
PRESENTER: Hyonjeong Noh

ABSTRACT. Autonomous underwater vehicles (AUVs) are often used in extreme environments, making reliability assessment important. To evaluate the preliminary reliability of AUVs operating in polar environments, this study utilized failure mode and effects analysis (FMEA) and fault tree analysis (FTA). FMEA was conducted based on the functional analysis of the AUV, identifying nineteen potential failure modes and thirty failure causes. The results suggest correlations between major failures, accidents, and their causes. Using the FMEA results, a fault tree diagram was constructed by defining the loss of the AUV as the top event (TE) causing the greatest loss. The study categorized basic events (BEs) into two categories: BEs related to equipment reliability, such as equipment aging, equipment failure, and manufacturing defects, and BEs caused by polar environmental factors, such as collision with ice, low-temperature environment, and thermocline. The failure probabilities of the BEs were obtained through a literature review. Using this information, a preliminary reliability analysis was conducted. This research can be useful for designing and testing AUVs that can perform reliably in extreme environments.
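
A minimal fault-tree evaluation sketch is shown below for a "loss of AUV" top event, assuming independent basic events. The two-gate structure and the basic event probabilities are illustrative placeholders, not values from the study.

```python
# Minimal sketch of a fault-tree evaluation for the "loss of AUV" top event,
# assuming independent basic events (BEs).

def or_gate(probabilities):
    """P(at least one input event occurs) under independence."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probabilities):
    """P(all input events occur) under independence."""
    p_all = 1.0
    for p in probabilities:
        p_all *= p
    return p_all

# Hypothetical grouping: equipment-related and polar-environment-related BEs
p_equipment = or_gate([1e-3, 5e-4, 2e-4])    # aging, equipment failure, manufacturing defect
p_environment = or_gate([2e-3, 1e-3, 5e-4])  # ice collision, low temperature, thermocline

p_top_event = or_gate([p_equipment, p_environment])
print(f"P(loss of AUV) = {p_top_event:.2e}")
```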

10:00
QUANTUM OPTIMIZATION FOR REDUNDANCY ALLOCATION PROBLEM CONSIDERING VARIOUS SUBSYSTEMS
PRESENTER: Isis Lins

ABSTRACT. One of the most well-known NP-hard Combinatorial Optimization (CO) problems in reliability engineering is the Redundancy Allocation Problem (RAP). The objective of the RAP is to assign redundant components in parallel to subsystems arranged in series so as to obtain the highest overall system reliability while respecting a given budget. It has, for example, been extensively researched in electrical power systems and computer networks. Several RAP configurations have already been examined, including multi-objective, bi-objective, and mono-objective formulations, as well as series, parallel, and parallel-series equipment arrangements. This study considers a mono-objective formulation of the RAP for a series-parallel system with multiple subsystems. Several approaches could be used to solve the RAP, for instance exact, exhaustive, and (meta-)heuristic methods. However, solution algorithms are advancing in new directions, and research in the Quantum Computing domain is increasing. For example, recent developments in quantum technology have made it possible to create small- and medium-scale quantum processors. D-Wave Systems' quantum annealing processors have undergone considerable research and testing in academic and commercial environments, including combinatorial problem solving. Quantum optimization algorithms require a specific formulation of the CO problem, i.e., it must be expressed as a QUBO: Quadratic Unconstrained Binary Optimization problem. This study will test instances via quantum algorithms, such as the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), using Qiskit® simulators, and via a quantum annealing algorithm on a D-Wave computer. The results provide a proof of concept of the quantum algorithms' usability and a promising way to speed up and improve RAP solutions as quantum computers advance in the coming years.
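
A sketch of a standard mono-objective series-parallel RAP formulation and of the generic QUBO form required by quantum solvers is given below; the notation is a common textbook form and is not taken from the paper.

```latex
% Mono-objective series-parallel RAP (illustrative notation): choose the number
% n_{ij} of components of type j placed in parallel in subsystem i.
\max_{n_{ij}} \; R_s = \prod_{i=1}^{s} \Big( 1 - \prod_{j=1}^{m_i} (1 - r_{ij})^{\,n_{ij}} \Big)
\quad \text{s.t.} \quad \sum_{i=1}^{s} \sum_{j=1}^{m_i} c_{ij}\, n_{ij} \le B,
\qquad n_{ij} \in \{0, 1, 2, \dots\}.

% Quantum solvers (QAOA, VQE, quantum annealing) require the problem in
% Quadratic Unconstrained Binary Optimization (QUBO) form,
\min_{x \in \{0,1\}^{N}} \; x^{\mathsf T} Q\, x ,
% so the integers n_{ij} are binary-encoded and the budget constraint is moved
% into the objective as a quadratic penalty term with a large multiplier.
```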

10:15
Identification of risky parts in a product fleet in the usage phase based on cluster analysis – case study light electric vehicle in the urban environment
PRESENTER: Alicia Puls

ABSTRACT. The increasing complexity of product functionality and manufacturing processes often leads to complex failure modes and reliability problems within the product usage phase. The mass production of consumer goods results in product fleets (like light electric vehicles) with a comparable construction revision level. A failure mode leads to a spreading failure behaviour within the product fleet and to an increasing percentage of customer complaints. The percentage of failures, respectively complaints, within the product fleet (population) in the use phase depends on the failure mode as well as the usage load profile. The goal of the original equipment manufacturer (OEM) is the early detection of the risky parts of the product fleet for the initiation of measures like garage or recall actions. The state of the art is the use of Weibull distribution models [1] in combination with candidate prognosis. The Weibull distribution model describes the failure behaviour, while the candidate prognosis (e.g. Kaplan-Meier estimator or Eckel candidate method) considers the non-failed units (because of the censored data) [2]. However, these methods presuppose that every non-failed unit of the product fleet is a potential candidate (potential damage case). This assumption is not fulfilled in every damage case; rather, the number of potential candidates depends on the failure mode and the usage load profile. The use of cluster analytics [3] allows the determination of risky parts in product fleets based on product operating data and environmental data in the usage phase. In focus are operating data like switching cycles (life span variables), force, and rounds per minute, and environmental data like temperature and location coordinates. This paper outlines an approach to determine and identify risky parts in product fleets based on cluster analytics with respect to product failure behaviour and usage load profile. The approach contains: 1) analysis of operating and failure data of a product fleet in the use phase, 2) correlation of life span variables with regard to the failure mode, 3) cluster analyses for the identification of risky parts of the product fleet and estimation of potential candidates, and 4) reliability and risk prognosis. Different algorithms for cluster analysis are compared, and their impact on the detection of risky parts within the product fleet, the similarity characteristics of products, and the determination of candidates is worked out. The theory and application of the approach are demonstrated with the help of a database of a light electric vehicle (LEV) product fleet in the usage phase; a minimal clustering sketch is given after the references below.

References: [1] W. Weibull (1951). A Statistical Distribution Function of Wide Applicability. ASME Journal of Applied Mechanics Vol.18, No. 3, pp.293-297. [2] S. Bracke (2022), Technische Zuverlässigkeit: Datenanalytik, Modellierung, Risikoprognose, 1st ed. Berlin, Heidelberg: Springer Berlin Heidelberg, Imprint: Springer Vieweg, https://doi.org/10.1007/978-3-662-65015-8 [3] K. Backhaus, B. Erichson, W. Plinke, R. Weiber (2010). Multivariate Analysis. Springer Verlag Berlin, ISBN 3-642-16490-0.
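
The following sketch illustrates step 3 of the outlined approach, clustering a fleet by operating and environmental data to flag potentially risky parts. The synthetic fleet data, the choice of two clusters, and the risky-cluster rule are placeholder assumptions for illustration only.

```python
# Minimal sketch: clustering a product fleet by operating/environmental data
# to identify a potentially risky subset of units.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: switching cycles, mean motor speed [rpm], mean ambient temperature [deg C]
fleet = np.vstack([
    rng.normal([5_000, 1_500, 15], [800, 150, 4], size=(200, 3)),    # moderate usage
    rng.normal([20_000, 2_300, 32], [2_000, 200, 3], size=(30, 3)),  # heavy usage, hot climate
])

X = StandardScaler().fit_transform(fleet)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Treat the cluster with the highest mean usage load as the "risky" subset
cluster_means = [fleet[labels == k].mean(axis=0) for k in (0, 1)]
risky = int(np.argmax([m[0] for m in cluster_means]))  # ranked by switching cycles here
print(f"Risky cluster: {risky}, size = {(labels == risky).sum()} units")
```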

10:30
Experimental Analysis of a Lithium-Ion Battery Pack after Long Service Life in a Conventional Electric Vehicle Considering Second-Life Applications
PRESENTER: Tobias Scholz

ABSTRACT. The global ramp-up of electromobility can only be sustainable if the efficient use of energy and materials is ensured. Despite steadily decreasing costs in the process chain for producing lithium-ion traction batteries since 2010, these can still account for up to 40 % of the total costs of an electric vehicle [1-2]. After initial use in the electric vehicle (EV), it may make sense to use the battery pack for another application, for example as stationary storage. This can be the case if battery capacity and/or performance are no longer suitable for the user's mobility requirements, or if the traction battery has been removed due to other damage to the EV. Standardised measurement methods using power electronics can be used to determine the capacity and internal resistance in relation to the begin of life (BoL) of the accumulator. Due to the long use in EVs, the geographical location, and the specific usage profile, all vehicles age individually. The construction and cooling of the battery in the EV, as well as production tolerances, can lead to inhomogeneous ageing within the battery pack. Accordingly, in this work, the traction battery of an eleven-year-old Peugeot iOn was tested for use as second-life (SL) storage. For this purpose, standardised measurement procedures according to ISO 12405 were tested for applicability to aged traction batteries. The traction battery was removed from the vehicle and all twelve modules were tested on a test bench. To assess inhomogeneity and a possible reconfiguration of the traction battery, an overall evaluation was then made based on cell voltage differences at pack, module, and cell level using curve fitting. The traction battery pack was found to have a remaining capacity of 51.75 % with respect to the nominal capacity, with the best module at 56.16 % and the worst at 51.75 %, and with a standard deviation of 1.33 % at module level and 1.36 % at cell level. Furthermore, it was determined that the HPPC (High Power Pulse Characterisation) test for determining the internal resistance could no longer be carried out under standardised conditions due to safety issues and had to be adapted accordingly. Therefore, this paper presents the results of the capacity measurement and the resistance test for the charging and discharging resistance, as well as the problems encountered in the measurement of aged batteries (a minimal state-of-health calculation sketch is given after the references below). Based on these findings, a categorisation for SL application or recycling could be made.

[1] BMWK-Bundesministerium für Wirtschaft und Klima, ‘Darum geht’s beim Batteriepass für Elektroautos’, Apr. 25, 2022. https://www.bmwk.de/Redaktion/DE/Artikel/Industrie/Batteriezellfertigung/batteriepass.html (accessed Jan. 03, 2023). [2] M. Wietschel et al., ‘Perspektiven des Wirtschaftsstandorts Deutschland in Zeiten zunehmender Elektromobilität’, doi: 10.24406/PUBLICA-FHG-298570.
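
The following sketch illustrates the kind of state-of-health bookkeeping described in the abstract: measured module capacities are related to a nominal begin-of-life capacity and the spread across modules is quantified. All numbers are illustrative placeholders, not the measured Peugeot iOn data, and the weakest-module rule is a simplifying assumption for a series-connected pack.

```python
# Minimal sketch of state-of-health (SoH) bookkeeping for a twelve-module pack.
import statistics

NOMINAL_CAPACITY_AH = 50.0                      # hypothetical begin-of-life capacity
module_capacities_ah = [28.1, 27.4, 26.3, 25.9, 27.0, 26.6,
                        26.1, 25.9, 27.8, 26.4, 26.9, 26.2]  # placeholder measurements

soh = [c / NOMINAL_CAPACITY_AH for c in module_capacities_ah]
pack_soh = min(soh)  # a series-connected pack is limited by its weakest module

print(f"Pack SoH  : {pack_soh:6.2%}")
print(f"Best/worst: {max(soh):6.2%} / {min(soh):6.2%}")
print(f"Std. dev. : {statistics.pstdev(soh):6.2%}")
```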

10:45
Reliability analysis of telecommunication network from customer damage perspective: An overview of two pivotal techniques

ABSTRACT. Nowadays, telecommunication networks, especially 5G, extensively permeate our daily life, and their outages exert a tremendous influence, costing carriers millions and contributing to huge economic losses for customers. The significance of network reliability is demonstrated by a large body of literature; unfortunately, this research is scattered, and reliability analysis of the network from the customer damage perspective is scarce. To fill this gap, this paper analyses two key facets and sheds light on pivotal techniques to assess the reliability of the network. In the article, a review of reliability metrics and customer damage functions is given. The reliability metrics of telecom networks are summarized according to the course of network development. Furthermore, customer damage functions for several customer sectors are also categorized into six groups to cover the parties affected and various usage scenarios. We believe that this work represents an important and missing starting point for academia and industry to understand and contribute to this wide and articulated research area.

11:00-11:30 Coffee Break
11:30-12:45 Session 10A: Prognostics and Systems Health Management I

Prognostics and Systems Health Management I

Location: Room 100/3023
11:30
Data Preparation for Precursor Identification in Unstable Approach Events
PRESENTER: Jie Yang

ABSTRACT. Unstable approaches have been identified as the main factor in most aviation accidents, making the identification of precursors to enable the prediction of such events critical for ensuring the safety and reliability of flights. However, data preparation before precursor identification is challenging due to high-dimensional, variable-length time series in a specific flight phase. In this study, we propose a pipeline for flight data preparation that offers standardized inputs for the precursor mining phase and labeled outputs for the unstable approach identification phase. The raw inputs are processed by automatic feature selection based on correlation analysis. Additionally, a uniform dynamic time warping method is proposed to transform inputs with variable lengths into equal lengths for modeling, addressing the input variability caused by different tasks and weather conditions. The effectiveness of the preparation method is validated using flight data collected from regional aircraft. The pipeline can also be extended to precursor identification for other adverse events occurring in different flight phases.
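
A minimal sketch of classic dynamic time warping (DTW) between two series of different lengths is shown below. The abstract proposes a "uniform DTW" variant to warp variable-length approach segments onto a common length; the standard DTW distance here only illustrates the underlying idea, and the traces are synthetic.

```python
# Minimal sketch of classic dynamic time warping between two univariate series.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Two radio-altitude-like approach traces of different lengths (synthetic)
ref = np.linspace(1500, 0, 120)
flight = np.linspace(1500, 0, 95) + np.random.default_rng(0).normal(0, 15, 95)
print(f"DTW distance: {dtw_distance(ref, flight):.1f}")
```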

11:45
A multi-scale LSTM with multi-head self-attention embedding mechanism for remaining useful life prediction
PRESENTER: Ting Zhu

ABSTRACT. Remaining useful life (RUL) prediction of intelligent equipment plays a crucial role in avoiding major safety accidents and substantial economic losses from degradation failure. Recently, many studies have focused on deep learning-based data-driven methods, such as the long short-term memory (LSTM) neural network, that use multi-dimensional condition signals or features to predict the RUL. However, most existing methods are unable to acquire valid temporal information from long-term time series. Moreover, input data containing much redundant information leads to imprecise RUL prediction results. To overcome these weaknesses, a multi-scale LSTM neural network with a multi-head self-attention embedding mechanism (MLSTM-MHA) is proposed in this article for RUL prediction. Firstly, the memory cell of the LSTM is divided into several parts according to different temporal trend types, such as local trends, medium trends, and long trends. Fusing all types of memory cells can capture different trend information and improve the ability of the LSTM to learn time series data. Secondly, multi-head self-attention is embedded in the forgetting gate and input gate structure of the LSTM. This attention mechanism participates in the training of the MLSTM-MHA network and adaptively recalculates the weights as network parameters, so that redundant information is assigned lower weights by the attention module. Finally, a commercial modular aero-propulsion system simulation dataset and an industrial hot strip mill rollers dataset are used to validate the superiority of the proposed method. Compared with existing typical data-driven RUL prediction methods, the proposed method has a more accurate predictive ability. The main contributions of this work are: (1) the proposed novel structure of memory cells can deal with multi-scale temporal information; and (2) the impact of redundant information is attenuated by the proposed embedded multi-head self-attention mechanism.
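
For reference, the standard scaled dot-product attention and its multi-head form are recalled below in textbook notation; the gate-level integration into the LSTM is the authors' contribution and is not reproduced here.

```latex
\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{Q K^{\mathsf T}}{\sqrt{d_k}}\right) V,
\qquad
\operatorname{MultiHead}(Q, K, V) = \big[\operatorname{head}_1; \dots; \operatorname{head}_h\big]\, W^{O},
\quad
\operatorname{head}_i = \operatorname{Attention}\!\big(Q W_i^{Q},\, K W_i^{K},\, V W_i^{V}\big).
```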

12:00
Degradation modeling and RUL estimation of feedback control systems using stochastic diffusion process
PRESENTER: Yufei Gong

ABSTRACT. In this paper, we consider a feedback control system (FCS) suffering from stochastic internal damage and investigate a methodology for modelling and forecasting this degradation at system level using only system output and reference input information. A general age- and state-dependent stochastic process embodies the inner damage. Like any system, an FCS can suffer unobservable stochastic internal damage. This internal damage, latent in the system, can be the stochastic degradation of an unknown degrading component caused by age and/or system state. Its accumulation renders the entire FCS a deteriorating FCS. A well-built controller embedded in the FCS has the ability to counteract degradation impacts and hence to accomplish fault-tolerant control. However, when the system degradation accumulates beyond a certain level, the FCS fails. The need for a degradation index for monitoring system health thus becomes obvious. To explore the system-level degradation evolution of the deteriorating FCS, in [1] the authors extract system-level degradation indices from the FCS transfer function based upon the output and reference input, which are the only available information. That is, from the observable output and input at each inspection date, they estimate the transfer function of the entire deteriorating FCS. The distance of the pole shift and the maximum gain extracted from this transfer function are then selected as two degradation indices. These indices increase with the accumulation of the degradation; they grow slowly in the early deterioration stage but rapidly when the FCS approaches failure. This reveals the high complexity of modeling the designed indices with stochastic processes commonly used in system reliability. Therefore, unlike the work in [1], the system-level degradation index is here extracted from the peak of the system step response, without considering the influence of the controller. That is, at each inspection date, the controller is removed while preserving the closed-loop structure and stability of the system, with the aim of revealing more of the effects of the inner damage on system performance. We then observe the system output and input of this controller-unconfigured closed-loop system and estimate its transfer function. The peak absolute value of the system step response obtained from this transfer function is selected as the new degradation index because it is subject to less control action. Thereafter, an age- and state-dependent stochastic differential equation (SDE) with nonlinear drift and diffusion models the evolution of this index, which reflects that of the inner damage. A standard Brownian motion (SBM) transformation method [2], which transforms a general SDE into an SBM, is then used to assess the probability density function of the RUL of the FCS for degradation prognostics (an illustrative formulation is sketched after the references below).

[1] Gong,Y., Huynh, K. T., Langeron, Y., Grall, A., Health indices construction for stochastically deteriorating feedback control systems, in: International Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing, Springer (2022), 483–494. [2] Ricciardi, L.M., On the transformation of diffusion processes into the wiener process. J Math Anal Appl 54(1) (1976), 185–199.
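
The following is an illustrative form of an age- and state-dependent degradation SDE and the associated first-passage definition of the RUL; the notation is generic and not taken verbatim from the paper.

```latex
% Degradation index X_t driven by nonlinear drift and diffusion and a
% standard Brownian motion W_t:
dX_t = \mu(X_t, t)\, dt + \sigma(X_t, t)\, dW_t .

% With failure threshold L, the RUL at inspection time t is the first-passage time
\mathrm{RUL}(t) = \inf\{\, h > 0 : X_{t+h} \ge L \mid X_t < L \,\},
% whose probability density is assessed after transforming the SDE into a
% standard Brownian motion, as in the referenced transformation method [2].
```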

12:15
Towards a probabilistic error correction approach for improved drone battery health assessment
PRESENTER: Jokin Alcibar

ABSTRACT. The revolution in robotics and autonomous systems (RAS) is unstoppable. The advance of autonomous system applications, such as autonomous transport [1, 2] and autonomous inspections [3], generates multiple benefits for industry and society, including improved driving safety in autonomous transport and improved reliability of critical and remote infrastructure through specialized robots and drones.

However, the reliability assurance of RAS is complex, as it requires incorporating advanced intelligence that should evolve according to run-time operation [4]. For inspection drones used in offshore wind turbine inspections, the challenging yet exciting operating context hampers the reliability assurance of the drones.

Different technological solutions have emerged to improve the design and reliability of drones [5]. Most of the technological configurations include a combination of mechanical and electrical components, along with onboard software intelligence to adopt decisions without direct human intervention. In this context, using the ever-increasing prognostics and health management (PHM) solutions, it is possible to develop a prognostics modelling approach for drone health monitoring through the use of machine learning and uncertainty modelling methods.

In this context, this paper presents the health estimation of drones through a novel hybrid prognostics solution, which combines physics-based and probabilistic data-driven models in an error-correction configuration. The focus of the paper is on drone battery health assessment under uncertainty, achieved by correcting physics-based model errors with data-driven probabilistic forecasting strategies. Results are validated with real data from an offshore wind inspection drone.

References

[1] Feng, S., Yan, X., Sun, H. et al. Intelligent driving intelligence test for autonomous vehicles with naturalistic and adversarial environment. Nature Communications 12, 748 (2021). https://doi.org/10.1038/s41467-021-21007-8 [2] Ellefsen, A. L., Æsøy, V., Ushakov, S., & Zhang, H. (2019). A comprehensive survey of prognostics and health management based on deep learning for autonomous ships. IEEE Transactions on Reliability, 68(2), 720-740 [3] Floreano, D., & Wood, R. J. (2015). Science, technology and the future of small autonomous drones. nature, 521(7553), 460-466. [4] Aslansefat, K., Kabir, S., Abdullatif, A., Vasudevan, V., & Papadopoulos, Y. (2021). Toward Improving Confidence in Autonomous Vehicle Software: A Study on Traffic Sign Recognition Systems. Computer, 54(8), 66-76. [5] Elghazel, W., Bahi, J., Guyeux, C., Hakem, M., Medjaher, K., & Zerhouni, N. (2015). Dependability of wireless sensor networks for industrial prognostics and health management. Computers in Industry, 68, 1-15.

12:30
ROTATING MACHINERY HEALTH STATE DIAGNOSIS THROUGH QUANTUM MACHINE LEARNING
PRESENTER: Lavínia Araújo

ABSTRACT. Monitoring and analyzing vibration in rotating machinery provides crucial information regarding internal faults created in the equipment. In this sense, academia and industry have explored Prognostics and Health Management (PHM) to improve the maintenance activities of such equipment. One of the PHM domains is the diagnosis of failure modes. Diagnosis has been explored with several traditional Machine Learning (ML) and Deep Learning (DL) methods. However, the computational scenario is heading toward new advances, which include Quantum Processing Units (QPUs). Due to its representational strength, flexibility, and promising results in terms of speed and scalability, quantum computing is a new area that has recently attracted researchers from various fields. Research centers worldwide have begun experimenting with models that lie at the intersection of machine learning and quantum computing. In this sense, a technique that has already been applied in different scenarios is Quantum Machine Learning (QML), which aims to improve conventional ML methods in terms of performance and results. This work aims to apply QML models to the fault diagnosis of rotating machinery components, such as bearings and gears, from vibration signals. Hybrid models will be applied that involve the encoding and construction of parameterized quantum circuits (PQCs) trained by a classical neural network. Among the contributions of this work, we can mention the exploration of different databases available in the literature. In addition, several failure modes will be diagnosed. This study brings models that combine the Variational Quantum Eigensolver (VQE) framework with three different entanglement gates (CNOT, CZ, and iSWAP), as well as varying the number of quantum circuit layers. The results will be compared with classical ML models, for example, SVM and MLP. Despite the limitations of quantum environments, the models are promising and can be used to support maintenance decisions within the PHM context.

12:45
Deep learning-based method optimized by an improved variational sparrow search algorithm for remaining useful life prediction of rolling bearings

ABSTRACT. To deal with the problem of low prediction accuracy and poor generalization ability in the prediction of the remaining useful life (RUL) of bearings, a novel prediction method based on an improved variational sparrow search algorithm (ISSA) and long short-term memory (LSTM) is proposed. The hyperparameters of the LSTM are automatically optimized using the ISSA with dynamic search ability. The trained ISSA-LSTM model is then used to predict the RUL of bearings. The generalization ability and performance of the proposed method are verified on the PRONOSTIA bearing datasets from the IEEE PHM 2012 data challenge and the XJTU-SY rolling bearing datasets. The ISSA-LSTM model outperforms the other benchmark methods in bearing RUL prediction, which demonstrates the better performance of the proposed method. Long short-term memory (LSTM) networks are a special kind of RNN, capable of learning long-term dependencies; LSTM is used to cope with time series and with the problem of vanishing and exploding gradients. The improved variational sparrow search algorithm (ISSA) is a variant of the sparrow search algorithm (SSA). The ISSA makes two main contributions: (1) to cope with the difficulty of global search caused by the small number of explorers in the early stage, and the need for local search caused by an excessive number of explorers and a relatively small number of followers in the later stage, an explorer-follower number adaptive adjustment strategy is used to automatically adjust the number of explorers and followers and improve accuracy; (2) to escape local optima and expand the local search ability, Cauchy mutation and Tent chaos disturbance are introduced to avoid populations that are too concentrated or too dispersed. The detailed steps of ISSA-LSTM are as follows (a minimal sketch of the first two steps is given after this abstract): 1. Extract frequency-domain features processed by the fast Fourier transform (FFT) method and obtain the amplitude signal from the bearing vibration signals. 2. Normalize the signal and map the remaining useful life (RUL) of the bearing to the range between 0 and 1. 3. Construct the ISSA-LSTM prediction model and optimize its hyperparameters with the ISSA. 4. Input the amplitude signal obtained by FFT into the ISSA-LSTM model for training and prediction, and smooth the fluctuations of the results to highlight the degradation trend. 5. Use root mean square error (RMSE), mean absolute error (MAE), and a scoring function as the evaluation metrics to assess the performance, and compare the proposed model (ISSA-LSTM) with other benchmark methods, i.e. SSA-LSTM, SSA-ELM, ELM, and SVM, to verify the results.
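
The sketch below illustrates preprocessing steps 1-2 listed above: FFT amplitude features from a vibration segment, normalization, and mapping of a RUL label to [0, 1]. The signal, sampling rate, and run-to-failure times are synthetic placeholders; the ISSA-optimized LSTM itself is not reproduced.

```python
# Minimal sketch of steps 1-2: FFT amplitude extraction and normalization.
import numpy as np

fs = 25_600                      # hypothetical sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

# Step 1: frequency-domain amplitude spectrum via FFT
amplitude = np.abs(np.fft.rfft(signal)) / signal.size

# Step 2: min-max normalization of the features and mapping of the RUL label to [0, 1]
features = (amplitude - amplitude.min()) / (amplitude.max() - amplitude.min())
total_life_s, elapsed_s = 28_000.0, 12_500.0           # placeholder run-to-failure times
rul_label = (total_life_s - elapsed_s) / total_life_s  # normalized RUL label

print(features.shape, round(rul_label, 3))
```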

11:30-12:45 Session 10B: Human Factors and Human Reliability V
11:30
Estimating Performance Time of FLEX Implementation Based on Staffing Level Considering Multi-Unit Accidents
PRESENTER: Yochan Kim

ABSTRACT. After the Fukushima Daiichi accident, many nuclear power plants are developing Diverse and Flexible Coping Strategies (FLEX) plans and implementing portable FLEX equipment to enhance defense-in-depth for beyond-design-basis events such as extended loss of AC power and loss of ultimate heat sink. To understand the feasibility and usefulness of the actions utilizing the portable equipment, it is beneficial to probabilistically estimate the reliability of the FLEX equipment implementation. In beyond-design-basis events, there are several distinctive considerations compared with the reliability evaluation of main control room operators in general emergency situations. For example, (1) communication failure relevant to communication means, (2) cognitive errors related to organizational factors and different types of procedures, (3) performance time utilizing mobile equipment, and (4) a possible lack of staff due to environmental issues and accessibility should be counted during the human reliability assessment for the FLEX actions. In this study, we aim to present a human reliability model based on the human performance time of portable devices according to the composition of regular and minimum staffing. Like the time-based reliability models in CBDT, EMBRACE, IDHEAS, and other methods regarding severe accidents, this model estimates the failure probability as the probability that the time required will exceed the time available. The time available is typically determined from the thermal-hydraulic characteristics of the plant and its accident. The time required is quantified by the characteristics of the tasks to be performed for the given events and the expected performance time for each task. However, the number of staff available for the portable equipment is a very important factor in determining the time required. In particular, in the case of multi-unit accidents, the available manpower for each plant varies depending on whether radioactive materials are emitted from a specific power plant, what kind of external initiating events occurred, how many on-site employees are summoned, and how the decision-making organization allocates employees. Therefore, this paper provides a procedure for calculating the feasibility and the expected time required for a human event according to the number of employees mobilized for the portable devices. In addition, this paper describes quantitative statistics that can be referenced when estimating the time to operate FLEX equipment. Since time data for FLEX equipment are not sufficient, this study analyzed the response times of firefighters for reference. These data and procedures are expected to be useful for estimating the time-based reliability of using FLEX equipment in the future.
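
A minimal sketch of the time-based reliability idea is given below: the human error probability (HEP) is estimated as the probability that the time required exceeds the time available, with the required time stretched when fewer staff are mobilized. The distributions, parameters, and staffing rule are illustrative assumptions only, not values from the study.

```python
# Minimal Monte Carlo sketch: HEP = P(time required > time available).
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

time_available_min = 90.0            # from thermal-hydraulic analysis (placeholder)
nominal_crew, actual_crew = 6, 4     # regular vs. mobilized staffing (placeholder)

# Lognormal task-time model; understaffing stretches the median time required
median, sigma = 55.0, 0.35
staffing_factor = nominal_crew / actual_crew
time_required = rng.lognormal(np.log(median * staffing_factor), sigma, N)

hep = np.mean(time_required > time_available_min)
print(f"Estimated HEP for FLEX deployment: {hep:.3f}")
```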

11:45
Experimental Set-Up for Evaluating Operator Performance through Operations Control Room Simulation in the Oil and Gas Industry
PRESENTER: Plínio Ramos

ABSTRACT. Human factors have been identified as major causes and contributing factors of serious accidents and can affect safety performance in the process industry. In recent years, the oil and gas (O&G) industry has initiated active discussions about monitoring fatigue and detecting operator drowsiness, as well as their relationship to industry-associated risks. In these industries, control room operators must respond to abnormal situations through a variety of cognitively demanding activities, e.g. monitoring, detection, diagnosis, and response. Fatigue, in turn, can slow operator reaction times, reduce attention or concentration, and impair judgment. Thus, the situational awareness and safety awareness of the operators must be ensured throughout the entire operation. However, the operator's performance and prompt response to a situation can be impacted by factors related to “screen fatigue” and other factors, such as boredom due to long monitoring times and the execution of low-complexity tasks. Although these factors are well discussed in the Human Factors Engineering literature, their objective impact on operator performance is not fully understood. Therefore, experimental studies with simulators have been increasingly adopted to assess the factors that influence human error. In particular, low-fidelity and small-scale simulators are an inexpensive alternative to high-fidelity control room simulators, with the added advantage of faster experimental setup. This article discusses the requirements for such a simulator to assess the impact of monitoring time and task complexity on operator performance in the context of an oil and gas industry operations control room. It reviews the literature on human performance in control rooms and the use of simulators for analyzing human error. It then proposes a non-intrusive method to quantify the impact of monitoring time and task complexity through the operator's sleepiness level. A model for sleepiness detection using artificial intelligence is presented, and its potential benefits and limitations for the experimental setup are discussed. The key elements of the proposed simulation are the allocation of tasks and responsibilities to the operator with temporary and unexpected workload peaks, referring to control room and emergency scenarios, and workstation configurations that include installation-specific overview screens. For drowsiness detection, computer vision techniques and deep learning models will be used. Performance measurements and observations such as reaction time, action correctness, and equipment/instrument correctness will be some outputs of the simulation. The findings provide a roadmap for future experiments that can contribute to an objective and quantifiable assessment of the factors that influence human performance and thus further the discussion of control room functioning and its reliability requirements.

12:00
Understanding the effect of broadband and babble noise on operators’ cognition

ABSTRACT. Background: Ambient or background noise present in many work environments and in safety-critical roles (e.g., aviation) has the potential to adversely affect operator performance. For example, noise generated from aircraft engines (i.e., broadband noise) during flight has been shown to adversely affect recognition memory (Molesworth et al., 2014). Babble noise, a function of many speakers in a closed environment (e.g., an office), has been shown to adversely affect sustained attention and memory (Smith-Jackson & Klein, 2009).

Aim: The aim of the present study is to investigate the effect of two different types of noise, babble noise and broadband noise, at different signal-to-noise ratio (SNR) on two aspects of cognition, namely working memory and recognition memory. The level of 65dBA was chosen as representative of a reasonably noisy workplace and midway between the ‘acceptable’ office noise level of 45 dBA and the workplace noise exposure limit with respect to hearing damage of 85 dBA.

Method: The study contained two experiments. The first experiment used a Modified Rhyme Test, involved 40 participants, and was designed to eliminate any potential extraneous variables such as masking (i.e., where one sound covers another). The second experiment (repeated measures) used two different tests to examine the cognitive effect of noise on working memory (via the Alphabet Span task) and recognition memory (via a Cued-Recall task), only in the cases where it was certain that masking could not account for any differences observed. Eighteen participants completed the second experiment.

Results: The results of the first experiment revealed that masking occurred in conditions with a SNR of -10dBA. Therefore, in the second experiment, the noise effect was tested at 0dBA and -5dBA SNR levels only. The results revealed that, for recognition memory, recall performance decreased as the target signal became more difficult to hear for both noise types. A similar effect was noted for working memory in the presence of broadband noise, but recall performance was unchanged at the two SNR levels in the presence of babble noise. Subjective responses regarding annoyance and perceived effect on performance reflected the results of the objective tests.

Significance: These findings demonstrate that the detrimental effect of different noises on memory is not equal. Understanding these differences is an important first step in designing systems and controls to manage their effect in the workplace. In safety critical workplaces such as aviating, understanding these differences is important to maintain high levels of safety.

References

Molesworth, B. R. C., Burgess, M., Gunnell, B., Löffler, D., & Venjakob, A. (2014). The effect on recognition memory of noise cancelling headphones in a noisy environment with native and non-native speakers. Noise & Health, 16(71), 240-247. DOI:10.4103/1463-1741.137062 Smith-Jackson, & Klein, K. W. (2009). Open-plan offices: Task performance and mental workload. Journal of Environmental Psychology, 29, 279-289. DOI: 10.1016/j.jenvp.2008.09.002

12:15
Developing a real-time mental workload assessment method of Air Traffic Controllers based on behavioral measures.

ABSTRACT. Air Traffic Controllers' (ATCos) Mental Workload (MW) is likely to remain the single greatest functional limitation on the capacity of the ATM system (de Frutos et al. 2019). In the ATM domain, attempts have been made to estimate and monitor MW using subjective, physiological, and behavioral measures. However, there is currently no accepted single method deployed in the industry to assess and monitor MW, fatigue, and their effect on performance, even though the industry has now issued a requirement for active fatigue risk management processes as suggested by the International Civil Aviation Organization (ICAO) (Rangan et al. 2020). The disadvantages highlighted in the state of the art for subjective and physiological measures relate to how obtrusive and impractical they can be to use in real work scenarios (Longo and Leva, 2018). First, they both interfere with ATCos' performance: 1) online measurement of MW requires attentional resources to be focused on introspection every time subjective reports of MW are requested, and 2) most physiological measures need to be collected using intrusive equipment, which would ultimately interfere with task performance and even with experienced MW (Moray, 2013). Secondly, one outstanding feature of subjective measures is that they may be distorted (Hancock, 2017). For these reasons, the industry and scientific community need to develop a MW calculation model that can be based on an assessment of ATCos' recordable behavioral measures (considering ATCos' communication patterns and their interactions with the ATM systems) and that can be deployed unobtrusively in an ecologically valid environment. The main advantage of this model is that it overcomes current limitations, primarily because communication patterns and interactions with the ATM systems can be analyzed indirectly (and in real time) through the logs of the ATM system automation, with equipment that is already an integrated tool of ATCos' tasks; in addition, those behavioral measures cannot be distorted. The aim of this research work is to develop a computational model of ATCos' MW based on the behavioural data recordable through the ATM automation, namely ATCos' communication patterns and their recorded clearance actions and choices. This will enable the industry to develop and test supporting mechanisms for task complexity variations, mitigating the disastrous effect of drops in human performance.

12:30
Preliminary Safety-Critical Scenario Analysis of Semi-Automated Train Operation
PRESENTER: Yang Sun

ABSTRACT. SNCF plans to introduce the automated train operation (ATO) system into manually driven trains to make train driving more efficient, eco-friendly, and precise. Human roles evolve with this increasing automation of the system. With this transition to automated trains, the organizational and human factors and railway safety must be re-examined. The most important change concerns the tasks of train drivers, and SNCF is conducting safety studies to assess the impact of ATO on train safety.

Preliminary Risk Analysis studies have been performed at SNCF, but they are theoretical and have not been validated by simulation. In addition, in the current literature there are few works devoted to ATO safety analysis, and none of them really integrate the human factor in the loop.

As a contribution to the state of the art, we propose to use the PRODEC method [1] to achieve a safety-orientated human-centered design. This method is based on the comparison between declarative and procedural scenarios. Declarative scenarios are built by the system designers, procedural ones are obtained with human-in-the-loop simulations. By performing this comparison, we observe the bias between tasks described in the scenarios and activities realized in the simulated scenarios. The bias will uncover emerging features that can be dangerous to safety.

To implement the method, one must first construct and select scenarios. This is the main contribution of this paper. We propose to present our methodology and results based on:

1) The analysis of the incidents that occurred in the French railway system during the past years (https://ressources.data.sncf.com/explore/dataset/incidents-securite/information/?sort=date) with their classification, occurrence, and severity.

2) Expert judgement. Today's train drivers are trained by performing different scenarios on simulators and in real driving cabins. The experts have designed the procedures that guide railway operators in different situations. These procedures are made according to the preliminary risk analysis based on the technical system's specifications and feedback from the field. In these scenarios, the train drivers are supposed to follow pre-established procedures, deal with all possible situations while driving, and ensure the safety of the train. SNCF thus owns a huge database of scenarios, and experts can help select those best suited to assessing the impact of ATO on train safety.

By doing so, we aim to define the human and organizational factors, or the technical elements, that have a significant impact on system safety, and to build the most relevant scenarios.

As a result of our analysis, we have selected and built 12 scenarios that will be presented in the paper.

[1] G. A. Boy and C. Morel, “The machine as a partner: Human-machine teaming design using the PRODEC method,” WORK, vol. 73, no. s1, p. S15, Oct. 2022, doi: 10.3233/WOR-220268.

12:45
Perceived usefulness and usability of overview displays for nuclear control rooms
PRESENTER: Alf Ove Braseth

ABSTRACT. In the nuclear industry, both digital and hybrid (a mix of analogue and digital controls) main control rooms are becoming common in new builds and modernization projects. One example is overview displays, either as human-scaled large displays or as smaller overview displays supporting specific systems or tasks. Even though overview displays have been present in the industry for several years, there is still a lack of empirical data documenting the impact of overview displays on human performance. An analysis of current industry practices reveals that there is high variation in the motivations for integrating overview displays, their design concepts, and their specific implementations. It is therefore difficult to assess and make a clear analysis of “overview displays” as an overarching design feature, since each specific implementation is intrinsically different. We argue that the discriminative impact of overview displays on human performance is highly dependent on the specific characteristics of each display, including its design and operational implementation. In this paper, through three studies in different control room settings with three different versions of overview displays, we explore this complex relationship by analysing the connections between display usability, its reported use, and its perceived usefulness by the operating crews. We collected data with 2 crews of 4 licensed operators who experienced complex full-scale scenarios at a research simulator (each crew with a specific version of the control interfaces, both workstation displays and overview display), as well as data from 6 crews of 2 operators each in a research reactor main control room who had long-term experience with an overview display. All participants responded to the Systems Usability Scale (SUS), as well as a questionnaire on the use of the overview displays designed specifically for the studies. This questionnaire focused on the concrete experience of using the overview displays and the perceived support from the overview display in different operational situations. We are currently analysing the data, and we plan to further elaborate on the findings from the questionnaires through the integration of comments from the participants in debrief group interviews. Preliminary results show a high usability rating for all the overview displays used in these studies (average ratings in the 95th percentile for SUS). Concurrently, the operators also gave high ratings to the usefulness of the overview displays, as well as to the perceived support from the displays for both planned and alarm-driven types of tasks. The discussions during the debrief interviews corroborated these findings and revealed that the crews used the overview displays as a communication tool to plan actions and achieve shared awareness of plant status. The findings will be discussed in the context of the impact overview displays have on human performance in nuclear main control rooms. Limitations of the presented studies will be highlighted. Notably, all evaluated displays led to a very high usability score, thus constraining the variety of the ratings and compromising hypothesis testing in the current sample.

11:30-12:45 Session 10C: Energy Transition to Net-Zero Workshop on Reliability, Risk and Resilience - Part V
11:30
Connection capability of distributed generation units in a power system under Active Network Management
PRESENTER: Juan Sun

ABSTRACT. In the energy transition, more decentralized production is connected to the medium-voltage (MV) power system. Consequently, when the power produced is not consumed locally, reverse power flows are injected into the high-voltage (HV) grid. More frequent line congestions and voltage problems might then occur. To overcome this issue and optimize both the use of the present grid infrastructure and the number of connected Distributed Generation (DG) units, Active Network Management (ANM) can be envisioned [1]. It aims to control the injection of energy produced by DG units into the grid, in almost real time, by possibly curtailing their production in case of grid congestion, in order to keep the grid secure.

The impact of ANM on each DG unit can be measured by its Utilization Factor (UF), which accounts for the expected curtailment on top of its more classical Capacity Factor (CF), i.e. the ratio of the actual production to the ideal production of the DG unit at its installed power. Given the variability of DG, a probabilistic methodology is used to propagate the uncertainties on loads and generations through the grid model, while using an Optimal Power Flow (OPF) to estimate the most economical curtailment resolving an observed congestion case.

As a direct Monte Carlo sampling of the input distributions is prohibitively time-consuming, an alternative approach is proposed. Possible congestions can be detected, not on the basis of the detailed generations and loads, but on the net balance (i.e. the algebraic sum of all productions minus the total load) at each node of the grid. A congestion-free domain in the net-balance space can then be identified. By contrast, the optimal curtailment of DG units must be calculated using the detailed generations and loads associated with any unsafe net-balance variant. This is achieved with a targeted (systematic) importance sampling of only those detailed variants (all individual productions and loads) lying outside the congestion-free domain [2]. Risk indices can therefore be calculated efficiently.
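The two-stage idea described above can be illustrated with a minimal sketch: cheap screening of sampled variants in net-balance space, with the detailed (OPF-based) curtailment computed only for unsafe variants. The distributions, the congestion-free bound and the curtailment stub are placeholders, not the authors' grid model, and plain Monte Carlo stands in here for the systematic importance sampling of the unsafe region.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 3
NET_BALANCE_LIMIT = 4.0   # placeholder congestion-free bound per node (MW)

def sample_variant():
    """Draw one detailed variant: per-node DG production and load (illustrative)."""
    production = rng.weibull(2.0, N_NODES) * 3.0   # stand-in for wind/PV output
    load = rng.normal(2.0, 0.5, N_NODES).clip(min=0.0)
    return production, load

def congestion_free(net_balance):
    """Screening in net-balance space: no detailed power flow needed here."""
    return np.all(np.abs(net_balance) <= NET_BALANCE_LIMIT)

def curtailment_cost(production, load):
    """Stub standing in for the detailed (OPF-based) curtailment of an unsafe variant."""
    excess = np.maximum(production - load - NET_BALANCE_LIMIT, 0.0)
    return excess.sum()   # curtailed-energy proxy, not an actual OPF result

n_samples, n_unsafe, expected_curtailment = 20000, 0, 0.0
for _ in range(n_samples):
    production, load = sample_variant()
    net_balance = production - load
    if not congestion_free(net_balance):
        n_unsafe += 1
        expected_curtailment += curtailment_cost(production, load)

print(f"Unsafe-variant probability: {n_unsafe / n_samples:.4f}")
print(f"Expected curtailment proxy: {expected_curtailment / n_samples:.4f} MW")
```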

The current work further develops this approach in order to optimize the connected capacity in a MV grid, by selecting the most relevant nodes to which new DG units should be connected. This is illustrated on a typical test grid.
[1] P. Järventausta, S. Repo, A. Rautiainen, J. Partanen, Smart grid power system control in distributed generation environment. 6th IFAC Symposium on Power Plants and Power Systems Control, 2009, Tampere (Finland).
[2] P.E. Labeau, F. Faghihi, J.C. Maun, V. De Wilde, A. Vergnol, 2014. Mathematics of PRA applied to Distributed Generation curtailment in saturated grids. European Safety and Reliability Conference (ESREL 2014), Wroclaw (Poland).

11:45
Optimal operational planning of wind turbine fatigue progression under stochastic wind uncertainty
PRESENTER: Niklas Requate

ABSTRACT. Fatigue damage is one of the major design drivers for structural components of wind turbines. These machines are required to operate continuously over a lifetime of more than 20 years, during which the fatigue damage progression is influenced by control-induced loads and site-specific environmental conditions. Loads can be influenced in various ways through the wind turbine controller, e.g., by derating the power or by operating in partial overload. Since fatigue progresses slowly over the lifetime, each component or even each failure mode has an individual fatigue budget that can be utilized optimally. To obtain the maximum long-term benefit from each individual fatigue budget, the trade-off between energy production and load-induced damage needs to be balanced over the complete life cycle. For each failure mode, we can compute an optimal long-term operational plan that distributes the damage contribution optimally over the entire or remaining lifetime. This is conducted using deterministic assumptions about wind conditions. In this work, we use uncertainties in the annual wind distribution parameters as the basis for a probabilistic assessment of the lifetime of each component. The component lifetimes are then combined using a reliability model, which yields the lifetime of the entire wind turbine system. The impact of individual component optimizations on overall system reliability is evaluated. Results show that all approaches yield a potential for extended lifetime; however, the margin and the secondary impact differ greatly. At the same time, the span of probabilistic lifetimes emphasizes that uncertainty has a significant impact on the selection of an optimal strategy. Our findings provide a step towards probabilistic, reliability-based long-term operational planning for an entire wind turbine system composed of multiple components.
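A minimal sketch of the probabilistic step described above, assuming a placeholder damage-rate model and illustrative uncertainty on the annual Weibull wind parameters: the uncertain parameters are sampled, the annual fatigue damage is integrated over the wind distribution, and the fatigue budget (Miner sum of one) is converted into a lifetime distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def annual_damage(scale, shape, n_bins=200):
    """Illustrative annual fatigue damage: a damage-rate proxy ~ v**3 integrated
    over the Weibull wind-speed distribution with the given scale/shape."""
    v = np.linspace(0.5, 30.0, n_bins)
    pdf = (shape / scale) * (v / scale) ** (shape - 1) * np.exp(-(v / scale) ** shape)
    damage_rate = 5e-5 * v ** 3            # placeholder constant and exponent
    return np.sum(damage_rate * pdf) * (v[1] - v[0])

# Placeholder uncertainty on the annual wind climate
n_mc = 5000
scales = rng.normal(8.0, 0.8, n_mc)        # Weibull scale A [m/s]
shapes = rng.normal(2.0, 0.15, n_mc)       # Weibull shape k [-]

# Lifetime in years until the fatigue budget (Miner sum = 1) is exhausted
lifetimes = 1.0 / np.array([annual_damage(a, k) for a, k in zip(scales, shapes)])

print(f"Median lifetime: {np.median(lifetimes):.1f} years")
print(f"5th-95th percentile: {np.percentile(lifetimes, 5):.1f} - "
      f"{np.percentile(lifetimes, 95):.1f} years")
```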

11:57
From Expert Judgment at the Early Design Stage to Quantitative Resilience Curves Using Fuzzy AHP and Dynamic Bayesian Networks

ABSTRACT. The early design stage is the most effective time to introduce cost-effective measures to increase the resilience of engineering systems against accidents and disruptive events; yet, since current resilience assessment methodologies require sufficient knowledge of system characteristics and accident scenarios, resilient design is usually overlooked at this stage. This paper proposes a practical methodology for resilience assessment at the early design stage and links the qualitative assessment of system characteristics and expert judgment to a dynamic, quantitative resilience assessment. In particular, the resilient characteristics of a system are identified and evaluated by experts. Then, the Fuzzy Analytic Hierarchy Process (AHP) is used to evaluate the contributing factors of the resilient design. Finally, a Dynamic Bayesian Network (DBN) is used to build a dynamic mathematical model that represents the system response to disruptive events and integrates the identified system characteristics and expert judgment into a model that quantifies the dynamic resilience curve. The application of the methodology is demonstrated on a carbon capture and storage (CCS) system against a loss-of-containment accident. This paper presents a feasible methodology for industry to address system resilience at the early design stage and helps design safer, more reliable and more available systems.
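A minimal sketch of how a discrete-time state-transition model can turn expert-informed transition weights into a quantitative resilience curve after a disruption; the three states, the transition probabilities and the performance levels are illustrative placeholders, and a full DBN with fuzzy-AHP-derived inputs would replace this toy Markov chain.

```python
import numpy as np

# States of the system after a loss-of-containment-type disruption (illustrative):
# 0 = Failed, 1 = Degraded, 2 = Fully functional.
# The transition matrix plays the role of the DBN's inter-slice dependencies;
# in the methodology these would be informed by fuzzy-AHP-weighted expert judgment.
P = np.array([
    [0.60, 0.35, 0.05],   # from Failed:     mostly stays failed, slowly recovers
    [0.05, 0.60, 0.35],   # from Degraded:   recovers towards full function
    [0.02, 0.08, 0.90],   # from Functional: small chance of further degradation
])
performance = np.array([0.0, 0.5, 1.0])   # performance level of each state

state = np.array([1.0, 0.0, 0.0])          # the disruption drives the system to Failed
curve = []
for t in range(25):                        # time slices after the disruption
    curve.append(float(state @ performance))
    state = state @ P

resilience = np.mean(curve)                # area under the normalised curve
print("Resilience curve:", [round(x, 2) for x in curve[:10]], "...")
print(f"Resilience metric (mean performance over horizon): {resilience:.2f}")
```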

12:12
Knee-deep in two “bathtubs”: Extending Holling’s Ecological Resilience Concept for Critical Infrastructure Modeling and Applying it to an Offshore Wind Farm
PRESENTER: Lukas Halekotte

ABSTRACT. Multistability is a common phenomenon which naturally occurs in complex networks. Many engineered infrastructures can be represented as complex networks of interacting components and sub-systems which possess the tendency to exhibit multistability. For analyzing infrastructure resilience, it thus seems fitting to consider a conceptual framework which incorporates the phenomenon of multistability; the concept of ecological resilience provides such a framework. However, we note that ecological resilience misses some aspects of infrastructure resilience. We therefore propose to complement the concept with two model extensions which consider the generation of perturbations to the infrastructure service and the remedial actions of service restoration after regime shifts. The result is a three-layer framework for modeling infrastructure resilience. We demonstrate this framework in an exemplary disturbance scenario in an offshore wind farm. Based on this use case, we further demonstrate that infrastructure resilience analysis can benefit substantially from the notion of multistability.
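To make the notion concrete, the following toy sketch shows a one-dimensional bistable system (two stable regimes), a perturbation that pushes the service across the basin boundary, and a later remedial action that restores it; this is only an illustration of multistability and of the two proposed extensions, not the offshore wind farm model used in the paper.

```python
import numpy as np

# Minimal bistable "service" model (a stand-in for multistability in an
# infrastructure network): dx/dt = x - x^3 has two stable regimes at
# x = +1 (service delivered) and x = -1 (degraded regime).
def drift(x):
    return x - x**3

dt, steps = 0.01, 4000
x = np.empty(steps)
x[0] = 1.0                       # start in the functional regime

for k in range(1, steps):
    t = k * dt
    x[k] = x[k-1] + dt * drift(x[k-1])
    if abs(t - 10.0) < dt / 2:   # extension 1: a perturbation pushes the system
        x[k] -= 2.2              #              across the basin boundary
    if abs(t - 25.0) < dt / 2:   # extension 2: a remedial action restores service
        x[k] += 2.2              #              after the regime shift is detected

print(f"Regime before perturbation: {x[int(5/dt)]:+.2f}")
print(f"Regime after perturbation:  {x[int(20/dt)]:+.2f}")
print(f"Regime after restoration:   {x[int(35/dt)]:+.2f}")
```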

12:27
Korea SDP Regulatory Process based on Regulatory PSA Model
PRESENTER: Dongwon Lee

ABSTRACT. Recently, public interest in nuclear safety has been heightened by the severe accident at the Fukushima nuclear power plant, the station blackout (SBO) event in Korea, and the forgery of quality documents for safety-related facilities. As a result, there is a need to comprehensively evaluate the safety level of nuclear power plants (NPPs) in the course of safety issue investigations, reactor shutdown accident investigations, and regular inspections, and to reflect the results in the regulatory decision-making process.

The Korea Atomic Energy Research Institute (KAERI) and the Korea Institute of Nuclear Safety (KINS) developed a PSA model (MPAS, Multi-purpose Probabilistic Analysis of Safety) for regulatory verification for each representative operating NPP type (Westinghouse 600, Westinghouse 900, OPR1000, CANDU, Framatome) through mid- to long-term research (2007-2012). These models were updated with the latest reliability data (2015-2018), and the remaining MPAS models have recently been developed (2020-2021). In order to improve the MPAS models, the representative NPPs developed in the earlier research work were used to confirm reliability data improvements reflecting plant-specific data, design changes, and procedure changes. In addition, errors in some event trees and fault trees were identified through verification and corrected. At that time, KINS developed a Risk-Informed Periodic Inspection (RIPI) program that improved the existing periodic inspection items by utilizing the PSA model for regulatory verification, and partially reflected the results in the periodic inspection guidelines. In order to develop Significance Determination Process (SDP) evaluation factors and evaluation methods, ways of using risk models for regulatory verification were identified through case studies of foreign regulatory agencies in the US, Japan, and Europe. Based on this, the risk model for regulatory verification is used to evaluate the significance of events in the investigation of nuclear power plants.

Through this study, KINS has created an opportunity to switch to a regulatory oversight system that systematically monitors and manages NPPs with degraded safety and makes regulatory decisions using information acquired through PSA in addition to deterministic information, and intends to make this system available for use. Specifically, in order to refer to PSA-based information in regulatory decision-making, SDP evaluation systems were developed based on the MPAS models: a first-phase screening evaluation with web-SEM and a second-phase detailed evaluation with the PC-based module RYAN. In order to confirm the validity of the SDP evaluation results and the reliability of the evaluation model, a pilot evaluation was conducted using the findings from the regular inspection of an existing OPR1000 nuclear power plant, and based on the results, improvements to the SDP evaluation procedure were identified and incorporated.

11:30-12:45 Session 10D: S.33: Challenges and Opportunities for Risk and Resilience of Industrial Plants in the Management of Socio-Technological Systems I

The goal of this special session is to explore the relation between the resilience of an industrial plant and the human factors in a socio-technical setting. Deepening the resilience analysis is strongly linked to human factors, control theory and safety engineering, without excluding integration with the new industrial era. Precisely from this consideration arises the need to understand how people can adapt to an environment full of dangers and pitfalls and, in other words, how people who are part of a system can exhibit "resilient characteristics", for example by modifying the very resilience of the system through the new technologies. The contributions presented will describe how the whole industrial system and the human factor are connected to resilience.

Location: Room 2A/2065
11:30
HOW TO MEASURE ORGANIZATIONAL RESILIENCE THROUGH LEADING AND LAGGING INDICATORS OF THE SAFETY MANAGEMENT SYSTEM PERFORMANCE
PRESENTER: Marianna Madonna

ABSTRACT. A study of the scientific literature on the performance indicators of safety management systems shows that, to date, there is no standardization of indicators capable of providing a systemic assessment. However, several indicators have been proposed in the literature to evaluate the performance of a safety management system: lagging, monitoring and leading indicators. Lagging indicators are result indicators in terms of the consequences deriving from situational and contextual factors. Monitoring and leading indicators, on the other hand, serve to direct (guide) the activity of an organization towards proactive safety. Monitoring indicators provide a view of the dynamics of the organization in terms of practices, skills and motivation of staff, or the organizational potential for safety. In a previous paper, the authors proposed a correlation table between the elements of a safety management system according to UNI EN ISO 45001 and the elements that characterize a resilient organization according to ISO 22316. In this paper, the authors identify, for each correlation element, the leading indicators used to monitor the same aspects of the safety management system, since these provide useful information to "anticipate" the behaviour of the system. Therefore, based on the correlation table, these indicators should be able to provide indications on the resilience of the organization.

11:45
THE CHALLENGE OF TODAY’S INDUSTRY: SAFETY AND RELIABILITY IN RESILIENT COMPLEX SYSTEMS
PRESENTER: Silvia Carra

ABSTRACT. In recent years, industry has been affected by the simultaneous introduction of 4.0 smart technologies and new elements of complexity, mainly related to changed interactions between humans and machines. This change has made safeguarding safety and reliability an increasingly central and challenging topic. At the same time, complex industrial systems need to be resilient, that is, able to absorb shocks and face changes or uncertainties through adequate reaction and adaptation. A vision of each system as a whole, supported by model-based integral approaches [1], can generally help. Technologies introduced by Industry 4.0 can increase workers' safety and improve the reliability and resilience of processes against system failures. For example, the system's adaptive capacity can be measured and improved through machine-learning techniques [2]. However, new technologies have also introduced unexpected risks. Identifying the critical components of an industrial system and considering its different elements with their dependencies can allow safety advantages and risks to be balanced while preserving resilience. Previous literature has already shown that different typologies of resilience can be distinguished at different company levels (from frontline activities to macro-level organization) [3]. The authors have recently proposed a multi-level representation of the company's decisional activity, expressly dedicated to establishing the feasibility of a collaborative use of machines [4]. In the present study, these concepts are taken up and extended in order to describe, along such a multi-level company representation, (i) the main 4.0 technological solutions for safety, (ii) their possible contraindications (e.g. new emerging risks) and (iii) the corresponding effects on the resilience capacity of the overall system. The elements of the proposed model are first detailed for each identified company level; the results are then also synthesized in table form.

Partial bibliography

1. Salzano, E., Di Nardo, M., Gallo, M., Oropallo, E. and L.C. Santillo (2014). "The application of system dynamics to an industrial plant in the perspective of process resilience engineering". Chemical Engineering Transactions, 36, 457-462.
2. Salehi, V., Veitch, B. and M. Musharraf (2020). "Measuring and improving adaptive capacity in resilient systems by means of an integrated DEA-Machine learning approach". Applied Ergonomics, 82, art. no. 102975.
3. Macrae, C. (2019). "Moments of resilience: time, space and the organisation of safety in complex sociotechnical systems". In: Wiig, S. and B. Fahlbruch (eds), "Exploring resilience", SpringerBriefs in Applied Sciences and Technology, Springer, Cham.
4. Carra, S., Monica, L., Vignali, G., Anastasi, S. and M. Di Nardo (2022). "Machine safety: a decision-making framework for establishing the feasibility of the collaborative use". Proceedings of the 32nd European Safety and Reliability Conference (ESREL), Dublin, Ireland.

12:00
Safety culture for resilience in nuclear safety management. Analysis of the relationships of its variables through a bibliographic review and AHP.
PRESENTER: Gregorio Acuña

ABSTRACT. Objective: Based on previous literature reviews, this study examines the evidence about the key elements of the safety culture constructs applied in nuclear reactor operating organizations. Background: Safety culture (SC) is an organizational concept born in the nuclear industry to address the behavioral elements in the safety management of nuclear facilities. This concept has been continuously revisited by academics and practitioners and has even been widely adopted by conventional industry. Methods: a systematic bibliographic review was conducted to identify the key concepts of safety culture, and the Analytic Hierarchy Process (AHP) was applied to rank these concepts through an online questionnaire delivered to experts. Results: six articles were found, and four key concepts of safety culture were identified. After analysis of the experts' AHP questionnaire results, these concepts were ranked in the following order: top management leadership, communication management, safety climate, and hazard and risk analysis. Conclusions: based on this research, leadership actions are the most important nuclear safety management actions for achieving the safety goals of nuclear reactor operation.
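For readers unfamiliar with the ranking step, the sketch below shows the standard AHP computation (principal eigenvector as priority weights, plus Saaty's consistency ratio) applied to the four identified concepts; the pairwise judgments are illustrative and are not the experts' actual responses.

```python
import numpy as np

concepts = ["Top management leadership", "Communication management",
            "Safety climate", "Hazard and risk analysis"]

# Illustrative pairwise comparison matrix (Saaty 1-9 scale), not the study's data.
A = np.array([
    [1.0, 3.0, 4.0, 5.0],
    [1/3, 1.0, 2.0, 3.0],
    [1/4, 1/2, 1.0, 2.0],
    [1/5, 1/3, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Saaty consistency ratio: CI = (lambda_max - n) / (n - 1), random index RI(n=4) = 0.90
n = A.shape[0]
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.90

for name, w in sorted(zip(concepts, weights), key=lambda x: -x[1]):
    print(f"{name:28s} {w:.3f}")
print(f"Consistency ratio: {cr:.3f} (values below 0.10 are usually acceptable)")
```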

12:15
Modeling a cyber-resilience-for-manufacturing ecosystem through causal loop diagram
PRESENTER: Saloua Said

ABSTRACT. From script kiddies to sophisticated and organized cybercriminals, cyber threats are constantly growing, adding new and further challenges. Therefore, overseeing and proactively responding to this severe and emergent cyber risk is of the utmost importance. This is called cyber resilience, which designates the ability to withstand and quickly recover from any cyber-related incident. Without this fundamental capability, a cybersecurity failure could inflict significant damage on organizations and even drive them out of business. According to a recent IBM cyber security intelligence index survey, manufacturing is one of the most targeted industries for cyber-attacks. Manufacturing 5.0 is increasingly being adopted by companies as a transformation pillar to harness the potential of data and create value at scale. This includes connected factories, moving from insights and decisions to operations and actions driven by analytics and artificial intelligence, scaling valuable use cases and solutions and multiplying the impacts through a leverage effect, and considering human resources as value creators, accelerators of innovation, and architects of change. These ambitions are accompanied by several cyber risks, such as system vulnerabilities, social engineering, malicious insiders, data loss, etc. The present paper introduces a cyber-resilience-for-manufacturing ecosystem, and cause-effect relationships, identified within this ecosystem, will be illustrated through a causal loop diagram to understand how an industrial site could be more resilient.
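As an illustration of the causal-loop analysis mentioned above, the sketch below builds a small signed digraph and classifies its feedback loops as reinforcing or balancing by the product of link signs; the links are placeholders and do not reproduce the paper's diagram.

```python
import networkx as nx

# Illustrative signed causal links in a cyber-resilience-for-manufacturing
# ecosystem (placeholder structure): +1 means "same direction" influence,
# -1 means "opposite direction".
links = [
    ("cyber attacks", "production disruption", +1),
    ("production disruption", "management attention", +1),
    ("management attention", "security investment", +1),
    ("security investment", "detection capability", +1),
    ("detection capability", "production disruption", -1),
    ("security investment", "operating cost", +1),
    ("operating cost", "security investment", -1),
]

G = nx.DiGraph()
for src, dst, sign in links:
    G.add_edge(src, dst, sign=sign)

# Classify each feedback loop: product of signs > 0 -> reinforcing, else balancing.
for cycle in nx.simple_cycles(G):
    sign = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= G[a][b]["sign"]
    kind = "reinforcing" if sign > 0 else "balancing"
    print(f"{kind:11s} loop: " + " -> ".join(cycle + [cycle[0]]))
```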

12:30
Applying Functional Resonance Analysis Method to strengthen resilience in the Norwegian customs infrastructure
PRESENTER: Lars Gregersen

ABSTRACT. As in any other country, accelerated globalization, climate change, and armed conflicts generate emerging security challenges at the borders for the Norwegian customs administration (CA) authorities. The CA has a critical role in governing management efforts to mitigate threats related to hazardous materials and illegal goods at the borders and to territorial security. Nevertheless, besides daily operational challenges, dealing with some events (e.g., a COVID-19 outbreak) may go beyond the standard procedures and propagate over multiple interconnected functions delivered by other governmental agencies such as the police and health departments. Managing such complexities in a CA's operational context requires a holistic management system capable of addressing uncertainties and the interconnectivity between the involved agencies. In this regard, resilience-based thinking (Chuang et al., 2020; Thekdi & Aven, 2019) and its design into the system under study have been acknowledged as promising for dealing with dynamicity and managing risk proactively (Patriarca et al., 2018; Steen & Ferreira, 2020). This study applies concepts and approaches from the resilience engineering field and explores the Norwegian Customs' ability to co-locate and coordinate with other government responding agencies at the border. We examine the interoperability between the involved agencies in joint operations through the lens of the Functional Resonance Analysis Method (FRAM) (Hollnagel, 2012). We explore how different functions are coupled and whether they can be sustained in the wake of expected and unexpected events. Using FRAM provided us with a deeper understanding of the underlying factors that might enhance the resilience of the CA's border facilities. Thus, our findings support the advantages of FRAM in studying a system's attributes. To illustrate key topics in our discussion, we use a real case study at a border facility, Junkerdalen, located in the north of Norway. Two main thought-provoking findings indicate that, first, there needs to be a more holistic approach to risk management and governing documentation in an interoperability context, such as emergency planning for cooperation and co-location at the border facilities. Second, there is no evidence of joint training activities for the agencies involved in joint operations at a border facility. We conclude by outlining recommendations for strengthening resilience in the Norwegian customs border facilities and proposing further research endeavours.
References:
Chuang, S., Ou, J.-C., & Ma, H.-P. (2020). Measurement of resilience potentials in emergency departments: Applications of a tailored resilience assessment grid. Safety Science, 121, 385-393. https://doi.org/10.1016/j.ssci.2019.09.012
Hollnagel, E. (2012). FRAM, the functional resonance analysis method: modelling complex socio-technical systems. Ashgate.
Patriarca, R., Falegnami, A., Costantino, F., & Bilotta, F. (2018). Resilience engineering for socio-technical risk analysis: Application in neuro-surgery. Reliability Engineering and System Safety, 180, 321-335. https://doi.org/10.1016/j.ress.2018.08.001
Steen, R., & Ferreira, P. (2020). Resilient flood-risk management at the municipal level through the lens of the Functional Resonance Analysis Model. Reliability Engineering & System Safety, 204, 107150. https://doi.org/10.1016/j.ress.2020.107150
Thekdi, S., & Aven, T. (2019). An integrated perspective for balancing performance and risk. Reliability Engineering and System Safety, 190, 106525. https://doi.org/10.1016/j.ress.2019.106525

12:45
The Potential of Decentralized Autonomous Organizations for Enhancing Inter-organizational Collaborations for Critical Infrastructure Resilience
PRESENTER: Boris Petrenj

ABSTRACT. Ensuring Critical Infrastructure Resilience (CIR) relies heavily on the decisions and actions made by networks of public and private stakeholders and on their inter-organizational collaborative capabilities. Public-Private Collaborations (PPCs) are currently the most prominent approach for building CI resilience all around the world, but they still face many obstacles and challenges. The Decentralized Autonomous Organization (DAO) paradigm, enabled by blockchain technology and smart contracts, provides the conceptual and technological means for new kinds of decentralized systems and allows for the emergence of new ways of governance and coordination for CIR. The paper explores the potential of DAOs for enhancing governance, decision-making, and coordinated resource management in order to tackle the current challenges of cross-organizational collaboration in CIR. It does so by critically comparing traditional multi-actor governance models and the innovative DAO governance approach, taking the main objectives of PPCs and their current challenges in CIR as conceptual lenses. The key aspects of network governance are discussed, along with the advantages/shortcomings of the different approaches, and their implications in the context of PPCs for CIR. This explorative study paves the way for both new streams of theoretical research and blockchain pilot projects in real contexts.

11:30-12:45 Session 10E: S.15: Digital twin: recent advancements, challenges and real-case applications I

This special session aims to bring together experts from academia and industry on digital twinning in order to address the following challenges:

  • Computational cost (and, in fact, stability) of propagating uncertainty through a high-fidelity simulation: intrusive vs non-intrusive approaches.
  • Quantify the uncertainty in the simulation and model: how twin is the twin?
  • How to assimilate data from different sources, of different quality and with different representations into the model?
  • How to control the different fidelity levels of the digital twin in different tasks or analyses?
Location: Room 100/4013
11:30
Predictive Modeling for Asset Availability using Artificial Intelligence

ABSTRACT. Reliability, Availability and Maintainability (RAM) studies are conducted for equipment, systems and fields to determine availability targets through the development of reliability block diagrams and reliability analysis of equipment run time. The simulated availability target is usually adopted by plant operations as the target to be achieved collectively. Currently, most RAM studies are performed with external software (e.g. MAROS or ReliaSoft BlockSim) and are done individually at the system, equipment or field level. There is no integration across fields, and the studies need to be updated manually, which results in a longer duration to deliver the required results (low efficiency) and in inaccurate results.

In this study, anchored on the theme of integration and automation, the aim is to improve the response time and the visibility of availability data for a field that faces the challenges of inconsistent data, which arrive in various formats and forms, and of insufficient resources (manpower). The end goal is the ability to predict the field's next-month availability accurately, which supports decision-making for interventions.

A regression model using boosted decision trees was developed from the field's availability data for 2018-2020, with full utilization of Microsoft Azure Machine Learning and R programming (which made it possible to replicate the reliability block diagram development in code). The work involved data extraction, data correlation, Exploratory Data Analysis (EDA) and feature-importance analysis, presented in Microsoft Power BI.

The machine learning methodology involved feature creation, feature transformation, feature reduction and feature selection. Twenty-four sets of tests and combinations of different features were evaluated before the final combination constituting the model was selected, based on the lowest mean absolute error (MAE) and the smallest difference between test and validation data.

The model was also validated through a model validation strategy using hyperparameter tuning and cross-validation, which produced the reported MAE results.
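The study was built on Microsoft Azure Machine Learning and R; purely as an illustration of the modelling pattern described (boosted decision-tree regression selected by cross-validated MAE with hyperparameter tuning), a minimal Python analogue on synthetic placeholder data could look as follows.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Placeholder monthly records: [planned maintenance hours, unplanned trips,
# average equipment age, previous-month availability] -> next-month availability (%).
n = 300
X = np.column_stack([
    rng.uniform(0, 80, n),
    rng.poisson(2, n),
    rng.uniform(1, 25, n),
    rng.uniform(85, 100, n),
])
y = (99.0 - 0.03 * X[:, 0] - 1.2 * X[:, 1] - 0.08 * X[:, 2]
     + 0.05 * (X[:, 3] - 92) + rng.normal(0, 0.5, n)).clip(70, 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hyperparameter tuning with cross-validation, scored by (negative) MAE
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3],
                "learning_rate": [0.05, 0.1]},
    scoring="neg_mean_absolute_error", cv=5,
)
search.fit(X_train, y_train)

pred = search.best_estimator_.predict(X_test)
print("Best parameters:", search.best_params_)
print(f"Test MAE: {mean_absolute_error(y_test, pred):.3f} availability points")
```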

For the continuation of the model, and as part of machine learning improvement, it is recommended that the model be expanded to include other fields with more data, with continuous validation using other programming languages such as Python, or through improvement of the existing R code.

The model developed has been presented to the PETRONAS Citizen Analytics committees and was selected as one of the 2021 Top 10 Best Citizen Analytics Projects in PETRONAS, besides being certified as PETRONAS Citizen Analytics Level 3 (Skill) Analytics Practitioner.

11:45
Comparative Study on Optimization Methods in Finite Element Model Updating
PRESENTER: Jaebeom Lee

ABSTRACT. Finite element model updating has been widely studied in the civil engineering field owing to its applicability to damage detection, system identification, and digital twin construction. Its general scheme is to find the model that best fits sensor data by updating various model parameters, such as the elastic modulus, mass density, and rotational stiffness. This is essentially an optimization problem to which many algorithms can be applied. Not only first-order optimization methods (e.g., the gradient descent and momentum methods) but also second-order methods (e.g., Newton's method, which uses the Hessian) can be utilized. In many studies, zeroth-order optimization methods (e.g., evolutionary algorithms, particle swarm optimization, the simplex method) have been introduced to find the global optimum, even though they are computationally expensive. In this study, these optimization methods are compared in terms of accuracy and computational cost in a finite element model updating problem with a numerical example of Euler-Bernoulli beams.
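A minimal sketch of the kind of comparison described, assuming an illustrative cantilever and using scipy.optimize: a gradient-based method (L-BFGS-B) and a derivative-free simplex method (Nelder-Mead) both update the elastic modulus so that the model's first natural frequency matches a "measured" value, and the number of function evaluations hints at the computational cost.

```python
import numpy as np
from scipy.optimize import minimize

# Euler-Bernoulli cantilever (illustrative geometry, not the paper's example)
L, b, h, rho = 2.0, 0.05, 0.01, 7850.0          # m, m, m, kg/m^3
I, A = b * h**3 / 12, b * h

def first_frequency(E):
    """First bending natural frequency [Hz] of a cantilever beam."""
    return (1.8751**2 / (2 * np.pi * L**2)) * np.sqrt(E * I / (rho * A))

E_true = 2.05e11                                 # the "measured" structure
f_measured = first_frequency(E_true)

def objective(x):
    """Squared frequency residual; x[0] is E scaled by 1e11 for conditioning."""
    return (first_frequency(x[0] * 1e11) - f_measured) ** 2

x0 = [1.6]                                       # initial guess: 160 GPa
for method in ["L-BFGS-B", "Nelder-Mead"]:       # gradient-based vs derivative-free
    res = minimize(objective, x0, method=method)
    print(f"{method:12s} E = {res.x[0] * 1e11:.3e} Pa, "
          f"residual = {res.fun:.2e}, evaluations = {res.nfev}")
```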

12:00
Life-cycle Considerations for Trusting a Digital Twin for Safety Demonstrations

ABSTRACT. Increasing electrification and digitalization is an ongoing objective of the oil and gas industry. SFI SUBPRO, a collaborative industry-academia research-based center, researches the electrification and digitalization of subsea production and processing.

A proposal for increasing electrification is an all-electric subsea actuation system, controlling a valve that stops the flow of hydrocarbons in the event of production or emergency shutdown. The all-electric subsea actuation system could mitigate costs, increase controllability, and improve diagnostic capabilities compared to hydraulic-based actuation systems.

Demonstrating the functional safety of new critical systems like the all-electric subsea actuation system, which heavily relies on software, requires testing and verifying the software. A digital twin of a physical system can be used to simulate and test the functional safety of the system, including the software-based controls and diagnostic features, through virtual demonstrations. A digital twin would enable the virtual testing and verification of the system's functional safety before implementation in the physical world. The mechanical forces and dynamics of the all-electric actuator system can be mirrored by a digital twin throughout the lifecycle, enabled by an interface for transferring information from the physical twin [1].

An interface for transferring information allows the digital twin to be updated and adjusted according to the latest available data and provides improved insight for decision-making. Prediction capabilities, comparing expected and actual data for detecting unexpected deviations and potential fault causes, are examples of decision-making tools[1]. However, the quality of the digital twin is crucial to the quality of the evidence produced, making the development of the digital twin subject to qualification and assurance processes [2]. The qualification and assurance of the digital twin can be determined by following guidelines and recommended practices. A digital twin for safety demonstration would require adjusting the internal parameters of the underlying models to enable up-to-date demonstration capabilities of software updates.

A study was carried out to examine the techniques and procedures for re-qualifying and providing continuous assurance of an operational digital twin with altered internal parameters. Under the assumption of an available digital twin, previously qualified and assured for safety demonstrations, the research evaluates the process of re-qualifying and re-assuring the digital twin. The findings reveal a re-qualification and continuous assurance process for enhancing trust in the digital twin. A high level of trust is required for the digital twin to provide evidence of the functional safety of the corresponding physical system.

In summary, physics-based digital twins have the potential to provide valuable insights regarding the functional safety of novel software updates before uploading the update to the physical asset.

[1] Rasheed, Adil, San, Omer, and Kvamsdal, Trond. "Digital Twin: Values, Challenges and Enablers From a Modeling Perspective." IEEE Access 8 (2020): 21980-22012. doi: 10.1109/ACCESS.2020.2970143.
[2] DNV GL. "DNV-RP-A204 Qualification and assurance of digital twins." DNV GL, 2020.

12:15
Digital twins or equivalent infrastructure models? The role of modeling granularity in regional risk analysis of infrastructure

ABSTRACT. The economic and social well-being of communities depends on the functionality of large-scale critical infrastructure, including water, transportation, and power. Infrastructure systems are intricately interdependent in meeting community needs in "normal"/business-as-usual times. Infrastructure systems also face several natural and anthropogenic hazards, including climate-change effects, that need to be identified, assessed, communicated, and managed appropriately. Such needs are emphasized by the societal consequences of hazards (e.g., food insecurity, population displacement, and unemployment), which can be significant. The consequences of past disasters have shown the need for infrastructure hazard mitigation and recovery planning. However, high-impact natural hazards are low-frequency events, so decision-makers need to rely on mathematical models to simulate the impact of hazards on infrastructure. There is a need for mathematical models that can properly translate physical damage to infrastructure into disruptions and, ultimately, into socioeconomic consequences. Models of infrastructure are required to understand their behavior under stress (e.g., after the occurrence of a hazard) and to plan improvements to their performance. Mathematical models can only be useful if they can mimic reality by providing accurate results to support policy decisions.

One of the main challenges in developing mathematical models of infrastructure is defining their modeling granularity, i.e., the level of detail in the topology of the model. Different modeling granularities affect our ability to capture the spatial variability of the impact arising from the changes in the capacities of infrastructure and service demands. A recent trend in infrastructure modeling is to develop detailed digital twins that mimic all aspects of the real infrastructure. However, detailed digital twins might require data that are not readily available, and their analyses often have prohibitive computational costs, making digital twins not always the most suitable option for modeling the performance of infrastructure. On the other hand, past studies defined simplified infrastructure models at an arbitrary level of granularity based on the available information, with limited consideration of the granularity's appropriateness for the intended analyses. The goal of selecting the optimal modeling granularity is to allocate computational resources to the model that best delivers the desired information with the desired accuracy level.

This work presents a mathematical formulation to systematically select the appropriate modeling granularity of infrastructure. The formulation adaptively increases the granularity starting from a low-granularity infrastructure model until we reach the desired tradeoff among accuracy, simplicity, and computational efficiency. To define the tradeoff, we introduce novel metrics that measure the level of agreement between estimates of the quantities of interest computed using different levels of granularity. Such metrics include global measures that assess if a model is insufficiently detailed to capture the quantities of interest and local measures that identify specific regions of the model that may require further refinement. As an example, we apply the illustrated formulation to select the granularity of the potable water infrastructure model in Seaside, Oregon, to quantify its performance following a seismic event.
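A minimal sketch of the kind of agreement metrics described, with placeholder numbers (not results for Seaside, Oregon): a global discrepancy between coarse- and fine-model estimates of a quantity of interest, and local per-region discrepancies used to flag where further refinement may be needed.

```python
import numpy as np

# Quantity of interest per service region, e.g. expected post-event water
# service availability, estimated with a coarse and a refined model
# (placeholder numbers only).
regions = ["R1", "R2", "R3", "R4", "R5"]
qoi_coarse = np.array([0.92, 0.85, 0.70, 0.88, 0.60])
qoi_fine   = np.array([0.90, 0.84, 0.55, 0.87, 0.52])

# Local measure: relative discrepancy per region
local = np.abs(qoi_coarse - qoi_fine) / np.abs(qoi_fine)

# Global measure: aggregate discrepancy over the whole system
global_metric = np.linalg.norm(qoi_coarse - qoi_fine) / np.linalg.norm(qoi_fine)

THRESHOLD = 0.10    # refine a region if its local discrepancy exceeds this
to_refine = [r for r, d in zip(regions, local) if d > THRESHOLD]

print(f"Global discrepancy: {global_metric:.3f}")
for r, d in zip(regions, local):
    print(f"  {r}: local discrepancy {d:.3f}")
print("Regions flagged for further refinement:", to_refine)
```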

12:30
Design of a digital twin for prognostics health management of an O&G asset

ABSTRACT. The use of digital twins has been identified as one of the most important breakthroughs in predictive maintenance and performance monitoring. This is particularly critical for high-cost industries, such as O&G exploration fields. For permanently installed oil-well equipment, prognostics and health management (PHM) becomes essential, given the difficulty of maintenance and the production losses caused by equipment failure. In this context, the present work showcases the implementation of a digital twin for the interval control valves (ICVs) responsible for switching productive zones. The digital twin is implemented in the MATLAB/Simulink environment, and artificial intelligence algorithms are used to adjust the model parameters. After validation with field data, deep learning methods are used to construct a health indicator (HI) and to predict the ICVs' useful life, based on the digital twin output for a pre-defined operational profile.
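A minimal sketch of one common way to obtain a health indicator and a remaining-useful-life estimate from digital-twin residuals; the degradation signal is synthetic, and a simple exponential trend fit stands in for the deep learning methods used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Residual between the digital-twin prediction and the measured valve response
# per actuation cycle (placeholder degradation signal: slowly growing residual
# plus noise).
cycles = np.arange(200)
residual = 0.02 * np.exp(0.015 * cycles) + rng.normal(0, 0.005, cycles.size)

# Health indicator: residual normalised to [0, 1] by a failure threshold
FAILURE_THRESHOLD = 0.5
hi = np.clip(residual / FAILURE_THRESHOLD, 0, 1)

# RUL estimate: fit log(residual) = a + b * cycle and extrapolate to the threshold
mask = residual > 0
a, b = np.polynomial.polynomial.polyfit(cycles[mask], np.log(residual[mask]), 1)
cycle_at_failure = (np.log(FAILURE_THRESHOLD) - a) / b
rul = cycle_at_failure - cycles[-1]

print(f"Current health indicator: {hi[-1]:.2f}")
print(f"Estimated remaining useful life: {rul:.0f} cycles")
```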

11:30-12:45 Session 10F: Risk and Asset Management
Location: Room 100/5017
11:30
Research on Multi-factor Coupling Analysis and Risk Warning of Operation Activity in Petrochemical Enterprises
PRESENTER: Xiaoyan Guo

ABSTRACT. Operation activity has always been the biggest threat to the safety of petrochemical enterprises. There is continuous dynamic interaction in the operation area among operators/supervisors, machines and tools, hazardous materials, and the environment. Abnormal changes in any factor may lead to accidents. However, the failure sequence of operation risk prevention measures is disordered in the evolution of an operation accident. Thus, traditional static or sequential risk assessment methods are not suitable for this field, which restricts the assessment and control of operation risk. Therefore, this paper first uses System Dynamics (SD) to construct a multi-factor coupling feedback model for operation risk identification and describes the coupling and interaction relationships within and between subsystems. Secondly, key indicators and corresponding monitoring techniques are determined, e.g., operators' unsafe behaviors, gas leakage, and so on. Finally, Bayesian theory (BT) is adopted to fuse the multi-source information collected by the monitoring techniques and then assess the operational risk. Moreover, a hierarchical early-warning rule is built to determine the priority of risk control. This method can guide the realization of intelligent management and control of operation risk.
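A minimal sketch of the Bayesian fusion step described above, with two placeholder monitoring sources (unsafe-behaviour detection and gas-leak detection) fused under a naive independence assumption; the priors, likelihoods and warning threshold are illustrative, not the paper's calibrated values.

```python
# Risk states for the operation area (illustrative)
states = ["low", "medium", "high"]
prior = {"low": 0.7, "medium": 0.25, "high": 0.05}

# Likelihood of each observation given the risk state (placeholder values):
# observation 1: an unsafe act detected by video analytics
# observation 2: a gas detector reporting a small leak
likelihood_unsafe_act = {"low": 0.05, "medium": 0.30, "high": 0.70}
likelihood_gas_leak   = {"low": 0.02, "medium": 0.20, "high": 0.60}

def fuse(prior, *likelihoods):
    """Naive-Bayes fusion: multiply the prior by each source's likelihood, normalise."""
    posterior = {s: prior[s] for s in prior}
    for lik in likelihoods:
        posterior = {s: posterior[s] * lik[s] for s in posterior}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

posterior = fuse(prior, likelihood_unsafe_act, likelihood_gas_leak)
for s in states:
    print(f"P(risk = {s} | observations) = {posterior[s]:.3f}")

# A hierarchical early-warning rule could then map the posterior to warning levels,
# e.g. trigger the highest priority when P(high) exceeds a set threshold.
ALERT_THRESHOLD = 0.5
print("High-priority warning!" if posterior["high"] > ALERT_THRESHOLD else "Monitor.")
```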

11:45
Challenges in Functional Modelling for Safety and Risk Analysis
PRESENTER: Jing Wu

ABSTRACT. Safety and risk analysis in a system's life cycle is a core activity for ensuring a sound safety basis for the system. It is one type of problem-solving process. The functional value of a solution is indispensable for understanding why it constitutes a solution. It is also important for defining failures and their possible hazardous consequences. Functional modelling is motivated by this need and has also been applied to safety and risk analysis in different industrial domains. However, research on functional modelling for safety and risk analysis is not yet widespread. The purpose of this paper is to draw researchers' attention to this area and to point out the challenges in functional modelling for safety and risk analysis. First, it is explained why functional modelling is needed for safety and risk analysis, and the associated challenges are identified. Then, for each challenge, a literature review is conducted and our proposition is summarized. It is hoped that the paper serves its purpose of contributing to opening up further research on the use of functional modelling for safety and risk analysis.

12:00
Real-Time Data-Driven Utilization Affecting Business Processes on the Operation and Maintenance of Power Generation

ABSTRACT. The use of operational technology and internet technology, together with the vast amount of data generated in business processes, is increasing and can generate many benefits for almost all fields of an organization. On the other hand, these benefits may be difficult to realize if organizations are not able to handle and interpret these vast amounts of data. The purpose of this paper is to study how the utilization of operational technology, internet technology and large volumes of data affects business processes in organizations that operate and maintain power generation. In this study, the authors analyzed the business processes of the operation of several power generation plants located in different parts of Indonesia. A case study of the development of Reliability and Efficiency Optimization Centers (REOC), a centralized, real-time data-driven system for analyzing the reliability and efficiency of power generation, was investigated. The results show that the business processes of the operation and maintenance (O&M) power generation organization have been transformed to deal with real-time data-driven operation. The study found that data sources and their governance, the development of new capabilities, organizational changes, and workflow standardization have all changed. The organization should understand the value, quality and governance of the data, which is often generated by various power generation equipment or various divisions of the organization. New capabilities to interpret the data should also be developed by the organization. The organizational transformation has implications for decision-making, increases the performance of power generation, and reduces operation and maintenance costs. This paper demonstrates that a real-time data-driven system, like REOC, can provide valuable support to the business processes of an O&M power generation organization. In particular, the organizational transformation can be adopted and applied to similar O&M power generation organizations.

12:15
Methodology for processing the risk management plan for selected assets of the transport infrastructure
PRESENTER: Jan Prochazka

ABSTRACT. To ensure the safety of human society, managers of critical elements of the transport infrastructure need a tool to ensure a quality response, because major failures of critical transport infrastructure elements also affect the functionality and prosperity of the territory, sometimes even in the long term. The response must be ensured in all aspects: organizational, technical, personnel, knowledge, financial and methodological. In accordance with the ISO 31000 standard, a risk management plan for a specific critical item contributes to the preparation of a timely and rapid response on the manager's side. We present a methodology for processing the risk management plan for selected items of the transport system, with the aims of preparing high-quality responses to manage serious risks for these items and of determining clear responsibilities for initiating and implementing a quality response. The methodology has been certified by the Czech Ministry of Transport, and its implementation is currently being prepared. The risk management plans for selected critical elements of transport infrastructure (such as tunnels, bridges, railway stations, airports, and traffic control systems) enable safety to be maintained at the required level by preventing delayed or inadequate responses to failures or accidents.

12:30
The appropriation by crisis managers of the “Business Continuity Plan” during the pandemic crisis.

ABSTRACT. A crisis is a disruption of a balance leading to the risk of losing control. The challenges that await managers are relatively similar regardless of the organization. In particular, they have to deal with surprise in a complex environment (an event with little-known or poorly understood effects, exceeding benchmarks, a break in capability) and make strategic decisions even though the available information is fragmentary, sometimes erroneous or contradictory, and often evolving, while various pressures (the major consequences induced by the decisions to be made, time pressure, media pressure, and hierarchical pressure) can lead to multiple cognitive biases. The SARS-CoV-2 crisis illustrates the evolution of threats and the need to prepare for serious disruptions. This crisis emphasized the importance of ensuring the continuity of a company's activities in order to guarantee its sustainability by adopting continuity strategies. The purpose of this study is to examine the role of the Business Continuity Plan (BCP) in the decision-making process of crisis managers. We seek to understand how they handled the COVID-19 crisis and what skills, strategies and tools were used to detect and respond to this extraordinary situation. These reflections allow us to outline the possible axes of improvement and a renewed strategic approach for crisis managers. To achieve this goal, we used a grounded theory approach as our qualitative research design and analysed data collected from crisis managers (17) through directive, centered interviews. We combined the results of our analysis with what the literature provides (e.g. standards) in a bottom-up approach to enrich the manager's toolbox by proposing a model of anticipation and business continuity that helps managers gain a more holistic understanding of increasingly complex crisis issues.

11:30-12:45 Session 10G: Aeronautics and Aerospace II
11:30
CFD Simulation and Experimental Study on Outgassing and Damage Characteristics of Multilayer Insulation During Ascent
PRESENTER: Shouqing Huang

ABSTRACT. Multilayer insulation (MLI) is widely used on the outer surface of spacecraft. During the ascent stage, the air pressure outside the spacecraft decreases rapidly from 1 atmosphere to 100 Pa within approximately 120 seconds. Hence, the inert gas within the spacecraft expands and is then released rapidly through the MLI, which may result in damage to the MLI. In this paper, a simplified geometrical model of the MLI is established, and a computational fluid dynamics (CFD) method is applied to simulate the outgassing process; the pressure contours and aerodynamic forces on each MLI layer during ascent are calculated. The results show that the pressure decreases gradually from the inner to the outer layers, while the flow-induced force generally rises from the inner to the outer layers. The maximum aerodynamic force and stress occur at approximately the intermediate moment of the depressurization process. In addition, a test rig was designed to simulate the rapid depressurization process during ascent. MLI damage phenomena were observed for some geometric and fixation conditions. Interestingly, all the damage occurs first at the outer layers of the MLI, and the maximum differential pressure between the upstream and downstream sides of the MLI occurs at an intermediate moment of the rapid depressurization process. These findings agree with the CFD results. Furthermore, the influence of the number of layers, the configuration of the outgassing holes, and the fixation on the damage characteristics is discussed.
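A lumped-parameter toy model (not the paper's CFD) can reproduce the qualitative finding that the pressure difference across the blanket peaks at an intermediate moment of the depressurization: successive gas volumes between layers vent outwards through assumed linear conductances while the external pressure follows a prescribed ascent profile. The volumes, conductances and pressure profile below are placeholders.

```python
import numpy as np

N_GAPS = 10                    # gas volumes trapped between MLI layers
V = 2e-4                       # placeholder gap volume [m^3]
C = 1e-4                       # placeholder vent conductance per layer [m^3/s]
P0, P_END, T_RAMP = 101325.0, 100.0, 120.0

def p_ambient(t):
    """Prescribed external pressure during ascent: ~1 atm down towards 100 Pa."""
    return P_END + (P0 - P_END) * np.exp(-5.0 * t / T_RAMP)

dt, t_end = 0.01, 200.0
p = np.full(N_GAPS, P0)        # gap pressures, index 0 = innermost gap
max_dp, t_at_max = 0.0, 0.0

for step in range(int(t_end / dt)):
    t = step * dt
    p_down = np.append(p[1:], p_ambient(t))          # each gap vents outwards
    q_out = C * (p - p_down)                         # throughput [Pa.m^3/s]
    q_in = np.concatenate(([0.0], q_out[:-1]))       # inner boundary is closed
    p = p + dt * (q_in - q_out) / V                  # isothermal mass balance
    dp_across_blanket = p[0] - p_ambient(t)
    if dp_across_blanket > max_dp:
        max_dp, t_at_max = dp_across_blanket, t

print(f"Peak pressure difference across the blanket: {max_dp/1000:.1f} kPa "
      f"at t = {t_at_max:.1f} s")
```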

11:45
Impact of the arrival of micro/mini-launchers and micro-satellite constellations on RAMS activities
PRESENTER: Loriane Bourjac

ABSTRACT. Space transportation is entering an era of expansion and innovation. Micro/mini-launcher projects are multiplying to meet the needs of New Space players to put large numbers of small satellites into orbit. In order to reduce their costs, they are counting on a simplification of operations to achieve high launch rates, and are looking for flexible launch sites. Europe's spaceport, the Guiana Space Centre, will offer many advantages for hosting this family of launchers, while guaranteeing the safety and reliability of flights. The deployment of large constellations of small satellites raises the problem of orbital debris in commonly used Earth orbits. Currently, only a small percentage of satellites is deliberately deorbited. The resulting risks of collisions and explosions in orbit have led to the implementation of preventive and corrective actions at national and international levels. In this context, improving the satellite reliability model over its life is key to choosing the best moment for, and guaranteeing, the passivation and deorbiting operations of satellites at their end of life. To address these new reliability and safety challenges, the RAMS departments at the French Space Agency (CNES) are working on new methods adapted to the New Space context.

12:00
SAFEST: the static and dynamic fault tree analysis tool
PRESENTER: Matthias Volk

ABSTRACT. We present SAFEST, the new Static And dynamic Fault trEe analySis Tool.

Fault trees are widely used in industry to assess the reliability of systems. While standard (or static) fault trees (SFT) appeal as a relatively simple tool, they are limited in their modelling capabilities. To express more complex dependability scenarios, Dugan’s dynamic fault trees (DFT) have been developed, extending SFTs by new support for modelling spare management, order-dependent failures and functional dependencies. While various analysis approaches for DFT have been developed – e.g. via Markov models, Bayesian networks, Petri nets or Monte Carlo simulation – tool support is still limited today.
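To ground the distinction, the sketch below evaluates a small purely static fault tree under the assumption of independent basic events and perturbs basic events as a rough criticality check; this generic illustration is not SAFEST's BDD- or Markov-based analysis, and it cannot express dynamic gates such as SPARE or FDEP.

```python
def AND(*probs):
    """Probability that all inputs fail (independent basic events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def OR(*probs):
    """Probability that at least one input fails (independent basic events)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Illustrative basic-event failure probabilities over the mission time
events = {"pump_a": 0.01, "pump_b": 0.01, "valve": 0.002, "controller": 0.005}

def top_event(e):
    # Top event: loss of cooling = (both pumps fail) OR valve fails OR controller fails
    return OR(AND(e["pump_a"], e["pump_b"]), e["valve"], e["controller"])

print(f"Top-event probability: {top_event(events):.6f}")

# Rough criticality check: how the top event changes when one basic event is certain
for name in ["pump_a", "valve", "controller"]:
    bumped = dict(events)
    bumped[name] = 1.0
    print(f"  if {name} fails for certain -> top-event probability {top_event(bumped):.4f}")
```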

We developed SAFEST, a modern, state-of-the-art tool for modelling and analysing DFTs. SAFEST provides several efficient DFT analysis techniques based on probabilistic model checking as well as dedicated analysis for SFTs.

SAFEST's web-based interface provides a drag-and-drop editor for the graphical creation and modification of fault trees, supporting all dynamic gates from the literature. In addition, SAFEST provides an interactive step-by-step simulator for DFTs that visualises how failures – given by the user – affect the state of DFT elements.

SAFEST supports analysis with respect to all important quantitative dependability measures such as the reliability of the system, the mean-time-to-failure or the criticality of components. We offer users the flexibility to define custom measures of interest – to the point of specifying complex measures in mathematical logics. SAFEST can also automatically simplify a DFT to make its structure more comprehensible and amenable to efficient analysis – while still preserving its behaviour [1].

Our tool employs different analysis approaches. SFTs are best analysed using binary decision diagrams (BDD). Evaluations have shown that our BDD-based analysis performs comparably to existing tools for SFT analysis [2]. DFTs are analysed via state-based techniques by translation into a Markov model [3,4]. Our translation yields small models by exploiting irrelevant failures and symmetries in the DFT. The Markov model is then analysed with the state-of-the-art probabilistic model checker Storm, yielding exact results efficiently. Comparison to existing tools such as DFTCalc and Galileo has shown that our tool significantly outperforms its competitors – up to orders of magnitude [4]. SAFEST allows modular analysis by analysing different parts of a fault tree via the techniques best suited for them. Lastly, the tool provides an approximation approach that builds only the most “important” parts of the DFT’s behaviour, thus requiring fewer computational resources. This approximation provides an upper and lower bound on the exact measure of interest, and its precision can be tuned according to the user’s needs [4].

We showed the modelling capabilities of DFTs as well as the performance of our tool in several practical and industrial case studies. Examples include DFT models for vehicle guidance systems in the automotive domain [5], and analysing infrastructure failures in railway station areas [6]. DFTs with up to several hundred elements have been successfully analysed with SAFEST.

SAFEST is available online at https://github.com/DGBTechnologies/SAFEST.

References:
[1] https://doi.org/10.1007/s00165-016-0412-0
[2] https://doi.org/10.1007/978-3-031-06773-0_38
[3] https://doi.org/10.1109/TDSC.2009.45
[4] https://doi.org/10.1109/TII.2017.2710316
[5] https://doi.org/10.1016/j.ress.2019.02.005
[6] https://doi.org/10.1007/s10009-022-00652-4

12:15
Risk analysis for in-flight refueling missions between a jet-powered aircraft and helicopters

ABSTRACT. Since the beginning of military aviation, the development and operation of projects have been associated with innovation and the assumption of a certain degree of risk. One example was the development of projects capable of carrying out the in-flight refueling operation (REVO). In order to remain at the forefront of international aviation, Brazil invested in developing a jet aircraft capable of performing REVO with helicopters. In this context, the present work focuses on the risk analysis of the REVO operation, based on Safety-II, through the FRAM model. A model is also presented from the perspective of Safety-I, applying the bow-tie and risk matrix techniques, to allow comparison and discussion with the Safety-II approach. Finally, it was possible to observe that the Safety-I approach offers relevant techniques regarding component failures and automated procedures. However, Safety-II complements the analysis, especially when it involves sociotechnical systems, where human and organizational factors are prominent in operational success. In this sense, the two approaches are complementary and can be used in other similar contexts: modern medicine, machine operation, air traffic management, and others.

11:30-12:45 Session 10H: S.16: Digitalisation and AI for managing risk in construction projects II
11:30
Development of Risk Management framework for Digitalization and AI use in Engineering projects

ABSTRACT. The inclusion of artificial intelligence and other digitalization technologies in engineering projects has enormous potential to fulfil project objectives that prioritize low risks and better end quality. However, artificial intelligence and other digital technologies have seen limited practical implementation on-site, especially in the construction sector in some countries, for many socio-technical reasons. Recently, many projects worldwide have faced critical constraints; as a result, stakeholders are greatly concerned with evaluating risk, since the nature of these constraints is unstable, making it challenging to execute existing risk management processes. Therefore, proposing a new risk analysis that can resolve complex constraints with greater positive benefits and minimal adverse impacts has become imperative. The first aim of this paper is to design an algorithmic Artificial Intelligence Risk Management Framework for engineering projects that can identify and analyse all possible risks and their consequent range of impact, with a broader performance study of each component of the AI framework, targeting high quality, minimal negative impacts, and cyber security with a high-end response to any newly generated risk. The second aim is to build a library of AI risk management frameworks organized by project type and sector, which helps in choosing the most appropriate risk management approach able to balance all socio-technical systems, especially for multi-constraint projects. Thirdly, the detailed working of the AI Risk Management Framework is presented, along with some details on the level of risk management execution expected in the future under digital technologies. Fourth, a theoretical case-study evaluation of the proposed AI Risk Management Framework against the NIST (National Institute of Standards and Technology, USA) framework is conducted to evaluate the algorithm's efficiency. Finally, designing the AI Risk Management Framework and detailing its functioning is the main objective and scope of this paper. Validation of the framework is out of scope, as it requires practical machine learning programmes that take time to learn. The methodology followed in this paper is a qualitative research method to review the current practices of AI in risk management frameworks used and applied in engineering projects. The systematic review uses Boolean operators, the comparison method identifies the key characteristics of the frameworks, and the critical analysis identifies the limitations that support the new AI Risk Management Framework. Three main limitations cannot be resolved because of the lack of validation: cyber security; the adoption of AI in practical projects, which requires a digital intelligence quotient from employees; and the fact that few risk indicators are generated from human decisions/errors. These could be treated as future research.

11:45
Building Information Modelling (BIM)-Based Quality Management System for Mitigating Building Failures and Collapse: A Case Study of Nigeria
PRESENTER: Ebere Okonta

ABSTRACT. Building collapses have become a serious issue in developing countries due to rapid growth and the concentration of people in urban areas. Nigeria has been particularly affected, with over 115 building collapses in Lagos alone over the past decade. To address this issue, the study evaluates the potential use of Building Information Modelling (BIM) as a tool for Quality Management Systems (QMS) to mitigate building failures and collapses in Nigeria. The study conducted a comparative analysis of BIM implementation and processes in the UK and Nigeria through a literature review, and a survey of the QMS of 23 Architecture, Engineering, Construction, and Operations (AECO) companies in Nigeria. The survey found an alarming lack of QMS practices in Nigeria: only 39% of respondents indicated that their organization has formal quality management in place, only 50% indicated that they have an up-to-date quality management training manual, and nearly half of the respondents (48%) do not have an effective QMS reporting structure. These findings suggest a significant need for greater implementation of QMS practices in Nigeria's AECO industry, particularly in the context of building collapses. The potential use of BIM as a tool for QMS represents a promising avenue for mitigating building failures and collapses in Nigeria and other developing countries.

12:00
Expert Evaluation of Chat GPT Performance for Risk Management Process based on ISO 31000 Standard
PRESENTER: M.K.S. Al-Mhdawi

ABSTRACT. ChatGPT is widely known for its ability to facilitate knowledge exchange, support research endeavours, and enhance problem-solving across various scientific disciplines. However, to date, no empirical research has been undertaken to evaluate ChatGPT's performance against established standards or professional guidelines. Consequently, the present study aims to evaluate the performance of ChatGPT for the risk management (RM) process based on the ISO 31000 standard using expert evaluation. The authors (1) identified the key indicators for measuring the performance of ChatGPT in managing construction risks based on ISO 31000 and determined the key assessment criteria for evaluating the identified indicators using a focus group session with Iraqi experts; and (2) quantitatively analysed the level of performance of ChatGPT in a fuzzy environment. The findings indicated that ChatGPT's overall performance was high. Specifically, its ability to provide relevant risk mitigation strategies was identified as its strongest aspect. However, the research also revealed that ChatGPT's consistency in risk assessment and prioritisation was its least effective aspect. This research serves as a foundation for future studies and developments in the field of AI-driven risk management, advancing our theoretical understanding of the application of AI models such as ChatGPT in real-world risk scenarios.
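
The abstract does not spell out the fuzzy aggregation procedure, so the following Python sketch is a generic illustration only: expert ratings on an assumed five-level linguistic scale are mapped to triangular fuzzy numbers, averaged, and defuzzified by the centroid to give a crisp performance score for one indicator.

import numpy as np

SCALE = {                      # hypothetical linguistic scale, not the study's
    "very low":  (0.00, 0.00, 0.25),
    "low":       (0.00, 0.25, 0.50),
    "medium":    (0.25, 0.50, 0.75),
    "high":      (0.50, 0.75, 1.00),
    "very high": (0.75, 1.00, 1.00),
}

def indicator_score(ratings):
    tfns = np.array([SCALE[r] for r in ratings], dtype=float)
    l, m, u = tfns.mean(axis=0)          # average fuzzy judgement
    return (l + m + u) / 3.0             # centroid defuzzification

# e.g. five experts rating "relevance of suggested mitigation strategies"
print(round(indicator_score(["high", "very high", "high", "medium", "very high"]), 3))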

12:15
Machine Learning in Construction Industry: Opportunities and Challenges for Decision-Making and Safety Management

ABSTRACT. As the construction industry continues to embrace digitalization, machine learning is emerging as a powerful tool to improve efficiency and enhance decision-making throughout the project lifecycle. This study presents an exploration of the potential of machine learning in the construction industry, focusing on reducing accident risks through decision support. Through a combination of workshops and analysis of machine learning applications in other sectors, this study provides insights into the opportunities and challenges associated with machine learning in construction. Results suggest that machine learning tools can enhance information-gathering, visualization of trends, prediction of outcomes, and evaluation of effectiveness. However, challenges such as increased complexity, criticality, and lack of trust in machine learning must be addressed. The study recommends developing a theoretic safety model for machine learning tools, focusing on finding the correct parameters and addressing challenges associated with machine learning. Overall, the study concludes that adopting machine learning can benefit construction, but it is essential to consider these challenges carefully.

11:30-12:45 Session 10I: System Reliability II
11:30
Towards a Graphical Specification of Operational Rules in RiskSpectrum ModelBuilder
PRESENTER: Pavel Krcal

ABSTRACT. Model Based Safety Assessment tools encapsulate dependability expertise in the definition of high-level components. A detailed (formal) description of component behavior and interactions can be created by an expert and exposed to users only at the level required for building system models. Knowledge Bases in RiskSpectrum ModelBuilder (KB3) implement this separation by offering an analyst a library of graphical components with their properties and possible connections. Component behavior and interactions are pre-defined in the knowledge base. Certain dependability analyses, for example studies of production plant behavior or spare part storage optimization, can be performed analytically or exhaustively only with coarse approximations that might hide complex but highly relevant interactions within the studied system. Typically, one resorts to Monte Carlo simulation in such cases, where precise modeling of the system state and its evolution in time, affected by the stochastic failure behavior of components, plays a central role. The Knowledge Base approach using the modeling language Figaro offers full flexibility in adapting component behavior and interactions exactly to the purpose of the dependability study. This includes not only rules for individual components, but also the encoding of relevant rules determining plant behavior based on the global state – the operational rules of the plant. One challenge when developing or maintaining Knowledge Bases is maintaining the consistency and correctness of the implemented rules. As an addition to existing design and debugging tools, this paper proposes a new method of operational rule specification – a graphical language capturing operational decisions. The proposed method offers a visual aid for structuring component interactions, information flow and plant decision steps. It should make development easier and less error-prone. We also evaluate to what extent the Figaro code can be automatically generated from such high-level descriptions. This work is based on experience from real-life projects. We investigate commonly used patterns of interaction rules and successful strategies for structuring the information flow, and learn from typical mistakes. Our goal is to generalize these experiences and arrive at a graphical language that does not limit the expressivity of Figaro while helping to structure the Knowledge Base creation and maintain the correctness of the final code. Possible application areas for the Knowledge Base approach include production analysis of processing plants, power plants or the downstream oil & gas industry. Analyses can evaluate and compare different designs by modifying the plant itself (additional redundancies, etc.) or the reliability parameters (more reliable components).

11:45
A new approach to time Petri nets modelling with an example of transportation system performance analysis

ABSTRACT. In any real system, every event takes some amount of time, no matter how small. Therefore, analyses aimed at evaluating the performance level of a system very often involve investigating the system's timing parameters. In practice, timing analysis is typically performed using techniques such as timed automata, timed state charts, and Petri nets with time extensions. The main types of Petri net models that allow the analysis of temporal aspects include time Petri nets (TPNs), timed Petri nets, stochastic timed Petri nets and coloured timed Petri nets. In general, in Petri net approaches the use of time parameters corresponds to three typical situations: 1) time parameters express the delay between the time when a transition becomes ready to fire and its firing, or 2) time parameters express the duration of firing a transition, or 3) time parameters are attached to tokens (the age of tokens). In the presented paper, the authors introduce an alternative time Petri net model with dynamic time intervals, in which a dynamic firing time interval is assigned to the tokens. In the Introduction section, the authors present a short literature review of the investigated research area. Section 2 presents classical time Petri nets. Section 3 introduces the new time Petri nets with time tokens. The proposed approach is illustrated with a case example of transportation system performance. The paper ends with conclusions and a definition of future research directions.
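
The following Python sketch is only a loose illustration of the idea of tokens that carry timing information, not the authors' formalism: each token records its creation time, and a transition may fire when every input place holds a token whose age lies within the transition's firing interval [a, b].

# Minimal discrete-time illustration (assumptions, not the paper's model).
class Net:
    def __init__(self):
        self.marking = {}          # place -> list of token creation times
        self.transitions = []      # (name, inputs, outputs, a, b)

    def add_tokens(self, place, times):
        self.marking.setdefault(place, []).extend(times)

    def add_transition(self, name, inputs, outputs, a, b):
        self.transitions.append((name, inputs, outputs, a, b))

    def enabled(self, t, clock):
        name, ins, outs, a, b = t
        return all(any(a <= clock - birth <= b for birth in self.marking.get(p, []))
                   for p in ins)

    def fire(self, t, clock):
        name, ins, outs, a, b = t
        for p in ins:              # consume one eligible token per input place
            tok = next(x for x in self.marking[p] if a <= clock - x <= b)
            self.marking[p].remove(tok)
        for p in outs:             # new tokens carry the current firing time
            self.add_tokens(p, [clock])
        print(f"t={clock:.1f}: fired {name}")

net = Net()
net.add_tokens("arrived", [0.0])
net.add_transition("load", ["arrived"], ["loaded"], 1.0, 3.0)
net.add_transition("depart", ["loaded"], ["done"], 2.0, 5.0)

clock = 0.0
while clock < 10.0:
    for t in net.transitions:
        if net.enabled(t, clock):
            net.fire(t, clock)
    clock += 0.5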

12:00
A GSPN-based dynamic reliability modeling method for UAV data link system

ABSTRACT. The reliability of the data link system of an unmanned aerial vehicle (UAV, or unmanned system) is key to the success of the unmanned system's mission execution, and it is also a difficult point in the reliability modeling and analysis of unmanned systems. Addressing the multi-service integration, complex structure and dynamic reconfiguration characteristics of the UAV data link system, this paper proposes a method to build a dynamic reliability simulation model of data link services based on generalized stochastic Petri nets (GSPN). By analyzing the characteristics of the UAV data link system, the method uses the places, transitions and other elements of the GSPN model to describe the various resources, states, behaviors and collaborative relationships in the data link system, and uses specific random variables to represent component states and their durations in order to simulate the dynamic operation process of the data link system. Three typical services of the UAV data link system – uplink remote control, downlink telemetry and downlink mission load transmission – are modeled and simulated using the Pipe software tool, which can accurately characterize the synchronization, concurrency, distribution, conflict, and resource sharing or competition of the data link, so that the reliability of the data link with dynamic reconfiguration behavior in a multi-service context can be evaluated. Finally, a case analysis shows that the GSPN method can obtain reliable reliability evaluation results for the data link.

12:15
Physics-Informed Neural Network for Online State of Health Estimation of Lithium-ion Batteries
PRESENTER: Fusheng Jiang

ABSTRACT. This paper presents a novel approach for estimating the state of health (SOH) of lithium-ion batteries, which addresses the challenge of being unable to measure the internal cell temperature during operation. The proposed approach, termed physics-informed neural network (PINN), integrates physical prior knowledge with measurable data to estimate the SOH of the batteries. To achieve this, an equivalent circuit model is established to characterize the electrical behaviour of the batteries. Additionally, an electric-thermal partial differential equation is established to describe the batteries' heat generation mechanism and heat transfer process, and the batteries' instantaneous temperature field is reconstructed based on the PINN model. Finally, the online estimation of the lithium-ion batteries' SOH is realized using the piecewise Arrhenius model. The simulation and experimental results show that the proposed approach achieves an average error of 0.37% in the temperature field reconstruction of the lithium-ion batteries and an average error of 0.15% in the online SOH estimation, even when the internal cell temperature cannot be measured.
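
As a rough editorial illustration of the physics-informed idea (not the paper's electric-thermal model, whose geometry, parameters and coupling are more involved), the PyTorch sketch below trains a network T(x, t) against sparse temperature measurements while penalising the residual of an assumed 1-D heat equation dT/dt - alpha*d2T/dx2 - q = 0 at collocation points.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
alpha, q = 1e-2, 0.05                        # assumed diffusivity and heat source

def pde_residual(x, t):
    # Residual of the assumed 1-D heat equation at (x, t).
    x.requires_grad_(True); t.requires_grad_(True)
    T = net(torch.cat([x, t], dim=1))
    dT_dt = torch.autograd.grad(T.sum(), t, create_graph=True)[0]
    dT_dx = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    d2T_dx2 = torch.autograd.grad(dT_dx.sum(), x, create_graph=True)[0]
    return dT_dt - alpha * d2T_dx2 - q

# toy measurement data (surface at x=1) and interior collocation points
x_m = torch.ones(20, 1); t_m = torch.linspace(0, 1, 20).reshape(-1, 1)
T_m = 25 + 5 * t_m                            # made-up measured warming trend
x_c = torch.rand(200, 1); t_c = torch.rand(200, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss_data = ((net(torch.cat([x_m, t_m], dim=1)) - T_m) ** 2).mean()
    loss_pde = (pde_residual(x_c.clone(), t_c.clone()) ** 2).mean()
    loss = loss_data + loss_pde               # data fit + physics residual
    loss.backward()
    opt.step()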

12:45-13:45 Lunch Break
13:50-14:30 Session 11: Plenary session: Professor Ingrid Utne, NTNU. Title: Risk-aware autonomous systems for safe and intelligent decision making

Dr. Ingrid Bouwer Utne is a Professor at the Department of Marine Technology at the Norwegian University of Science and Technology (NTNU). Her main research area is risk assessment and modeling of marine and maritime systems. Utne started her career in the marine domain as a young officer onboard two Norwegian frigates, and among other things she sailed with NATO's Immediate Reaction Force. Later, she worked at the research institute SINTEF and in industry, and she was a visiting scholar at UC Berkeley, where she was a member of the Deepwater Horizon Study Group (DHSG) at the Center for Catastrophic Risk Management. The DHSG served as advisor to the US Presidential Commission, authorities, and the public on issues related to the Macondo blowout. In recent years she has focused her research specifically on improving the safety and intelligence of autonomous systems, as part of interdisciplinary work in the Center of Excellence on Autonomous Marine Operations and Systems (NTNU AMOS).

14:35-15:50 Session 12A: Prognostics and Systems Health Management II

Prognostics and Systems Health Management II

Location: Room 100/3023
14:35
Confidence interval for RUL: a new approach based on time transformation and reliability theory
PRESENTER: Pierre Dersin

ABSTRACT. Remaining useful life (RUL) is the key reliability performance metric driving decisions and strategies for predictive maintenance and asset management. From a risk-informed perspective, RUL estimates must take data uncertainty into account and adequately quantify this uncertainty via confidence intervals. A new approach, recently introduced in [1], allows confidence intervals for RUL to be derived analytically by exploiting a nonlinear transformation of the time variable. This approach uses the time transformation to make the mean residual life (MRL) a linearly decreasing function of the transformed time. For a linear MRL, explicit confidence bounds for the RUL are easily derived [2] and then mapped back to the physical domain through the inverse time transformation. A case study tests the method on the prediction of the remaining useful life of LEDs that have undergone accelerated degradation tests, and a confidence-bound analysis is provided. The example demonstrates the usefulness of the proposed method in situations where only a limited amount of degradation data is available. LEDs fail when the luminous flux depreciation exceeds a maximum threshold; the corresponding time to failure is therefore the first hitting time. A realistic assumption for modeling that random variable is a Weibull distribution, for which the above time transformation is carried out explicitly. It is then shown that, if an alternative model (a Gamma distribution with an appropriate shape factor) had been adopted instead, the results would be quite close to those obtained initially. The key parameter is the slope of the MRL in the transformed time; it can be explicitly related to the shape factor of the Weibull or Gamma distribution. That parameter, which lies between 0 and 1, is used to build the confidence interval, and it is shown that the larger its value (the steeper the slope, i.e., the faster the degradation), the narrower the confidence interval. Similar results can be obtained with a Wiener or a gamma process. References: [1] Dersin, P., Modeling Remaining Useful Life Dynamics in Reliability Engineering, CRC Press, Taylor & Francis, to appear, 2023. [2] Dersin, P., "The Class of Life Time Distributions with a Mean Residual Life Linear in Time: Application to PHM", in Safety & Reliability: Safe Societies in a Changing World, Eds. S. Haugen, A. Barros et al., CRC Press, 2018, pp. 1093-1099.
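
For readers less familiar with the underlying reliability relations, the following identities are standard (they are not the paper's contribution, which lies in the time transformation that makes the MRL linear): both the MRL and the RUL distribution follow from the survival function $S$, and confidence bounds for the RUL are quantiles of the conditional survival,

$$ m(t) = \mathbb{E}[T - t \mid T > t] = \frac{1}{S(t)}\int_t^{\infty} S(u)\,du, \qquad \Pr(\mathrm{RUL}_t > q) = \frac{S(t+q)}{S(t)}, $$

so a two-sided $(1-\alpha)$ confidence interval $[q_L, q_U]$ for $\mathrm{RUL}_t$ is obtained by solving $S(t+q_L)/S(t) = 1-\alpha/2$ and $S(t+q_U)/S(t) = \alpha/2$. When a transformed time makes $m(\cdot)$ linear, these quantile equations admit explicit solutions, which is what [1, 2] exploit.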

14:50
Uncertainty quantification of different data sources with regard to a LSTM analysis of grinded surfaces

ABSTRACT. To improve the conventional methods of condition monitoring, a new image processing analysis approach is needed to get a faster and more cost-effective analysis of produced surfaces. For this reason, different optical techniques based on image analysis have been developed over the past years.

In this study, fine grinded surface images have been generated under constant boundary conditions in a test rig built up in a lab. The gathered image material in combination with the classical measured surface topography values is used as the training data for machine learning analyses.

The image of each grinded surface is analyzed with respect to its measured arithmetic average roughness value (Ra) using Recurrent Neural Networks (in this case LSTMs). LSTMs are a type of machine learning algorithm that is particularly well suited to analyses based on time series and other sequential data.

In this paper a possible optimization potential of the available databases is analyzed. For this purpose, two different sets of images with various resolutions were taken under the same conditions. Since the data plays an essential role in the training of machine learning models, the challenge in practice is often to find cost-efficient, fast and at the same time process-adaptable measurement methods that also have sufficient accuracy. Thus, the target values recorded with the tactile measurement method are compared to a more precise confocal/optical measurement method. This results in two data sets with unequal, imbalanced distributions and different statistical variances.

The entire parameter study regarding the network topology and parameter settings was performed prior to this study. In this paper, only the most performant settings are used as starting points for further optimization and uncertainty quantification. The approach of optimizing the algorithm results and identifying a reliable and reproducible LSTM model, which performs well independently of the choice of the randomly sampled training data, is presented in detail. Finally, the performance of the models trained with the optically measured data is compared with the models from the tactile measured database.
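
The following PyTorch sketch illustrates the kind of sequence-to-scalar regression described above; the tensor shapes, feature extraction and network size are assumptions for illustration, not the study's tuned topology.

import torch
import torch.nn as nn

class RaRegressor(nn.Module):
    # LSTM over image-derived line profiles, regressing one Ra value per image.
    def __init__(self, n_features=64, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last step's hidden state

model = RaRegressor()
profiles = torch.rand(8, 120, 64)           # 8 images, 120 scan lines, 64 pixels each
ra_measured = torch.rand(8, 1)              # tactile or confocal target values
loss = nn.MSELoss()(model(profiles), ra_measured)
loss.backward()
print(float(loss))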

15:05
Fully Unsupervised Fault Detection in Solar Power Plants using Physics-Informed Deep Learning

ABSTRACT. Machine learning algorithms for anomaly detection often assume training with historical data gathered under normal conditions, and detect anomalies based on large residuals at inference time. In real-world applications, labelled anomaly-free data is most often unavailable. In fact, a common situation is that the training data is contaminated with an unknown fraction of anomalies or faults of the same type we aim to detect. In this case, training residual-based models with the contaminated data often leads to increased missed detections and/or false alarms. While this challenge is rather common, in particular in technical fault detection setups, it is only rarely addressed in the scientific literature.

In this paper we address this problem by introducing a data refinement algorithm that is capable of cleaning the contaminated training data in a fully unsupervised manner, and apply the algorithm to a problem of fault detection in grid-scale solar power plants. The data refinement framework is based on an original physics informed deep learning classification algorithm that would require healthy data as its input, in order to generate from it synthetic faulty data and train a binary classifier. We show that in order to achieve high fault detection performance, it is essential to avoid contamination of the original healthy data with unlabelled faults. To this end, we introduce an algorithm that isolates the healthy data in a fully unsupervised manner prior to training the binary classifier. We test our algorithm with field data from an operational solar power plant which includes contamination of unlabelled faulty data and demonstrate its high performance. In addition, we demonstrate the robustness of the proposed refinement method against an increasing fraction of faults in the training data.

15:20
Interpretation of influential factors for AI-based anomaly detection
PRESENTER: Sheng Ding

ABSTRACT. This paper focuses on analyzing the significant factors that affect the accuracy and dependability of AI-based time series anomaly detection. The objective is to provide comprehensive insights into interpreting these factors, and we explore their impact on performance. Our study's outcomes can assist researchers and practitioners in selecting the most appropriate approaches for anomaly detection tasks in diverse domains.

15:35
Investigation on the capacity of deep learning to handle uncertainties in remaining useful life prediction
PRESENTER: Zeina Al Masry

ABSTRACT. Remaining useful life (RUL) prediction is subjected to multiple uncertainty sources, such as measurement errors, operating conditions, and model representation capability. The quantification of the prediction uncertainty is important for assisting decision-making. In literature, stochastic processes have proven their efficiency in handling uncertainties in prognostics by providing RUL distribution. However, they have limitations in their adaptability to capture the dynamic behaviors of complex systems. To address this issue, it is recommended to employ deep learning (DL) methods that usually generate point-wise RUL predictions instead of RUL distribution. Therefore, the objective of this work is to investigate the capacity of DL methods to manipulate uncertainty in RUL predictions. Particularly, the probabilistic deep learning (PDL) framework is used to predict the RUL distribution instead of a point-wise RUL value. The obtained results by PDL are compared with the analytic solutions of the stochastic processes to highlight the uncertainty management capacity of PDL.
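
A generic sketch of what a probabilistic deep learning RUL model can look like (an assumption about the setup, not the paper's exact PDL framework): the network outputs a mean and a log-variance and is trained with the Gaussian negative log-likelihood, so every prediction is a distribution rather than a point value.

import torch
import torch.nn as nn

class ProbRUL(nn.Module):
    def __init__(self, n_features=14):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
        self.mean = nn.Linear(64, 1)
        self.log_var = nn.Linear(64, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

model = ProbRUL()
x = torch.rand(32, 14)                      # e.g. one sensor snapshot per unit
rul = torch.rand(32, 1) * 100               # toy target RUL values

mu, log_var = model(x)
nll = 0.5 * (log_var + (rul - mu) ** 2 / log_var.exp()).mean()   # Gaussian NLL
nll.backward()
# 90% predictive interval from the learned distribution:
sigma = (0.5 * log_var).exp()
lower, upper = mu - 1.645 * sigma, mu + 1.645 * sigma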

14:35-15:50 Session 12B: Innovative Computing Technologies in Reliability and Safety

Innovative Computing Technologies in Reliability and Safety

14:35
Improvement of a method of jam detection on conveyor belt in a waste sorting plant.
PRESENTER: Calliane You

ABSTRACT. Context: In France, the waste produced by everyday human life is increasing (+35% from 2005 to 2017) [1], so new waste sorting plants are built every year. Meanwhile, current waste sorting plants have to handle this increase, which causes failures of up to 10% of production time. Less production time means less recyclable waste sorted, hence more incinerated or buried material and more new raw materials consumed, which is bad for the environment. In waste sorting plants, when waste gets stuck, a jam starts that forces the conveyor belt motor until the conveyor belt stops. These slow-dynamic jams on conveyor belts, and the failures requiring maintenance, are the main issues, and they need to be detected as early as possible. Multiple aspects cause failures. Waste is pre-sorted by citizens, who often choose whichever bin is available, so the input waste is composed of non-recyclable waste, recyclable waste, and bulky items. Moreover, one waste plant may recycle a material while another cannot, depending on the installed machines and legislation. These constraints add up to a variable flow of waste of different humidity, density, shapes and materials, liable to get stuck inside the conveyor belts. This makes production unpredictable.

Problem statement: A supervised machine learning method (a k-Nearest Neighbors classifier) was developed in previous work [2] with positive results in detecting potential jams before they occur. However, it requires expertise and a time-consuming manual process to build a specific labelled training data set. In addition, it is difficult to collect usable jam data [3] to build such a training data set. This does not scale to the hundreds of conveyor belts that industry needs to monitor. It is also essential not to have to build as many training data sets as there are conveyor belts, and to be able to adapt one training set to multiple conveyor belts.

Proposed method: Artificial intelligence applied to time series to detect breaks in signals [4]; a simple sketch of this idea is given after the references. This work uses real-world data from an industrial environment made available by Aktid, a builder of waste sorting plants.

Results: The results are compared to each other based on the accuracy of the prediction of known jams and on the experts' interpretation.

References: [1] Ademe. (2020). Déchets Chiffres-clés - Edition 2020, 39. La librairie ADEME, Angers. [2] You, C., Adrot, O., and Flaus, JM. (2022). Jam detection in waste sorting conveyor belt based on k-Nearest Neighbors. In Leva, M.C., Patelli, E., Podofillini, L. and Wilson, S. (©2022 ESREL2022 Organizers), Proceedings of the 32nd European Safety and Reliability Conference (ESREL 2022), 8. Research Publishing, Singapore. [3] Bruggemann, D., Hinz, M., and Bracke, S. (2022). AN APPLICATION OF SEMI-SUPERVISED LEARNING TO SPARSELY LABELLED DATA. In Leva, M.C., Patelli, E., Podofillini, L. and Wilson, S. (©2022 ESREL2022 Organizers), Proceedings of the 32nd European Safety and Reliability Conference (ESREL 2022), 8. Research Publishing, Singapore. [4] Šprogar, M., Colnarič, M., and Verber, D. (2021). On Data Windows for Fault Detection with Neural Networks. IFAC-PapersOnLine.
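
The sketch below (referred to in the proposed-method paragraph above) is a deliberately simple illustration of break detection on a conveyor signal and is not the authors' or Aktid's production algorithm: a sliding-window z-score flags a sustained shift of the recent motor-current mean away from its long-term baseline, the signature of a slowly building jam.

import numpy as np

def break_score(signal, long_win=600, short_win=60):
    # z-score of the recent window mean against the long-term baseline
    scores = np.zeros(len(signal))
    for i in range(long_win + short_win, len(signal)):
        baseline = signal[i - long_win - short_win:i - short_win]
        recent = signal[i - short_win:i]
        sigma = baseline.std() + 1e-9
        scores[i] = (recent.mean() - baseline.mean()) / sigma
    return scores

rng = np.random.default_rng(0)
current = rng.normal(10.0, 0.3, 2000)        # healthy motor-current signal
current[1500:] += np.linspace(0, 3, 500)      # slow drift as waste accumulates
alarm = break_score(current) > 4              # threshold to be tuned with experts
print("first alarm index:", int(np.argmax(alarm)))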

14:50
SafetyKube: Towards Orchestration at the Edge for Critical Production Systems
PRESENTER: Yousuf Al-Obaidi

ABSTRACT. Due to various market trends, such as changeable and/or lot-size-1 manufacturing, production systems are under high pressure to become more flexible than they are today—a (r)evolution often referred to as Industry 4.0 [1]. While this transformation is already challenging for the physical assets involved, it is equally challenging for the digital infrastructure that operates the production [2, 3]. The digital approach that is taking form uses the edge computing concept to address the challenges of timely reconfiguration (or, more generally, flexible orchestration) of computing, networking, and storage resources. However, these particular challenges are already being addressed by research and implementations [4]. What is lacking is the consideration of safety aspects, which are essential in critical production environments. Software containers and their orchestrators are a promising technology for implementing the edge computing concept in industrial settings. In this work, we analyze generic edge computing workloads for hazards and risks related to the orchestration process using the FMEA technique. In industrial production systems, software needs to be designed and implemented while respecting the software safety requirements. Software can operate safety-critical machinery or can be responsible for implementing safety control actions. The operation of this type of software relies on correct and timely execution. As a result, the orchestrator must be able to guarantee the correct and appropriate deployment of software onto the cluster nodes to facilitate safe execution. The goal of the safety analysis is to enable an unambiguous and correct description (or translation) of the safety-related requirements of the orchestrated containerized software application. Our analysis is performed on an established orchestration solution (Kubernetes), and we propose new components (description files, a custom controller, a custom scheduler) that enable critical workloads to be managed with reduced risk. The new components serve the following goals: • A strict interface to create and describe Kubernetes safety-critical objects. • A safety-context-aware custom Kubernetes controller. • A scheduler capable of real-time schedulability analysis (a sketch of such an admission check is given after the references). • Runtime overload monitoring. We then implement our solution for an industrial pick-and-place application. Finally, we discuss the benefits and drawbacks of our approach and highlight future directions for research to make safe orchestration a reality.

[1] Heiner Lasi, Peter Fettke, Hans-Georg Kemper, Thomas Feld, and Michael Hoffmann. 2014. Industry 4.0. Business & information systems engineering 6, 4 (2014), 239–242. [2] Cohen, Y., Faccio, M., Pilati, F. et al. Design and management of digital manufacturing and assembly systems in the Industry 4.0 era . Int J Adv Manuf Technol 105, 3565–3577 (2019). https://doi.org/10.1007/s00170-019-04595-0. [3] Omar Jaradat, Irfan Sljivo, Ibrahim Habli, and Richard Hawkins. 2017. Challenges of safety assurance for industry 4.0. In 2017 13th European Dependable Computing Conference (EDCC). IEEE, 103–106. [4] Gill, S.S., 2022. A manifesto for modern fog and edge computing: Vision, new paradigms, opportunities, and future directions. In Operationalizing Multi-Cloud Environments (pp. 237-253). Springer, Cham.
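
As referenced in the goal list above, the following Python sketch shows the style of admission check a schedulability-aware scheduler could run before placing a critical container on a node. It is a textbook fixed-priority response-time analysis under assumed (C, T) task parameters with implicit deadlines, not the SafetyKube implementation.

import math

def response_times(tasks):
    """tasks: list of (worst-case execution time C, period T).
    Returns (C, T, response time) per task, or None if any deadline D = T is missed."""
    tasks = sorted(tasks, key=lambda t: t[1])          # rate-monotonic priority order
    results = []
    for i, (C, T) in enumerate(tasks):
        R, prev = C, 0.0
        while R != prev:                               # fixed-point iteration
            prev = R
            R = C + sum(math.ceil(prev / Tj) * Cj for Cj, Tj in tasks[:i])
            if R > T:
                return None                            # deadline missed -> reject
        results.append((C, T, R))
    return results

node_tasks = [(2, 10), (3, 20)]                        # workloads already placed
candidate = (5, 50)                                    # container asking for admission
print(response_times(node_tasks + [candidate]))        # None would mean "reject"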

15:05
An automatic live load survey method based on multi-source Internet data and computer vision
PRESENTER: Chi Xu

ABSTRACT. The measured live load forms the data basis for reliability analyses. This study focuses on the amplitude measurement of the sustained load, which is an essential component of the live load. Traditional survey methods are characterized by manual, on-site operation, which can lead to a series of problems including high cost, low efficiency and occupant resistance. Taking full advantage of abundant Internet resources and computer vision technology, a new survey method is proposed to realize an automatic, online investigation of the load amplitude. The amplitude statistics are derived from survey data on object weights, room areas and object quantities. Specifically, the object weights and room areas are directly acquired from product information on e-commerce websites and residence information on real estate websites, respectively. The object quantities are identified from room photos on real estate websites. To this end, an object detection model based on the YOLOv4 algorithm is developed. A load investigation of living rooms is used to illustrate the implementation process of the proposed method. The result of a previous survey covering 20040 m2 suggests that 6 types of indoor objects contribute the majority of the load statistics and need to be considered in the detection model. The training, validation and test datasets include 5979, 1000 and 1000 room photos, respectively. The detection model has a mean average precision (mAP) of 62% on the test dataset. For comparison, object quantities in 343 living rooms are obtained both by manual counting and by computer vision. The difference between the manual and automatic survey results is smaller than 20%, which verifies the feasibility and accuracy of the proposed method.
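
As a minimal arithmetic illustration of how detected quantities, scraped weights and the listed room area combine into a load amplitude (the object classes, weights and area below are made-up figures, not the survey's data):

# Hypothetical numbers for illustration only.
object_weights = {"sofa": 60.0, "table": 35.0, "cabinet": 80.0,
                  "chair": 8.0, "tv": 15.0, "bookshelf": 55.0}    # kg, from e-commerce data
detected = {"sofa": 1, "table": 1, "chair": 4, "tv": 1}            # counts from object detection
room_area = 22.5                                                    # m2, from the listing

load_kg = sum(object_weights[k] * n for k, n in detected.items())
load_kn_per_m2 = load_kg * 9.81 / 1000 / room_area
print(f"sustained live load = {load_kn_per_m2:.3f} kN/m2")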

15:20
Multi-label Classification with Embedded Feature Selection for Complex Abnormal Event Diagnosis
PRESENTER: Ji Hyeon Shin

ABSTRACT. A nuclear power plant is a large-scale electrical power generation system composed of hundreds of components. When an abnormal situation occurs in a nuclear power plant, operators have to perform an appropriate diagnosis to alleviate the plant state. This abnormal event diagnosis process is based on the alarms and symptoms described in the abnormal operating procedures. However, when two or more abnormal events occur simultaneously, the plant parameters may show complex changes unlike the alarms and symptoms described. Abnormal event diagnosis models can greatly help operators if they can provide diagnostic information in such difficult situations. In this study, the diagnostic performance of an existing artificial neural network model was improved by applying embedded feature selection to classify complex abnormal events. Embedded feature selection uses the feature importances obtained when a pre-prepared machine learning classifier is trained on the dataset. The parameters selected through this method are only the characteristic parameters for each event, so that the artificial neural network model can perform the diagnosis efficiently. These results enable the abnormal state diagnosis model to provide diagnostic information to operators even in complex situations. In conclusion, this approach can increase the applicability of diagnostic models based on artificial neural networks to actual operator support systems for safer nuclear power plants.
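
A generic sketch of embedded feature selection in Python (not the paper's plant data, labels or tuned models): a tree ensemble ranks plant parameters by importance, a subset is kept, and a neural network is trained only on those parameters; the toy target here is single-label for brevity, whereas the paper addresses multi-label diagnosis.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))                   # 40 simulated plant parameters
y = (X[:, 3] + X[:, 17] > 0).astype(int)          # toy single-label event target

selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    max_features=8, threshold=-np.inf).fit(X, y)  # keep the 8 most important parameters
X_sel = selector.transform(X)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_sel, y)
print("selected parameters:", np.flatnonzero(selector.get_support()))
print("training accuracy:", clf.score(X_sel, y))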

15:35
Quantum Machine Learning for Drowsiness Detection with EEG Signals
PRESENTER: Caio Maior

ABSTRACT. Human reliability is of increasing importance in several areas for the prevention of accidents. Monitoring human biological parameters is one way to follow physiological indicators and detect patterns that may suggest behaviors prone to accidental situations, so that actions can be taken to prevent catastrophes from happening. Electroencephalogram (EEG) data is one source that has been explored in the literature for identifying drowsiness. The latter is one of the main causes of fatigue and can affect the most diverse activities, such as machine operation in the O&G industry. EEG data for drowsiness has already been explored in the literature with classic machine learning methods, such as the Multilayer Perceptron (MLP). Meanwhile, computing technology has been leveraging concepts from quantum mechanics to solve problems with potential gains in computational efficiency, for example in the simulation of quantum systems and in the prime factorization of numbers. In this sense, Quantum Machine Learning (QML) models for classifying states have been tried in several contexts, including EEG signals. Variational Quantum Algorithms (VQA) are a prominent example that implement quantum concepts in classical training structures. Thus, this work aims to classify the sleepiness of operators using QML models. EEG signals are preprocessed in order to extract features specific to this type of data, such as the Higuchi fractal dimension (HFD), complexity and mobility. In addition, statistical features are also computed, such as the mean, variance, root mean square (RMS), peak-to-peak and maximum amplitude. The QML models are trained with different architectures that rely on rotation gates, entanglement gates and quantum circuit layers. The results obtained are compared with classical ML models, such as the MLP. A contribution of this work is to specifically explore the context of drowsiness, which has not yet been analyzed with QML in the literature. Finally, this study provides a proof of concept that these models are suitable for this type of data and can be improved as quantum computing evolves.
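
A small Python sketch of the classical EEG features mentioned above (window length and sampling rate are assumptions): Hjorth mobility and complexity plus simple statistics computed per signal window, ready to feed a classical or quantum classifier; the Higuchi fractal dimension is omitted for brevity.

import numpy as np

def hjorth(x):
    # Hjorth mobility and complexity from the signal and its differences.
    dx = np.diff(x)
    ddx = np.diff(dx)
    mobility = np.sqrt(dx.var() / x.var())
    complexity = np.sqrt(ddx.var() / dx.var()) / mobility
    return mobility, complexity

def features(window):
    mob, comp = hjorth(window)
    return {
        "mean": window.mean(),
        "variance": window.var(),
        "rms": np.sqrt(np.mean(window ** 2)),
        "peak_to_peak": window.max() - window.min(),
        "max_amplitude": np.abs(window).max(),
        "mobility": mob,
        "complexity": comp,
    }

rng = np.random.default_rng(42)
eeg_window = rng.normal(size=2 * 256)       # 2 s of one channel at an assumed 256 Hz
print(features(eeg_window))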

15:50
Model for compiling the specifications for the reconstruction of critical objects
PRESENTER: Dana Prochazkova

ABSTRACT. The specifications of a technical installation are an essential part of its design documentation. They contain the technical, financial, time and other data that determine the fabrication of a functional technical installation. They are also a basic document ensuring the safety of the technical installation, since, in addition to a detailed inventory of works, supplies and services and a statement of quantities of the requested works and supplies, the terms of reference must also contain documentation on how the risks are to be considered in the given case. They must consider the risks associated with both the territory in which the technical installation is placed and the technical installation itself, as well as the risks connected with the expected reactions and conflicts of the given territory with the implementation and operation of the technical installation. The legal requirements for tender conditions in the Czech Republic are regulated by the Building Act (Act No. 183/2006 Coll.) and other laws covering financial, contractual, liability, environmental, insurance and information-protection matters. However, this legislation does not contain specific requirements for the specifications for the reconstruction of technical buildings, especially critical ones. Such reconstructions have specific features, because they are not greenfield constructions and it is necessary to make the most of existing buildings and equipment. Therefore, during the reconstruction it must be considered that: - some parts of the technical installation must be taken over for practical (economic, time and possibly other) reasons, even if they are not ideal from the current point of view, provided they meet the requirements for safety, i.e. reliability and functionality, - some outstanding risks (e.g. errors in the founding of objects, interconnection to networks in the area) cannot be eliminated, as this would require changes in the extensive surroundings, large finances and long-term service limitations, - some essential reconstruction work cannot be done during operation, so operation has to be interrupted, which limits the functionality and reliability of services for users and leads to unacceptable phenomena in human society (therefore, some limitations of present living standards are needed). The Praha metro belongs to the critical transport infrastructure, and great care is therefore taken to ensure its safety at all stages of the life cycle. The submitted model for the preparation of the tender conditions for the reconstruction of a metro station is based on knowledge gathered from the professional literature and on the experience gained during the reconstruction of the Praha metro station located at the intersection of two metro lines in the centre of the capital. It proposes a procedure for compiling the terms of reference for reconstruction and a method for settling the above-mentioned specific reconstruction requirements for critical buildings where safety is a basic feature of their quality.

14:35-15:50 Session 12C: Safety Nuclear Systems I

The objective of this session is to discuss advances in nuclear safety research. Topics discussed include risk modelling, assessment and management.

14:35
Tools for Ensuring a High-Quality Experiment Aimed at the Safe Cooling of Large Heat Flows
PRESENTER: Dana Prochazkova

ABSTRACT. Nuclear technologies are associated with physical processes that lead either to the fission of heavy atomic nuclei when they collide with other nuclei or particles, or to the fusion of the atomic nuclei of lighter elements into the nuclei of heavier elements (nuclear fusion). Both processes release huge amounts of energy which, if not controlled, can damage both the technical equipment in which the process takes place and its surroundings. It is therefore necessary to ensure process safety management, in which the cooling process plays an important role. Reactor cooling is a critical process in all nuclear facilities, and a present problem is the safe cooling of the fusion reactors currently under preparation. The submitted article describes experiments investigating the cooling process using a heat exchanger with the geometry of the hypervapotron, which is envisaged for fusion reactors. The measurement uses a special connection of components that have certain safety limits. To ensure correct results, on which a risk-based design of the hypervapotron device can be based, the experiments must be carried out with high quality and their results must be conclusive. According to present knowledge, the main feature of measurement process quality needs to be safety, which guarantees both correct results and the protection of the lives and health of the people present, property and the working environment. Initial measurements on the experimental loop using the hypervapotron device revealed that the temperature at the thermocouples in the hypervapotron wall does not decrease with increasing coolant flow, as was expected. This indicates that the cooling performance, or the temperature at the thermocouples, is unstable under otherwise constant conditions and even undergoes sudden changes. Therefore, we started regular checks of the measuring equipment, the other accessories and the measurement process using checklists, in order to prevent the occurrence of unacceptable risks that cause large changes in the cooling trend. Their application revealed that the temperatures measured in the hypervapotron wall are significantly affected by changes in the structure of the material from which the seal around the hypervapotron sample is made. As a result, water leaks and steam forms in the measuring area, which adversely affects the temperatures at the thermocouples. These effects occur above a certain coolant flow, when the resistance limit of the seal is exceeded. Therefore, a seal made of a different material must be used, and for use in practice a dependable solution must be found.

14:50
NPP Emergency Response Planning utilizing Tree Search Algorithm and Deep learning Models
PRESENTER: Junyong Bae

ABSTRACT. Currently, responses to emergency situations in a nuclear power plant (NPP) are guided by emergency operating procedures (EOPs). These procedures provide a series of steps for mitigating an emergency, such as parameter monitoring and component activation. There are two ways to organize these procedures: symptom-based and event-based. However, both approaches can be limited in providing an optimized response, as it is challenging to prepare for every combination of events and symptoms. To resolve this problem, we propose an optimal response investigation framework. The proposed framework utilizes data-driven methods, including a tree search algorithm and deep learning. Given a set of responses, the method constructs a tree whose nodes and edges correspond to plant states and responses, respectively. As brute-force tree expansion can be inefficient, a tree search algorithm selectively expands the response scenarios, guided by policy and value-predicting deep learning models. The responses, expressed in the form of the tree, are evaluated based on predictions of future plant parameter trends by deep learning, which has been widely researched over the last decade as a fast-running surrogate for thermal-hydraulic system codes and plant simulators. The proposed framework can continually reinforce itself over time by interacting with the simulators and the system codes. We tested the proposed method using a compact nuclear simulator (CNS) and its emergency response procedures. We believe that this research can advance current research on dynamic procedures and the automation of NPP operations.

15:05
Multi-Step Soil-Structure Interaction Analysis of Nuclear Power Plant Containment Building using Beam-Stick Model Considering Structural Nonlinearity
PRESENTER: Yuree Choi

ABSTRACT. The seismic response of important structures such as a Nuclear Power Plant (NPP) can be significantly affected by soil-structure interaction (SSI) and nonlinear behavior. In particular, since these effects become significant when a strong earthquake occurs, it is essential to conduct a seismic response analysis that considers SSI and structural nonlinearity. A multi-step method is used to consider frequency-dependent soil properties and the nonlinear behavior of the structure. This method transforms the impedance function in the frequency domain into an impulse response in the time domain and applies it to the seismic response analysis in the time domain in a sway-rocking form. The transformation method used to consider the SSI effect is the one proposed by N. Nakamura, which is more accurate than other methods that only consider simultaneous terms [1]. By using this method, the computational cost can be reduced when performing a probabilistic seismic response analysis for the fragility evaluation of the NPP structure. However, the multi-step method has so far been limited to simple beam-stick models. This study applies the multi-step method to the seismic response analysis of more complex beam-stick structural models such as NPP auxiliary buildings. An NPP containment building is used as the target structure for the seismic response analysis. In a previous study, a simple beam-stick model considering only the containment shell was used [2]; in this study, a numerical model with two sticks, including the shield of the structure, is used. The multi-step method is verified through comparison with the seismic response of the structure obtained using ACS SASSI, an SSI analysis software. Since ACS SASSI is unable to conduct nonlinear analysis of the beam-stick model, the analysis method is verified assuming that the structure behaves linearly. To consider the structural nonlinearity of the NPP containment building, the IMK (Ibarra-Medina-Krawinkler) hysteretic model is adopted [3]. Seismic response analyses are performed for various soil profiles and input earthquake intensities, and the ISRS (In-Structure Response Spectrum) is compared. By comparing the ISRS, the effects of SSI and of the nonlinear behavior of the structure on the seismic response are analyzed. Each influences the seismic response of the structure, but when both effects are considered simultaneously, the SSI effect is offset by the large nonlinearity of the structure. Acknowledgement: This research was supported by the UNDERGROUND CITY OF THE FUTURE program funded by the Ministry of Science and ICT. References: [1] Nakamura, N., 2006, Improved Methods to Transform Frequency-dependent Complex Stiffness to Time Domain, Earthquake Engineering & Structural Dynamics, Vol. 35, No. 8, pp. 1037-1050. [2] Choi, Y., Ju, H. and Jung, H.-J., 2022, Time-domain Seismic Analysis of Lumped-mass Model Based on Nuclear Power Plant Considering Soil Impedance Function, Trans. Korean Soc. Noise Vib. Eng., Vol. 32, No. 5, pp. 517-526. [3] Ibarra, L. F., Medina, R. A. and Krawinkler, H., 2005, Hysteretic Models that Incorporate Strength and Stiffness Deterioration, Earthquake Engineering & Structural Dynamics, Vol. 34, No. 12, pp. 1489-1511.

15:20
A Fault Simulation and Monitoring Method of DCS in Nuclear Power Plant Based on Virtual Platform
PRESENTER: Chao Guo

ABSTRACT. The reliable operation of the distributed control system (DCS) is an important guarantee of the economy and safety of nuclear power plants (NPPs). Due to the complexity of functions and structures and the diversity of equipment and interfaces, there may be random or systematic faults in the DCS that cannot be self-detected. Analysing and identifying typical faults of DCS systems and equipment, and establishing the correlation between fault modes and fault phenomena, is an important topic. Based on the commonly used FMEA method, this study collects typical system-level and component-level fault types of DCS systems and establishes a typical fault effect model in three dimensions (severity of the fault consequences, probability of the environmental excitation, and scope of the fault) to describe the fault impact. Fault simulation is an effective way to perform fault analysis. Based on two simulation software packages, a minimised DCS simulation system with fault simulation capability is built, comprising the Level 0 process system, the Level 1 control system, and the Level 2 human-machine interface (HMI). The study further performs the implantation and monitoring of several typical DCS faults on this virtual platform, which helps to validate the fault effects and improve the fault monitoring methods.

15:35
Phoenix HRA Methodology: Preliminary Investigation for Digital Control Rooms, Explicit Time Treatment, and Future Extensions

ABSTRACT. Phoenix is a model-based HRA methodology developed to support the HRA of Nuclear Power Plants (NPPs). Since its development, Phoenix has been applied to several case studies to test methodological and practical aspects. This paper discusses future extensions of Phoenix concerning: i) its use with dynamic PRA tools, and ii) its application to digital control rooms. It also discusses Phoenix Performance Influencing Factors (PIFs) and potential applications to inspection processes.

14:35-15:50 Session 12D: S.33: Challenges and Opportunities for Risk and Resilience of Industrial Plants in the Management of Socio-Technological Systems II
Location: Room 2A/2065
14:35
Railway safety analysis: trends and challenges

ABSTRACT. Rail transport has been changing the mobility of people. The development of railways has focused attention on the safety management of trains and infrastructure, particularly in relation to passengers. This paper focuses on the methods adopted to guarantee a high level of safety in railways, and for this reason it presents a literature analysis of this topic. The authors short-listed 48 articles for in-depth analysis. A bibliometric study was conducted along with a thorough literature assessment evaluating safety management in the railway area. Today, the attention of experts and railway managers is focused on high-speed railways because of their high risk levels. For this reason, many advanced technologies have been developed all over the world, thanks to new artificial intelligence and machine learning tools, to guarantee a constantly high degree of safety. Stations, and specifically the platform zones, are crucial hotspots for railway accidents, particularly suicide events. Some prevention techniques have shown an increase in safety quality and have made rail the safest transport infrastructure. A resilient model applied to railway safety could therefore improve the current level of safety even further.

14:50
Ontologies: discussion of a research protocol on safety management systems articulating social sciences and computer science

ABSTRACT. Industrial risk management in applied settings is increasingly faced with the need to provide a unified conceptual picture favourable to the elaboration of risk assessments and risk monitoring approaches, while at the same time accommodating the use of a plurality of data, knowledge, models and expertise that come from different areas, stem from different points of view (field operators vs. designers), and also reflect different beliefs. A central difficulty is finding bridges between the different levels of abstraction and conceptualization that experts may use, manipulating notions that are only apparently common. It is a problem of conceptualization, because it is the concepts that make it possible to organize the representations, to manipulate the data, the methods and the models, and to coordinate the contributions of the different experts. Here, the use of digital technology is essential, as databases allow the diversity, heterogeneity and dynamic interactions of knowledge to be embraced more easily than paper (or PDF files). The question is how to design the right application or platform to help solve this problem and to help deliver a common operational picture that can be operationally deployed, and what the underpinning concepts are that can serve such a purpose. This paper presents the results of research, started in the framework of the European project Tosca (2013), on the development of ontologies as a support for safety management systems (cf. the Seveso regulation). The thesis that we wish to present in this article is that computer language is indispensable for the design of concepts useful for risk management as soon as these risks are "major" and their management therefore requires a high level of precision. Computer tools should be considered as a research protocol in the human sciences, not only as a support to instantiate concepts that could be elaborated before their computerization. In other words, IT is essential for thinking about major risk management organizations.

15:05
Facility-Level Downtime Estimation Using Only Publicly Available Data Seth Guikema

ABSTRACT. One of the key challenges in natural hazard risk and resilience estimation is estimating the downtime of key infrastructure services at the level of individual facilities such as a residential home, a hospital, or a commercial facility. Here downtime consists both of estimating the probability of a given service (e.g., electric power, drinking water, or cellular communications) being lost as a result of a hazard event together with the conditional distribution of how long that service is out at that location given that it was initially lost. Downtime estimation is critical for a number of different uses. The first is determining which sub-populations in a community face the greatest risk of not having access to essential services after a disruptive event. This is essential to support assessment of inequities in hazard resilience within a community (Logan and Guikema, 2020). A second key use of downtime estimation is to help community members, businesses, government agencies, and other organizations better plan risk mitigation measures. This requires the ability to assess the impacts of specific interventions (e.g., installation of a backup generator at a particular water pumping station). Third, downtime estimation is essential for better pricing of downtime insurance by insurers and better ability to consider downtime insurance as a risk transfer option by commercial entities and others in the community. All of these uses require detailed, facility-level estimation of the likelihood of losing each of the critical infrastructure services and the duration of the outage if an outage occurs. A key challenge in infrastructure downtime estimation is that detailed data about infrastructure system layout and performance models is generally not available outside of the utility operating the system, yet the downtime estimation is needed by many other entities without access to this data. This talk presents an approach for infrastructure downtime estimation that is based on only publicly available data yet yields validated estimates at the facility level. The general steps in this approach are: (1) create a synthetic representation of the infrastructure system layout, (2) create a system-appropriate engineering performance model or approximation of the performance model, (3) simulate hazard loading on the system from hazard events, (4) simulate loss of service at the facility level, and (5) simulate the restoration process at the facility level if outage duration is needed. This approach is demonstrated for power distribution systems, drinking water systems, and cellular communication systems. The advantage of this approach is that it allows detailed, accurate estimation of downtime at a facility level without requiring security-sensitive infrastructure data that is generally not available.

Logan, T. M., & Guikema, S. D. (2020). Reframing resilience: Equitable access to essential services. Risk Analysis, 40(8), 1538-1553.

15:20
Nuclear safety management: A model of nuclear power plant operation based on system dynamics

ABSTRACT. Nuclear power plant operating entities aim to generate electrical energy under safe conditions. From the point of view of nuclear safety, this implies achieving an adequate integration of human, technological, organizational and environmental factors to prevent events that lead to damage to the reactor core. This article presents a qualitative model, based on the system dynamics methodology, that describes the management of the operation of a nuclear power plant. The work is based on previous works, a literature review, and interviews with experts from the nuclear industry. Seven causal reinforcement loops and three causal balancing loops are broken down in detail. Through the postulated dynamics, the model describes how the operating organization of a nuclear power plant manages its nuclear safety and, as an emergent property of the safety management model, shows the various organizational resilience mechanisms used to manage the risk of nuclear accidents. The study results indicate that compliance with the regulatory framework, safety standards, and the recommendations of international organizations are the main reinforcing elements of safety. Additionally, the system's effectiveness in managing events, daily deviations, and breaches found in compliance verification instances, whether routine or spontaneous, is also crucial for ensuring safety. The model also shows that this effectiveness is achieved through the success of safety culture and leadership programs.

15:35
A System Management for Resilience Engineering: a qualitative and quantitative framework proposal

ABSTRACT. In a dynamic environment like a transportation company, accidents are commonplace. For this reason, it is essential to have a qualitative and quantitative model of resilience to manage possible problems. Resilience engineering is significant and increasingly used in complex systems. This paper studies resilience engineering, its applications, and how a System Management for Resilience can be defined and measured. In this regard, a qualitative resilience framework is proposed and applied to a rail company to react to adverse and sudden events. Furthermore, it is crucial to measure a company's level of resilience; to do this, the FAHP and TOPSIS methods were used and suitably reworked to create an appropriate resilience index.
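
For readers unfamiliar with TOPSIS, the following Python sketch shows the standard calculation on made-up criteria, weights and scores (not the rail company's data); the FAHP step that would normally supply the weights is omitted.

import numpy as np

def topsis(matrix, weights, benefit):
    norm = matrix / np.linalg.norm(matrix, axis=0)          # vector normalisation
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal solution
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                      # closeness coefficient

scores = np.array([[7, 5, 300],        # e.g. monitoring, redundancy, recovery time (min)
                   [6, 8, 240],
                   [9, 6, 420]], dtype=float)
weights = np.array([0.4, 0.3, 0.3])    # could come from an FAHP step
benefit = np.array([True, True, False])  # recovery time is a cost criterion
print(topsis(scores, weights, benefit))  # higher = more resilient option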

14:35-15:50 Session 12E: S.15: Digital twin: recent advancements, challenges and real-case applications II
Location: Room 100/4013
14:35
Surrogate Model-Based Calibration of a Flying Satellite Battery Digital Twin

ABSTRACT. At the European Space Agency (ESA), Modeling and Simulation (M&S) plays a fundamental role during the lifetime of a spacecraft, being used from the design phase to testing and during operations in space. In particular, the European Space Operation Center (ESOC) makes use of M&S tools for various tasks such as monitoring and control, procedure validation, training, maintenance, planning and scenario investigation, to mention a few. Moreover, moving towards the digital twin paradigm, simulation models are gaining growing attention, with the expectation of providing ultra-fidelity capabilities to represent the live status and dynamics of flying spacecraft. In this respect, M&S tools embed general physics-based models and disciplines characterized by configurable parameters which have to be calibrated in order to mimic the behavior of the actual flying spacecraft. However, their calibration requires a large number of simulations, which are infeasible to obtain with computationally expensive high-fidelity simulation models. In this light, the present work proposes the use of a surrogate model-based approach for the calibration of simulation models of spacecraft. The approach integrates a computationally inexpensive deep-learning-based surrogate model. The approach's effectiveness is shown by its application to real flying Earth observation satellite data and simulation models.
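
A minimal sketch of the general surrogate-based calibration loop is given below, assuming a toy one-output simulator, a scikit-learn neural-network surrogate and a least-squares fit to observed telemetry; none of these stand-ins correspond to the ESA/ESOC models or data.

import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

# Hypothetical "expensive" simulator: a transient response governed by two parameters
def simulator(theta):
    r, c = theta
    t = np.linspace(0.0, 1.0, 50)
    return 20.0 + 10.0 * r * (1.0 - np.exp(-t / max(c, 1e-3)))

# 1) Training set from a modest number of simulator runs
rng = np.random.default_rng(0)
thetas = rng.uniform([0.1, 0.1], [2.0, 1.0], size=(200, 2))
Y = np.array([simulator(th) for th in thetas])

# 2) Cheap surrogate mapping parameters -> simulated telemetry
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(thetas, Y)

# 3) Calibrate: parameters whose surrogate output best matches the observed telemetry
observed = simulator([1.2, 0.4]) + rng.normal(0.0, 0.05, 50)   # stand-in for flight data
loss = lambda th: np.mean((surrogate.predict(th.reshape(1, -1))[0] - observed) ** 2)
fit = minimize(loss, x0=[1.0, 0.5], bounds=[(0.1, 2.0), (0.1, 1.0)])
print("calibrated parameters:", fit.x)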

14:50
Digital Twin Technology for Risk Management of the safety-critical systems in the energy transition

ABSTRACT. Using digital twin technology, the probability of process failure and the future performance of a product may be examined prior to its final design. This scenario-based testing enables engineers to foresee faults and hazards and immediately implement mitigation strategies. A digital risk management solution may involve the use of process automation, decision automation, digital monitoring, and early warning systems that may provide comprehensive analytics to assist in monitoring compliance status and the current danger level for all risk elements. In order to achieve organisational safety, safety management and its functions must work together to safeguard people and property from unacceptable hazards. Due to the rapid demand for economic activity, safety management in the majority of businesses is under immense pressure to achieve optimal performance in production, which necessitates access to a variety of high-quality safety data.

Digital-twin technology simulates the performance of a real asset in virtual environments. For instance, a physical asset like a sensor-equipped oil and gas platform is adept at collecting real-time data and operational status. Following the data collection, software is utilised to generate a 3D digital representation. The concept of a "digital twin" is to build a digital complement that mirrors the behaviours of its physical counterpart, enabling informational alignment and mapping between the data realm and reality. The real-time data on how a colossal oil rig reacts to certain environmental circumstances can be used to construct a future smart oil rig. Besides reducing data collection delays in system operation, maintenance, or processes, the digital twin can also provide a plentiful supply of high-quality data that is essential for realising the real-time operation, strong interoperability, fast response, and high transparency of the digital-physical asset system.

With the incorporation of risk analysis into the architecture of a digital twin, users can readily identify risk and isolate possible accident scenarios. In the case of process safety threats, incidents with severe consequences will always be possible. Visibility of these implications in both design and operations is a key aspect of risk management. Moving analyses such as the risk assessment procedure into a cloud-based system can revolutionise the delivery of data, and it opens the door for future integration into cloud-based digital twins. In this manner, an analysis such as a risk assessment would become a simple cloud-based process that gets the necessary facility data via the digital twin and offers access to the outputs that can be clearly presented to the user. Integration of the human factor in the digital twin will enable the development of safety-critical systems in the energy transition. As a result of the digital twin concept, it is expected that the industry will regulate how detailed engineering and operational data is stored.

15:05
A Digital Twin Model for Drone Based Distributed Healthcare Network
PRESENTER: Gianluca Filippi

ABSTRACT. Distributed healthcare networks, such as the drone logistic network for medical deliveries studied in this paper, have shown potential for improving the efficiency of healthcare systems. Several studies have examined the impact of drone transportation on medical goods and biological samples. For instance, a trial near Rome demonstrated that drones delivered medical objects in 25 minutes, outperforming road journeys that took 45-60 minutes. Similarly, the effect of drone transport on biological samples was found to be negligible for turnaround times under 4 hours. Noteworthy initiatives include Matternet's collaborations in Berlin and Switzerland, where drones were employed to transport patients' samples and laboratory specimens between hospitals. Tests conducted by Amukele et al. showed that drone transport did not significantly affect microbiological specimens, including blood cultures, flying for 30 minutes. Successful flight tests for medical delivery have also taken place in Spain.

The UK government is investing in developing an autonomous drone logistic network to deliver medical equipment and aid to remote areas. The CAELUS project, funded by the UK Industrial Strategy Future Flight Challenge Fund, aims to explore the utilization of drone delivery systems in the healthcare sector. This paper focuses on the second phase of the project. It presents analyses conducted to create a digital blueprint of the drone logistic network, combining digital twin models and optimization tools.

The digital blueprint serves two primary purposes. Firstly, it facilitates the design process of the drone logistic network by optimizing key performance indicators defined by stakeholders. This virtual simulation-based task involves multi-objective generative network optimization, considering factors such as capital and operational costs, delivery time, and resilience against unexpected events. Resilience refers to the network's ability to recover from failures and absorb adverse events.

To achieve generative network optimization, a biologically-inspired methodology based on the behaviour of the Physarum organism has been developed. This methodology builds upon previous work and has been successful in various engineering problems, including network topology and Steiner tree problems. It involves two integrated steps: generating a sub-optimal delivery network that is progressively optimized and simulating the drone delivery system over the generated network, which can be classified as a vehicle routing problem.

The second task of the digital blueprint involves the operational problem of the network. It enables online simulation of the digital twin during the actual operational life of the drone logistic network and optimizes scheduling and planning. Once the physical network is operational, sensor data from the physical systems can be collected and used to refine the digital twin models. The digital blueprint allows for simulating various scenarios affected by short-term uncertainties and determining optimal actions to take.

14:35-15:50 Session 12F: Risk and Asset Management
Location: Room 100/5017
14:35
A case study to demonstrate the applicability of a risk management framework based on collaborative governance and integrated risk-resilience strategies for smart city lighthouse projects

ABSTRACT. In a recent work, the authors of the present paper presented a framework and methodology for improving risk management in smart city lighthouse projects. The work suggested ways of improving risk management based on the collaborative governance concept and a risk-resilience-based framework. The current paper examines the practical applicability of the theoretical analysis. The aim of the work is to support and give substance to the theoretical findings and recommendations based on a study of a real-life smart city lighthouse project. The study will examine the benefits that the suggested framework can bring to the project and the challenges that the project might face when implementing the suggestions. The Positive CityxChange smart city lighthouse project will be used as the case study. The project has been granted funding from the European Union's Horizon 2020 research and innovation programme and will experiment with the participating cities on how to become leading cities integrating smart positive energy solutions.

14:50
Risk Management in Complex Depollution Systems Engineering and Control

ABSTRACT. Background: Managing the risks associated with complex systems has become increasingly challenging in recent years. This is due to a number of factors, including the evolving nature of threats, the adoption of new technologies, the diversity of components and stakeholders involved, and the interactions between them. As a result, organizations must be more vigilant and proactive in addressing these risks. An example of such a system, where dynamic behavior and emergent properties complicate risk management, is the depollution system [1]. The depollution system is a complex network of elements, such as human operators, scrubbers, and chemical treatment processes, that work together to effectively remove pollutants. However, the intricate interactions between these components can lead to unforeseen consequences, such as the emergence of new pollutants. These emergent behaviors can make it challenging to accurately predict and control the overall performance of the system and to manage the risks associated with its operation.

Aims: The objective of this paper is to improve the management of risk during a depollution system's life cycle, from the project design phase through to implementation and completion. The goal is to develop a comprehensive approach to risk management that accounts for the complexity factors and for requirements of various natures (e.g. regulatory, human, environmental or even financial). This will enable better identification and understanding of potential risks, and more effective strategies for mitigating them.

Methods: Building on the principles of systems engineering, this paper will identify the key concepts that constitute a depollution system, with a focus on those relevant to risk management. By utilizing Model-Based Systems Engineering (MBSE) approaches [2], we will converge towards Domain-Specific Modeling Languages (DSMLs) that will facilitate the evaluation and control of risk throughout the system's lifecycle [3]. This will enable a more comprehensive and holistic approach to risk management that accounts for the complexity and dynamic nature of the system. Subsequently, a computer tool will be used to aid stakeholders in modeling the risks that may be incurred in depollution projects, by leveraging the experience gained from previous projects.

Results: This approach enables stakeholders to better manage risk by considering the complexity of depollution projects, providing a more robust and realistic assessment of potential risks, and enabling stakeholders to develop more effective prevention strategies. This approach will help to ensure the safe and successful implementation of depollution projects.

Conclusion: To sum up, managing risk for complex systems like depollution systems has become more challenging due to increased complexity and interconnectedness. This paper aims to improve risk management by using a comprehensive approach that takes into account the complexity of the system and can be applied throughout the project lifecycle.

References: [1] J. Price et al., "Remediation management of complex sites using an adaptive site management approach," Elsevier, 2023. [2] Y. Baek et al., "A Modeling Method for Model-based Analysis and Design of a System-of-Systems," 2020. [3] C. Mayssa et al., "A systemic, model and data-based method for depollution projects engineering and management," WM2023 Conference, 2023.

15:05
Identifying Required Data for Power Generation O&M Investment Plan Decision Making
PRESENTER: Herry Nugraha

ABSTRACT. Data enables better decision-making as it unveils multiple possible scenarios, provides input for prediction and becomes the critical point for evaluating asset conditions during the asset's lifecycle. This applies to the majority of business sectors, including the operation and maintenance (O&M) of the power generation industry. Throughout the lifecycle of a power plant, utility companies must decide on the implementation of operation and maintenance through investment planning. Additionally, the data available to support the investment plan decision in terms of O&M management are affected by a number of variables. For instance, while managing power generation assets, the available data must demonstrate quality and performance while complying with standards and maintaining the highest level of availability and dependability.

This paper provides an innovative approach to identify possible data and decision-making approaches in O&M investment planning in a power generation company. It takes a case study of PT PLN (Persero), also known as PLN, the electricity provider in Indonesia that focuses on the power generation sector and constantly involves decision-making for power plant O&M through investment planning. The method involves a benchmarking process that compares existing data use in O&M management in PLN to the best practice accessible worldwide. The authors then correlate and classify the data, both from best practices and existent data, for every decision aspect in PLN.

Throughout the course of the study, seven additional decision-making factors, namely, urgency, alternative study, technical feasibility, financial feasibility, land, environmental and social, legal, and Governance, Risk, and Compliance (GRC), are uncovered in the existing investment planning model. It has been determined that these aspects could be backed by data derived from best practices or those already present in the PLN. The benchmarking result is then reviewed to determine the requirement for data availability in the O&M management decision-making process in PLN. The findings were summarized and evaluated to determine the dependency between the data requirements. Through the study of interdependencies, it was also determined that more advanced statistical and probability analysis could sharpen the outcome of the decision-making process, as it could support the cost, risk, and performance forecast, which is advantageous for enhancing investment planning decisions. Furthermore, this paper also recommends potential future improvements that can be implemented in the power generation O&M management investment planning. As a future improvement, a more structured decision-making process, such as the SALVO process, should be implemented to support the effort to create a robust and transparent decision-making process to realize continuous asset value growth. Similar approaches and criteria could be modified according to specific business needs.

15:20
A case study of ecological suitability of mussel and seaweed cultivation using bivariate copula functions
PRESENTER: Rieke Santjer

ABSTRACT. Aquaculture cultivation is gaining importance in the current context of continuous population growth, as a source of (local) food resources and for its potential to be combined with other uses at sea (e.g. offshore energy production or tourism). Consequently, within the European Horizon 2020 project UNITED, the combination of mussel and seaweed cultivation together with wind energy production in the German North Sea is investigated. Here, the feasibility of the mussel Mytilus edulis and the seaweed Saccharina latissima based on their ecological needs is analysed. Ecological data from a three-dimensional hydrodynamic and ecological model covering the European shelf are used. For each of the two species, three relevant variables are selected, with water temperature included for both. In addition, chlorophyll-a and dissolved oxygen are considered for mussels, and dissolved inorganic nitrogen and phosphorus are selected for seaweed. Temperature is selected as the dominant variable, so its daily maxima for the growing months are selected together with the concomitants of the other variables. Gaussian mixture distributions and truncated Gaussian kernel distributions are used to model the marginal distributions of the random variables. Bivariate copulas are fitted for each pair of variables to describe their dependence structure. Finally, probabilities of being within the optimal ranges of the relevant variables are calculated. Chlorophyll-a concentration and temperature are the most limiting variables for mussels and seaweed, respectively. Relatively low probabilities are obtained, since ranges for optimal growth are considered. Generally, it is feasible to cultivate mussels and seaweed at this location based on the selected ecological variables, as the probability of variables reaching values outside the growth limits for the species is low.
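
A compressed illustration of the copula step is sketched below for one pair of variables, using simple parametric marginals and a Gaussian copula in place of the Gaussian-mixture/kernel marginals and fitted bivariate copulas of the study; all data and optimal ranges are synthetic placeholders.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
temp = rng.normal(14.0, 2.0, 1000)              # stand-in daily-maximum temperature, deg C
chla = np.exp(rng.normal(1.0, 0.4, 1000))       # stand-in chlorophyll-a, mg m-3

f_temp = stats.norm(*stats.norm.fit(temp))
f_chla = stats.lognorm(*stats.lognorm.fit(chla, floc=0))

# Gaussian copula: correlation estimated on the normal scores of the data
z = np.column_stack([stats.norm.ppf(f_temp.cdf(temp)), stats.norm.ppf(f_chla.cdf(chla))])
rho = np.corrcoef(z.T)[0, 1]
cop = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

# Probability that both variables fall inside assumed optimal growth ranges
u = stats.norm.cdf(cop.rvs(size=200_000, random_state=4))
in_temp = (u[:, 0] > f_temp.cdf(8.0)) & (u[:, 0] < f_temp.cdf(18.0))
in_chla = (u[:, 1] > f_chla.cdf(1.5)) & (u[:, 1] < f_chla.cdf(10.0))
print("P(optimal conditions):", round(np.mean(in_temp & in_chla), 3))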

14:35-15:50 Session 12G: Aeronautics and Aerospace III
14:35
Risk Management in Aviation Infrastructure: A Statistical Analysis of Selected European Countries

ABSTRACT. The article focuses on the statistical analysis of aviation infrastructure in selected European countries. The research explores various factors that contribute to the aviation infrastructure and evaluates their impact on risk management. The paper presents a detailed description of the aviation infrastructure and its division, including graphs and data analysis. The study found disparities in the level of aviation infrastructure development among the selected countries, with Germany having significantly more hub airports than the other countries. The analysis showed that most of the airports were modernized rather than newly built, due to the ease and cost-effectiveness of expanding existing facilities. The aim of the paper is to determine the level of aviation infrastructure advancement and to highlight potential risks. The results of the study provide insights into the state of aviation infrastructure and inform risk management strategies for improving the safety and quality of air transportation.

14:50
Statistical analysis of aviation incidents caused by crew communication problems
PRESENTER: Tadeusz Zaworski

ABSTRACT. The purpose of this article is to present a statistical analysis of aviation incidents caused by crew communication problems. The focus was to give an idea of the role that communication plays in aviation and its impact on safety. A database of forty-five aviation incidents was created and the reports were analyzed. As a result, it was possible to isolate the factors that contributed to aviation disasters and classify them according to their percentage contribution. The results of the study made it possible to isolate each type of communication that realistically poses a problem in aviation. Interaction in pilot-crew and pilot-controller contact was taken into account. The paper contains a number of conclusions regarding the extracted variables and factors that affected the aviation accidents studied.

15:05
Survival Analysis of Small Satellites
PRESENTER: Kaiqi Xu

ABSTRACT. Smaller launching systems and highly reliable components have become dominant demands in the small satellite sector. Notwithstanding this, small satellites have been the cause of the majority of space debris. It is therefore correct to ask: what is the survivability of small satellites? To address this question, a small satellite database was constructed based on 4567 small satellites deployed from 1990 to 2022. All satellites are restricted to a launch mass of no more than 500 kg. In this paper, we present the survival distributions for different types of satellites based on satellite mass category, standard compliance and subsystem contribution. Our findings show that, after a successful launch, microsatellites and minisatellites are equally reliable within the first 20 years on orbit, with approximately a 98% reliability rate. Compared to microsatellites and minisatellites, picosatellites and nanosatellites exhibit high infant mortality and short lifetimes of no more than 10 years. We have found that small satellites designed to ECSS (European Cooperation for Space Standardization) and NASA (National Aeronautics and Space Administration) standards have a relatively higher reliability rate than satellites that comply with JAXA and other standards. With respect to subsystem behaviour, the communication system is the major contributor to small satellite failure, thus designers should pay more attention to addressing no-signal and software disconnection-related problems.
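
The survival-distribution step can be illustrated with a standard Kaplan-Meier fit; the sketch below uses the lifelines library on synthetic lifetimes and censoring flags as stand-ins for the 4567-satellite database and its mass categories.

import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
raw_nano = rng.exponential(6.0, 200)            # synthetic pico/nano lifetimes, years
t_nano = np.minimum(raw_nano, 10.0)             # observation censored at 10 years
e_nano = (raw_nano <= 10.0).astype(int)         # 1 = failure observed, 0 = censored
raw_mini = rng.exponential(40.0, 200)           # synthetic micro/mini lifetimes, years
t_mini = np.minimum(raw_mini, 20.0)
e_mini = (raw_mini <= 20.0).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(t_nano, event_observed=e_nano, label="pico/nano")
print("10-year survival (pico/nano):", float(kmf.predict(10.0)))
kmf.fit(t_mini, event_observed=e_mini, label="micro/mini")
print("20-year survival (micro/mini):", float(kmf.predict(20.0)))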

14:35-15:50 Session 12H: Cyber Physical Systems
14:35
A pragmatic mission-centric approach to ICT risk and security – Autonomous vehicles as a case

ABSTRACT. In the military domain, cyber security has long been characterized by its focus on data confidentiality protection and strict prescriptive requirements aimed at reducing the risk of unauthorized data access through strong logical and physical isolation of the systems handling the data. Modern warfare, where access to information plays a critical role and cyberspace has been recognized as a domain of operations, has been challenging this approach to security.

Most military platforms and processes are becoming highly digitalized, and access to the right information at the right time is a fundamental requirement for conducting successful operations. The need to protect information confidentiality must now be weighed against the cost of losing operational effect in the form of timely access to critical information, so acceptable trade-offs must be identified. Additionally, as most physical platforms like planes, ships, and underwater and ground vehicles are becoming unmanned or even autonomous, cyber security must also encompass protecting safety-critical data and systems.

In this context, there may emerge potentially conflicting requirements concerning how different types of data need to be protected and why, and this can lead to alternative, but equally valid, security solutions where different trade-offs need to be accepted. For instance, one would want to protect the integrity of sensor data on a self-driving vehicle to prevent malicious spoofing by authenticating all data, but this can create a delay in the brakes’ control system that can lead to a safety hazard [1]. How can one then find an acceptable security solution that can satisfy both security requirements, or argue that one concern should be prioritized above the other in certain scenarios?

In the paper, we propose a mission-centric approach [2] that can help formulate risk and security requirements as a function of the high-level capabilities that are critical to protect for the success of the mission, rather than exclusively as a function of predefined regulatory requirements. This can better support the formulation of structured arguments for how different security concerns compare to each other within the context of a given mission.

We argue that this approach constitutes a better starting point to identify and evaluate alternative trade-offs as the basis to derive more effective security solutions, compared to applying existing security standards aiming mainly at achieving compliance. With this, we take a first step from a rule-based to a risk-based approach to security in the military domain.

As a case-study, we use autonomous vehicles as they are a prime example of how the integration of cyber components in other platforms that are not natively digital gives rise to complex systems and conflicting security requirements where trade-offs are unavoidable and new approaches to security are needed.

REFERENCES

[1] Jovanov, Ilija, and Miroslav Pajic. "Relaxing integrity requirements for attack-resilient cyber-physical systems." IEEE Transactions on Automatic Control 64.12 (2019): 4843-4858. [2] B. T. Carter, G. Bakirtzis, C. R. Elks and C. H. Fleming, "A systems approach for eliciting mission-centric security requirements," 2018 Annual IEEE International Systems Conference (SysCon), Vancouver, BC, Canada, 2018, pp. 1-8.

14:50
Cybersecurity in railway - alternatives of independent assessors’ involvement in cybersecurity assurance

ABSTRACT. Cybersecurity and related security management have become important issues in railway projects and operations when implementing new digitalised technology. The railway industry is facing an increasing degree of digitalisation, as is the rest of society. CENELEC issued CLC/TS 50701 in 2021, which may become the most important basis for railway actors to manage railway cybersecurity in the context of the RAMS lifecycle process (EN 50126-1: 2017). By connecting cybersecurity to the railway application lifecycles, this standard supports the identification of system requirements related to cybersecurity and the preparation of the associated documentation for security assurance and system acceptance. In analogy with the role of an independent safety assessor acting in the railway safety domain, the authors believe in, and suggest, an independent cybersecurity assessor to be involved at certain levels in system assurance and acceptance with regard to cybersecurity. This paper presents alternatives for such involvement and discusses the possible advantages and disadvantages of each alternative based on a set of criteria or parameters. Recommendations with respect to involvement are based on qualitative evaluations of the mentioned criteria against the alternatives. In general, involvement of an independent cybersecurity assessor seems appropriate for new long-lasting projects that involve significant investment costs to society. For projects or cases where digitalised sub-systems or components are difficult to separate from, and may become closely interconnected with, safety functions, an independent cybersecurity assessor should be involved in cybersecurity assurance. The preliminary results derive from discussions among the SINTEF researchers, as well as discussions and testing-out with actors from the railway industry. These opinions have then been compared, i.e., balanced and validated, against findings in the literature, which also covered approaches in other industrial domains.

15:05
Methodological insights for the prevention of cyber-attacks risks in the energy sector: An Empirical Study
PRESENTER: Jean Bertholat

ABSTRACT. The increasing dependence on technology and on virtualization has led to a corresponding increase in vulnerability to cyber-attacks, and the need for specialized approaches to analyze and understand these risks. While current risk management standards provide some guidance (e.g. ISO 31000, 27000), there is a lack of clear and consistent methods commonly shared for evaluating the effectiveness of cyber risks management systems. Comprehensive and well-designed cyber risk analyses can provide valuable insights into organizations' vulnerabilities and potential strategies for mitigating risks.

Our PhD research project is focusing on cyber risk analysis and management in the energy industry and sector. Our goal is to suggest a framework for the identification, the analysis and the management of cyber risks for sociotechnical systems in the energy industry and sector. This framework aims at improving the resilience of these systems to known and emerging cyber-attacks.

One of our main tasks is to organize adequate and accurate databases for cyber risk analysis. These will include data on past cyber incidents, near misses, and cyber accidents, which will enable the identification of attack scenarios and a detailed categorization of risks. Based on this information, relevant methods for cyber risk analysis and management will be proposed and applied to case studies.

This paper aims to provide a first comprehensive overview and critical analysis of the current state of the art in cyber risk analysis, with a special focus on the energy sector. This critical analysis will contribute to framing a methodology for developing tools to foster risk analysis and risk management.

15:20
Increasing effectiveness of development and operation of software for cyber-physical systems

ABSTRACT. Cyber-Physical Systems (CPS) distributed over a large territory require secure communication not only among the various parts of the system, but also with the operation centre. Building their own communication networks is financially demanding for system operators, which is why more or less open communication systems are used. This is connected with higher requirements for the security of the applications operated in a CPS. CPS such as critical infrastructures (for example railways) need to meet high standards in communication security. Responding to new cyber threats is an important part of cyber security, and CPS integrators or suppliers need to be able to provide software updates in time. The effective provision of these services requires effective tools that can identify and eliminate errors, both in the development phase and during operation, that could be exploited to carry out a cyber-attack.

The European project COSMOS (2021) is creating a tool that applies DevOps development technologies from the IT field to the field of embedded systems. A comprehensive understanding of the development issue significantly increases its effectiveness. To explain the issue, we use the V-model described by the EN 50126 (2017) standard. The effectiveness of the development process is greatly influenced by the number of system validations performed before its acceptance. The goal of the tool is to identify and eliminate system vulnerabilities in early phases of development, such as design and implementation, or production. Identifying errors during penetration tests is very time-demanding.

Cyber security is not only a design problem, because the limits and conditions of each system and each facility change over time. This means that the problems connected with CPS cyber security do not end for the CPS makers with the acceptance of the system by the user. For safety reasons, the cyber security condition of each CPS needs to be monitored during operation until the decommissioning of the system. Based on the monitoring results, risk-based maintenance needs to be performed during CPS operation. The demands on risk-based maintenance depend not only on the CPS structure, but also very strongly on the conditions in which they operate.

At the time of delivery to the operator, the system is only resistant to threats known at the time of development. In the paper, we therefore deal with the issue of the effectiveness of the development and maintenance of software during its whole life cycle, as required by patch management in the IEC 62443 (2019) standard. The COSMOS project has developed for this purpose a "Toolset for detection of vulnerability in CPS". This set of tools and its application to the railway system is the subject of this paper. It shows the basic principles of risk-based maintenance of railway CPS systems, which must be respected by their users.

References: COSMOS (2021). DevOps for Complex Cyber-physical Systems. ID: 957254, EU H2020. EN 50126-1 (2017). Railway applications – The Specification and Demonstration of Reliability, Availability, Maintainability and Safety (RAMS). CENELEC, Brussels. IEC 62443 (2019). Security for Industrial Automation and Control Systems. International Electrotechnical Commission.

15:35
On the cyber-emergency preparedness in a resilient organization
PRESENTER: Anurag Shukla

ABSTRACT. In our modern and complex society, cyber security, risk and emergency management, and resilience engineering (RE), as scientific fields, have many common characteristics. They all attempt to provide key concepts, principles, and guidance on how to deal with unexpected events, and in particular how to treat large uncertainties. While practices relating to corporate risk management and emergency management have a long history, since the 1950s, focus on cyber security and resilience engineering has emerged concomitantly in recent years. These areas represent a new way of dealing with emerging risks in cyber-socio-technical systems [1]. In contrast to the conventional risk and security management approaches, founded on failure reporting and probabilistic risk assessments based on historical data, RE looks for ways to enhance resilience in the sense that systems anticipate, monitor, and adapt to variations, disruptions, and surprises [2]. Looking closer at RE concepts and their application in the security management field, this paper aims to explore what characterizes cyber preparedness in resilient enterprises. Exploring these characteristics requires an in-depth understanding of the context in which cyber preparedness takes place. To provide such insight, gathering both qualitative and quantitative data, this explorative study employs a triangulation method in three phases: reviews of related literature to develop a questionnaire and interview guide, web-based surveys with 26 key informants, and two semi-structured interviews with subject matter experts in the cyber domain. Our findings address several areas of improvement that could affect cyber preparedness in enterprises. First, as might be expected from case studies in scientific research, operators in the front line have rather limited information and limited capacity to understand, analyze and process existing data. This highlights a need for enhancing cyber-related knowledge across enterprises, in particular in front-line operations. Moreover, findings indicate that 25% of enterprises in our sample update their cybersecurity risk picture only once a year. Given the increasing trend of cyberattacks, the lack of a more frequently updated risk picture downscales the thoroughness of contingency plans, thus putting companies in a vulnerable situation. An in-depth understanding of how to anticipate, monitor signals and learn in day-to-day activities [3] improves awareness and competence in what characterizes people in organizations with a safety-security culture in a resilient manner. Further, an imbalance between cyber preparedness and the desired state of affairs creates challenges in dealing with cyber-attacks when they occur. Globalization, digitalization and changes in national and international security policies have led to an increased risk of cyber threats, hence the increased need to strengthen cyber preparedness and cyber security in enterprises. Combined with increased cross-border crime, where criminals with malicious intent use the same technology for their gain, cybercrime causes major damage by downsizing enterprises' functionality and operations. In addition, the data show that increased understanding and competence regarding basic principles of ICT security is an essential factor contributing to a greater degree of resilience in the business. [1] Patriarca, R., Falegnami, A., Costantino, F., et al. (2021). [2] Steen, R., & Aven, T. (2011). [3] Hollnagel, E. (Ed.) (2013). Resilience engineering in practice. [4] Nemeth, C. P. and Hollnagel, E. (2021). Advancing Resilient Performance.

14:35-15:50 Session 12I: Accelerated Life Testing & Degradation Testing
14:35
Optimization of Step-stress ADT following Tweedie Exponential Dispersion process

ABSTRACT. In this paper, we focus on the optimization of step-stress accelerated degradation test plans when the degradation process can be modeled by a stochastic Tweedie exponential dispersion (TED) process. The properties of this family of processes, which is progressively taking an important place in research on accelerated testing, are presented first. Secondly, in the context of an optimization based on the D-optimality and V-optimality criteria, we demonstrate, through a generalization of the TED process, the equivalence between a multilevel step-stress accelerated degradation test plan and a simple step-stress accelerated degradation test plan using only the minimum and maximum stress levels. The optimal step-stress accelerated degradation test plan based on these two optimality criteria is then derived, and an application example, based on data collected in [1], is presented to compare the effectiveness of the proposed simple optimal step-stress accelerated degradation test plans with some step-stress accelerated degradation test plans proposed in a previous study. Finally, a simulation study is performed to assess the performance of the proposed step-stress accelerated degradation test plans.

[1] W. Yan, S. Zhang, W. Liu, and Y. Yu (2021). Objective Bayesian estimation for Tweedie exponential dispersion process. Mathematics, 9(21), 2740.
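
To make the setting concrete, the sketch below simulates degradation paths from a gamma process (one member of the exponential dispersion family) under a two-level step-stress plan with a log-linear acceleration relation; all parameter values are assumed for illustration and are unrelated to the data in [1].

import numpy as np

rng = np.random.default_rng(7)
dt, t_switch, t_end = 1.0, 50.0, 100.0           # hours

def rate(stress):
    # Assumed log-linear relation between stress level and degradation rate
    return np.exp(-2.0 + 0.03 * stress)

def simulate_path(s_low=40.0, s_high=80.0, scale=0.5):
    t, x, path = 0.0, 0.0, []
    while t < t_end:
        s = s_low if t < t_switch else s_high    # step up the stress at t_switch
        x += rng.gamma(shape=rate(s) * dt, scale=scale)
        t += dt
        path.append(x)
    return np.array(path)

paths = np.array([simulate_path() for _ in range(500)])
print("mean degradation at the stress switch:", round(paths[:, 49].mean(), 2))
print("mean degradation at test end:", round(paths[:, -1].mean(), 2))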

14:50
Degradation Process Modeling based on Reliability Test and Machine Learning Regression
PRESENTER: Zdenek Vintr

ABSTRACT. A degradation process is the deterioration of an object's internal and external properties, resulting in a decline in performance quality and in the ability to meet design and operational requirements. Modelling the degradation process has received significant attention from reliability and statistical scientists. Many methods have been proposed and developed to model the degradation process and also to predict and estimate reliability measures. Regression methods are essential methods in machine learning and have proved to have huge potential for modelling the deterioration process of objects. In essence, machine learning regression represents a series of methods based on supervised learning and on collecting data either i) from actual object operations or ii) from reliability tests in laboratories. The paper describes the degradation of light-emitting diodes (LED), a type of equipment with many applications in engineering and industry, using machine learning regression methods and based on data collected from reliability tests in laboratories. From that point, the paper assesses the suitability and effectiveness of the methods for modelling the degradation process of LEDs and attempts to identify some of the best regression methods for this object.
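
A compact example of the kind of regression involved is sketched below with scikit-learn on synthetic lumen-maintenance data; the decay model, noise level and regressor choice are placeholders, not the laboratory data or the best-performing method identified in the paper.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic reliability-test data: relative luminous flux vs. operating hours
rng = np.random.default_rng(5)
hours = rng.uniform(0, 10_000, 300)
flux = np.exp(-hours / 40_000) + rng.normal(0, 0.01, 300)     # assumed decay + noise

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(hours.reshape(-1, 1), flux)

# Use the fitted model to interpolate the degradation curve at unobserved times
print("predicted relative flux at 8000 h:", round(model.predict([[8000.0]])[0], 3))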

15:05
Accelerated Life Testing in Maritime Critical Systems

ABSTRACT. Climate change and the melting of ice in the Arctic zone is a major environmental and geopolitical issue. Due to this phenomenon, cargo vessels are able to follow new maritime routes through the Arctic zone, and new energy resource fields are becoming accessible. This change is a major challenge for maritime industry players, affecting the maritime industry in many aspects. One aspect is the shortening of the journey from East Asia to Northern Europe, increasing transportation capacity and finally resulting in significant cost cuts for maritime industry stakeholders. A broad use of the new northern routes for international trade leads to the use of ships that are not constructed according to extreme weather standards. Although multi-state systems and Markov chains are reliable analysis tools for systems' availability assessment, the polar zone's extreme weather conditions affect the availability of the onboard systems, increasing the uncertainty of the assessment. This paper is an attempt to model the effect of such changes on the expected availability of maritime critical systems. Combining the direct effect of polar weather conditions on the onboard systems with Markov chain and multi-state system modelling, Accelerated Life Testing theory is used as an additional research tool, contributing to the reduction of the uncertainty imposed by the new factors.
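
The modelling idea can be illustrated by a small continuous-time Markov model whose failure rates are scaled by an ALT-style acceleration factor representing polar conditions; the states, rates and factor below are assumed, not taken from a real ship system.

import numpy as np

lam_01, lam_12, lam_02 = 1e-3, 5e-3, 2e-4        # degradation / failure rates (per hour)
mu_10, mu_20 = 2e-2, 1e-2                        # repair rates (per hour)

def availability(accel=1.0):
    # Polar weather enters as an acceleration factor on all failure transitions
    a01, a12, a02 = accel * lam_01, accel * lam_12, accel * lam_02
    Q = np.array([[-(a01 + a02), a01,            a02  ],
                  [ mu_10,      -(mu_10 + a12),  a12  ],
                  [ mu_20,       0.0,           -mu_20]])
    # Steady-state probabilities: pi Q = 0 with probabilities summing to 1
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[0] + pi[1]                         # states 0 (OK) and 1 (degraded) are available

print("baseline availability:", round(availability(), 5))
print("polar-route availability (factor 3 on failure rates):", round(availability(3.0), 5))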

15:20
Accelerated Life Cycle Analysis of Lithium-Ion-Batteries under Different Fast Charging Algorithms

ABSTRACT. Range anxiety and long charging times are two inhibitors of electric vehicle market acceptance. Therefore, current research focuses on the development of efficient fast-charging strategies to overcome these challenges, without causing significant battery aging. This research shows the results of six different charging algorithms and their effect on the cycle life of lithium-ion batteries (LIB) in a fast-charging application. All tested charging algorithms were designed to achieve a state-of-charge (SOC) of 80 % after a charging time of 15 minutes, which results in 3.25 Ah of charged capacity for each cell. Furthermore, all tests were performed at 45 °C to reduce the negative influence of the internal resistance and thus achieve a higher charging efficiency [1]. As lithium-plating is less likely to occur at lower SOC levels, an approach to achieve the desired fast-charging times is to charge with a higher current at the beginning of the charging process and reduce the current after a defined period to finalise the charge [2]. Therefore, different boost-charge algorithms were designed [3] and tested, as well as algorithms that considered the simulated anode potential of the cell [4]. These algorithms were then compared to the constant current – constant voltage (CC-CV) algorithm, which is considered the technical standard of LIB charging. The end of a test series was reached when the state-of-health (SOH) relative to the cell's original capacity was below the 80 % threshold, which is considered the End-of-Life (EoL) condition in the automotive industry. After the tests, it was identified that the cells which experienced the highest charging current, even for short periods, experienced the strongest aging acceleration. This became especially visible with the cells that were charged with up to 30 A during the boost-charge phase, as these reached the EoL threshold after just 125 cycles. Cells cycled with the CC-CV algorithm at 13 A performed best, lasting 225 cycles. This can be explained by the maximum current that was used for the different algorithms, as the CC-CV algorithm caused the lowest maximum current during its 15-minute charge time. The anode-potential-based charging algorithms have been shown to last longer than the boost-charge algorithms, but similarly did not last as long as the CC-CV algorithm due to the higher maximum charging currents. The complete results and detailed information on the charging algorithms will be presented in this paper.

[1] Kremzow-Tennie, Simeon & Scholz, Tobias & Pautzke, Friedbert & Popp, Alexander & Fechtner, Heiko & Schmuelling, Benedikt. (2022). A Comprehensive Overview of the Impacting Factors on a Lithium-Ion-Battery’s Overall Efficiency. Power Electronics and Drives. vol. 7. 10.2478/pead-2022-0002. [2] Rangarajan, Sobana & Barsukov, Yevgen & Mukherjee, Partha. (2020). Anode potential controlled charging prevents lithium plating. Journal of Materials Chemistry A. 8. 10.1039/D0TA04467A. [3] Kremzow-Tennie, Simeon & Pautzke, Friedbert & Mecit, Haydar & Scholz, Tobias & Schmuelling, Benedikt. (2021). A Suggestion Towards Improving Electric Vehicle Fast Charging. 10.1007/978-3-658-32266-3_14. [4] Boehm, Kai & Herrmann, Pascal & Zhang, Chungxi & Kremzow-Tennie, Simeon & Parzyszek, Daniel & Pautzke, Friedbert. (2019). Das Forschungsprojekt D-SEe - Durchgängiges Schnellladekonzept für Elektrofahrzeuge.
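
A quick charge-balance check of the figures quoted above can be done in a few lines; the 3-minute boost duration is an assumed example, while the 13 A constant-current value follows directly from delivering 3.25 Ah in 15 minutes.

target_ah, t_total_h = 3.25, 15 / 60                              # charged capacity and time

i_cc = target_ah / t_total_h
print(f"constant current for 15 min: {i_cc:.1f} A")               # -> 13.0 A, as in the tests

i_boost, t_boost_h = 30.0, 3 / 60                                  # assumed boost stage
ah_boost = i_boost * t_boost_h
i_rest = (target_ah - ah_boost) / (t_total_h - t_boost_h)
print(f"boost profile: {i_boost:.0f} A for 3 min, then {i_rest:.2f} A for 12 min")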

15:35
Optimal design of thermal variant stress spectrum based on virtual accelerated life test of the circuit board

ABSTRACT. The spectrum design and lifetime evaluation of high-reliability, long-lifetime aerospace electronic products has been a puzzling problem for engineers and scientific researchers. The fatigue of these products is caused by complicated environmental and operational conditions. The specific work in this study involves the establishment of a fatigue life model of the circuit board under thermal variant stress, model fatigue virtual accelerated life testing (VALT) based on a thermal variant stress spectrum, calculation and analysis of the model's accelerated life based on VALT data, and a method of stress spectrum optimization based on the accelerated model. In this study, focused on the weak node of the circuit board, the fatigue failure model is constructed, and the fatigue life of the model is predicted using the accelerated stress spectrum and the Coffin-Manson formula. Furthermore, the fatigue degradation data of the model under thermal variant stress are obtained by VALT. Considering that the coefficients of the Coffin-Manson formula should be adjusted for different accelerated models, the study combines the S-N curve and simulation data to fit the formula parameters, and calculates the optimal stress spectrum of the accelerated model by using the modified Coffin-Manson formula. Finally, the fatigue life of the model under the optimal accelerated stress is obtained. The method and results can serve as a reference for ALT design for similar products.
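
For orientation, the Coffin-Manson relation referred to above can be evaluated as follows; the fatigue ductility coefficient and exponent used here are generic solder-joint values for illustration, not the parameters fitted from the S-N curve and simulation data in the study.

# Coffin-Manson relation: delta_eps_p / 2 = eps_f * (2 * N_f) ** c, solved for N_f
def coffin_manson_life(delta_eps_p, eps_f=0.325, c=-0.442):
    return 0.5 * (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)

for amp in (0.01, 0.02, 0.04):            # example plastic strain ranges at the weak node
    print(f"delta_eps_p = {amp:.2f} -> N_f = {coffin_manson_life(amp):.0f} cycles")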

15:50-16:20 Coffee Break
16:20-17:35 Session 13A: Prognostics and Systems Health Management III

Prognostics and Systems Health Management III

Location: Room 100/3023
16:20
Identifying Changes in Degradation Stages for an Unsupervised Fault Prognosis Method for Engineering Systems

ABSTRACT. The maintenance strategy known as Condition-Based Maintenance (CBM) has become increasingly popular as it optimizes asset availability by minimizing maintenance downtimes and reducing overall maintenance costs. To do so, it analyses asset monitoring data to forecast the degradation and prevent failure before it occurs, a process called fault prognosis. This process generally comprises four basic steps: data acquisition, construction of a Health Indicator (HI), identification of the Health Stages (HS), and prediction of the Remaining Useful Life (RUL). Nevertheless, it is usually dependent on prior knowledge of a failure threshold, thus enabling the prediction of the RUL. In cases where this information is not available, a different prognosis approach is required. Therefore, rather than predicting the RUL, the proposed method intends to indicate the proximity of the failure occurrence based on the premise that during the development of the fault, breakpoints associated with the acceleration of the degradation rate occur. In this way, evaluating only the HI behavior, without considering previously monitored data, the proposed method could be applied to machines whose faults of interest had not yet been observed. To validate the method, it is applied to synthetically generated HI data with different behaviors over time. Results show that the method has the potential to be used in scenarios where there is no previous information on the degradation pattern.
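
One simple way to realise the breakpoint idea is off-line change-point detection on the increments of the health indicator, where a change in degradation rate appears as a mean shift; the sketch below uses the ruptures library on a synthetic HI with an assumed single stage change.

import numpy as np
import ruptures as rpt

# Synthetic HI: slow degradation, then a faster stage starting at the (unknown) index 300
rng = np.random.default_rng(0)
t = np.arange(500)
hi = np.where(t < 300, 0.01 * t, 3.0 + 0.05 * (t - 300)) + rng.normal(0, 0.05, 500)

# Work on increments so that a slope change becomes a mean shift; assume one breakpoint
algo = rpt.Binseg(model="l2").fit(np.diff(hi))
breakpoints = algo.predict(n_bkps=1)
print("detected change of degradation stage near index:", breakpoints[0])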

16:35
Health index calculation using FMECA for high-voltage circuit breakers
PRESENTER: Jordon A. Grant

ABSTRACT. The electrical power system lies at the heart of modern society. In order to optimize the total cost of the system a trade-off between investment and reliability must be made. Power system reliability analysis (PSRA) typically quantifies the reliability of the system using fixed component failure rates (Allan 2013). However, the probability of failure for a component increases with usage. To address this problem recent research has focused on health index models for power transformers (CIGRE 2019) to accurately model the condition of a component. Conventional health index models for power transformers assign weighting factors to condition data and are not based on failure mechanisms (Jahromi 2009). Failure mechanisms that are detected from condition data with low weights will therefore not significantly reduce the health index. On the other hand, if the weight of the condition data is too high, the health index will be too sensitive to grading. To address this problem, a health index model based on failure modes, effects, and criticality analysis (FMECA) is presented. The focus will be on high-voltage circuit breakers (HVCBs) since little research has been done on this component. FMECA is used to identify the failure mechanisms and assign a risk priority number (RPN) based on the severity, occurrence, and detectability of each failure mechanism (Rausand and Høyland 2003; Liu 2016). The health index is then evaluated based on failure mechanisms rather than directly from condition data and each failure mechanism’s contribution to the overall health index is weighted relative to the RPN. A failure mechanism with a high RPN indicates high risk and therefore its grading should contribute more to the overall health of the component.

A case study is presented where the health indices of a fleet of ABB EDI SK 1-1 indoor live tank SF6 high-voltage circuit breakers are evaluated based on data obtained from the Icelandic transmission system. Data that indicates the health of an HVCB such as condition monitoring data, maintenance records, operation records and other data were investigated and linked to failure mechanisms. The trip coil current (TCC) is an available measurement that can detect electrical and mechanical issues within HVCBs (Razi-Kazemi and Niayesh 2020) and was used as a key assessment criterion in determining the health indices of the HVCB fleet.

References

Allan, R. N. et al. (2013). Reliability evaluation of power systems. Springer Science & Business Media.

CIGRE (2019). Condition assessment of power transformers. Technical brochures.

Jahromi, A., R. Piercy, S. Cress, J. Service, and W. Fan (2009). An approach to power transformer asset management using health index. 25(2), 20–34. Publisher: IEEE.

Liu, H.-C. (2016). FMEA using uncertainty theories and MCDM methods. In FMEA using uncertainty theories and MCDM methods, pp. 13–27. Springer.

Rausand, M. and A. Høyland (2003). System reliability theory: models, statistical methods, and applications. John Wiley & Sons.

Razi-Kazemi, A. A. and K. Niayesh (2020). Condition monitoring of high voltage circuit breakers: Past to future. 36(2), 740–750. Publisher: IEEE.
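
The RPN-weighted aggregation described in this abstract can be written down compactly; the failure mechanisms, S/O/D scores and condition gradings below are hypothetical, not values from the Icelandic HVCB fleet.

# Each failure mechanism: (severity, occurrence, detectability, grading 1=as new .. 4=end of life)
mechanisms = {
    "trip coil ageing": (8, 4, 3, 2),
    "SF6 leakage":      (9, 2, 4, 1),
    "contact erosion":  (7, 3, 5, 3),
}

rpn = {m: s * o * d for m, (s, o, d, _) in mechanisms.items()}
total_rpn = sum(rpn.values())

# Health index in [0, 1]: 1 = perfect condition, 0 = worst grading on every mechanism
health_index = sum(
    (rpn[m] / total_rpn) * (1 - (g - 1) / 3) for m, (_, _, _, g) in mechanisms.items()
)
print("RPN per mechanism:", rpn)
print("overall health index:", round(health_index, 3))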

16:50
Prediction of Remaining Useful Life for Bearings using Parallel Neural Networks

ABSTRACT. This study advocates the utilization of a parallel neural network (PNN) architecture for the estimation of the remaining useful life (RUL) of bearings. The use of conventional machine learning and deep learning techniques has been inadequate in terms of accuracy and computation time, because of huge input data sizes and the time-dependent nature of the output. To address this limitation, the PNN architecture incorporates multiple parallel processing paths, with multiple input neurons that take in data from condition detectors of mechanical machines and output neurons that predict the RUL. The PNN structure provides better accuracy and computation time by efficiently handling vast amounts of data and integrating both spatial and temporal information simultaneously. Additionally, time transformers and recurrent neural networks (RNNs) are used to handle complex time series data. Improvement methodologies like positional encoding with a self-attention mechanism and a ConvLSTM neural network are utilized to leverage multidimensional time-frequency data and process the spatial and temporal dependencies present in the extracted features, further increasing the model's efficiency. A case study is conducted on the XJ-SY rolling element bearing dataset to validate the proposed methodology, where the PNN performed exceptionally well in terms of accuracy and efficiency. It is concluded that PNNs exhibit potential for predicting the RUL of bearings and can be applied to other machinery types.
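
A minimal sketch of one possible parallel architecture (a convolutional branch for spatial features alongside an LSTM branch for temporal dependence, merged before a regression head) is given below in PyTorch; layer sizes and inputs are placeholders, not the configuration evaluated in the case study.

import torch
import torch.nn as nn

class ParallelRULNet(nn.Module):
    def __init__(self, n_features, window):
        super().__init__()
        # Spatial branch: 1-D convolution over the monitoring window
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Temporal branch: LSTM over the same window
        self.lstm = nn.LSTM(n_features, 32, batch_first=True)
        self.head = nn.Sequential(nn.Linear(16 + 32, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                                   # x: (batch, window, n_features)
        c = self.conv(x.transpose(1, 2)).squeeze(-1)        # (batch, 16)
        _, (h, _) = self.lstm(x)                            # h: (1, batch, 32)
        z = torch.cat([c, h[-1]], dim=1)                    # fused spatial + temporal features
        return self.head(z).squeeze(-1)                     # predicted RUL per sample

model = ParallelRULNet(n_features=2, window=64)
print(model(torch.randn(8, 64, 2)).shape)                   # dummy batch of condition data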

17:05
Multilevel Artificial Intelligence Classification of Faulty Image Data for Enhancing Sensor Reliability
PRESENTER: Omar Mohammed

ABSTRACT. The classification of sensor fault types is actively discussed in the literature, see e.g., [1, 2]. Additionally, categorizing the associated intensity of each fault type would better facilitate the performance evaluation of a system. Therefore, we propose an extended classification concept that classifies both the type and the strength of a fault. The strength represents a defined intensity of a fault type. This proposed classification methodology is presented for an RGB camera sensor. In [3], failure mode and effects analysis is performed for different fault types in the RGB camera used for an autonomous driving application. Some examples of injected faults are blur, broken lens, dead pixels, etc. Different fault types clearly have distinct effects on functional operation. The strength of faults also intensifies the effects to different degrees. For example, a slightly broken lens would not necessarily alter the lane-keeping performance of a vehicle, but if the lens is heavily cracked, the lane-keeping system could completely fail. There exist some studies where fault injection is applied to the AI networks for analyzing the effects on the system, see [4]. In this paper, the faults are injected into the input image in order to emulate the possible hardware and environmental faults on the camera sensor, which will later be used to formulate the necessary remedial actions to enhance reliability. For the fault injection, we will be using our in-house tool [5], which can generate different types of faults with an assigned strength. In one of the proposed methods, the sensor module uses three layers of AIs: the first one identifies whether the image is faulty or not, then the next AI classifies the type of fault, e.g., blur, broken lens, etc. The last layer identifies the strength of each type of fault in three classes, i.e., slight, medium, and extreme. Finally, the output message from the sensor module would include fault type and strength together with the image data, which could later be used for prognostic health management (PHM).

References:

[1] V. Baljak, K. Tei, and S. Honiden, “Fault classification and model learning from sensory Readings — Framework for fault tolerance in wireless sensor networks,” in IEEE 8th Int. Conf. Intell. Sens. Sens. Networks Informat. Proces., pp. 408-413, (2013). [2] S. U. Jan, Y. -D. Lee, J. Shin, and I. Koo, “Sensor fault classification based on support vector machine and statistical time-domain features,” IEEE Access, vol. 5, pp. 8682-8690, (2017). [3] F. Secci and A. Ceccarelli, “On failures of RGB cameras and their effects in autonomous driving applications,” in IEEE 31st Int. Symp. Soft. Rel. Eng. (ISSRE), pp. 13-24, (2020). [4] P. Su and D. Chen. “Using fault injection for the training of functions to detect soft errors of DNNs in automotive vehicles,” in Int. Conf. Dependability Complex Syst., pp. 308-318, (2022). [5] O. Mohammed, “Fault injecting tool,” [Online]. Available: https:// github.com/omarMohammed-USI/Faults-injecting-tool-USI, (accessed Dec. 9, 2022).
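
The three-layer cascade described in this abstract can be prototyped with any off-the-shelf classifiers once each image is reduced to a feature vector; the sketch below uses random forests on synthetic features, so the labels and feature extraction are placeholders for the fault-injection data produced by the tool in [5].

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))                    # per-image features (e.g. blur metric, dead-pixel count)
is_faulty = rng.integers(0, 2, 600)              # layer 1 label: faulty vs. healthy
fault_type = rng.integers(0, 3, 600)             # layer 2 label: blur / broken lens / dead pixels
strength = rng.integers(0, 3, 600)               # layer 3 label: slight / medium / extreme

clf_fault = RandomForestClassifier(random_state=0).fit(X, is_faulty)
faulty = is_faulty == 1
clf_type = RandomForestClassifier(random_state=0).fit(X[faulty], fault_type[faulty])
clf_strength = RandomForestClassifier(random_state=0).fit(X[faulty], strength[faulty])

def classify(x):
    # Cascade: only images flagged as faulty reach the type and strength classifiers
    if clf_fault.predict(x)[0] == 0:
        return {"faulty": False}
    return {"faulty": True,
            "type": int(clf_type.predict(x)[0]),
            "strength": int(clf_strength.predict(x)[0])}

print(classify(X[:1]))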

17:20
Cutting Tool Degradation Monitoring in Turning with Artificial Neural Network

ABSTRACT. While machining, the cutting tool is subjected to multiple degradation mechanisms occurring simultaneously [1]. Consequently, the quality of the machined surface and the compliance with manufacturing tolerances are both reduced. Under nominal turning conditions, the tool is predominantly worn on its flank face. The size of this degradation is characterized by a value called Vb and defined by the ISO 3685 standard [2]. Given the impact that tool degradation has on the quality of machined parts, its monitoring is a growing research topic and there are multiple approaches capable of monitoring cutting tool degradation. To avoid having to stop the machining process to perform a tool degradation measurement, existing approaches mainly focus on indirect monitoring, which consists of estimating the tool condition from signals collected during machining. In recent years, artificial intelligence techniques have been the most widely used because of their ability to adapt to any machining process and machine, and they are generally more efficient than conventional approaches [3]. In this work, a neural network approach is presented to monitor tool wear in real time from data collected during instrumented turning tests. The signals recorded during the tests are pre-processed and used to train a neural network that estimates the degradation of the cutting tool. The results cover the whole acquisition chain, from the data collected from the sensor and their processing to their use in the neural network. To identify the data most correlated with tool wear, correlation analyses are performed; these correlated data are then used as input for the neural networks. The different hyperparameters defining the neural networks are optimised to obtain accurate and reliable monitoring. The approach demonstrates that shallow neural networks can produce precise results, which limits the computational resources required and hence allows them to be integrated directly on machine tools. It is concluded that the approach of monitoring the degradation of a cutting tool by neural networks and simple pre-processing techniques allows lower computing resources to be allocated while achieving a good estimate of the tool state and a timely detection of its end of life. These results should help improve tool use and reduce machine downtime.

[1] Klocke, F., & Kuchle, A. (2009). Manufacturing processes (Vol. 2). Berlin: Springer. [2] ISO 3685—Tool Life Testing with Single-Point Turning Tools. 1993. Available online: https://www.iso.org/fr/standard/9151.html (accessed on 3 August 2022). [3] Colantonio, L., Equeter, L., Dehombreux, P., & Ducobu, F. (2021). A systematic literature review of cutting tool wear monitoring in turning by using artificial intelligence techniques. Machines, 9(12), 351.
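
To make the idea of a shallow wear-estimation network concrete, the sketch below trains a small multilayer perceptron mapping correlation-selected signal features to the flank wear value Vb; the synthetic data, feature count and network size are assumptions for illustration, not the configuration used in the paper.

# Illustrative sketch: shallow neural network estimating flank wear Vb from signal features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder data: rows = machining passes, columns = features already found to
# correlate with wear (e.g., RMS cutting force, spindle power, vibration energy).
X = rng.normal(size=(200, 3))
vb = 0.05 + 0.10 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0.0, 0.01, 200)  # synthetic Vb [mm]

X_train, X_test, y_train, y_test = train_test_split(X, vb, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)  # one small hidden layer
model.fit(X_train, y_train)
print("R^2 on held-out passes:", model.score(X_test, y_test))

A single small hidden layer keeps the computational footprint low enough for the on-machine integration the abstract argues for.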

16:20-17:35 Session 13B: S.25: Human Dependability & Automation for Robotic, Intelligent and Autonomous systems

S.25: Human Dependability & Automation for Robotic, Intelligent and Autonomous systems

 

16:20
Proposed method for analysis of eye tracking data from unmanned ship operation

ABSTRACT. Within the maritime domain, there is a focus on applying new technologies to reduce cost and, more recently, on environmental sustainability. The use of highly automated and unmanned ships is one such approach. The maritime safety level should not be reduced by the introduction of unmanned remote ship operation and novel technology in the maritime domain. This requires increased knowledge and understanding of how humans perceive information presented through displays in a fully digitalized work environment. Based on this, the paper suggests a method using eye-tracking data to objectively collect and analyze how operators perceive information on displays. The approach is assessed by running scenarios in a simulated environment with navigators and vessel traffic operators. Both the information content and the arrangement of information are explored through the approach. The collected eye-tracking data is analyzed and visualized through software and validated against data from participant interviews. This provides a non-intrusive method allowing in-depth post-hoc analysis of individual events in a test scenario, without the need to stop and perform, e.g., an interview. By using the method, quantitative, objective results are obtained, which is valuable for backing up qualitative interview data. The results suggest that the proposed method is promising, enabling quantitative evaluation of the visual information accessed by the test participants. Further work should address how individual visual search patterns differ between participants for different test cases. From a safety perspective, a deeper understanding of the impact of multi-asset control on salience and potential tunnel vision is needed.
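
As one hedged example of the kind of quantitative, objective measure such eye-tracking analysis can yield, the short sketch below aggregates fixations into dwell time per area of interest (AOI); the record format and AOI names are assumptions for illustration only.

# Illustrative only: aggregating eye-tracking fixations into dwell time per display area.
from collections import defaultdict

fixations = [  # assumed, simplified fixation records (AOI label, duration in ms)
    {"aoi": "radar_display", "duration_ms": 420},
    {"aoi": "conning_display", "duration_ms": 310},
    {"aoi": "radar_display", "duration_ms": 180},
]

dwell = defaultdict(int)
for fix in fixations:
    dwell[fix["aoi"]] += fix["duration_ms"]

total = sum(dwell.values())
for aoi, ms in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{aoi}: {ms} ms ({100 * ms / total:.1f}% of recorded attention)")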

16:35
Safety and Human Dependability in seaborne autonomous vessels
PRESENTER: Christoph Thieme

ABSTRACT. Highly automated and autonomous seaborne vessels (ASV) are developed to reduce environmental impact and improve the transport of goods and people. ASV are expected to be remotely supervised, to fulfil legal requirements and to assure safe handling in cases of emergencies. The AutoSafe project is developing solutions for the safe operation of ASV. In emergencies, the human safety supervisors need to handle the vessel, supported by fallbacks, procedures, and technology. Passengers need to feel safe and know what to do in all situations, to avoid injuries or loss of life. International standards are a starting point for building safe, reliable and trusted systems. The aim of this paper is to assess the applicability and potential benefit of IEC 62508:2010, Guidance on Human Aspects of Dependability, to the AutoSafe cases, based on the identified project needs. IEC 62508:2010 deals with the human aspect of dependability, where dependability is the combination of reliability, availability, maintainability, safety, etc. Methods and approaches exist to set requirements for, assess, and evaluate human performance. However, they are most applicable to trained operators. Passengers' and especially emergency services' interaction with the ferry during emergency situations is only covered to a certain degree by the standard. These interactions create human factor challenges, which should be referenced appropriately. IEC 62508:2010 should be updated with respect to highly automated and autonomous systems or refer to other relevant standards.

16:50
Safe work practice: choosing the right level of flightdeck automation
PRESENTER: Aud Wahl

ABSTRACT. Automation in modern aircraft has been a major contributor to increased safety and efficiency in commercial flight operations during the last decades. However, the complexity of automation can reduce pilots’ understanding and control of the aircraft, thereby creating dangerous and even fatal situations. Much of the existing research focuses on how a single operator interacts with and uses automation from a cognitive perspective. This is a rather one-dimensional view that does not consider automated systems controlled by multiple operators or the effect of social processes; for example, automated systems in airline cockpits are operated by two pilots. We set out to expand the established concept by exploring cockpit crew collaboration from a social perspective, focusing on interaction practices.

The objective of this qualitative study is to improve our understanding of human-automation interaction by examining how airline pilots use automation technology in a social and collaborative context. In-depth interviews with airline pilots describe situated practices at flight deck and shed light on how multiple operators collaborate and interact with automation.

The empirical material reveals that even though automation is regarded as beneficial for workload management and situation awareness, the pilots do not always fully understand the capabilities and boundaries of the automation with regard to actual collaboration and interaction within the cockpit. This may lead to less efficient use of the available technology and have a negative impact on pilots’ ability to coordinate work and make in-flight decisions.

It is interesting to note that the level of automation is not regarded as a set entity throughout a flight, but is chosen strategically to reduce risk. The level of automation is selected based on an overall judgment of the operators’ competence and situational factors such as weather and complexity of navigation. This dynamic use of automation is based not only on a continuous assessment of the technical state of the aircraft, but also on the human actors and their level of awareness through situated practice, i.e., via various forms of both verbal and non-verbal cues. Automation is thus regarded as both an enabler and an obstacle for efficient teamwork. Our study shows how this distinct and proactive technology-embedded practice facilitates a joint understanding of the situation at hand, an understanding imperative for the safe execution of flights.

17:05
Challenges and Solutions for Autonomous Systems Safety – Findings from three International Workshops (IWASS)

ABSTRACT. The technological advances in automation and autonomous systems enable new and highly sophisticated systems, processes, and missions. Extensive mapping and monitoring of land, space, and the oceans, renewable energy harvesting and production, inspection of physical structures difficult to access, remote operation of subsea systems, and land-based, maritime, and air transportation are examples of emerging areas for high autonomy, driven by the global pursuit of energy, food, minerals, and efficient transportation of people and goods. Autonomous systems are intended to be a stepping stone towards safer and more efficient operations. Still, the corresponding advancements in software, hardware, and interactions with humans and the environment involve complexities that pose major challenges concerning safety, reliability, and security (SRS). Society and regulatory agencies, for example, are hesitant to allow widespread use of highly autonomous cars or ships. The industrial use of autonomous systems depends on effective and transparent standards for safety, verification, and certification, for which it is essential to develop credible methodologies for characterizing and assessing risk, deriving acceptance criteria, and testing and verification. This provides a strong case for risk management to become an important driver in the early design process and during operation. Autonomous systems must have sufficient integrity, be capable of determining whether they can continue operating with degraded performance, and cooperate with human operators and supervisors. Since 2019, the International Workshop on Autonomous Systems Safety (IWASS) has been bringing together multidisciplinary experts from academia, industry, and authorities to discuss the SRS challenges and potential solutions for the advancement of autonomous systems. This paper aims to provide an overview of the discussions and results from the three editions of IWASS and discuss potential ways to enhance SRS in future developments and implementations of autonomous systems.

17:20
Assessing the Impact of Autonomous Vessels on the Navigational Safety of Maritime Transport
PRESENTER: Iulia Manole

ABSTRACT. Autonomous vessels (AVs) are expected to be commissioned and become operational within the next decade, in response to the declining supply of seafarers and increasing demand for seaborne trade. It is generally assumed that the introduction of autonomy would reduce the occurrence of human-related accidents on ships, but there is a lack of studies on the impact that autonomous vessels would have on the safety of maritime transportation. In this study, the background of autonomous shipping and relevant previous projects are reviewed. Quantitative and qualitative tools are used to assess the impact of AVs on maritime transportation safety. Data is extracted from navigation-related accident investigation reports and subsequently analysed to determine the influence of human errors. It is then determined what proportion of these accidents would have been prevented had the vessels involved been unmanned. Thereafter, the quality of the dataset is assessed, and a preliminary descriptive analysis is conducted, finding the frequency of each factor as well as any association between them and the occurrence of navigational human errors. The qualitative method comprises interviews with industry professionals of different backgrounds, to explore the expected future impact of AVs and determine the main benefits and difficulties. These are analysed using thematic analysis. It is predicted, and verified by subject matter experts, that autonomous vessels will have positive effects, reducing human-dependent actions and therefore accident frequency. Obstacles such as the lack of adequate legislation and regulations or possible cyberattacks are also discussed.

17:35
Adaptivity in human-robot-interaction

ABSTRACT. The recent developments in the collaborative application of robotic systems introduce close human-machine interaction (HMI). Most efforts are already focused on defining safe scenarios during solution design and process standardization, while cognitive and psychological aspects remain important factors to be further developed. New technologies increase productivity and flexibility, but new or higher risks can arise if not well managed. Industry 5.0 places the wellbeing of the worker at the center of the production process [1]. The new regulation on machinery products [2] requires that manufacturers avoid all risks related to moving parts and psychological stress at the same time. In particular, it requires that machinery products with a certain level of autonomy, and fully or partially evolving behavior or logic, shall be adapted to respond to people adequately and appropriately. This means introducing verbal (through words) and non-verbal (through gestures, facial expressions or body movement) actions, and communicating all planned actions (what the machine is going to do and why) to operators in a comprehensible manner. In this study, we analyze the safety-critical tasks in a robot system application for which an adaptive cognitive system could be beneficial for the operator’s safety and health. Depending on the task and its specific characteristics, we will assess possible adaptation systems [3, 4, 5]. All of this will guide a new risk assessment that also considers cognitive states, putting the wellbeing of the worker at the center of the production process.

[1] European Commission, Directorate-General for Research and Innovation, Breque, M., De Nul, L., Petridis, A., Industry 5.0 : towards a sustainable, human-centric and resilient European industry, Publications Office, 2021, https://data.europa.eu/doi/10.2777/308407

[2] Proposal for a Regulation of the European Parliament and of the Council on machinery products, European Commission, 20/04/2021 (last update 21/04/2021).

[3] Hinss Marcel F., Brock Anke M., Roy Raphaëlle N., Cognitive effects of prolonged continuous human-machine interaction: The case for mental state-based adaptive interfaces , Frontiers in Neuroergonomics V.3, 2022, DOI 10.3389/fnrgo.2022.935092, ISSN=2673-6195

[4] Fabio Fruggiero, Alfredo Lambiase, Sotirios Panagou, Lorenzo Sabattini, Cognitive Human Modeling in Collaborative Robotics, Procedia Manufacturing, Volume 51, 2020, Pages 584-591, ISSN 2351-9789, https://doi.org/10.1016/j.promfg.2020.10.082.

[5] Gualtieri, L., Fraboni, F., De Marchi, M., Rauch, E. (2022). Evaluation of Variables of Cognitive Ergonomics in Industrial Human-Robot Collaborative Assembly Systems. In: Black, N.L., Neumann, W.P., Noy, I. (eds) Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021). IEA 2021. Lecture Notes in Networks and Systems, vol 223. Springer, Cham. https://doi.org/10.1007/978-3-030-74614-8_32

16:20-17:35 Session 13C: Safety Nuclear Systems II
16:20
Resiliency of Industrial Complexes Powered by Small modular Reactors
PRESENTER: Dana Prochazkova

ABSTRACT. Industry is an important sector of the economy of developed countries. It applies various technologies that contribute to economic development and human prosperity. Its safe operation requires raw materials, energy, well-managed technology, qualified personnel and qualified management, as well as measures to reduce unacceptable impacts such as pollution of environmental components and damage to the health of people working in hazardous operations and possibly in their surroundings. From an economic point of view, industry must be both safe and competitive, and therefore, it is highly dependent on available resources. At present, there are problems in the area of material and energy resources in Europe, which seriously threaten the operation of industry. In the presented article, we deal with the energy base for the operation of industrial complexes. Due to the development and advantages of small modular reactors (SMRs), we propose to create industrial complexes powered by SMRs. In practice, this means the creation of complex systems, where a number of technological installations powered by SMRs are located in a certain area. Each technological installation, including the SMR, has its limitations, and therefore operates safely only within a certain interval of conditions. These intervals are not the same, and therefore, under certain conditions, the technological installations including the SMR can interact in an unacceptable way, which can lead to failures or accidents not only of individual parts of the industrial complex, but even of the entire complex. Resiliency of an industrial complex powered by an SMR means adjusting a complex system of the SoS type (an open system of interconnected open systems) so that during operation it is robust, redundant, inventive, and fast. Inserting these properties guarantees the optimal operation of the industrial complex, i.e. the required level of safety, performance and reliability. Based on current knowledge, the resilience management of industrial complexes powered by SMRs must be integrated and strategic. Its aim is to optimize the operation of the industrial complex over time so that, under all conditions that must be considered in the design, the risks of both the whole complex and its individual parts are acceptable and functional failures in the complex are not tolerated. The paper contains a proposal for a model of resiliency management of industrial complexes powered by SMRs. It builds on risk-management-based models for the safety of individual industrial units and the SMR. It sets limits for the operation of individual industrial and service units so that the resiliency of the entire complex powered by SMRs is maintained under all design conditions. If the whole industrial complex or a part of it belongs to critical infrastructure, the resiliency management model also needs to address responses to possible beyond-design accidents, for reasons of national safety and stability.

16:35
Reliability and Safety Assessment of a Passive Containment Cooling System in Advanced Heavy Water Reactors
PRESENTER: Saikat Basak

ABSTRACT. Passive Safety Systems (PSSs), which rely on natural forces and processes such as natural circulation, gravity, internal stored energy, etc., are increasingly utilized in generation 3+ and generation 4 advanced nuclear power plants to increase the inherent safety features of the nuclear reactor design. Although PSSs should considerably increase the safety of nuclear power plants, it is still challenging to systematically assess the reliability of passive systems because of the lack of data and the uncertainties associated with the phenomena involving natural forces that underlie their safety functions. In this study, Fault Tree Analysis (FTA) was used to assess the reliability and safety of the Passive Containment Cooling System (PCCS) in the Advanced Heavy Water Reactor (AHWR). The failure probability of the PCCS was calculated from the failure probabilities of Basic Events (BEs). Using the data for the failure probabilities of the Top Event (TE) and BEs from the FTA model, two Artificial Neural Network (ANN) models were proposed for the reliability analysis of the PCCS to supplement the FTA model. Rectified Linear Unit (ReLU) and Sigmoid activation functions were utilized to build the ANN models, and an Adaptive moment estimation (Adam) optimizer was used to train the ANN models to make them computationally efficient. The results of the FTA model were compared with the predictions of the ANN models to evaluate the ANN models' performance.
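
To illustrate the relationship between basic-event and top-event probabilities that the ANN models are trained to reproduce, the sketch below evaluates a toy fault tree (two redundant cooling trains, each failing if its pump or valve fails); the structure and probabilities are invented and are not the AHWR PCCS fault tree.

# Illustrative sketch: top-event probability from basic-event probabilities for a toy fault tree.
def or_gate(probs):
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):
    q = 1.0
    for p in probs:
        q *= p
    return q

be = {"pump_A": 1e-3, "valve_A": 5e-4, "pump_B": 1e-3, "valve_B": 5e-4}  # assumed values
train_A = or_gate([be["pump_A"], be["valve_A"]])
train_B = or_gate([be["pump_B"], be["valve_B"]])
top_event = and_gate([train_A, train_B])   # both independent trains must fail
print(f"P(top event) = {top_event:.3e}")

Pairs of basic-event probability vectors and the resulting top-event probability generated in this way are the kind of data on which an ANN surrogate of a fault tree could be trained.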

16:50
A Study of New Risk Metrics for Non-Light Water Small Modular Reactor

ABSTRACT. For light-water reactors (LWRs), core damage frequency (CDF) and large early release frequency (LERF) are the risk metrics. The risk metrics are used to show the safety level of nuclear power plants (NPPs). Thus, without risk metrics, it is difficult to represent how much the safety of an NPP is enhanced by a design change or equipment upgrade. However, since non-light-water (LW) small modular reactors (SMRs), such as molten salt reactors, do not have risk metrics such as CDF or LERF, this is inconvenient in many cases even when they satisfy the frequency-consequence (F-C) target shown in Fig. 1 [1] for licensing. Thus, the following new risk metrics for non-LW SMRs are suggested in this paper.

Risk of Event Sequences: RES = \sum_{i=1}^{n} f_i C_i

Average Safety Margin of Event Sequences: ASM = (1/n) \sum_{i=1}^{n} SM_i

Minimum Safety Margin of Event Sequences: MSM = \min_i SM_i

where i denotes an event sequence, n is the total number of event sequences, and SM_i is the safety margin of sequence i, defined as the shortest distance from the point (f_i, C_i) to the F-C target (limit).

Fig. 1 F-C Target Example

In addition, how the new risk metrics are meaningfully used is illustrated in this paper with examples of the two cases: 1) multi-module plant, and 2) EPZ (emergency planning zone) distance determination.

On the F-C curve, since the frequency metric expressed on a per-plant-year basis reflects the number of accident occurrences and therefore depends on the number of modules, more modules increase the frequency, which could become a critical disadvantage. Thus, the risk metric RES of a multi-module plant will be proportional to the number of modules.

In the US criteria, NUREG-0396 [2] is still the backbone for the SMRs’ EPZ as well as for that of commercial large nuclear reactors, and for the SMR EPZ case, regulatory guide (RG) 1.242 [3] can be used to clarify the confusing terms ‘less severe’ accident and ‘more severe’ accident of NUREG-0396. Since the F-C curve was developed with the assumption that the EAB (exclusion area boundary) equals the EPZ distance, if we select a different EPZ distance per RG 1.242, we obtain different values of RES, ASM, and MSM. As shown in the two examples, the suggested risk metrics are very useful for comparing how much the safety of a non-LW SMR has changed. References. [1] NRC, “Guidance for a Technology-Inclusive, Risk-Informed, and Performance-Based Methodology to Inform the Licensing Basis and Content of Applications for Licenses, Certifications, and Approvals for Non-Light-Water Reactors”, Regulatory Guide 1.233, Rev. 0, June (2020). [2] U.S. NRC, "Planning Basis for the Development of State and Local Government Radiological Emergency Response Plans in Support of Light Water Nuclear Power Plants," NUREG-0396/EPA 520/1-78-016, December 1978. [3] U.S. NRC, “Pre-Decisional, Final Rule: Regulatory Guide 1.242, ‘Performance-Based Emergency Preparedness for Small Modular Reactors, Non-Light-Water Reactors, and Non-Power Production or Utilization Facilities’,” October 15, 2021 (ML21285A035).
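
A minimal sketch of how the proposed metrics could be evaluated is given below, assuming the F-C target is a piecewise-linear limit in log-log space and that the safety margin SM_i is the shortest distance from the point (f_i, C_i) to that limit in log10 coordinates; all numerical values are invented for illustration.

# Illustrative sketch of the proposed metrics RES, ASM and MSM (all values invented).
import math

def dist_point_segment(p, a, b):
    # Euclidean distance from point p to segment a-b (2-D tuples).
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def safety_margin(f, c, limit):
    # Shortest distance from (log10 f, log10 C) to the piecewise-linear F-C limit.
    p = (math.log10(f), math.log10(c))
    pts = [(math.log10(x), math.log10(y)) for x, y in limit]
    return min(dist_point_segment(p, pts[k], pts[k + 1]) for k in range(len(pts) - 1))

fc_limit = [(1e-1, 1.0), (1e-3, 25.0), (1e-6, 750.0)]   # assumed (frequency, consequence) vertices
sequences = [(5e-3, 0.5), (2e-4, 5.0), (1e-5, 50.0)]    # assumed event sequences (f_i, C_i)

res = sum(f * c for f, c in sequences)                   # RES = sum of f_i * C_i
margins = [safety_margin(f, c, fc_limit) for f, c in sequences]
asm = sum(margins) / len(margins)                        # ASM = average safety margin
msm = min(margins)                                       # MSM = minimum safety margin
print(f"RES = {res:.3e}, ASM = {asm:.2f}, MSM = {msm:.2f} (log10 decades)")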

17:05
Achieving a Level of Autonomy for Autonomous Operation of Microreactors: Safety and Reliability Consideration

ABSTRACT. Modular and microreactors, along with other advanced reactor technologies, are important contributors to the future of nuclear energy and the net-zero vision. Though designs for these reactor types are diverse, they share a common goal: to ensure that future reactor technologies have (1) low operating costs; (2) high reliability; (3) remote, autonomous, or semi-autonomous operations; and (4) the flexibility to support expanded applications and markets. To achieve this, it is important to embrace advancements in (physics-based) modeling and simulation, sensors, and artificial intelligence to make informed decisions. This work proposes updated levels of automation for nuclear reactor operations, in consideration of the long-term economic and commercial ambitions of the advanced reactor developer community. As in other fields such as road-going vehicles and aviation, reactor technologies can benefit from modern automation through the resulting reduction in operations and maintenance costs, while still maintaining the current industry standards regarding safety, resilience, reliability, overall performance, and the capacity for root-cause analysis. The current guidelines on automation levels, as published by the U.S. Nuclear Regulatory Commission in Section 9 of NUREG-0700, reflect design principles that implicitly limit the potential of automation innovation for reactor operations, particularly regarding advanced reactors intended to operate in remote locations or be used for off-grid applications. Motivated by the operational paradigms anticipated for future reactor designs, this work presents a six-level approach [1] that aligns with contemporary automation concepts as well as automation level definitions from other non-nuclear safety-critical industries. These levels build upon the current guidelines to enable next-generation nuclear reactor technologies to become increasingly economically competitive and commercially viable relative to competing power generation sources. The work critically examines the identified challenges, knowledge gaps, and enabling technologies needed to achieve advanced levels of automation.

17:20
STPA-based Safety Approach on the Emergency Ventilation System in Nuclear Power Plant
PRESENTER: Ankur Shukla

ABSTRACT. Existing analog instrumentation and control (I&C) systems have been digitalized, and new digital control systems have been introduced. Numerous risk analyses have been conducted in the past to improve the safety of digital I&C (DI&C) systems. However, the underlying traditional methods have not considered the unsafe interactions between system components, human error, software requirement error, and software and human operator interaction [1]. Systems theoretic process analysis (STPA) is a new hazard analysis technique that provides a potential solution by describing how unintended outcomes can occur due to inadequately identified and implemented constraints on system design, development, and operation [2]. On the other hand, existing risk analyses have primarily focused on analysing the safety of various DI&C systems; the safety of the emergency ventilation systems (EVSs) in nuclear power plants (NPPs) requires further investigation [3-8]. In this paper, we discuss an STPA-based safety approach to evaluate the safety of the EVSs in NPPs. We examined the control structure and process model to identify unsafe control actions (UCAs), covering various controllers (human operator and reactor protection system), control types (manual and automatic), and controlled processes. This method is applied to the Halden safety fan (HSF) design, which is an EVS used in undesirable situations such as radioactive spills and containment leaks. The STPA-based safety approach aids in the identification of safety constraints for the HSF that must be enforced and ensures that they are adequately enforced in the HSF design. Furthermore, it identifies the process model that the controller requires to provide adequate control and information.

References

1. Thomas, J., & Leveson, N. (2013). A New Approach to Risk Management and Safety Assurance of Digital Instrumentation and Control Systems. Transactions, 109(1), 1948. 2. Thomas, J., Lemos, F. L. D. D. D., & Leveson, N. (2012). Evaluating the safety of digital instrumentation and control systems in nuclear power plants. NRC Technical Research Report 2013. 3. Kim, E. S., Lee, D. A., Jung, S., Yoo, J., Choi, J. G., & Lee, J. S. (2017). NuDE 2.0: A formal method-based software development, verification, and safety analysis environment for digital I&Cs in NPPs. Journal of Computing Science and Engineering, 11(1), 9-23. 4. Bao, H., Zhang, H., & Thomas, K. (2019). An Integrated Risk Assessment Process for Digital Instrumentation and Control Upgrades of Nuclear Power Plants (No. INL/EXT-19-55219-Rev000). Idaho National Lab. (INL), Idaho Falls, ID (United States). 5. Zhang, H., Bao, H., Shorthill, T., & Quinn, E. (2022). An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants. Nuclear Technology, 1-13. 6. Rowland, M. T., & Clark, A. J. (2021). Application of the Information Harm Triangle to inform defensive strategies for the protection of NPP I&C systems (No. SAND2021-4659C). Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). 7. Shin, S. M., Lee, S. H., Shin, S. K., Jang, I., & Park, J. (2021). STPA-Based Hazard and Importance Analysis on NPP Safety I&C Systems Focusing on Human–System Interactions. Reliability Engineering & System Safety, 213, 107698.

16:20-17:35 Session 13D: S.34: Risk Analysis and Safety in Standardisation I

The aim is to promote a discussion between academic research and industry about state-of-the-art solutions proposed in the field of standardization of safety-related topics, with a preference for topics covered by the Machinery Directive (2006/42/EC). We intend to test best practice with examples of theoretical models (probabilistic) and empirical data (relative frequencies) related to safety and risk analysis in standardization. We also welcome comparisons between different standardization paradigms with clear examples to support the discussions.

Location: Room 2A/2065
16:20
ISO/IEC 27001:2022 and ISO/IEC 27019:2018 ISMS Mapping Tool for Stakeholders in the Energy Industry
PRESENTER: Asiye Öztürk

ABSTRACT. Following the release of the amendments ISO/IEC 27001:2022 and ISO/IEC 27002:2022 as normative and informative security requirements for an information security management system (ISMS), which can also be understood as a holistic requirement for the intersectoral triangulation of information security (i.e. IT security, organizational security, and human and individual security), the question for players from the energy industry is how to combine the new requirements of the ISO/IEC amendments with the not-yet-updated industry-specific requirements of ISO/IEC 27019:2018 (e.g. the industry-specific requirement for OT systems of the energy industry to comply with the state of the art in information security) and implement them. In short: how can the normative requirements of the ISO/IEC 27001:2022 amendments be applied to the normative requirements of the not-yet-updated ISO/IEC 27019:2018?

Due to existing time delays in the update processes of ISO/IEC 27001 and 27019, as well as the defined 36-month transition period in which the old ISO/IEC 27001:2013 can still be used, an industrial issue arises for mapping - and referencing - the new ISO/IEC 27001:2022 to the old ISO/IEC 27019:2018, since an updated version of ISO/IEC 27019 is not expected until 2025. The 114 controls, previously organized into 14 security-related topic areas, have been compressed into four topic areas and just 93 normative controls. All controls from the old version, except for A.11.2.5 "Removal of Assets", were adopted and consolidated, and a further 11 new controls were added. This means that a total of 56 controls from ISO/IEC 27001:2013 have been consolidated into 24 controls in ISO/IEC 27001:2022.

The key question for energy industry stakeholders, who must now begin the transition of their Statement of Applicability (SoA) defined based on ISO/IEC 27001:2013 to the new structure of ISO/IEC 27001:2022 and reference it simultaneously with ISO/IEC 27019:2018 to meet legal requirements of the European Union's Network Information Security Directive (NIS 1.0), was:

How can the two standards (ISO/IEC 27001:2022, the cross-sectoral requirements for an ISMS, and ISO/IEC 27019:2018, the energy-sector-specific requirements for an ISMS) be operated efficiently and in a manner compatible with each other?

In this paper, an approximation is attempted to link the two standards through a semantic analysis and to visualize them in a transparent way for the energy industry. This approximation attempt is of elementary importance, as no unified and robust mapping methodology for the two standards currently exists, and stakeholders of these mapping processes must perform time-consuming and costly mappings individually. With the mapping (transfer of the SoA based on 27001:2013 to an SoA based on 27001:2022 and referencing to ISO/IEC 27019:2018) and the associated artifact (an Excel-based mapping tool), critical infrastructure operators (CRITIS) from the energy industry are given the opportunity to conceptualize their update and transfer processes for the applicability of the requirements for an ISMS in an efficient and consistent manner. Furthermore, the mapping tool can be used to redefine the audit process based on ISO/IEC 27001:2022.

The tool is currently being operationalized in various implementation and certification projects for distribution network operators and electrical generation operations, including power plants.
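
As a hedged illustration of what the Excel-based mapping artifact encodes, the sketch below represents a many-to-one mapping from ISO/IEC 27001:2013 controls to consolidated ISO/IEC 27001:2022 controls, with cross-references to ISO/IEC 27019:2018, and carries an old Statement of Applicability over to the new structure; the entries shown are placeholders, not the paper's validated mapping.

# Illustrative data structures for the mapping tool described above (placeholder entries only).
MAPPING_2013_TO_2022 = {
    "A.12.4.1": "A.8.15",   # placeholder: several old logging controls consolidate into one
    "A.12.4.2": "A.8.15",
    "A.12.4.3": "A.8.15",
}
REFERENCE_27019 = {
    "A.8.15": ["ISO/IEC 27019:2018 logging requirements for OT systems (placeholder)"],
}

def migrate_soa(soa_2013):
    # Carry the applicability decisions of a 2013-based SoA over to the 2022 structure.
    soa_2022 = {}
    for old_ctrl, applicable in soa_2013.items():
        new_ctrl = MAPPING_2013_TO_2022.get(old_ctrl)
        if new_ctrl is None:
            continue  # e.g., withdrawn controls such as A.11.2.5
        soa_2022[new_ctrl] = soa_2022.get(new_ctrl, False) or applicable
    return soa_2022

print(migrate_soa({"A.12.4.1": True, "A.12.4.2": False}))  # -> {'A.8.15': True}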

16:35
Comparison of a Normal and Logistic Probability Distribution for the Determination of the Impact Resistance of Polycarbonate Vision Panels
PRESENTER: Nils Bergström

ABSTRACT. International standards for the safety of machinery define requirements for the design of safeguards in machine tools. An essential requirement of the safeguard consists of retaining ejected workpiece or tool fragments in case of an accident. Appropriate protective performance of the guard is demonstrated by means of an impact test carried out with a standardized projectile. The impact resistance (IR) is generally used as a quantitative measure of appropriate protective performance. It is defined as the maximum kinetic projectile energy a safeguard is able to withstand. The standard procedure for determining the IR is the so-called bisection method, which involves narrowing a wide interval through a series of impact tests. However, this approach is associated with considerable uncertainty since it depends solely on the last two impact tests. In the present study, an alternative approach based on a probabilistic description of failed impact tests is proposed. A normal and a logistic distribution are compared in terms of their suitability for modeling the probability of failed impact tests. Both distributions are well suited, albeit the normal distribution requires considerable data preparation, which also affects its results. In contrast, the logistic distribution does not require any data preparation, providing an advantage over the normal distribution. This new approach can reduce the uncertainty associated with the determination of the IR, providing more accurate and reliable results.
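
The comparison described above can be reproduced in outline by fitting both a normal (probit) and a logistic failure-probability model to pass/fail impact test data by maximum likelihood; the energies and outcomes below are invented and are not measurements on polycarbonate vision panels.

# Illustrative sketch: fitting normal and logistic models to failed-impact-test probabilities.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, logistic

energy = np.array([40, 45, 50, 55, 60, 65, 70, 75], dtype=float)   # projectile energy [J], assumed
failed = np.array([0,  0,  0,  1,  0,  1,  1,  1])                  # 1 = panel failed, assumed

def neg_log_lik(params, cdf):
    mu, s = params
    if s <= 0:
        return np.inf
    p = np.clip(cdf(energy, loc=mu, scale=s), 1e-9, 1 - 1e-9)
    return -np.sum(failed * np.log(p) + (1 - failed) * np.log(1 - p))

start = [60.0, 5.0]
fit_norm = minimize(neg_log_lik, start, args=(norm.cdf,), method="Nelder-Mead")
fit_logi = minimize(neg_log_lik, start, args=(logistic.cdf,), method="Nelder-Mead")
print("normal   (mu, s):", fit_norm.x, " neg. log-likelihood:", fit_norm.fun)
print("logistic (mu, s):", fit_logi.x, " neg. log-likelihood:", fit_logi.fun)

The fitted location parameter can be read as an estimate of the impact resistance (the energy at 50 % failure probability), offering an alternative to relying solely on the last two bisection tests.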

16:50
Safety Argumentation for a Nuclear Reactor Protection System – an Assessor’s View
PRESENTER: Xueli Gao

ABSTRACT. Structured safety argumentation has several advantages over safety demonstrations provided through a free text form. However, there are few publicly available examples of broadly accepted safety assurance cases with sufficient detail to demonstrate best practice. Furthermore, they usually reflect the system developers’ viewpoint. This paper presents simplified extracts of a safety assurance case from a case study that uses an assessor’s viewpoint to structure the argument. The case study is based on relevant sections of US Nuclear Regulatory Commission regulation. The argument is partial and focuses on the conceptual design level of the “trip” safety function allocated to the Reactor Protection System of a nuclear power plant. Reflections and general observations from the discussion with an expert assessor aim to support readers with practical considerations for similar safety assurance cases.

17:05
A continuous OT cybersecurity risk analysis and mitigation process
PRESENTER: Christoph Thieme

ABSTRACT. Operational technology (OT) systems, such as oil and gas installations, must tackle extreme demands. Operations need to be performed efficiently and safely with very high availability. Any downtime should be avoided for both efficiency and safety reasons. While the enterprise levels typically are extensively realized through software-intensive solutions, the operation and control, and process levels (OT-levels) increasingly also become operated by software solutions, such as operator support systems and predictive maintenance. Additionally, both IT and OT systems are becoming increasingly connected to the outside world, for example, with cloud systems that can enable more efficient and data-driven operations.

Increased connectivity creates new opportunities but also severe cyber security challenges, where connected software can be exploited to attack OT systems and hence cause severe safety implications. Such systems are typically protected by a range of technologies, e.g., firewalls, through architectural measures such as separation into zones connected through restricted conduits, and through organizational measures, e.g., controlled access to critical systems.

Although it is possible to build in such security measures during development (prior to operation), it is also necessary to continuously address cybersecurity in operation. For example, software in OT equipment may have previously undiscovered errors that attackers can exploit (e.g., zero-day exploits). Since it is, by definition, impossible to have a complete overview of these upfront, the asset owner needs to continuously maintain this threat picture in operation. Furthermore, the organization needs to be able to react and take mitigating actions as soon as possible when vulnerabilities and threats become known. Hence, there is a need for a continuous and fast cybersecurity risk analysis and mitigation process for connected OT systems.

As a start to solving this challenge, we propose a process that maintains a constantly updated overview of the threat situation from a set of cybersecurity information sources, which feeds a risk analysis on both the system and component levels, where the risk is evaluated to decide whether it is acceptable or whether immediate, short- or long-term mitigating actions are needed. Such mitigating actions may be restriction of operations or intensified monitoring, etc. The input to this process can come from multiple sources, such as threat intelligence (e.g., from CERTs), internal system monitoring (e.g., intrusion detection systems), or security information from suppliers, including patch information. This type of process requires a catalog of the OT assets being protected, and there is a need to track identified risks and mitigating actions over time.

This paper discusses the rationale for such a process based on current studies within the Norwegian oil- and gas industry. We also seek to coordinate the process with recommendations in the IEC 62443 standard series for cybersecurity in OT-systems. Furthermore, we also discuss limitations related to OT systems, where, e.g., monitoring or mitigating actions cannot disturb operation and availability.

The paper concludes by proposing further work to devise such a process in practical terms and how to comply with the IEC 62443 standard.
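
One hedged way to read the proposed process is as a recurring evaluation loop over an asset catalog fed by threat and vulnerability sources; the sketch below is a schematic outline with assumed fields, severity scale and thresholds, not an implementation aligned with IEC 62443.

# Schematic outline of the continuous OT risk evaluation loop (all fields and thresholds assumed).
ASSETS = [
    {"name": "PLC-01", "zone": "process control", "exposed_software": ["controller fw 2.1"]},
]

def advisories():
    # Stand-in for threat intelligence, CERT feeds, IDS alerts and supplier security bulletins.
    return [{"affects": "controller fw 2.1", "severity": 8.5, "patch_available": False}]

def evaluate(advisory):
    risk = advisory["severity"]                 # simplified risk score
    if risk < 4.0:
        return "acceptable - keep monitoring"
    if advisory["patch_available"]:
        return "short-term action: schedule patch in next maintenance window"
    return "immediate action: restrict conduit traffic and intensify monitoring"

for asset in ASSETS:
    for adv in advisories():
        if adv["affects"] in asset["exposed_software"]:
            print(asset["name"], "->", evaluate(adv))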

17:20
Safety of Machinery – Proposal for a comparative method for statistical tools with examples

ABSTRACT. In the VDW Research Institute, a joint discourse on statistics in safety of machinery took place in 2022. The aim of the discourse was to define a practical basis through which a standardized application of the most important basic statistical methods and a uniform presentation of the results would be made possible. "Uniform" because, at any given time, a whole range of research projects in the VDW Research Institute were using a variety of statistical methods to evaluate their results. Crucial to the discourse was that a broad base of scientific expertise in the VDW environment was available from the outset. Likewise, results from these projects could be used as examples that represented tangible problems for the representatives of the participating companies.

Statistical evaluations are an essential basis for argumentation in the field of machine safety. However, statistics in the mathematical sense is a broad field, and the methods used are usually not intuitive, with additional confusion due to terminology. Practitioners responsible for machine safety in the member companies of the VDW, in particular, are faced with the challenge of evaluating such complex issues in the context of their daily work.

The remedy was the above-mentioned joint discourse on statistics. With a statistical toolbox, it is intended to define a practical basis by which a standardized application of the most important basic statistical methods and a uniform presentation of the results are made possible, in order to create recognition value for the industry representatives in particular. Furthermore, a test data set shall be provided at the end to validate and compare the statistical software tools of the individual partners (the VDW does not specify this, but leaves it as a "degree of freedom" to the researchers). Here the following tools were compared: Excel, Mathematica, Python (SciPy), Minitab, MATLAB, SPSS and R.

16:20-17:35 Session 13E: S.28: Approaches to generate and use digital twins for the optimized lifecycle of critical infrastructure systems to enhance reliability and safety in the operation phase

The possibilities of generating and using digital twins by means of advanced technologies (AI, immersive technologies, sensor technology) will be considered in this special session. To this end, novel and innovative approaches from interdisciplinary research will be brought together, and the diverse application and linkage possibilities of a digital twin in critical infrastructures from different disciplines (traffic road construction, energy systems, hospital systems) will be demonstrated.

Location: Room 100/4013
16:20
Industry 4.0 for the process industry: Using OPC UA to implement an information model for follow-up of safety instrumented systems

ABSTRACT. Programmable safety systems, such as safety instrumented systems (SIS), are vital for controlling hazardous processes and protecting personnel, assets, and the environment from dangerous events. The reliability requirements are specified in the design phase and must be regularly verified against the estimated performance in the operational phase. This follow-up of SIS performance requires access to various data about the SIS equipment. As of today, information concerning the technical aspects of the SIS is stored in a variety of source systems and documents. The automated information exchange between these data sources is minimal, so considerable manual resources are needed to retrieve, collect, analyze, and update data from different sources. The main objective of this paper is to introduce an information model for safety-critical equipment and suggest how to orchestrate data storage and exchange with existing systems for information management, equipment inventory databases, and maintenance planning.

Industry 4.0 represents a standardization initiative to achieve full interoperability of industrial systems and applications for monitoring and optimization. As many industrial facilities are not equipped with the most recent information and communication technologies, it is necessary to introduce overarching information models that define data formats and structures for more accessible and standardized retrieval of data from underlying systems. Such an information model can be implemented on various digital platforms, and one of the most common approaches is OPC Unified Architecture (OPC UA). We suggest the application of OPC UA for acquiring and exchanging the data necessary for SIS performance estimation.

The paper gives new insight into the transformation of traditional work processes in process industries to more digitalized processes within the Industry 4.0 strategy. The motivation for this work has been to generalize practical inputs to future specifications of digitalization initiatives and to describe how time and resources can be reallocated from failure registration and reporting to analysis and follow-up of the performance of safety instrumented systems. The results and approaches from the specific case from the oil and gas industry that was used in this work are transferable to other sectors where data from similar equipment types contribute to the SIS performance.

The development of an information model for functional safety has been one of the main goals of the ongoing Norwegian joint industry project: "Automated process for follow-up of safety instrumented systems" (APOS) led by SINTEF with 11 partners, of which 5 are offshore facility operators. The research presented in this paper is based on results from this project, including [1] and [2].

[1] Omang, E. OPC-UA Interface for Safety Instrumented Systems. Master thesis. NTNU. https://hdl.handle.net/11250/2831340

[2] Hauge, S., Lundteigen, M.A., Ottermo, M.V., Lee, S., and Petersen, S. (2022). Information model for functional safety – An APOS project (Draft). Research report. Trondheim, Norway: SINTEF
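
As a hedged illustration of the kind of equipment data such an information model would expose (shown here as a plain Python data class rather than OPC UA nodes), the sketch below collects the fields needed to compare design and observed failure rates of a SIS element; the field names and the simple 1oo1 approximation PFDavg ~ lambda_DU * tau / 2 are simplified assumptions, not the APOS model.

# Illustrative, simplified information model for follow-up of a single SIS element.
from dataclasses import dataclass

@dataclass
class SisElement:
    tag: str
    design_lambda_du: float        # dangerous undetected failure rate [per hour], from design
    proof_test_interval_h: float   # proof test interval tau [hours]
    operating_hours: float         # accumulated hours in the follow-up period
    du_failures_observed: int      # dangerous undetected failures found in the period

    def observed_lambda_du(self):
        return self.du_failures_observed / self.operating_hours

    def pfd_avg(self, lam):
        # Simple 1oo1 low-demand approximation: PFDavg ~ lambda_DU * tau / 2.
        return lam * self.proof_test_interval_h / 2.0

valve = SisElement("ESV-1001", 2.0e-7, 8760, 175200, 1)   # invented tag and numbers
print("design PFDavg  :", valve.pfd_avg(valve.design_lambda_du))
print("observed PFDavg:", valve.pfd_avg(valve.observed_lambda_du()))

In an OPC UA realization, each of these fields would become a node in the server's address space so that follow-up tools can retrieve them in a standardized way.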

16:32
Secure and trustworthy generation of digital models of existing bridge structures using distributed ledger technology and blockchain
PRESENTER: Jan-Iwo Jäkel

ABSTRACT. To ensure intact infrastructure systems and a long service life of bridge structures in the operational phase, the establishment of predictive maintenance management based on digital 3D models is essential. These 3D models are often not available, especially for existing bridge structures. In addition, their creation is very complex, resource-intensive and involves many stakeholders. Through the use of digital methods and technologies, important efficiency gains have been achieved in the creation of digital models in recent years. Although semi-automated modeling of digital bridge models is possible, the agreed-upon quality is not always achieved at the end of each development step. In this article, an approach is developed to create transparency, safety, consistency and traceability for the generation process of digital models of existing bridge structures. The approach involves all stakeholders participating in the process and creates a decentralized control and documentation mechanism using distributed ledger technology (DLT) and blockchain. First, the current status of the use of DLT in the construction industry and bridge engineering is presented. Then, the system concept is presented and the basic algorithm is described. The general system architecture and the workflow are presented and described in models. Subsequently, the basic feasibility of the approach is demonstrated and the previously described concept is implemented. The basic functions of the algorithm are described and the results are critically reviewed. The article shows the possibilities of using DLT and blockchain to improve the accuracy, transparency and security of the creation of digital models of existing bridge structures.
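
To make the decentralized documentation idea concrete, a toy hash chain linking model-generation steps is sketched below; this is a minimal illustration of tamper-evident logging, not the DLT/blockchain architecture developed in the paper.

# Toy hash chain illustrating tamper-evident documentation of modelling steps (illustrative only).
import hashlib, json, time

def add_block(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = []
add_block(chain, {"step": "point cloud acquisition", "actor": "survey office", "approved": True})
add_block(chain, {"step": "geometry modelling",      "actor": "BIM modeller",  "approved": True})
add_block(chain, {"step": "quality check",           "actor": "bridge owner",  "approved": False})
for b in chain:
    print(b["payload"]["step"], "->", b["hash"][:12])

Because each block's hash covers the previous block's hash, any later alteration of a documented modelling step invalidates the chain, which is the property a decentralized documentation mechanism relies on.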

16:44
Creating 3D models of bridges using different data sources and machine learning methods
PRESENTER: Annette Schmitt

ABSTRACT. In today's world, aging and worn bridges pose an increasing risk to transportation infrastructure. In the worst case, old, poorly maintained bridges can collapse at any time. However, complex and expensive maintenance work on the bridges causes traffic jams, which can lead to accidents or delivery problems. Therefore, bridges need intelligent and individual maintenance, which leads to a higher demand for documentation. One way to facilitate documentation is Building Information Modelling (BIM), which is based on a 3D model of the construction. For most German bridges, no 3D data is available. It is therefore necessary to create a 3D model as a basis for BIM via Scan-to-BIM processes. The 3D data for this process can come from a wide variety of sources, such as laser scanning, photogrammetry or analog 2D plans. A concept for automated 3D modelling with data from diverse sources and machine learning methods is presented. Point clouds of the bridges captured with cameras and/or laser scanners, together with 2D plans, are used as the data base for the 3D model, which is created by machine learning methods from the fused point clouds by calculating surfaces. The resulting model can be used for BIM and AR/VR applications.

16:56
Concept for human-machine interfaces for resilient data extraction from digital twins
PRESENTER: Fabian Faltin

ABSTRACT. With the rise of building information modeling (BIM) in the infrastructure segment, the transfer and accumulation of linked building data results in smart digital twins during the operation phase of public infrastructure constructions. On the other hand, this accumulation also creates extremely large datasets over this phase, which is usually the longest of a construction project. Especially during the operation phase, persons and responsibilities change over time. This leads to the need for an intuitive and easy-to-use interface to extract data, for example to execute maintenance operations. The fast, reliable and user-friendly extraction of specific information from this data source will be key for the safe and resilient operation of buildings in the future. The existing ways of data extraction often involve manual interaction and search, or hardcoded application programming interfaces (APIs), which only have limited access to the data. Over the last years, the development of natural language processing (NLP) techniques has made huge progress. Not least the popularity of ChatGPT (GPT-3, OpenAI) has shown the potential of chat as a human-machine interface (HMI). This work picks up the idea of using NLP in an HMI and investigates the data processing that is necessary to enable fast data extraction from a digital twin. The concept is based on a natural-language inquiry sent by the user. These inquiries are entered in a chat-like, user-friendly and familiar environment, which can easily be integrated into communication software already in use. The text input is preprocessed by NLP to extract the intentions of the information request. The extraction of the inquired information comes with a few difficulties. First, the information is unstructured and stored in many different formats. Second, the data are often subject to chronology, which renders parts of the data irrelevant in some cases, but not always. To address these challenges, a neural network (NN) is proposed. The NN will be trained on digital twin data of infrastructure constructions. Furthermore, this work discusses the need for fine-tuning the NN for each digital twin it operates on, due to the heterogeneous data structure of digital twins used in operation, as well as the special needs for generating the training datasets used to train the NN in the context of common digital twins and their respective information structures.

17:08
Status quo of BIM implementation in hospital construction: a systematic literature review

ABSTRACT. Hospitals are considered highly complex critical infrastructure buildings due to their size and significant usage. Therefore, a high level of communication and coordination among all stakeholders and tasks is required to accomplish efficient building management throughout the entire life cycle of a hospital construction. Particularly in complex buildings where many stakeholders are involved in construction measures and higher requirements need to be fulfilled, the digital method Building Information Modeling (BIM) offers various potential advantages.

This paper considers BIM as a solution approach and analyzes the status quo of implementation in hospital buildings. The focus is on systematically identifying literature that presents well-founded and current approaches to this topic and analyze the applications. Using a qualitative evaluation of the chosen literature, this study demonstrates the scope of BIM application in hospital construction as well as the improvement that can be achieved in project execution and building management over the entire life cycle. It is noticeable that BIM is already being used in many international hospital projects, with an upward trend. This finding can be explained by the many advantages of BIM-driven project management. In this context, the international BIM implementation status in hospital construction is elaborated and exemplarily compared with the status quo in Germany.

17:20
Approach to generate a simple semantic data model from 2D bridge plans using AI-based text recognition

ABSTRACT. The digital twin is intended to serve as the basis for improved maintenance management. However, in the case of existing bridges, a digital model of the physical structure rarely exists. Various research approaches are currently addressing this problem using advanced technologies (laser scanning, AI, photogrammetry). An essential part of these efforts is the transfer of relevant semantic information from an analogue source into the digital model. This paper deals with the question of how textual information from 2D drawings of bridges can be recognised and translated into a semantic data model. For this purpose, an OCR algorithm was utilized to translate printed and handwritten textual information into machine-readable text. Information about the material properties of the examined component was then stored as attributes in the component's BIM object.

The choice of the OCR algorithm, the post-processing of the text recognition results, the identification of relevant information and the translation into a semantic data model are the key findings presented in this paper. It was shown that while the approach is operational, the reliable identification of information is highly dependent on the nature and form of its representation in the drawings. While text recognition has been shown to be reliable, further research is needed to process and interpret the extracted semantic information to enable a broader approach to semantic enrichment.
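
A minimal sketch of the OCR-plus-attribute-mapping step is shown below, assuming pytesseract as the OCR engine and a simple regular-expression rule for recognising a concrete grade; the file name, attribute name and rule are illustrative assumptions, not the paper's pipeline.

# Illustrative sketch: OCR on a scanned bridge drawing and mapping a found value to a BIM attribute.
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("bridge_plan.png"), lang="deu")  # hypothetical scan

attributes = {}
match = re.search(r"\bC\s?(\d{2})/(\d{2})\b", text)    # e.g. a concrete grade such as "C30/37"
if match:
    attributes["ConcreteGrade"] = f"C{match.group(1)}/{match.group(2)}"

print(attributes)  # these attributes would then be written to the component's BIM object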

16:20-17:35 Session 13F: S.14: Next Generation Methodologies for System Safety Analysis

S.14: Next Generation Methodologies for System Safety Analysis

Location: Room 100/5017
16:20
A discussion on the use of Eliminative Argumentation (EA) to identify Key Performance Indicators (KPIs) for the CERN LHC Machine Protection System.
PRESENTER: Chris Rees

ABSTRACT. Safety Performance Indicators (SPIs) and Key Performance Indicators (KPIs) form an integral part of the Safety Management System (SMS) for a selected system. They provide key insight into the system’s safety performance and risk management, and enable data-driven decision making.

Here we define an SPI as “a quantifiable and observable (detectable) measurement whose rate of occurrence can be used to gauge the safety of a system”. SPIs can be used to estimate the safety performance of a system, as well as to support the safety case and ensure that it remains “fit for purpose” and “live”. Similarly, a KPI for a system is defined as “a quantifiable measure used to evaluate the success of an organization, employee, etc. in meeting objectives for performance”. The KPIs discussed within this paper denote a measure of success/performance of the relevant identified sub-systems. Integration of KPIs and SPIs serves as a method of performance and safety evaluation of the systems they are associated with.

The paper will also discuss how SPIs and KPIs can be grouped into ‘leading’ and ‘lagging’ indicators. A leading indicator is one that tracks the occurrence of events that, while not themselves harmful, are expected to precede, or indicate the potential for, more harmful events. A lagging indicator is denoted as one that tracks the occurrence rate of hazards and/or loss events, such as crashes, injuries and fatalities. Leading and lagging indicators both have limitations, advantages and disadvantages, which will be discussed further in the paper, as well as the challenges of accurate data collection to support SPIs and KPIs.

SPIs and KPIs can serve a variety of purposes, such as tracking safety trends over time, measuring system compliance and providing evidence for the system’s safety case. This paper will focus on how SPIs and KPIs can be defined from the safety (assurance) case assessment process, namely the use of Eliminative Argumentation (EA) to define the potential hazards associated with autonomous vehicle systems, and a comparison with the machine protection system at the nuclear research facility CERN. We discuss the identification and evaluate the use of the SPIs and KPIs for each of these systems, showing how they are linked via a ‘golden thread’ to their identification within the EA assessment, and how they can be analysed post-mortem to ensure that the assurance case for the system remains valid and “live” as the system changes.

Finally, we discuss how the use of SPIs and KPIs can benefit the safety case and why ensuring that it remains “live” (fit for purpose) is critical to the continued safe operation of a system.

16:35
Importance Measures in dynamic and dependent fault tree analysis (D2T2)
PRESENTER: Sally Lunt

ABSTRACT. Recent developments in Fault Tree analysis, known as Dynamic and Dependent Tree Theory (D2T2) by Andrews and Tolo [1], enable the inclusion of general dependencies between the basic events, making Fault Tree analysis more powerful when evaluating modern industrial systems. Using a combination of Binary Decision Diagrams, Stochastic Petri Nets and Markov methods, it is now possible to model variable failure and repair rates, dependencies between component failures, and complex maintenance strategies, all of which are common features of modern engineering systems. To aid in the identification of areas where system performance can be improved, it is necessary to calculate measures of importance. Certain components will play a more significant role than others in causing, or contributing to, system failure. Birnbaum first introduced the idea of component importance in 1969 [2]. Since then, numerous measures of importance have been developed which enable more intelligent and effective system upgrades and the development of fault diagnostic checklists. This paper proposes methods for the calculation of Birnbaum’s measure of importance, the Criticality measure of importance, Fussell-Vesely’s measure of component importance, Fussell-Vesely’s measure of minimal cut set importance, Barlow and Proschan’s measure of initiator importance and Lambert’s measure of enabler importance. The key elements of the algorithms for calculating each measure of importance are demonstrated using a pressure vessel cooling system case study [1].

Acknowledgement: This project is funded by the Lloyd's Register Foundation, an independent global charity that helps to protect life and property at sea, on land, and in the air, by supporting high quality research, accelerating technology to application and through education and public outreach.

References
[1] Andrews, J., & Tolo, S. (2022). Dynamic and Dependent Tree Theory (D2T2): A Fault Tree Analysis Framework. Reliability Engineering and System Safety, published online 11th November 2022.
[2] Birnbaum, Z.W. (1969). On the Importance of Different Components in a Multi-component System. In: P.R. Krishnaiah (Ed.), Multivariate Analysis II. Academic Press.
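
As a rough, self-contained illustration of what such importance measures quantify (not the D2T2 algorithms, which rely on Binary Decision Diagrams, Petri Nets and Markov methods), the following Python sketch evaluates Birnbaum, criticality and Fussell-Vesely importance for a toy static fault tree with invented probabilities:

import math
from itertools import product

q = {"A": 0.05, "B": 0.10, "C": 0.02}      # basic-event unavailabilities (invented)
cut_sets = [{"A", "B"}, {"C"}]             # minimal cut sets: TOP = (A AND B) OR C

def system_unavailability(q):
    """Exact top-event probability by enumerating basic-event states (toy tree only)."""
    comps = sorted(q)
    total = 0.0
    for states in product([0, 1], repeat=len(comps)):
        s = dict(zip(comps, states))
        prob = math.prod(q[c] if s[c] else 1.0 - q[c] for c in comps)
        if any(all(s[c] for c in cs) for cs in cut_sets):
            total += prob
    return total

Q = system_unavailability(q)
for comp in q:
    birnbaum = (system_unavailability({**q, comp: 1.0})
                - system_unavailability({**q, comp: 0.0}))
    criticality = birnbaum * q[comp] / Q
    fussell_vesely = sum(math.prod(q[c] for c in cs)
                         for cs in cut_sets if comp in cs) / Q   # rare-event approximation
    print(f"{comp}: Birnbaum={birnbaum:.4f}  criticality={criticality:.4f}  "
          f"FV={fussell_vesely:.4f}")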

16:50
A nested Petri Net – Fault Tree approach for system dependency modelling
PRESENTER: Silvia Tolo

ABSTRACT. Risk analysis methodologies commonly applied to real-world engineering systems, such as Fault and Event Trees, lack the capability to realistically model the dependencies existing between system components. This limits the accuracy of predictions of system behaviour, since simplifying but often unrealistic assumptions (e.g., component independence) must be made and inner dynamic features cannot be captured. The D2T2 methodology [1] was designed to overcome these limitations and offer more realistic modelling of system behaviour by integrating traditional Fault and Event Trees with more flexible techniques such as Petri Nets and Markov Models. These are applied to the modelling of dependencies or complex behaviour (e.g., non-standard maintenance models, dynamic features) of individual components, and the information obtained is reintroduced into the initial Fault Tree model in order to proceed with its computation [2]. However, in real-world applications dependencies often involve entire subsets of components rather than individual ones. An example is offered by the use of parallel trains of components in safety-critical subsystems: this introduces a degree of redundancy, enhancing the reliability of the system, but also dependencies between the trains that, if not adequately taken into consideration, can mislead the estimation of system safety. The D2T2 methodology offers a solution for modelling such features, but may in this case require the construction of large Petri Nets or Markov Models representing individual component dependencies, which can prove challenging or at best convoluted, putting strain on the analyst. This study offers a generalization of the D2T2 methodology aimed at simplifying the dependency modelling of subsystems or trains by bypassing the representation of their individual components. The suggested approach relies on identifying the component trains or sets affected by the dependency relationship and extracting the corresponding subtrees from the Fault Tree. These model sections are then analysed by standard Fault Tree analysis (or the D2T2 approach if they contain complex features), disregarding the dependency. The results obtained and the information on the nature of the dependency are then combined in the construction of a Petri Net capturing both the independent failure mechanisms of the components in the subsets and their dependent relationship. The proposed solution is described and demonstrated through its application to a simple case study involving a safety-critical subsystem of a nuclear reactor, considering the dependency existing between two component trains. The potential for automatic generation of the dependency models is also discussed.

References
[1] Andrews, J. and Tolo, S. (2023). Dynamic and dependent tree theory (D2T2): A framework for the analysis of fault trees with dependent basic events. Reliability Engineering & System Safety, 230, p.108959.
[2] Tolo, S. and Andrews, J. (2022). An integrated modelling framework for complex systems safety analysis. Quality and Reliability Engineering International, 38(8), pp.4330-4350.
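
A toy numerical sketch of the nesting idea, assuming a beta-factor stand-in for the dependency between two redundant trains rather than the Petri Net construction described above (all figures invented):

# Solve each train's subtree independently, then combine the results in a small
# dependency model. The dependency is represented here by a simple beta-factor
# common-cause term, not the Petri Net of the abstract.

q_pump, q_valve = 0.02, 0.01                        # basic events of one train's subtree
q_train_indep = 1 - (1 - q_pump) * (1 - q_valve)    # subtree result (OR gate)

beta = 0.1                                          # assumed shared-failure fraction
q_ccf = beta * q_train_indep                        # common-cause contribution
q_ind = (1 - beta) * q_train_indep                  # independent contribution per train

# Both trains must fail for the redundant subsystem to fail:
q_subsystem = q_ind ** 2 + q_ccf
print(f"independent-train model : {q_train_indep ** 2:.2e}")
print(f"with dependency (beta)  : {q_subsystem:.2e}")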

17:05
Enhancing Realism in a Spent Fuel Pool PSA: Dynamic Repair Modelling and Realistic Mission Times

ABSTRACT. The PSA (Probabilistic Safety Assessment) of the spent fuel pool facility evaluates the risk of boiling in the fuel storage pool. Repair of certain components in the cooling system is modelled, as crediting repair has a significant impact on the results. Looking at the interpretation of a minimal cut set (MCS) containing a repair event, it can be concluded that the static PSA representation does not capture the dynamic features associated with a repair process in real life. Consequently, this representation carries a great amount of conservatism.

Another issue in the PSA is the assigned mission time for the cooling system, which is derived from deterministic criteria. A sensitivity analysis shows that the results are highly driven by the choice of mission time. As the aim of the PSA is not to verify deterministic criteria, but rather to be a realistic representation of the spent fuel pool process and its safety functions, the question that arises is: what is the appropriate, realistic mission time that should be used?

To address these issues, the I&AB (Initiators & All Barriers) method has been incorporated in the PSA. An advantage of the I&AB method compared to the traditional static PSA approach is that it accounts for the fact that different types of initiating events, such as a cooling water pump failure or a busbar failure, can have different repair times. This means that the time window within which other safety functions must act to prevent an undesired consequence will also be different for each scenario. Furthermore, a main feature of the method is that it credits the dynamic aspects of repair processes, which introduces a more realistic representation than a static PSA approach.

A method has been developed to estimate repair times at a higher level of detail and covering a broader scope of component types and failure modes. The repair time data, together with the I&AB method, has been implemented in the PSA model. This new approach not only enhanced the realism of the model but also enabled the extraction of valuable insights and information, such as importance measures of repair times for different components, which can inform decision-making and optimize repair and maintenance routines.
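
A minimal sketch of the repair-versus-grace-time reasoning that motivates crediting dynamic repair, assuming exponential repair times and invented figures (this is not the I&AB formulation itself):

import math

grace_time_h = 30.0                      # assumed time to boiling after loss of cooling
scenarios = {                            # initiating event: (frequency per year, MTTR in hours)
    "cooling pump failure": (0.5, 8.0),
    "busbar failure":       (0.05, 48.0),
}

total_boiling_freq = 0.0
for name, (freq, mttr) in scenarios.items():
    # P(repair time exceeds the grace time), exponential repair with rate 1/MTTR
    p_no_repair_in_time = math.exp(-grace_time_h / mttr)
    total_boiling_freq += freq * p_no_repair_in_time
    print(f"{name}: P(no repair in time) = {p_no_repair_in_time:.3f}")

print(f"boiling frequency per year (other barriers ignored): {total_boiling_freq:.2e}")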

17:20
Enhanced Bayesian Network for Reliability Assessment: Application to Salt Domes as Disposal Sites for Radioactive Waste Problem
PRESENTER: Andrea Perin

ABSTRACT. Risk assessment of radioactive waste disposal requires a comprehensive evaluation of the potential hazards and uncertainties associated with the disposal, e.g. hydro-geological conditions, over a time span of thousands of years. Among the tools available for assessing risk in engineering applications, Enhanced Bayesian Networks (EBNs) are capable of providing a deep understanding of multidisciplinary models affected by uncertainties. Contrary to traditional BNs, EBNs can be exploited to address the long-term safety analysis of radioactive waste disposal, as they additionally allow the incorporation of information of a non-discrete nature. The use of EBNs can improve the accuracy of the risk measures while maintaining the advantages of traditional BNs, such as compact representation, human readability, scalability and multidisciplinary usability, in various applications.

In this work, the long-term safety of salt domes as deep geological radioactive waste disposal sites is analyzed. The main idea is to use an EBN as a probabilistic framework for evaluating the possible contamination of the biosphere in different scenarios. Literature, reports and expert knowledge will be used to determine the EBN’s nodes. Node combinations produce the set of inputs and uncertainties for a finite element (FE) model able to deal with density-driven (thermohaline) flow, heat transport, and the transport of dissolved salt and a radionuclide in discretely-fractured porous media.

16:20-17:35 Session 13G: Safety, Reliability and Maintenance in Railway Industry
16:20
Developing risk models that are resilient and responsive to rapid change
PRESENTER: Chris Harrison

ABSTRACT. The Safety Risk Model (SRM), owned and managed by the Rail Safety and Standards Board (RSSB), is one of the most mature and well-established risk models in the EU railway sector according to a survey (ERA, 2015). The main objective of the SRM is to estimate the underlying risk arising from the operation and maintenance of the Great British mainline railway (Gilchrist & Harrison, 2021). The risk outputs from the model are normalized so that they can be used as a tool by railway stakeholders to understand their risk profile and manage or invest appropriately. This allows users of the model to apportion risk based on their operation, for example by renormalizing an estimate based on national passenger train kilometres travelled using the number of passenger train kilometres for their operation. During the COVID-19 pandemic, from late March 2020 onwards, the operation of the GB railway network was significantly affected by a sudden reduction in the number of trains operating and the number of passengers using them. As GB emerged from the pandemic the opposite happened, albeit less suddenly and more gradually. What became apparent during this period was that the risk models, and the monitoring tools based upon them, were not resilient and responsive to such rapid changes in the underlying normalization. This breakdown of some of the modelling assumptions meant that the outputs of the model needed to be carefully interrogated, interpreted and explained to users. This paper will look at some of the issues that were presented and some of the steps that have been taken to address them. One such step, looked at in detail, is the development of more responsive normalizers that can track what is happening in real time. Currently the risk from a signal passed at danger (SPAD) and train collision is normalized using train kilometres. This has some significant drawbacks (Harrison et al, 2022), and in recent years a data-driven system (Red Aspect Approaches to Signals, RAATS) has been developed to provide a better understanding of the probability of a SPAD at different levels (including national, regional, and operator levels). In the latest development of the SRM, we have investigated the feasibility of normalizing SPAD and collision risk using the more responsive measure of train approaches to a signal, rather than train kilometres. This work will be looked at in detail, along with some practical examples of how this approach can be used to better normalize and understand rapid changes as they occur and the effect they have on risk estimates, and more generally how to make risk models more responsive and resilient to rapid changes.

References
European Railway Agency (2015). Final report of Research on Risk Models at the European Level, 31/08/2015.
Gilchrist & Harrison (2021). Developing a new Safety Risk Model (SRM) methodology for the GB rail industry. Safety and Reliability, 40:1, 28-47. DOI: 10.1080/09617353.2020.1858244
Harrison, C. et al (2022). At the limit? Using operational data to estimate train driver human reliability. Applied Ergonomics, Volume 104, October 2022. DOI: 10.1016/j.apergo.2022.103795
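
The normalization and renormalization logic described above can be illustrated with a minimal sketch (all figures invented; FWI denotes fatalities and weighted injuries):

# Apportioning a national risk estimate to an operator via an exposure normaliser,
# and comparing a train-km normaliser with a more responsive per-approach normaliser.
national_risk_fwi = 12.0            # FWI per year (invented)
national_train_km = 5.0e8           # national exposure (invented)
risk_per_train_km = national_risk_fwi / national_train_km

operator_train_km = 2.0e7           # an individual operator's exposure (invented)
operator_risk = risk_per_train_km * operator_train_km
print(f"risk per train-km: {risk_per_train_km:.2e} FWI/km")
print(f"apportioned operator risk: {operator_risk:.3f} FWI/year")

# A more responsive normaliser for SPAD risk: red-aspect signal approaches
# (as recorded by a RAATS-like system) instead of train kilometres.
spads_per_year = 250
red_approaches_per_year = 4.0e6
print(f"SPAD probability per red-aspect approach: "
      f"{spads_per_year / red_approaches_per_year:.2e}")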

16:35
Analysis of fault response strategies of Fully Automatic Operation System based on quantitative Resilience Assessment
PRESENTER: Ru Niu

ABSTRACT. The Fully Automatic Operation (FAO) system is the development direction of current urban rail transit systems, but system changes and manual handling under failure scenarios have an important impact on the capacity performance of the FAO system. The concept of resilience is introduced to analyze the capacity change of the FAO system after manual intervention and failure impact. This paper proposes a quantitative assessment method for the resilience of the FAO system. The method is based on the functions of the FAO system combined with a complex network model, and the shortest path length of the network model is used as the index to quantitatively express the resilience of the FAO system. Based on this method, the paper carries out a quantitative resilience assessment of a telecommunication failure of the FAO system, and puts forward directions for improving system functions and identifies the key links requiring manual handling that need attention, according to the verification results.
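
A toy Python sketch of a shortest-path-based performance index on a small functional network, assuming an invented topology rather than the FAO model used in the paper:

import networkx as nx

def efficiency(g):
    """Global efficiency: average of 1/shortest-path-length over all node pairs."""
    nodes = list(g)
    total, pairs = 0.0, 0
    for i, u in enumerate(nodes):
        lengths = nx.single_source_shortest_path_length(g, u)
        for v in nodes[i + 1:]:
            pairs += 1
            if v in lengths and lengths[v] > 0:
                total += 1.0 / lengths[v]
    return total / pairs

# Invented functional network of an automated metro line
g = nx.Graph([("ATS", "telecom"), ("telecom", "onboard ATC"),
              ("onboard ATC", "traction"), ("ATS", "interlocking"),
              ("interlocking", "onboard ATC")])

baseline = efficiency(g)
g_failed = g.copy()
g_failed.remove_node("telecom")                    # telecommunication failure scenario
print(f"performance ratio after failure: {efficiency(g_failed) / baseline:.2f}")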

16:50
A Review on Risk Assessment Methodologies of Decision-making for Virtually Coupled Trains
PRESENTER: Yiling Wu

ABSTRACT. When virtually coupled trains are applied in the real world, there is a need to consider the associated risks stemming from unknown and unforeseen situations. This requires decision-making systems for virtual coupling to be able to make appropriate decisions autonomously in the face of environmental and behavioral uncertainties and, more importantly, to perform appropriate risk assessments prior to decision-making. We provide an overview of risk assessment methodologies for decision-making in virtually coupled trains, covering both quantitative and qualitative analysis. The quantitative analysis methods can be further divided into three parts: risk identification, risk measurement and risk reasoning. By comparing the differences between the methods, we find that the probabilistic approach can better handle the uncertainty of the input information of the decision-making system. Finally, we propose future research directions.

17:05
New definition and specification of Operational Design Condition for autonomous railway system
PRESENTER: Rim Louhichi

ABSTRACT. The railway market is undergoing a major change with the arrival of driving automation systems (DAS) and autonomous trains in open environments. Due to strict railway regulations and the complexity of rail technology, demonstrating safety in the era of DAS turns out to be a challenging task. A first step towards establishing a safety demonstration of autonomous trains is the definition and specification of the Operational Design Domain (ODD) [1]. The ODD is the set of operational conditions in which an automated system is designed to operate safely.

There have been several attempts in the literature to define the ODD, especially in the automotive field, as this concept has gained wide attention from government, industry and academic experts. In the maritime domain, a comparable concept is the Operational Envelope (OE), defined as the "conditions and related operator control modes under which an autonomous ship system is designed to operate" [2]. The main differences between the ODD and the OE are detailed in [2].

However, the ODD, as currently defined, does not consider the system performance or technological limitations that may be caused by system degradation or functional constraints, referred to as system capability. Moreover, it does not integrate the human capabilities to understand and react in time in case of an ODD exit or a system failure. Besides, autonomy is defined according to the grade of automation and the role of the human operator in the control loop. Therefore, there is a need to define another concept that encompasses all these aspects for autonomous trains, where human-system cooperation is required for safety. In this paper, we introduce a new concept in railways called the Operational Design Condition (ODC), which takes into consideration not only the ODD, but also system and human capabilities [3]. We explain further how the ODC can be specified step by step in each phase of the system life cycle, following both top-down and bottom-up approaches:
• Starting from the high-level ODD definition derived from the operational context, hazard and risk analysis, down to the derivation of the safety requirements encapsulated by the ODD [2]
• Specifying system and human capabilities to give a complete understanding of the ODC concept.
A refining process is performed all along the system’s life cycle in railways according to EN 50126 [4].

[1] Tonk, A., Boussif, A., Beugin, J., & Collart-Dutilleul, S. (2021). Towards a Specified Operational Design Domain for a Safe Remote Driving of Trains. In ESREL 2021, 31st European Safety and Reliability Conference.
[2] Rødseth, Ø. J., Nordahl, H., Wennersberg, L. A. L., Myhre, B., & Petersen, P. (2021). Operational Design Domain for Cars versus Operational Envelope for Ships: Handling Human Capabilities and Fallbacks. In Proceedings of the 31st European Safety and Reliability Conference.
[3] Khastgir, S. (2020). The Curious Case of Operational Design Domain: What it is and is not? Medium. Available at: https://medium.com/@siddkhastgir/the-curious-case-of-operational-design-domain-what-it-is-and-is-not-e0180b92a3ae.
[4] EN 50126 Railway Applications - The Specification and Demonstration of Reliability, Availability, Maintainability and Safety (RAMS) - Part 1: Generic RAMS Process.

17:20
An Integrated Reliability, Availability, and Maintainability Approach for Metro Systems

ABSTRACT. This paper presents the integrated Reliability, Availability, and Maintainability (RAM) approach applied to the design of Metro systems, aiming to achieve the overall system-level RAM requirements. It presents the processes, plans, guidelines and techniques applied during the various design stages with the objective of assessing RAM performance and, at the same time, influencing design and procurement decisions. It explores the recording and management of RAM-derived requirements identified during various RAM studies that are transferred to the design, supply chain, and operation and maintenance, preparing the ground to allow the systems to perform as expected. Furthermore, it shows methodologies and procedures deployed during construction, testing and commissioning, trial running, and system acceptance periods to monitor and mitigate performance risks and to demonstrate RAM performance from the asset level to the system level, aiming to ensure the overall expected system performance is achieved. Further details are provided about an integrated tool for RAM data management producing consistent and integrated outputs for Failure Modes, Effects and Criticality Analysis (FMECA), Spare Part Analysis, Special Tools and Equipment List, Preventative Maintenance Analysis, Corrective Maintenance Analysis and Life Cycle Cost (LCC). It also discusses how the various pieces of work are interlinked and integrated throughout the various stages of the projects.

17:35
Reliability challenges of 5G and beyond network applications in high-speed trains
PRESENTER: Rui Li

ABSTRACT. 5G and beyond networks are expected to be reliable solutions to support new and complicated wireless communication scenarios. As high-speed railway systems are booming all around the world, they bring novel challenges to 5G and beyond networks to support high-mobility usage. The International Union of Railways (UIC) has decided to replace GSM-R, the current railway telecommunication system based on 2G, with the Future Railway Mobile Communication System (FRMCS). GSM-R is by far the most reliable mobile network in existence. Replacing such a system with 5G while maintaining equal or even higher reliability is complex. 3GPP has proposed performance requirements for railway communication functionality; for example, train service reliability is targeted at up to six nines. This requirement can be satisfied only by an ultra-reliable 5G system and seamless handover procedures under high mobility. On the one hand, the 5G system faces failures from its virtual and physical layers. On the other hand, high mobility creates radio problems for handover and interrupts network services. Network service reliability can be guaranteed by providing continuous end-to-end user plane connectivity that can transmit packets to and from the internet. However, this connectivity can be maintained only by successful handover during radio zone changes. Handover is a signaling process in the control plane. Therefore, railway network service reliability analysis requires a combined perspective of both the user and control planes of 5G. This paper investigates the possible challenges to high-speed railway network service reliability and examines the impacts of various factors. Using discrete event simulation, we calculate the onboard network communication service reliability and mean time to failure. Different network deployments, redundancy options, and radio coverage sizes are compared. The simulation results provide insights for estimating railway network performance and propose feasible solutions to improve service continuity and reliability for railway operators and network providers.
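
A very simplified Monte Carlo sketch of the service-reliability question, assuming invented handover and failure parameters (the paper's discrete event simulation model is not reproduced here):

import random

def trip_survives(trip_h=2.0, speed_kmh=300.0, cell_km=3.0,
                  p_handover_fail=1e-5, network_failure_rate_per_h=1e-4):
    """One simulated trip: the session survives only if every handover succeeds
    and no network (virtual/physical layer) failure occurs during the trip."""
    handovers = int(trip_h * speed_kmh / cell_km)
    if any(random.random() < p_handover_fail for _ in range(handovers)):
        return False
    # exponential time to the next network failure
    return random.expovariate(network_failure_rate_per_h) > trip_h

n = 200_000
failures = sum(not trip_survives() for _ in range(n))
print(f"estimated service unreliability per trip: {failures / n:.2e}")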

17:50
A Review of a National Safety Authority’s Supervision Activities and Audit Outcomes to Enhance its Monitoring of Railway Organisation Safety Management Systems
PRESENTER: Shane O'Duffy

ABSTRACT. The National Safety Authority (NSA) plays an important role in the safety oversight of railway organisations (ROs) operating within its European Union (EU) Member State. The NSA is tasked with assessing and supervising RO Safety Management Systems (SMSs), ensuring compliance with standards and legislative requirements. Depending on the size of the NSA, it can be a challenge to implement all the legislative requirements due to constraints in resources and competence. This review concentrated on a data analysis of NSA supervision activities and audit outcomes to enhance the NSA’s monitoring of RO SMSs. The purpose of this study is to provide evidence to support changing the NSA supervision planning process from a compliance-based to a risk-based approach. This research examined broadly accepted approaches to measuring or indicating whether an SMS is effective and reviewed current practices and studies linking SMSs with safety culture. Key recommendations for the NSA are to implement the European Railway Agency (ERA) Management Maturity Model (MMM) tool and the safety perception survey approach for evaluating SMS effectiveness and safety culture. The analysis of NSA data and the literature reviewed found that competence management and risk management were the two most problematic areas of the SMS. The implications of these findings for the NSA are further discussed.

16:20-17:35 Session 13H: S.02: Reliability and Resilience of Interdependent Cyber-Physical Systems I
16:20
SynthiCAD: Generation of Industrial Image Data Sets for Resilience Evaluation of Safety-Critical Classifiers
PRESENTER: Berit Schuerrle

ABSTRACT. Due to their versatility, Deep Neural Networks are becoming increasingly relevant for the industrial domain. However, there are still challenges hindering their application, such as the lack of high-quality training data and suitable methods for assessing their robustness to internal computing hardware faults in safety-critical applications. To address these challenges, this paper introduces (i) SynthiCAD, a new data generation tool for creating customisable image training data, along with an open-source industrial classification data set generated with SynthiCAD. In addition, (ii) we categorize and compare existing approaches to fault injection and evaluate software-based fault injection using a VGG19 model trained on our new data set. Our findings show that software-based fault injection is a fast and scalable way to assess the reliability of DNNs in the presence of faults.
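
A hedged sketch of software-based fault injection by flipping a single bit in a weight array (NumPy only; SynthiCAD, the data set and the VGG19 setup are not reproduced here):

import numpy as np

def flip_random_bit(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a copy of `weights` with one random bit flipped in one float32 value."""
    flat = weights.astype(np.float32).ravel().copy()
    idx = rng.integers(flat.size)            # which weight to corrupt
    bit = rng.integers(32)                   # which bit of its IEEE-754 encoding
    as_int = flat[idx:idx + 1].view(np.uint32)
    as_int[0] ^= np.uint32(1) << np.uint32(bit)
    return flat.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128)).astype(np.float32)   # stand-in for a layer's weights
w_faulty = flip_random_bit(w, rng)
print("max perturbation:", np.abs(w_faulty - w).max())
# Robustness under faults would then be measured by re-running inference with the
# perturbed weights and comparing predictions against the fault-free model.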

16:35
Towards Cross-Domain Resilience in Interdependent Power and ICT Infrastructures: A Failure Modes and Effects Analysis of an SDN-enabled Smart Power Grid
PRESENTER: Khaled Sayad

ABSTRACT. The adoption of cloud-native technologies such as the Software Defined Networking (SDN) paradigm in the management of the monitoring and control functions of Critical Cyber-Physical Systems (CCPSs) leads to the emergence of complex interdependencies between the cyber and physical domains, which increases the risk of cascading failures, especially in the cyber domain represented by edge Data Center (DC) networks. These edge DCs host critical software services characterized by high dependability and performance requirements. The downtime of such services has a considerable impact that may destabilize socioeconomic well-being. In this work, we provide a failure mode analysis of an SDN-enabled Smart Power Grid (SD-SPG) with a focus on the subsystems involved in cross-domain failure propagation. The objective of the analysis is to establish the causal links between subsystem failure modes that may lead to cross-domain failure cascades. We then focus on the evaluation of the Steady State Availability (SSA) metric under different interaction scenarios between the power and telecommunication subsystems. To this end, we propose a hierarchical modeling framework combining continuous-time Markov chains (CTMCs) and Reliability Block Diagrams (RBDs) to capture the steady-state behavior of both the subsystems and the complex system.
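
A minimal numeric sketch of the hierarchical idea, assuming invented failure and repair rates: a two-state CTMC yields each subsystem's steady-state availability, and an RBD combines the subsystems:

import numpy as np

def ctmc_steady_state_availability(failure_rate, repair_rate):
    """Two-state up/down CTMC: solve pi Q = 0 with pi summing to 1."""
    Q = np.array([[-failure_rate, failure_rate],
                  [repair_rate, -repair_rate]])
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[0]                      # probability of the "up" state

a_ctrl = ctmc_steady_state_availability(1e-4, 1e-1)    # SDN controller (invented rates)
a_dc = ctmc_steady_state_availability(5e-4, 5e-2)      # one edge data centre (invented rates)

# RBD: controller in series with a 1-out-of-2 parallel pair of edge DCs
ssa_system = a_ctrl * (1 - (1 - a_dc) ** 2)
print(f"steady-state availability: {ssa_system:.6f}")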

16:50
Resilience enhancement of cyber-physical systems against hybrid attacks
PRESENTER: Zhaoyuan Yin

ABSTRACT. With the advancement of information and communication technology, modern critical infrastructure systems, e.g., power grids, tend to be controlled automatically and remotely through cyber systems. Such coupling of physical and cyber systems promotes more efficient operations, but also introduces vulnerabilities on both sides in the face of potential cyber-physical attacks. Specifically, high-impact low-probability extreme weather events can trigger disruption scenarios in which many components in the physical system fail, while malicious attackers with limited offensive resources prefer information interference, such as denial-of-service (DoS) attacks and false data injection (FDI) attacks, to affect the availability and integrity of cyber systems. Considering the serious consequences of natural disasters and malicious cyber-attacks, it is necessary to develop a resilience enhancement framework from the cyber-physical perspective. In this paper, a defender-attacker-defender model is proposed for the resilient operation of cyber-physical systems against hybrid uncertain threats. The first defender-level problem aims to protect the cyber-physical system by optimally allocating protection equipment, e.g., distributed energy resources and intelligent firewalls. The attacker-level problem formulates the best cyber-attack strategy, including the timing and intensity of the DoS and FDI attacks. Both the first defense and attack strategies should consider the stochastic disruption scenarios caused by natural hazards. The second defender-level problem targets the optimal operation of the cyber-physical system based on the components and resources available in each disruption scenario. To solve the proposed tri-level stochastic optimization problem, we apply duality theory to reformulate the tri-level problem into a max-min problem, and then exploit a column-and-constraint generation algorithm to obtain the solution. Detailed case studies are conducted in IEEE 13-node and 33-node systems to showcase the effectiveness of the proposed framework. The numerical results indicate that the designed defense strategy can bolster cyber-physical system resilience against hybrid threats.

17:05
An Importance Function to Generate Scenarios for Training a Grey-Box Model for the Computational Risk Assessment of Cyber-Physical Systems

ABSTRACT. The operation, control and maintenance of many systems rely on the signal communication functions provided by telecommunication systems. This generates Cyber-Physical Systems (CPSs). Computational risk assessment is being advocated to properly account for the complexities and interdependencies of CPSs. However, simulation times can be too high for practical feasibility. Surrogate models are being explored to address computational issues. Among these, Grey-Box Models (GBMs) have recently been proposed to merge the physical knowledge embedded into a high-fidelity White-Box Model (WBM) with the learned-by-data knowledge used to train a Black-Box Model (BBM). In this paper, we propose the use of a novel Importance Function (IF) within a Repetitive Simulation Trials After Reaching Thresholds (RESTART) approach to simulate accidental scenarios for training BBMs, ultimately embedded into a GBM. A case study is considered concerning an Integrated Power and Telecommunication (IP&TLC) CPS from the literature.

17:20
Community of Practitioners – Ensuring relevance and resilience of future Multimodal Traffic Management

ABSTRACT. Future transport will benefit from optimization across the whole transport chain, yet traffic management across silos or transport modalities is rare. One challenge is that operations of the different modalities (road, sea, rail and air transport) are today executed with different technologies, regulations and degrees of automation. Another challenge is the implementation of more automated vessels and vehicles. Thus, a multimodal traffic system will change the way the traffic system is managed at the strategic, tactical and operational levels. This paper presents the EU project ORCHESTRA (2021-2024), which focuses on designing a future Multimodal Traffic Management Ecosystem (MTME), including defining significant scenarios, stakeholder types and functions. Two central objectives are to (1) establish a common understanding of multimodal traffic management (MTM) concepts and solutions, and (2) define the MTME. Stakeholder involvement and an anchoring process – Communities of Practitioners (CoP) – are essential in developing and modifying concepts and solutions, e.g. within and across modes, for various stakeholders and contexts, where traffic management is coordinated to contribute to a more balanced and resilient transport system, bridging current barriers and silos. The purpose of the paper is to present and discuss how CoPs may be involved in the design process of future management systems. This includes iterative interaction between project partners and operational practitioners to give input on, discuss and validate results regarding e.g. scenarios and resilience aspects.

16:20-17:35 Session 13I: S.13: Coping with Imprecision in Reliability Analysis
16:20
Consideration of polymorphic uncertainty in model-free data-driven identification of stress-strain relations
PRESENTER: Selina Zschocke

ABSTRACT. Data-driven methods are of increasing importance in computational mechanics. Commonly distinguished are model-based methods, which aim to approximate the constitutive material description, e.g. by neural networks, and model-free methods. The approach of model-free data-driven computational mechanics (DDCM), introduced by Kirchdoerfer and Ortiz (2016), makes it possible to bypass any material modelling step by directly incorporating material data into the analysis. A basic prerequisite for both types of data-driven methods is a large amount of data representing the material behaviour, in solid mechanics consisting of stresses and strains. Obtaining these databases numerically by multiscale approaches is computationally expensive and requires the characterization of lower-scale models. In the case of an experimental characterization, constitutive descriptions are generally required to compute the stress states corresponding to displacement fields, e.g. identified by full-field measurement techniques such as digital image correlation. The method of data-driven identification (DDI), introduced in Leygue et al. (2018) based on the principles of DDCM, enables the determination of large stress-strain data sets based on displacement fields and applied boundary conditions without postulating a specific constitutive model. The algorithm has been shown to be applicable to synthetic and real data, accounting for both linear and non-linear material behaviour.

In order to obtain realistic simulation results, uncertainty needs to be considered. Generalized polymorphic uncertainty models are utilized in order to take variability, imprecision, inaccuracy and incompleteness of material data into account by combining aleatoric and epistemic uncertainty models. The consideration of uncertain material properties by data-driven approaches leads to the requirement of data sets representing uncertain material behaviour. In this contribution, different sources of uncertainty occurring within data-driven identification of stress-strain relations are addressed and an efficient method for the identification of data sets representing uncertain material behaviour based on the concept of DDI is proposed. In order to demonstrate the introduced methods, numerical examples are carried out.
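
A toy illustration of the distance-minimising principle behind model-free DDCM (not the DDI algorithm itself), for a single bar under a prescribed force with synthetic data:

import numpy as np

rng = np.random.default_rng(1)
E_true = 210e3                                    # MPa, used only to synthesise data
strain_data = rng.uniform(0.0, 5e-3, 2000)
stress_data = E_true * strain_data + rng.normal(0.0, 5.0, strain_data.size)  # noisy pairs

force, area = 31.5e3, 100.0                       # N, mm^2 (invented)
sigma_eq = force / area                           # equilibrium fixes the stress (MPa)
C = 200e3                                         # metric parameter (pseudo-stiffness)

# Pick the stress-strain data point closest (in the energy-like metric) to the set
# of mechanically admissible states {(eps, sigma_eq) : eps free}.
dist = (stress_data - sigma_eq) ** 2 / C
best = np.argmin(dist)
print(f"data-driven state: strain={strain_data[best]:.4e}, "
      f"stress={stress_data[best]:.1f} MPa (target {sigma_eq:.1f} MPa)")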

16:35
Multicriteria optimal maintenance planning for industrial electrical installations

ABSTRACT. The infrastructure of electric power systems is aging all over the world. The decommissioning of existing plants threatens the reliability margins of electrical systems, and means of keeping existing installations reliable are being encouraged. Similar challenges can also be found in large industrial plants, where failure of electrical systems causes downtime, as lead times for large equipment can be up to a year. Despite the abundant literature on the reliability of electrical equipment, the task of analyzing the risk of failure of a system composed of many components, with multiple modes of operation, subject to common cause failures and to the wear and tear of years of operation, is complex. This study proposes a practical method to develop maintenance plans for electrical systems that minimize the impact of failures and maintenance costs, using a model capable of quantifying the risk of electrical system failure considering: (a) its configuration, (b) the current condition of the components, (c) uncertainties in the current condition of the components, and (d) common cause failures. It was considered important to evaluate the uncertainties in order to assess whether the level of due diligence regarding the current condition of the plant was adequate, comparing how much it would cost to obtain a more accurate estimate of the reliability of a given piece of equipment with how much this information could improve the outcome of the action plan. System reliability is obtained through simulation based on the survival signature, as it is a computationally efficient method for the analysis of repairable systems and because it separates the analysis of the system from the analysis of the components, simplifying the calculation of the impact of maintenance actions taken in the optimization step. The action plan is defined based on a set of actions that can be performed on the system components: maintain, replace, refurbish, monitor or acquire spare parts. Each action has a different impact on the failure and/or repair characteristics of the component when implemented or after the first failure. The analysis considers the action being carried out immediately or at predetermined future dates, and has the desired mission time of plant operation as its horizon. Due to the complexity of the problem and the wide solution space, a genetic algorithm (NSGA-II) was adopted, since it has already been successfully used in the optimization of reliability allocation problems. To validate the proposed methodology, a case study related to a chemical or Oil & Gas plant will be carried out, and the results will be compared with the previously defined action plan.

16:50
Estimation of Imprecise Failure Probabilities using an Augmented First-Order Reliability Method

ABSTRACT. Probability theory offers a practical and sound framework for assessing the reliability (or its complement, the failure probability) of engineering systems. The application of this framework involves a numerical model that represents the behavior of the engineering system. The uncertainty associated with the input variables of this model is described in terms of a joint probability distribution. Then, the probability of failure of the system is computed by integrating such joint probability distribution over the set of input variables that lead to an undesirable behavior. For performing the latter step, a so-called performance function must be formulated, which assumes a value equal or smaller than zero whenever a realization of the input variables leads to an unacceptable system’s performance.

The preceding paragraph assumes that it is possible to describe the uncertainty associated with the input variables in terms of a probability distribution. However, in cases of practical interest, it may be challenging to define a crisp distribution due to lack of knowledge, data scarcity or corrupted data, among other issues. Under such a situation, one possibility is describing uncertainty through probability models whose parameters (such as mean value or standard deviation) are described considering intervals. This corresponds to a so-called parametric probability box (p-box) approach. By considering a p-box, it is possible to capture both aleatoric and epistemic sources of uncertainty in a problem. In this setting, the failure probability is no longer a crisp, deterministic value but instead, it becomes an interval as well, that is, an imprecise failure probability. Assessing this interval is of utmost importance, as it provides a measure of how sensitive a particular system is with respect to the effect of epistemic uncertainty. Nonetheless, the calculation of imprecise probabilities is usually a demanding task, as it becomes necessary to propagate aleatoric and epistemic uncertainty in a double-loop fashion, which demands repeated evaluations of the numerical model describing the behavior of an engineering system. As the evaluation of these numerical models is usually quite demanding from a numerical viewpoint, calculating imprecise failure probabilities becomes extremely challenging, even for simple applications.

In view of the challenges described above, this work presents an approach for estimating imprecise failure probabilities. The approach is based on the concept of an augmented reliability problem, where the epistemic distribution parameters are artificially regarded as aleatoric. Then, the functional dependence of the failure probability with respect to those distribution parameters can be retrieved using Bayes’ theorem. The augmented reliability problem is solved using the First-Order Reliability Method (FORM), which allows determining the bounds of the imprecise failure probability in closed form once the design point associated with the (augmented) performance function has been located. An example illustrates the application of the proposed approach, indicating that it can provide a good estimate on the intervals of an imprecise failure probability with reduced numerical efforts.
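
For intuition, a brute-force sketch of the double loop that the proposed augmented approach avoids, assuming a linear limit state g = R - S with normal variables and an interval on the mean resistance as the epistemic parameter:

from math import sqrt
from statistics import NormalDist

def form_pf(mu_R, sigma_R, mu_S, sigma_S):
    """FORM result for g = R - S with independent normals (exact in this linear case)."""
    beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)   # reliability index
    return NormalDist().cdf(-beta)

mu_R_interval = (4.5, 5.5)           # assumed epistemic interval on the mean resistance
sigma_R, mu_S, sigma_S = 0.5, 3.0, 0.4

# The failure probability is monotone in mu_R here, so the bounds lie at the endpoints.
pfs = [form_pf(mu_R, sigma_R, mu_S, sigma_S) for mu_R in mu_R_interval]
print(f"imprecise failure probability: [{min(pfs):.2e}, {max(pfs):.2e}]")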

17:05
Computing upper probabilities of failure using optimization algorithms together with reweighting and importance sampling.
PRESENTER: Thomas Fetz

ABSTRACT. In the reliability analysis of an engineering structure, the combination of probabilistic and non-probabilistic methods has become an important issue. In particular, the uncertainty about the values of properties or parameters of an engineering structure can be modelled by a family of probability density functions parametrized by all t in a set T. The output of such a model typically consists of upper and lower probabilities, for example upper and lower failure probabilities.

The more interesting upper probability of failure is the solution of a (global) optimization problem

max { p(t) : t in T }

where p(t) is the failure probability for a fixed parameter value t. We estimate these function values p(t) using Monte Carlo simulation, which means function evaluations (finite element computations) for each of N sample points. This high computational effort is further multiplied by the number of evaluations of p(t) needed to solve the above optimization problem and find the optimal parameter t in T that yields the upper probability.

For our numerical method we use importance sampling or reweighting techniques for two reasons:

(1) For computing the derivatives needed in the standard (global) optimization algorithms used.

(2) For reducing the number of parameter values t for which we need new samples in the optimization algorithm, and in addition for reducing the sample sizes for these parameters. For this purpose we re-use the samples from parameters t of previous optimization steps, taking the importance sampling ratios into account, which leads to importance sampling on the sets of a partition.

The efficiency of the method is analyzed by means of a moderate scale engineering structure.

References:

[1] T. Fetz. Efficient computation of upper probabilities of failure. In C. Bucher, B. R. Ellingwood, and D. M. Frangopol (Eds.), 12th Int. Conference on Structural Safety and Reliability, pp. 493–502, 2017.

[2] M. C. M. Troffaes, T. Fetz, and M. Oberguggenberger. Iterative importance sampling for estimating expectation bounds under partial probability specifications. In M. De Angelis (Ed.), Proc. of the 8th Int. Workshop on Reliable Engineering Computing, pp. 147–154, Liverpool, 2018.
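
A small sketch of the reweighting idea on an assumed toy problem: samples drawn once from an importance density are re-used to estimate p(t) for every candidate parameter t via likelihood ratios, so the optimization over T does not require new simulations per parameter (SciPy assumed available):

import numpy as np
from scipy import stats

def g(x):                                  # toy performance function; failure if g < 0
    return 6.0 - x

rng = np.random.default_rng(2)
proposal = stats.norm(4.0, 1.5)            # importance density, shifted towards failure
x = proposal.rvs(size=100_000, random_state=rng)
failed = g(x) < 0.0
q = proposal.pdf(x)

def p_of_t(t):
    """Failure probability when the input mean is t, estimated by reweighting."""
    w = stats.norm.pdf(x, loc=t, scale=1.0) / q
    return float(np.mean(failed * w))

T = np.linspace(1.0, 3.0, 21)              # parameter set T
print(f"upper failure probability over T: {max(p_of_t(t) for t in T):.2e}")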

17:20
Drone flight time estimation under epistemic uncertainty
PRESENTER: Edoardo Patelli

ABSTRACT. A Drone Logistic Network (DLN) is an emerging topic in the sector of transportation networks, with applications in goods delivery, postal shipping, healthcare networks, etc. It is a rather complex system involving different types of drones and ground facilities, and it also requires a robust design of the network to ensure optimal delivery time, efficiency, resilience, risk and cost efficiency, along with other optimizations of ‘Key Performance Indicators’. Moreover, in sectors like healthcare networks, we need to be extra cautious whilst modelling the network, as the consequence of failure is severe. Besides this, we also need to work with real-time telemetry data, which can be very noisy at times. To deal with the above-mentioned technicalities, we propose a robust surrogate modelling strategy through the propagation of interval information from the observed data. We are interested in using this surrogate model to simulate contingency scenarios or simply to construct a digital twin. For this particular contribution, we are specifically interested in estimating the drone flight time under uncertain conditions. With our proposed method, we obtain interval estimates for our quantities of interest, which can be interpreted as the set of possible values between the optimistic and pessimistic bounds.
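
A minimal interval-propagation sketch for the flight-time quantity of interest, assuming a toy distance/speed model rather than the surrogate proposed in the paper:

def interval_div(a, b):
    """[a]/[b] for positive intervals given as (lower, upper) tuples."""
    return (a[0] / b[1], a[1] / b[0])

distance_km = (12.0, 12.5)          # route length, with routing uncertainty (invented)
speed_kmh = (35.0, 55.0)            # pessimistic vs optimistic ground speed (invented)

t_lo, t_hi = interval_div(distance_km, speed_kmh)
print(f"flight time in [{t_lo * 60:.1f}, {t_hi * 60:.1f}] minutes")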

17:35
Imprecise Survival Signature Computation Through Interval Predictor Models

ABSTRACT. In recent years, the survival signature has seen promising applications for the reliability analysis of critical infrastructures. It outperforms traditional techniques by allowing for complex modelling of dependencies, common causes of failures and imprecision. However, as an inherently combinatorial method, the survival signature suffers greatly from the curse of dimensionality. Computation for very large systems, as needed for critical infrastructures, is mostly infeasible.

New advancements have applied Monte Carlo simulation to approximate the signature instead of performing a full evaluation. This allows for significantly larger systems to be considered. Unfortunately, these approaches will also quickly reach their limits with growing network size and complexity.

In this work, instead of approximating the full survival signature, we will strategically select key values of the signature to accurately approximate. These entries are then used to build an Interval Predictor Model (IPM) for the prediction of the remaining unknown values. In contrast to standard models, IPMs return an interval bounding the survival signature entry. The resulting imprecise survival signature is then fed into the reliability analysis, yielding upper and lower bounds on the reliability of the system.
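
A small sketch of how an interval-valued survival signature feeds the reliability evaluation, for a single component type with four exchangeable components; the signature intervals below are invented rather than produced by an IPM:

from math import comb

m = 4
# phi[l] = probability the system works given exactly l components work, as
# (lower, upper) intervals of the kind an interval predictor model would return.
phi = {0: (0.0, 0.0), 1: (0.0, 0.1), 2: (0.4, 0.6), 3: (0.9, 1.0), 4: (1.0, 1.0)}

def survival_bounds(r):
    """Bounds on P(system works) when each component works with probability r."""
    lo = hi = 0.0
    for l, (p_lo, p_hi) in phi.items():
        weight = comb(m, l) * r**l * (1 - r)**(m - l)
        lo += p_lo * weight
        hi += p_hi * weight
    return lo, hi

print("component reliability 0.9 -> system reliability in [%.4f, %.4f]"
      % survival_bounds(0.9))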