
09:00-10:20 Session TH1A: Risk Assessment
Location: Auditorium
A web app to support hazard identification of oil refineries
PRESENTER: July Macêdo

ABSTRACT. Hazard identification is one of the first steps of quantitative risk analysis. This step relies on the examination of a variety of engineering documents and attendance at numerous meetings, which is labor- and time-intensive, especially for oil refineries, which are equipment-intensive facilities (Vinnem and Røed, 2020). The identified hazards are usually recorded as textual documents, which therefore store valuable information about the risks of the analyzed system that was widely discussed by experts. In this context, text mining and natural language processing can be applied to extract, organize, and classify this information, making it possible to develop models that retain specialists' knowledge, which could otherwise be lost over time. We therefore developed a web app to support risk analysts in identifying and assessing different accident scenarios, reducing the effort required to complete the early stages of a quantitative risk analysis. To that end, we fine-tuned Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) with textual data from previous risk studies to perform different multi-class classification tasks that identify risk features in an actual oil refinery. The web app thus consists of three models developed to 1) identify the potential consequences of accidents related to the operation of an oil refinery, and to classify each accident scenario in terms of 2) severity of the consequence and 3) likelihood of occurrence.
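As an illustration of how the outputs of models (2) and (3) could feed a conventional quantitative risk analysis, the sketch below combines predicted severity and likelihood classes into a standard risk-matrix ranking. The class labels, thresholds, and the `risk_rank` helper are illustrative assumptions, not part of the web app described above.

```python
# Hypothetical post-processing of the severity and likelihood
# classifiers: map a (severity, likelihood) pair onto a 5x5 risk matrix.
# Labels and score thresholds are assumptions for illustration only.

SEVERITY = ["negligible", "minor", "moderate", "major", "catastrophic"]
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "frequent"]

def risk_rank(severity: str, likelihood: str) -> str:
    """Map a (severity, likelihood) pair to a qualitative risk rank."""
    score = (SEVERITY.index(severity) + 1) * (LIKELIHOOD.index(likelihood) + 1)
    if score >= 15:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

print(risk_rank("major", "likely"))       # -> high
print(risk_rank("negligible", "rare"))    # -> low
```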

Fault tree modeling of human error dependency in PSA

ABSTRACT. In the probabilistic safety assessment (PSA) of a nuclear power plant, human failure events are among the events with the highest risk contribution, and dependency between such events may increase that contribution further, because the success or failure of a previous action can affect subsequent actions. In scenarios in which two or more human failure events occur, human error dependency should be analyzed and reflected in the PSA; for this reason, the probability of a human failure event may differ depending on the accident scenario or on other human failure events. When residual heat removal is unavailable after a failure of coolant makeup in an overdrain accident at low power and shutdown, a feed-and-bleed operation may be needed to prevent core damage. Two cases arise in which feed and bleed is required after the failure of coolant makeup: failure due to mechanical causes and failure due to a human failure event. The human error probability of feed and bleed after a human failure event in the makeup may be higher because of the dependency. Human error dependency has traditionally been reflected through post-processing. This existing method has several limitations because the dependency is reflected only after the minimal cut sets are derived: several minimal cut sets may be improperly truncated by the cut-off value, and propagation on a fault tree is also difficult. This study proposes a method for modeling human error dependency directly on the fault tree, and we provide applicable cases for modeling human error dependency in the fault tree. The method can be practically applied to a PSA model while overcoming the limitations of the existing approach.

Towards a relational model for a joint Safety and Security risk assessment process

ABSTRACT. Risk assessment is an important issue in cyber-physical systems, as they are confronted with risks of both accidental and malevolent origin. These risks fall respectively under the banners of Safety and Security, and each discipline has a distinct set of practices for risk assessment and risk analysis. However, the growing need to bring the risk assessment processes of safety and security together has led us to look for ways to join the processes and thus deal with the various interactions between safety and security. We observed that the risk assessment processes of safety and security deal with several entities that sometimes overlap and are sometimes linked together (such as feared events, requirements, risks, threats, countermeasures, etc.). We took an interest in identifying these entities and the relationships that can occur between them in a relational model that describes a joint Safety and Security risk assessment process. In this paper, we present a relational model that constitutes a knowledge base including all the entities manipulated in Safety and Security risk analyses and their interrelations. We further show how to implement and use the knowledge base effectively to answer questions about Safety and Security risk interactions.
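As a rough illustration of how such a knowledge base could be implemented and queried, the sketch below stores safety and security entities as subject-relation-object triples. The entity names, relations, and the `query` helper are hypothetical examples, not the authors' relational model.

```python
# Hypothetical safety/security knowledge base as a tiny triple store.
# Entities and relations are invented for illustration.
triples = {
    ("overpressure", "is_a", "feared_event"),
    ("spoofed_sensor", "is_a", "threat"),
    ("spoofed_sensor", "can_cause", "overpressure"),
    ("relief_valve", "mitigates", "overpressure"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given (partial) pattern."""
    return [(s, r, o) for s, r, o in triples
            if subject in (None, s) and relation in (None, r) and obj in (None, o)]

# Example interaction question: which security threats can cause
# safety feared events?
print(query(relation="can_cause"))
```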


ABSTRACT. Over the past 20 years the aviation industry has demonstrated significant growth, from 1.5 billion air passengers in 1998 to more than 4.3 billion in 2018. Managing security risks remains a significant challenge for aviation companies because security events are rare and thus harder to predict (Cliton et al, 2013). The industrial regulations of the International Civil Aviation Organization (ICAO, 2013) and the International Air Transport Association (IATA, 2019) require air carriers to conduct security risk assessments of aviation routes; they do not, however, provide specific tools or standards for doing so. Aircraft shooting is one of the most critical aviation security incidents and can be caused by terrorism or by accidental shooting during a military conflict. The most recent examples of aircraft shootings amid military conflicts were the downing of a Malaysia Airlines Boeing 777 over Eastern Ukraine in 2014 and the downing of a Ukraine International Airlines aircraft near Tehran in 2020 (The Aviation Safety Network, 2020). In the past, security-related information spread via traditional media, so the approach to threat assessment was rather reactive, owing to limits in the speed of information spread, the method of analysis, and the distribution of information about potential risks for air operations (Cullen, 2014). Currently, information about terror attacks, wars, or conflicts is instantly available through various internet channels, making proactive and predictive threat assessment possible. However, the most popular threat assessment methodologies used by the industry are qualitative ('low', 'medium', 'high') and subject to misinterpretation by users (Renooij and Witteman, 1999). This study therefore explores the potential of a quantitative approach.
Specifically, the United Nations (UN) lists several factors that influence aviation security, such as economic threats, internal and inter-state conflicts, terrorism, and organized crime (UN, 2004). This paper examines how these factors influence security-related incidents and develops a threat assessment model to predict the likelihood of surface-to-air shooting incidents. The data employed include 276 cases of surface-to-air shootings at aircraft between 1947 and 2018. The proposed methodology can complement the existing methodologies for aviation security risk assessment.

09:00-10:20 Session TH1B: Mathematical Methods in Reliability and Safety
Location: Atrium 2
Continuous models for discrete data of residual contamination
PRESENTER: Kamila Hasilova

ABSTRACT. In this article we focus on the residual contamination of surfaces with toxic agents, on ways to model the measured residuals, and on their statistical properties.

The efficiency of decontamination is assessed by point detectors, which requires finding the contaminant on a contaminated person. Koska (2015) proposed implementing contamination control chambers with a suitable chemical detector able to analyze contamination residuals. Based on experimental results, he proposed optimal conditions for residual contamination control with the selected detector, as well as operating procedures for checking decontamination efficiency in the contamination control chamber for Integrated Rescue System units.

The objective of this paper is to determine whether there is statistical evidence of different contaminant detection behavior under different ambient conditions. The data are measured at discrete time points, although they represent a continuous function; we therefore introduce a functional data analysis approach. Functional data analysis deals with data in their general form, i.e., with data in the form of curves (Zhang, 2014; Ramsay and Silverman, 2005).

In the analysis, parametric and nonparametric regression models are used to convert the discretely measured data into functional form. Then, mean and variance functions are constructed for the respective measurement conditions. Tests for significant differences show that a certain level of discrepancy is present, namely with respect to the ambient temperature and the type of toxic agent.
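The pointwise step behind mean and variance functions can be sketched minimally, assuming three replicate decontamination curves sampled on a common time grid. The values below are invented for illustration; the paper's actual analysis uses regression-based functional representations.

```python
# Toy pointwise mean and variance functions over replicate curves
# sampled at the same time points (values are illustrative, not data).
from statistics import mean, pvariance

curves = [
    [1.00, 0.55, 0.30, 0.17],
    [1.00, 0.60, 0.35, 0.20],
    [1.00, 0.50, 0.28, 0.14],
]

mean_fn = [mean(c[i] for c in curves) for i in range(4)]
var_fn = [pvariance([c[i] for c in curves]) for i in range(4)]
print(mean_fn)
print(var_fn)
```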

Design verification by small sample Locati experiments

ABSTRACT. Design verification in automotive engineering is often supported by fatigue tests for special parts such as the conrods of crankshafts, with the target of estimating the component's latent tolerance distribution. In the Locati test, stress levels increase with time until failure, and it is not obvious how to infer the tolerance T from the stress S. Two ways to find f(T) from Locati tests are discussed: the CE model of Nelson (1990) and a simple approach based on the Miner rule. The properties of both approaches are investigated in a simulation study.
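The Miner-rule approach mentioned above can be sketched as follows: damage D = sum(n_i / N_i) is accumulated over the Locati stress steps, with cycles-to-failure taken from an assumed Basquin-type S-N curve N(S) = C * S**(-k). The constants and load steps are illustrative, not values from the paper.

```python
# Miner-rule damage accumulation over a Locati staircase.
# Basquin constants C, k and the load steps are assumptions.
C, k = 1e12, 3.0

def cycles_to_failure(stress):
    """Assumed S-N curve: N(S) = C * S**(-k)."""
    return C * stress ** (-k)

def miner_damage(steps):
    """steps: list of (stress_amplitude, applied_cycles) pairs."""
    return sum(n / cycles_to_failure(s) for s, n in steps)

locati_steps = [(100.0, 2e5), (120.0, 2e5), (140.0, 1e5)]
D = miner_damage(locati_steps)
print(f"accumulated damage D = {D:.3f}")  # failure expected near D = 1
```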

Genetic Algorithm Approach with Network Configuration for Bi-objective Network Optimization

ABSTRACT. In the real world, many infrastructures, for example the Internet, power and water supply, and traffic systems, are network systems requiring high reliability. At the same time, these systems incur costs for the construction and maintenance of the links in their networks. In this study, we consider a bi-objective network problem whose objectives are maximizing all-terminal reliability and minimizing cost. In general, these objectives are in a trade-off relationship and cannot be optimized simultaneously; solving the problem therefore means finding the set of all Pareto solutions. On the other hand, evaluating the all-terminal reliability of a given network is computationally intractable (Koide 2002), which suggests that our bi-objective problem is computationally intractable as well. Takahashi et al. (2018) therefore provided a Genetic Algorithm (GA) that obtains a set of quasi-Pareto solutions; however, this algorithm needed much computing time when the number of nodes was large. In this study, we propose a GA that obtains non-dominated solutions close to the Pareto solutions. The proposed algorithm reconsiders parent selection such that all non-dominated solutions and “good” solutions are selected as parents in each generation. In addition, we analyze the GA process so that the crossover process reflects not only component criteria but also the network configuration. The accuracy of the proposed algorithm is then evaluated by comparison with other algorithms.
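For very small networks, all-terminal reliability can be evaluated exactly by enumerating edge states, which makes the intractability remark above concrete: the state space doubles with every edge. A sketch, with an illustrative triangle network and equal edge reliability p (this brute-force evaluator is for illustration only, not the paper's GA):

```python
# Exact all-terminal reliability by exhaustive enumeration of edge
# states; feasible only for tiny graphs (2**|E| states).
from itertools import product

def connected(nodes, edges):
    """True if the surviving edges connect all nodes."""
    seen, stack = set(), [nodes[0]]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        for a, b in edges:
            if a == u:
                stack.append(b)
            elif b == u:
                stack.append(a)
    return seen == set(nodes)

def all_terminal_reliability(nodes, edges, p):
    """P(all nodes connected) when each edge works independently w.p. p."""
    rel = 0.0
    for state in product([True, False], repeat=len(edges)):
        up = [e for e, s in zip(edges, state) if s]
        prob = 1.0
        for s in state:
            prob *= p if s else 1 - p
        if connected(nodes, up):
            rel += prob
    return rel

# Triangle network; the known closed form is 3*p**2 - 2*p**3.
print(all_terminal_reliability([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0.9))
```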

PRESENTER: Elena Zaitseva

ABSTRACT. Present-day technological development has resulted in the elaboration and application of new devices in different areas of human activity. One such new monitoring device is the unmanned aerial system, or drone, which can be used for monitoring dangerous and inaccessible territories [1]. No other data collection platform offers a similar combination of cost-effectiveness, quality, frequency, flexibility, extensibility, portability, safety, and reliability. However, the study in [2] showed that drones and drone fleets have an overall failure rate of 25%, which is high for a technical system. Approaches for decreasing this failure rate should therefore be developed. These approaches should be based on data about the reliability of drones and drone fleets, which can be obtained with methods for evaluating the reliability of these systems. Such an investigation can be implemented based on the methods and approaches of reliability engineering. Some investigations of drone reliability and drone components are presented in [2], whose authors considered the structure of a drone from the point of view of its operation and the basic methods for evaluating the reliability of each component. Mathematical models of drone fleets for reliability evaluation have been studied in [3], whose authors proposed an initial classification of drone fleets from the point of view of methods for reliability analysis. In this classification, several types of mathematical representation of drones and drone fleets have been considered, one of which is the structure function. According to the typical structure-function model, a drone fleet can be interpreted as a k-out-of-n system, which is considered in detail in this paper.
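The k-out-of-n interpretation mentioned above can be sketched directly with the binomial formula; the fleet size, threshold, and per-drone reliability below are illustrative (the 0.75 value loosely echoes the 25% failure rate cited from [2]):

```python
# Exact reliability of a k-out-of-n system of i.i.d. components.
from math import comb

def k_out_of_n_reliability(k, n, p):
    """P(at least k of n independent components work), each w.p. p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative fleet of 5 drones with 0.75 per-drone reliability;
# the mission is assumed to need at least 3 drones airborne.
print(f"{k_out_of_n_reliability(3, 5, 0.75):.4f}")  # -> 0.8965
```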

09:00-10:20 Session TH1C: Reliability and Maintenance of Networked Systems
Online Estimating the Resource Overload Risk in 5G Multi-tenancy Networks

ABSTRACT. Network slicing, the most characteristic feature of fifth generation (5G) wireless networks, manages resources and network functions in heterogeneous, logically isolated slices on top of a shared physical infrastructure, where every slice can be independently customized to fulfill the specific requirements of its dedicated service type. It enables a new paradigm of multi-tenancy networking, in which network slices can be leased by the mobile network operator (MNO) to tenants in the form of a public cloud computing service, known as Slice-as-a-Service (SlaaS). As in classical cloud computing scenarios, SlaaS benefits from overbooking its resources to numerous tenants, taking advantage of resource elasticity and diversity, at the price of risking overloaded network resources and violated service-level agreements (SLAs), which stipulate the quality of service (QoS) that shall be guaranteed to the network slices. Accurately estimating the resource overload risk - especially under sophisticated network dynamics - thus becomes a critical challenge for MNOs in monitoring and enhancing the reliability of the SlaaS business.

While the existing literature mostly considers oversimplified models of slice resource load, in this paper we propose a novel approach based on a Markov-truncated-Gaussian-mixture model, which is capable of accurately approximating any practical resource load distribution. Numerical simulations demonstrate that the proposed approach is effective and accurate.
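A hedged sketch of the overload-risk idea: a two-component truncated-Gaussian mixture of the resource load (without the Markov modulation of the paper's model), with the overload probability estimated by plain Monte Carlo. All parameters are invented for illustration.

```python
# Monte Carlo estimate of P(load > capacity) under a two-component
# truncated-Gaussian mixture; weights, means and capacity are
# illustrative assumptions.
import random

random.seed(0)

def truncated_gauss(mu, sigma, lo=0.0):
    """Rejection-sample a Gaussian truncated below at `lo` (load >= 0)."""
    while True:
        x = random.gauss(mu, sigma)
        if x >= lo:
            return x

def overload_probability(components, capacity, n=100_000):
    """components: list of (weight, mu, sigma); weights sum to 1."""
    hits = 0
    for _ in range(n):
        r, acc = random.random(), 0.0
        for w, mu, sigma in components:
            acc += w
            if r <= acc:
                hits += truncated_gauss(mu, sigma) > capacity
                break
    return hits / n

mix = [(0.8, 50.0, 10.0), (0.2, 90.0, 15.0)]  # normal vs. busy regime
p_over = overload_probability(mix, capacity=100.0)
print(p_over)
```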

A Hierarchical Predictive Maintenance Model for Networks
PRESENTER: Zhenglin Liang

ABSTRACT. Nowadays, with the emergence of enabling technologies such as digital sensing, big data, and machine learning, predictive maintenance has become a prevalent topic in system reliability. The blooming of related research has already caused multiple paradigm shifts, such as Industry 4.0 and digital manufacturing. Whether predictive maintenance can also shed light on problems at the network level and bring real value in serving a network's macro-characteristics is an essential question to be answered. This research aims to design a hierarchical approach that scales up the effect of predictive maintenance to benefit systemic maintenance and, in turn, serve the network's macro-characteristics. The hierarchical model integrates component, system, and network knowledge in a layered fashion, so that the model contains both actionable details and macro-characteristics. At the component level, we model the deterioration process of components as a multi-state, multi-path stochastic process and develop a novel approach for predicting, from inspection information, the time at which the optimal condition for maintenance is reached. At the system level, we highlight the heterogeneity of components in the system model: more often than not, practical systems are composed of multiple components whose downtime or failure may have different impacts on system performance. We employ the predicted information from the component level to construct a group maintenance strategy, in which components' maintenance activities are actively grouped together to share set-up cost and downtime. Different maintenance strategies may require different costs and result in different system performance, so we utilize a genetic algorithm with agglomerative mutation (GA-A) to evaluate and optimize the group maintenance policies against the different system performance requirements.
At the network level, we consider a network composed of multiple systems and select the system performance measures that satisfy the macro-characteristics. In this way, predictive maintenance at the component level can support decision making at the system level and subsequently serve the network. The overall approach will be applied to a transportation network: in practice, a transportation network is composed of multiple systems such as bridges, roads, and tunnels, which are in turn composed of components such as concrete decks, wingwalls, and joints, matching the model description. Moreover, macro-characteristics such as connectivity and resilience have important practical meaning in transportation networks.
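The component-level prediction step can be illustrated with a toy multi-state deterioration chain: the expected number of inspection periods until the maintenance-threshold state is reached is the Markov hitting time, computed here by simple fixed-point iteration. The transition matrix is an assumed example, not the paper's multi-path model.

```python
# Expected hitting time of the maintenance-threshold state in a toy
# discrete-time deterioration chain (transition matrix is illustrative).

def expected_hitting_time(P, target, iters=10_000):
    """Fixed-point iteration on h_i = 1 + sum_j P_ij * h_j, h_target = 0."""
    n = len(P)
    h = [0.0] * n
    for _ in range(iters):
        h = [0.0 if i == target else
             1.0 + sum(P[i][j] * h[j] for j in range(n))
             for i in range(n)]
    return h

P = [[0.9, 0.1, 0.0],   # as-new
     [0.0, 0.8, 0.2],   # degraded
     [0.0, 0.0, 1.0]]   # maintenance threshold (absorbing)

h = expected_hitting_time(P, target=2)
print(h[0])  # expected periods from "as-new" to the maintenance state
```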

Modeling and prediction for networks with time-series node data

ABSTRACT. Network data are ubiquitous nowadays. Many physical, biological, and social systems are structured as networks that represent pairwise relations or interactions among entities or individuals. However, data are usually collected individually and do not directly reflect these relations or interactions. It is therefore important to characterize the correlations among individuals and model the network as a whole. This work proposes a hidden Markov model with a hierarchical Gaussian distribution to model time-series network data. The time dependence is modeled by the Markov process, and the node dependence is modeled with a correlation matrix that influences the Gaussian mean. A case study showed that the proposed model fits time-series networked data well and can be used for prediction.
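A minimal sketch of the hidden-Markov backbone of such a model (time dependence via a Markov chain, Gaussian emissions), evaluating the likelihood of a short series with the forward algorithm. The two-regime parameters are illustrative, not fitted values, and the node-correlation layer of the paper's model is omitted.

```python
# Forward algorithm for a 2-state HMM with Gaussian emissions.
from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def forward_likelihood(obs, init, trans, emit):
    """emit: list of (mu, sigma) per hidden state."""
    n = len(init)
    alpha = [init[s] * gauss_pdf(obs[0], *emit[s]) for s in range(n)]
    for x in obs[1:]:
        alpha = [gauss_pdf(x, *emit[s]) *
                 sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

obs = [0.1, 0.2, 2.1, 2.3]  # illustrative series switching regimes
L = forward_likelihood(obs,
                       init=[0.5, 0.5],
                       trans=[[0.9, 0.1], [0.1, 0.9]],
                       emit=[(0.0, 1.0), (2.0, 1.0)])
print(L)
```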

Robust optimization for network restoration under demand uncertainty
PRESENTER: Chuanzhou Jia

ABSTRACT. With the development of science and technology, networked infrastructure systems, such as communication networks, are indispensable to society (Ouyang 2017). However, networked infrastructures also face huge threats, such as natural disasters and deliberate attacks, so post-disaster restoration and maintenance work must be considered. For the restoration of networked infrastructure, two factors must be considered simultaneously: the components in the network are geographically distributed, and the flow transmitted in the network depends on the restoration state of the network. Because the components are geographically distributed, vehicle routing must be considered for their restoration; in addition, network flow scheduling and the demand of each customer must be considered during the maintenance process. For this purpose, we propose a robust optimization model that simultaneously determines the maintenance routes and schedules the network flow in post-disaster restoration. Taking the uncertainty of customer demands into account, we seek to optimize the restoration tasks within the limited time so as to minimize the unmet demand of the whole network under the worst case. Finally, we develop an efficient algorithm to solve the proposed problem.
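A toy version of the worst-case criterion: for each candidate repair route, accumulate unmet demand with every customer demand at its interval upper bound, then pick the route minimizing that worst case. Repair times, demands, and the horizon are invented for illustration; the paper's model additionally schedules network flow and uses a more general uncertainty set.

```python
# Worst-case unmet demand over box-uncertain demands for each repair
# route, minimized by brute force (illustrative three-node example).
from itertools import permutations

repair_time = {"A": 3.0, "B": 2.0, "C": 4.0}
demand_hi = {"A": 5.0, "B": 8.0, "C": 2.0}  # demand upper bounds (worst case)

def worst_case_unmet(route, horizon=6.0):
    """Demand accrues per unit time until a node's repair finishes
    (capped at the planning horizon)."""
    t, finish = 0.0, {}
    for node in route:
        t += repair_time[node]
        finish[node] = t
    return sum(d * min(finish[node], horizon)
               for node, d in demand_hi.items())

best = min(permutations(repair_time), key=worst_case_unmet)
print(best, worst_case_unmet(best))
```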

09:00-10:20 Session TH1D: Land Transportation // Smart Cities and Systems
Location: Panoramique
Accident experience, subjective assessment of risk and behaviour among Norwegian cyclists

ABSTRACT. The aims of the study were to examine whether cyclists who have been in a traffic accident while cycling change their behaviour, and how the accident experience has influenced their perceived risk and worry. Risk perception focuses on individuals' subjective assessment of risk, with worry as an emotional reaction when thinking about the risk. Increasing the number of cyclists and travellers using pro-environmental travel modes is given high priority by the authorities in European countries. However, cyclists have a higher risk of being in an accident than other road users and are defined as vulnerable road users in traffic safety research. Hence, studying cyclists' accident experiences, subjective assessment of risk, and behavioural changes after accident experiences is important. Data were collected through an online questionnaire survey (n = 300) distributed in collaboration with The Norwegian Cyclist Association. The respondents were asked about their accident experiences when cycling during the last two years, and to rate their perceived risk and worry when cycling. The cyclists who had been in an accident were further asked to write their stories about the accident experience and how it has influenced their cycling behaviour. The data were analysed using both quantitative and qualitative methods. The results showed significant associations between risk perception and worry on the one hand and accident experience on the other. The cyclists' stories about their accidents revealed that the experiences have influenced the cyclists' behaviour, their trust in other road users, their use of safety equipment, and their route choice when cycling. How the results from the current study can be used in traffic safety work will be discussed.


ABSTRACT. Electric Vehicle (EV) penetration in the market affects road networks that, to date, are not designed and optimized for a hybrid fleet of EVs and Internal Combustion Vehicles (ICVs) (Wei, et al., 2019). Indeed, the strong dependency of EVs on the power distribution infrastructure requires an integrated road-power infrastructure. In this work, a Finite State Machine (FSM) (Anderson & Nair, 2018) modelling approach is proposed for EV and ICV motion in an integrated road-power infrastructure. The FSM is shown to be a powerful modelling approach that can easily embed realistic characteristics such as drivers' attitudes, traffic, and the effects of disruptions induced by natural hazards or traffic incidents, in order to analyze the performance, safety, reliability, and resilience of the integrated road-power infrastructure. As a realistic case study, we consider a simplified road network in New York State and simulate the motion of both EVs and ICVs in various scenarios, assuming different EV penetration levels and considering various disruption scenarios.

A Cascading Failure Model of Rail Transit Based on Application Cell

ABSTRACT. A rail transit system delivers its application by transporting passengers to their destinations, and whether this delivery succeeds greatly affects daily operation. Current research based on cascading failure mainly focuses on the connectivity of the network, while research based on queuing theory, network calculus, and the BPR function focuses on network performance. Both treat the transport process as a homogeneous flow and consider only the function-based or performance-based structure of the network, ignoring that transport requests are essentially applications with multiple attributes (such as application class, application process, and so on); these attributes have a great impact on rail transit, so current research cannot reflect reliability from the users' perspective. In this paper, we focus on the cascading failure process of rail transit, treating each call request from a passenger as one application. To capture the features of applications, we describe them as application cells, with the multiple attributes of the transport process corresponding to the characteristics of the cells. The delivery of an application can then be regarded as the interaction among cells in our model. Simulation results show that our model can effectively depict the process of cascading failure among cells and can help simplify the complexity of the application process. Moreover, we find that even under the same amount of burst traffic, different application types exhibit different cascading processes and entirely different network reliability.


ABSTRACT. Physical networks, such as potable water distribution networks, sewerage networks (which collect and convey wastewater to a treatment plant), electric networks made of power lines and transformers, and telecommunication networks, are fundamental constituents of a complex, monitored, and optimized urban infrastructure. All these systems share the property of being computerized in a GIS, where the physical units (pipes, electrical or optical fiber lines) are often buried under roads. It is thus common to consider preventive renewals as the main maintenance action for the possibly degraded units that constitute the network; in that case, choosing when and where to place the worksites is a challenging problem. Another aspect concerns the regulatory obligation to provide preventive maintenance for these systems, leading managers to face the problem of optimizing preventive actions with respect to mono- or multi-objective criteria. One of these criteria is the inconvenience caused by worksites, which should be minimized. That is why a worksite should include a set of neighboring units to be changed simultaneously, as quickly as possible and, in our example, in the same street.

This paper proposes a methodology for the optimized allocation of a preventive maintenance budget across worksites. Each potential worksite constitutes a geographical area containing some units to renew. Each unit is identified by its location, the indexes of the units to which it is connected, and some covariates, such as length, material, age, risk priority number, and failure probability.

09:00-10:20 Session TH1E: Human Factors and Human Reliability
Location: Amphi Jardin
Determination of Level of Automation for an Adequate Human Performance
PRESENTER: Alaide Bayma

ABSTRACT. Automation has grown enormously in recent years and has been a major trend since the 20th century. Automation systems can provide superior reliability, improved performance, and reduced costs for many functions, and human-machine design could not remain outside this development and technological advancement. Human-machine design takes into account human factors and the limitations and abilities of humans and machines through the levels of automation; it is therefore essential to consider, in the design of such systems, the automation level that allows adequate human performance. The purpose of this paper is to present a preliminary approach for evaluating human performance during a car parking activity with and without car parking assistance (an automation system), and to analyze the impacts of parking assistance on the factors contributing to human error. The method for determining the level of automation is based on four generic functions intrinsic to the man-machine domain: monitoring, generating options, selecting options, and implementing. The human performance analysis is carried out with a Bayesian Network approach supported by fuzzy logic, which is used to model the performance shaping factors and to check, through causal inference and diagnosis, which factors most influence task performance at a specific level of automation. The results indicated a positive aspect: the activity with a higher chance of human error in the procedure without the parking assistance system was the activity that, with parking assistance, has the higher automation level. The analysis recommends that the alarm and panel design be re-evaluated, since different alarms and pieces of equipment are used to present the same information, slowing decision making and inducing complacency.
The button that turns on the parking assistance system is also the one that selects among three types of maneuvers; in addition, the throttle can be used to control the maneuver, and both pieces of information should be part of the training given before using parking assistance. Although workload increases with parking assistance, the chance of human error is smaller than without it.
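A toy sketch of the causal inference and diagnosis step described above: a two-parent Bayesian network in which hypothetical performance shaping factors (workload and alarm-design quality) influence the human error probability. All probabilities are invented for illustration, not elicited or fuzzy-derived values from the paper.

```python
# Tiny Bayesian network: workload and alarm design -> human error.
# Exact inference by enumeration; all CPT values are assumptions.

p_high_workload = 0.3
p_poor_alarm = 0.4
# P(error | high_workload, poor_alarm):
p_error = {(True, True): 0.20, (True, False): 0.08,
           (False, True): 0.10, (False, False): 0.02}

def marginal_error():
    """Causal (predictive) inference: overall P(error)."""
    total = 0.0
    for w in (True, False):
        for a in (True, False):
            pw = p_high_workload if w else 1 - p_high_workload
            pa = p_poor_alarm if a else 1 - p_poor_alarm
            total += pw * pa * p_error[(w, a)]
    return total

def posterior_poor_alarm_given_error():
    """Diagnostic inference: which factor explains an observed error?"""
    joint = sum((p_high_workload if w else 1 - p_high_workload) *
                p_poor_alarm * p_error[(w, True)] for w in (True, False))
    return joint / marginal_error()

print(f"P(error) = {marginal_error():.4f}")
print(f"P(poor alarm | error) = {posterior_poor_alarm_given_error():.3f}")
```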

Physiological Measurements for Real-time Fatigue Monitoring in Train Drivers: Review of the State of the Art and Reframing the Problem

ABSTRACT. The impact of fatigue on train drivers is one of the most important safety-critical issues in rail. It affects drivers' performance, contributing significantly to railway incidents and accidents. To address real-time fatigue detection in drivers, the most reliable and applicable psychophysiological indicators of fatigue need to be identified. Hence, this paper examines and presents the current state of the art in physiological measures for real-time fatigue monitoring that could be applied in the train driving context. Three groups of such measures are identified: EEG, eye-tracking, and heart-rate measures. This is the first paper to provide an analysis and review of these measures together at a granular level, focusing on specific variables. Their potential application to monitoring train driver fatigue is discussed in the respective sections, and a summary of all variables, key findings, and issues across these measures is provided. An alternative reconceptualization of the problem is proposed, shifting the focus from the concept of fatigue to that of attention. Several arguments are put forward in support of attention as a better-defined construct, more predictive of performance decrements than fatigue, with serious ramifications for human safety. The proposed reframing of the problem, coupled with the detailed presentation of findings for specific relevant variables, can serve as a guideline for future empirical research, which is needed in this field.

SAM-L2HRA: Human and Organizational Reliability Analysis Method for Analyzing Severe Accident Management Strategies and Actions
PRESENTER: Jaewhan Kim

ABSTRACT. Human Reliability Analysis (HRA) is required to evaluate the decision likelihood of severe accident management (SAM) strategies, to adequately assess the reliability of human and organizational actions, and ultimately to incorporate those results into a Level 2 PSA model. The new Level 2 HRA method (SAM-L2HRA) consists of two parts. The first part is a time uncertainty analysis of the failure of a SAM strategy, whose probability is estimated from the convolution of two time distributions, i.e., time available and time required; the second part is a task-based analysis of error potential or decision-making likelihood [1]. The time elements considered in the time uncertainty analysis include the distribution of the time available, accounting for the phenomenological uncertainty associated with severe accident events such as reactor vessel failure; the time required (and its distribution) for each individual SAM strategy; and the total integrated time (and its distribution) from the entry point into the SAMG to the completion of the implementation of the strategy under consideration. The task-based analysis part deals with the error potential or decision-making likelihood associated with the critical steps or activities needed for decision-making and successful implementation of a strategy.
The steps or activities to be analyzed include the availability or survivability of the essential information needed to recognize the need for a strategy and to monitor the progress and effectiveness of its implementation; the impact of the negative effects associated with a strategy on the decision to implement it, and the likelihood thereof; and the reliability of the implementation activity, in which coordination and cooperation between distributed organizations - such as the technical support center (TSC), the main control room (MCR), and the local operating personnel in charge of actual implementation using installed or portable/mobile equipment - are of critical importance.
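The convolution of time available and time required reduces, in Monte Carlo form, to estimating P(T_required > T_available). A minimal sketch with assumed lognormal distributions; the parameters are illustrative, not values from the method.

```python
# Monte Carlo estimate of the time-related failure probability
# P(T_required > T_available); lognormal parameters are assumptions.
import random

random.seed(1)

def p_failure(n=100_000):
    fail = 0
    for _ in range(n):
        t_avail = random.lognormvariate(4.0, 0.3)  # e.g. time to vessel failure
        t_req = random.lognormvariate(3.5, 0.4)    # diagnosis + implementation
        fail += t_req > t_avail
    return fail / n

pf = p_failure()
print(f"P(strategy fails in time) = {pf:.4f}")
```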

Investigation of a Domain-Specific Culture Shared by Operator Groups in Nuclear Power Plants
PRESENTER: Seung Ki Shin

ABSTRACT. The performance of human operators has a major impact on the safety of complex socio-technical systems such as nuclear power plants; it is therefore important to understand the performance shaping factors in order to improve the safety of socio-technical systems. Among the various factors affecting the performance of an operating team, its cultural characteristics are known to be one of the dominant factors. To distinguish the cultural characteristics of each team systematically, it is important to confirm the existence of a domain-specific group culture, which can be regarded as a reference for determining the nature of the culture possessed by each team. In this regard, this study proposes how to identify a domain-specific culture in a certain domain and confirms its presence in the nuclear industry. Four representative classes of cultural factors affecting the formation of group cultures are proposed, and eight experimental sets are designed to capture the existence of the domain-specific group culture in nuclear power plants. Empirical cultural data based on Hofstede's indices are collected from different operator groups under diverse conditions according to the experimental design and analysed statistically. The results show that diverse operator groups share very similar cultural characteristics in terms of two of Hofstede's indices, and it is revealed that the transition of the working environment from analogue to digital is the most influential factor on group culture in nuclear power plants.

09:00-10:20 Session TH1F: Structural Reliability
Design and homologation of fiberglass insulators for high voltage switches: an overview of Italian regulations and a proposal for a new approach to determine safety coefficients.
PRESENTER: Elisa Pichini

ABSTRACT. The Pressure Equipment Directive (PED), 2014/68/EU, applies to the design, manufacture and conformity assessment of pressure equipment. However, it does not apply to enclosures for high-voltage electrical equipment, such as switchgear, control gear, transformers and rotating machines. In the EU, the design and manufacture of such casings are governed by regulations or guidelines defined by each Member State. In Italy, DM 1.12.1980 is still in force, which provides for product type approval by means of tests on prototypes. For metal and insulating materials, the overall safety coefficient is assessed by means of burst tests and specific tests. This article provides an overview of the approach followed by DM 1.12.1980 and a comparative analysis with the indications reported in EN 50052 and in the product standard IEC 61462. In addition, an ad hoc plan of activities is outlined, based on experimental laboratory tests on samples from production batches of fiberglass insulators. This is coupled with a specific structural numerical analysis using Finite Element Analysis (FEA) to support the evaluation of suitable safety coefficients for orthotropic composite materials to be adopted in the homologation of new products.

The Analytical Equivalent Drift Coefficient of EV-GDEE on Steady State for Several Nonlinear and Multi-dimensional Systems
PRESENTER: Tingting Sun

ABSTRACT. The joint probability density function (PDF) of the response of structural systems subjected to Gaussian white noise is governed by the Fokker-Planck-Kolmogorov (FPK) equation. However, due to the nonlinearity and high dimensionality of real engineering structures, it is practically difficult to solve the FPK equation directly. To this end, by introducing the equivalent drift coefficient, Chen & Rui proposed the so-called ensemble-evolving-based generalized density evolution equation (EV-GDEE), which can reduce the original high-dimensional FPK equation to a partial differential equation in only one or two dimensions [1]. In the EV-GDEE, the equivalent drift coefficient, which represents the driving force behind the evolution of the PDF, is obviously essential but usually not known in advance. To date, the equivalent drift coefficient of an arbitrary multi-dimensional nonlinear system has mainly been estimated numerically, e.g., by regression methods, and it is generally hard to obtain a universal and satisfactory result [1,2]. However, for some special multi-dimensional nonlinear Hamiltonian systems, analytical solutions to the steady-state FPK equation are available [3,4], which allows an analytical expression of the equivalent drift coefficient. In this study, the analytical equivalent drift coefficient at steady state is derived based on the exact PDF solutions for several multi-dimensional nonlinear systems under Gaussian white noise excitation. The derived analytical expression is also compared with the numerical regression method. In addition, the obtained analytical expression is used to improve the estimate of the transient equivalent drift coefficient and, consequently, yields results of higher accuracy.
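The kind of relation exploited can be illustrated, in hedged form, on the simplest scalar case (a generic textbook reduction, not the authors' multi-dimensional derivation): for a scalar Itô diffusion with constant intensity D, the steady-state FPK equation with zero probability flux ties the stationary PDF to the drift, so an exact stationary PDF immediately yields the drift analytically:

```latex
\mathrm{d}X_t = a(X_t)\,\mathrm{d}t + \sqrt{D}\,\mathrm{d}W_t,
\qquad
0 = -\frac{\mathrm{d}}{\mathrm{d}z}\!\left[a(z)\,p_{\mathrm{s}}(z)\right]
    + \frac{D}{2}\,\frac{\mathrm{d}^{2} p_{\mathrm{s}}(z)}{\mathrm{d}z^{2}},
```

and imposing zero probability flux gives

```latex
p_{\mathrm{s}}(z) = C \exp\!\left(\frac{2}{D}\int^{z} a(u)\,\mathrm{d}u\right)
\quad\Longleftrightarrow\quad
a(z) = \frac{D}{2}\,\frac{\mathrm{d}\,\ln p_{\mathrm{s}}(z)}{\mathrm{d}z}.
```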

Estimation of Probability outside a Sampled Domain using Convex Hull Approximation

ABSTRACT. An exploration phase is an essential part of many reliability methods to obtain a basic orientation in the domain of the basic random variables. Subset simulation, asymptotic sampling and some variants of importance sampling typically start exploring the space from the regions associated with the highest probability and, based on the obtained information, they may adapt the sampling density by increasing the sampling variance or moving the sampling density towards the failure domain(s). These domains typically correspond to rare events.

We propose using a geometrical representation of the explored part of the domain of the basic random variables by a convex hull. Once it is constructed, we can target exploration sampling and use the convex hull to estimate the measure of the unexplored domain. Whereas in problems with few random variables we are able to construct the convex hull precisely, in high-dimensional problems it suffices to assemble the convex hull from a limited number of hyper-planes. The failure probability of such a convex set has been shown to range within narrow bounds. Apart from constructing the failure set as an approximation based on the convex hull, we also propose a solution for the optimal importance sampling density in Gaussian space. The latter is based on the Gaussian s-ball integral.
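The basic construction, i.e. a hull of the explored samples followed by a Monte Carlo estimate of the probability mass left outside it, can be sketched in two dimensions (a toy illustration; the paper's hyper-plane assembly and s-ball integral are not reproduced here):

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(map(tuple, points))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and np.cross(np.subtract(h[-1], h[-2]),
                                           np.subtract(p, h[-2])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def outside_fraction(hull, X):
    """Fraction of points of X lying outside the CCW hull polygon."""
    a, b = hull, np.roll(hull, -1, axis=0)
    e = b - a                                   # CCW edge vectors
    cr = (e[:, 0] * (X[:, None, 1] - a[:, 1]) -
          e[:, 1] * (X[:, None, 0] - a[:, 0]))
    return np.mean(np.any(cr < 0, axis=1))      # right of any edge: outside

rng = np.random.default_rng(1)
explored = rng.standard_normal((200, 2))        # exploration-phase samples
hull = convex_hull(explored)

# Monte Carlo estimate of the Gaussian probability mass outside the hull,
# i.e. the measure of the unexplored domain.
p_out = outside_fraction(hull, rng.standard_normal((50_000, 2)))
print(f"estimated probability outside the explored hull: {p_out:.4f}")
```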

Finite element-fidelity parametrization of kriging metamodels for structural reliability assessment
PRESENTER: Ludovic Mell

ABSTRACT. It is critical to obtain a precise estimation of the probability of failure when performing the reliability analysis of a given structure. The Monte Carlo estimator is a non-intrusive and unbiased estimator that can easily be implemented to compute this probability. However, the Monte Carlo estimator requires simulating the structure for a large number of realizations of the input random variables, due to its low convergence rate. For complex mechanical problems solved by the finite element method (FEM), the computational cost of this estimator may thus be significant. Therefore, some reliability analyses are based on a metamodel, built from a few calls to the finite element solver, that allows the structural response to be approximated quickly. To accurately estimate the probability of failure, the metamodel has to be precise close to the limit state delimiting the safe and failure zones. One of the common methods to construct a metamodel is kriging, which provides an estimation of the uncertainty of its own prediction. This estimation of uncertainty allows the metamodel to be coupled with the Monte Carlo estimator, which enables an adaptive strategy to improve the quality of the metamodel near the limit state. The discretization of the mechanical problem leads to an error in the structural response and thus in the estimation of the probability of failure. To control that error, multi-fidelity kriging was introduced in 2020. However, it requires an expensive a posteriori discretization error estimate that is not available for every mechanical problem. This work exploits a priori knowledge of the FEM convergence rate to build a kriging metamodel parameterized by mesh size. This metamodel allows the probability of failure to be computed for any mesh size through Monte Carlo sampling, and thus to check for mesh convergence.
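A minimal sketch of the kriging/Monte Carlo coupling the abstract describes, with an invented analytical limit state standing in for the FEM solver (the mesh-size parametrization itself is not reproduced):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential (kriging) correlation matrix."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

# Hypothetical limit-state function standing in for a FEM response;
# failure corresponds to g(x) < 0 (illustration only).
g = lambda X: 3.0 - X[:, 0] - X[:, 1]

rng = np.random.default_rng(2)
Xtr = rng.standard_normal((30, 2))      # a few "expensive FEM calls"
ytr = g(Xtr)

K = rbf(Xtr, Xtr) + 1e-6 * np.eye(len(Xtr))
alpha = np.linalg.solve(K, ytr)

def predict(X):
    """Kriging mean and prediction variance (zero-mean prior)."""
    Ks = rbf(X, Xtr)
    mu = Ks @ alpha
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 0.0)

# Monte Carlo on the cheap surrogate instead of the FEM model
Xmc = rng.standard_normal((100_000, 2))
mu, var = predict(Xmc)
pf = np.mean(mu < 0)

# U-learning criterion: a small U flags points where the sign of g is
# uncertain, i.e. where the next FEM call should refine the metamodel.
U = np.abs(mu) / np.sqrt(var + 1e-12)
print(f"pf = {pf:.4f}; next refinement point: {Xmc[np.argmin(U)]}")
```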

09:00-10:20 Session TH1G: Nuclear Industry
Location: Atrium 3
Development of Containment Failure Probability and Uncertainty Analysis Program, COFUN-M
PRESENTER: Byeongmun Ahn

ABSTRACT. The main features of COFUN are the quantification of the containment state by point estimate values, importance analysis, sensitivity analysis and uncertainty analysis for internal and seismic events. In the uncertainty analysis process of a Level 2 PSA, the Level 1 PSA uncertainty propagates to the Level 2 PSA uncertainty. In order to perform uncertainty analysis in the COFUN code, the input must be prepared. The basic event probability data, including the probability distributions, are needed and are used for the Level 1 PSA uncertainty analysis. To perform uncertainty analysis in a Level 2 PSA, it is necessary to define the split-fraction branch points and to define their probability distributions and priorities. Once the input is ready, uncertainty analysis is performed via Monte Carlo sampling. Finally, the uncertainty analysis is performed for the end points of the PDS, CET, STC and LERF. One of the outstanding characteristics of COFUN is its ability to perform Level 2 PSA for multi-unit accidents. One format of a multi-unit Level 2 PSA is the combination of STCs, but the combinations can be too numerous to yield insight by themselves. COFUN therefore provides user-defined combinations by which STC combinations are classified.
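The sampling scheme described, propagating sampled basic-event probabilities through to a Level 2 end point via a sampled split fraction, might be sketched as follows; the distributions and the two-event cutset are illustrative assumptions, not COFUN's model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Hypothetical Level 1 inputs: two basic events with lognormal
# uncertainty (an illustration of the sampling scheme only).
be1 = rng.lognormal(np.log(1e-3), 0.5, n)
be2 = rng.lognormal(np.log(5e-4), 0.7, n)
cdf = be1 * be2              # minimal-cutset-style product -> sampled CDF

# Level 2: sampled split fraction at a containment event tree branch point
split = rng.beta(2, 8, n)    # fraction of core damage leading to early release
lerf = cdf * split

p5, p50, p95 = (np.percentile(lerf, q) for q in (5, 50, 95))
print(f"LERF p05={p5:.2e}  p50={p50:.2e}  p95={p95:.2e}")
```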

Development of the deep learning based fast simulation for reducing the uncertainty in probabilistic safety assessment
PRESENTER: Hyeonmin Kim

ABSTRACT. Probabilistic Safety Assessment (PSA) has been used to estimate the risk of Nuclear Power Plants (NPPs). For more accurate results, the PSA should be performed as realistically as possible. The problem, however, is that the number of accident scenarios increases drastically for a complicated system comprising many subsystems or components, such as an NPP. Indeed, this problem has been the main obstacle to reducing uncertainty in PSA. To handle this problem, as in the previous study, DeBATE (Deep-learning Based Accident Trend Estimation) was suggested and developed [1]. DeBATE can provide the whole trend of the key process parameters calculated by any physical model after being trained on the associated inputs. The schematic diagram of DeBATE is shown in figure 1. In order to confirm the feasibility of DeBATE, a total of 80,000 accident scenarios of Steam Generator Tube Rupture (SGTR) and Main Steam Line Break (MSLB), reflecting standard post-trip actions, were prepared with MARS-KR. As a result, the output was generated in less than 0.2 s with about 5% error on average.
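The idea of training a network to emit a whole parameter trend in one forward pass can be sketched with a toy surrogate; the decay-curve "simulator", network size and training setup are all invented stand-ins for the MARS-KR data and the actual DeBATE architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for an expensive thermal-hydraulic simulation: a decay
# trajectory y(t; k) sampled at 20 time points (illustration only).
t = np.linspace(0.0, 1.0, 20)
def simulate(k):
    return np.exp(-k * t)

ks = rng.uniform(0.5, 5.0, 500)
X = (ks[:, None] - ks.mean()) / ks.std()         # normalized scenario input
Y = np.array([simulate(k) for k in ks])          # whole trend as the target

# One-hidden-layer MLP trained to emit the full trajectory in one shot
W1 = rng.standard_normal((1, 32)) * 0.5; b1 = np.zeros(32)
W2 = rng.standard_normal((32, 20)) * 0.1; b2 = np.zeros(20)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                     # forward pass
    P = H @ W2 + b2
    G = 2.0 * (P - Y) / len(X)                   # dMSE/dP
    GH = (G @ W2.T) * (1.0 - H**2)               # backprop through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(0)

err = np.abs(P - Y).mean()
print(f"mean absolute error of the surrogate trends: {err:.4f}")
```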

PRESENTER: Isabel Marton

ABSTRACT. The IAEA safety guides establish that the technical obsolescence of Structures, Systems and Components (SSCs) important to safety must be managed proactively and within a program for the management of obsolescence. According to these requirements and their regulations, several licensees have implemented an obsolescence management program as part of their aging management program.

The new Periodic Safety Review philosophy underscores the importance of anticipating and guaranteeing safe NPP operation in the extended period and emphasizes the principle of licensee self-assessment. In this process, an advanced Probabilistic Safety Assessment (PSA) is necessary not only to evaluate the current plants' safety but also to forecast and allow minimizing the impact on risk of aging and obsolescence. These facts have made it necessary to develop a new generation of RAM and risk models that can be integrated into this PSA.

Since the early nineties, several analytical RAM models have been proposed in the literature to establish the relationship between the RAM criteria and the variables of interest for the decision-making process. Many works explicitly consider the effect of aging and of the maintenance and testing activities performed, and some of them consider the failure modes and the equipment behavior to determine the adequacy of the model. However, there is a lack of models that explicitly incorporate obsolescence and its management strategies.

In this context, this paper focuses on a review of the capability of current RAM models to incorporate the effect of obsolescence and its management strategies. The study departs from the age-dependent RAM models developed and published in the literature in previous works, which will be extended in their level of detail to explicitly reflect the effect of technological obsolescence. Finally, the conclusions of the research are presented, with recommendations about the necessary improvements of the current models.

Application of Layerwise Relevance Propagation for Explaining AI Models in Nuclear Field
PRESENTER: Seung Geun Kim

ABSTRACT. Recently, neural networks have been widely applied in various fields due to their superior performance. Accordingly, many studies on applying neural networks to the nuclear field have been conducted to enhance the safety of nuclear power plants (NPPs). However, due to the black-box nature of neural networks and the safety-critical character of NPPs, such studies share common limitations from the perspectives of reliability and practicality. To deal with the explainability problem of neural networks, numerous explainable artificial intelligence (XAI) methods have been proposed. Most existing XAI methods can be classified into two categories. The first category comprises XAI methods that provide model-wise explanations. These methods deduce explanations of the model itself by extracting features within hidden layers or directly accessing the model parameters. In contrast, the second category comprises XAI methods that provide input-wise explanations. These methods deduce explanations for a specific input by conducting sensitivity analysis or relevance backpropagation. To make neural network models sufficiently reliable and practical for the nuclear field, each category of XAI method should be thoroughly considered. However, only a few studies have dealt with adopting XAI methods in the nuclear field. In this study, layer-wise relevance propagation (LRP), one of the representative XAI methods, was adopted. LRP explains a neural network model's output by deducing the relevance between each part of the input and the corresponding output, and has shown better performance in most cases compared to other XAI methods. The authors applied LRP to two neural network applications in the nuclear field: fast running and scenario classification. For the experiments, simulations were conducted and the acquired data were used for model training. 
The results revealed that the application of LRP can provide explanations of a model's output, which can be used to enhance the reliability and practicality of neural network models in the nuclear field.
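The relevance backpropagation underlying LRP can be illustrated on a minimal dense ReLU network; the weights, input and epsilon-rule implementation below are a generic sketch, not the authors' trained models:

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny two-layer ReLU network with random weights, standing in for a
# trained scenario classifier (invented purely to illustrate the
# relevance backpropagation mechanics).
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 1))

x = np.array([1.0, -0.5, 2.0, 0.3])
a1 = np.maximum(0.0, x @ W1)          # forward pass (bias-free for clarity)
out = a1 @ W2

def lrp_epsilon(a, W, R, eps=1e-6):
    """LRP epsilon rule: R_i = a_i * sum_j W_ij * R_j / (z_j + eps*sign(z_j))."""
    z = a @ W
    s = R / (z + eps * np.sign(z))
    return a * (W @ s)

R1 = lrp_epsilon(a1, W2, out)         # relevance at the hidden layer
R0 = lrp_epsilon(x, W1, R1)           # relevance on the input features
print("input relevances:", R0)
print("conservation check:", R0.sum(), "vs output", out[0])
```

With zero biases the rule approximately conserves relevance, so the input relevances sum back to the network output.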

09:00-10:20 Session TH1H: Automotive Industry
Location: Cointreau
PRESENTER: Marco Arndt

ABSTRACT. In the component volume of cast aluminum housings, an inhomogeneous distribution of material properties can occur, caused mainly by the casting process and subsequent machining such as thread forming. This also influences the component strength of threads. Probabilistic reliability analysis of stress versus strength is necessary in such a scenario to obtain valid reliability results. Reliability calculations based on empirically determined safety factors, like those proposed in different guidelines [1], will not lead to reasonable results. FEM simulations, analytical investigations and testing of component tolerances are used to determine the probability distributions of stress and strength. Since the multiplication theorem for independent probabilities provides a way to determine reliability by convolution, this method is used to quantify the effectively resulting reliability value [2]. In this paper, a screw connection in the aluminum housing of an electric drive unit is used. The resulting probability of failure depends on the ratio of the mean values of stress and strength and on the corresponding standard deviations. The resulting probability of failure is PA = 31.87% with an analytically determined stress distribution fB (x) and an experimentally determined strength distribution fF (x). Finally, three possibilities of adaptation using the proposed calculation method are presented. To illustrate reliability optimization toward an exemplary target value of 1 ppm, the existing standard deviation of strength needs to be reduced to 17.3% of its initial value at the same mean value. Alternative options are raising the mean value by a factor of S = 1.591 at the same σF, or optimizing both mean value and standard deviation, for instance with σ according to [1]. In that case, the strength distribution must be increased by the factor SF = 1.216. 
With this approach, reliability targets can be achieved by statistically tolerancing individual component features instead of changing the entire component dimensions.
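For normally distributed stress and strength, the convolution reduces to a closed form, which a short sketch can evaluate; the numeric inputs below are illustrative, not the paper's fB and fF:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_failure(mu_stress, sd_stress, mu_strength, sd_strength):
    """Stress-strength interference for two normal distributions:
    P(strength < stress) = Phi((mu_B - mu_F) / sqrt(sd_B^2 + sd_F^2))."""
    z = (mu_stress - mu_strength) / sqrt(sd_stress**2 + sd_strength**2)
    return phi(z)

# Illustrative values only (the paper's distributions are determined
# analytically and experimentally and are not given here).
print(p_failure(100, 15, 110, 18))   # strongly overlapping distributions
print(p_failure(100, 15, 200, 10))   # large safety margin
```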

PRESENTER: Zdenek Vintr

ABSTRACT. The wide application of highly sophisticated electronic components in combat vehicles is a key trend of recent times. Transmission systems (including engines and automatic transmissions), weapons, security, reconnaissance, and communication systems based on digital technology have been implemented in combat vehicles. Given the increasing number of electronic devices in combat vehicles, this can be described as the digitization of military technology. Electronic devices experience vibration stress while a combat vehicle operates over terrain. Vibration may cause structural collapse, mechanical failure, fracture, cracking, physical breakdown of seals, complete disconnection, or interruption of electronic parts. In order to ensure the reliability of electronic devices during operation, it is necessary to conduct vibration tests to verify their function under severe vibration environments. The presented study applies accelerated reliability testing methods to predict the reliability of electronic devices built into combat vehicles. A special test system has been designed to practically realize the pertinent accelerated tests. The system consists of a platform base for placing electronic devices, hydraulic equipment to vibrate the base, a hydraulic control system and instrumentation of system performance. The paper presents the methodology of the realized accelerated tests and demonstrates the procedure for evaluating the test results. Selected results of electronic device reliability prediction based on the test data are also presented.

PRESENTER: Zdenek Vintr

ABSTRACT. Electronic elements are increasingly used in a variety of applications, including civilian and military vehicles. Evaluating the lifetime of these elements helps to control the reliability of the devices using them. With the development of manufacturing technology, electronic elements have become products with high reliability and long service life. Hence, accelerated testing (AT) is used to evaluate their reliability, reducing the time and cost of testing. The paper focuses on reliability prediction for electronic parts of combat vehicles using accelerated reliability testing. An extensive experiment has been carried out to predict the reliability/life of miscellaneous electronic parts used in the design of a combat vehicle with respect to various stress factors. As an example of the tests realized, the paper presents the procedure and results of accelerated testing of LEDs based on multifactor stress. Three types of accelerating factors have been applied: high temperature, ON/OFF cycling and current load. The paper describes the methodology of the realized accelerated tests and demonstrates the procedure for evaluating the test results. Two methods are used to evaluate the accelerated test data: a Wiener process-based model and the classical approach based on international standard IEC 60605-4. The paper compares the results of both approaches and presents selected outputs of the proposed method for predicting LED reliability under various operating conditions.
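Accelerated-test evaluation of the kind described typically rests on an acceleration factor between test and use conditions. As a hedged illustration (the Arrhenius model and the numeric values below are generic assumptions, not the paper's multifactor model):

```python
from math import exp

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_test_c):
    """Arrhenius acceleration factor between test and use temperature."""
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_test))

# Hypothetical example: 0.7 eV activation energy, 85 C test vs 40 C use
af = arrhenius_af(0.7, 40, 85)
print(f"AF = {af:.1f} -> 1000 h of testing covers about {1000 * af:.0f} h of field life")
```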

D-DEG: A Dynamic Cooperation-Based Approach for Reducing Resource Consumption in Autonomous Vehicles
PRESENTER: Tobias Kain

ABSTRACT. Operating a vehicle autonomously is a resource-intensive task. Since resources, like computing power, energy, and bandwidth, are limited in such vehicles, methods for reducing resource consumption are required. In this paper, we propose D-DEG, a cooperation-based approach for autonomous vehicles that is capable of reducing resource usage. The basis of our approach is that vehicles that are in close proximity and that use the same sensor and software set perceive and compute similar data. The idea is to share information, e.g., sensor data and application outputs, between vehicles using VANET (Vehicular Ad-Hoc Network) technologies. The transferred information is used to achieve resource preservation, whereby our approach aims to reduce resource consumption by degrading sensors and applications. To this end, we introduce the so-called dynamic-degradation evaluator. This component analyzes the information received from other vehicles to determine whether sensors and/or applications can be degraded. Besides the data received from other vehicles, the dynamic-degradation evaluator also considers the current operational design domain (ODD) and the system state, which includes, for instance, information about the current resource utilization and the safety level of the vehicle, to determine whether degradation operations can be performed. Those degradation operations can range from decreasing the sampling rate of a sensor or the output rate of applications to shutting down sensors or applications, respectively.

09:00-10:20 Session TH1I: Security
Location: Giffard
PRESENTER: Stein Malerud

ABSTRACT. The lack of relevant and reliable information is a recurring challenge in analysis and decision-making related to security issues. Security analysis often relies on input from subject-matter experts (SMEs). SME judgments are subjective, and the personal traits and the cognitive and motivational biases of the experts may impair the quality of the information. If not handled properly, this can threaten the validity and credibility of the collected information, the results of the analysis and, in the end, the supported decisions. Hence, we need structured, transparent and reliable knowledge elicitation methods for extracting information from SMEs.

In this paper, we explore how analytical wargames can be used to facilitate knowledge elicitation from SMEs. The main goal is to obtain valid and credible expert judgements as input to analysis of security challenges.

“Analytical wargaming” is a generic term for various types of games tailored to support data collection and analysis related to conflict and competition. Wargames vary in scope, size, and complexity, from simple table-top seminar games to more advanced computer-assisted games. Which type of wargame is suitable for a given analysis depends on several factors, not least the information requirements, the available analysis resources and the availability of SMEs. Given that wargames are adversarial in nature, they lend themselves well to the analysis of security-related problems.

We address questions such as: How useful are analytical wargames for supporting information collection? How is the quality of collected information affected by factors such as game formats and type of players? How can we improve our ability to use games for information collection?

We argue that analytical wargames are well suited to support information collection in the different phases of the analytical process. Wargames provide settings that not only enable creative thinking, but also bring structure and traceability to the process of extracting information from SMEs. We base our findings on experiences from using wargames for information collection in support of decision-making related to national security challenges.

PRESENTER: Meilin Schaper

ABSTRACT. The need to protect air traffic control against attacks and to detect security incidents is widely accepted. Nevertheless, depending on the systems and procedures, it is sometimes difficult to decide whether "something is not as it should be". On the one hand, it could be due to a failure, i.e. a safety issue; on the other, it could be due to an intentional interruption or abuse, i.e. a security issue. This paper lists five specific kinds of indications that may be found by analyzing the traffic situation and the radio communication at a controller working position and details how they are detected. Those indications are non-conformant movements, conflicts, unusual clearances/behavior, unauthorized speakers and detected stress. Furthermore, a correlation function is described which determines the security situation indicator. This indicator categorizes the security situation into three different states expressing how likely it is that the detected indications represent a security-relevant situation: “green”, meaning no security-related actions are needed; “yellow”, meaning higher monitoring effort is needed; and “red”, meaning there is most likely a security incident and high attention is recommended. The Traffic Management Intrusion and Compliance System is conceptualized as part of an airport security architecture. It is intended to assist both the air traffic controllers and the operators in a security operation center. The paper closes with the expected benefits of the system.
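The correlation of indications into a green/yellow/red indicator could look roughly like the following; the weights and thresholds are invented for illustration, since the abstract does not specify the correlation function:

```python
# Minimal sketch of fusing detected indications into the three-state
# security situation indicator; weights and thresholds are assumptions.
WEIGHTS = {
    "non_conformant_movement": 2,
    "conflict": 1,
    "unusual_clearance": 2,
    "unauthorized_speaker": 3,
    "detected_stress": 1,
}

def situation_indicator(indications):
    """Map a list of detected indications to 'green', 'yellow' or 'red'."""
    score = sum(WEIGHTS[i] for i in indications)
    if score >= 4:
        return "red"       # most likely a security incident, high attention
    if score >= 2:
        return "yellow"    # higher monitoring effort needed
    return "green"         # no security-related actions needed

print(situation_indicator([]))                                   # green
print(situation_indicator(["conflict", "detected_stress"]))      # yellow
print(situation_indicator(["unauthorized_speaker", "conflict"])) # red
```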

PRESENTER: Matteo Iaiani

ABSTRACT. Cyber threats are becoming a growing concern for industrial facilities characterized by a high degree of automation, especially those that rely heavily on Operational Technology (OT) systems, such as process facilities. Fixed installations where chemical and petroleum products are manufactured and stored (e.g. Seveso sites in the European Union) are of primary concern, since attackers (e.g. terrorists, activists, criminals, disgruntled employees) may exploit the inherently hazardous conditions and trigger events with severe consequences for workers, the population, and the environment. In fact, a cyber-attack, besides economic and reputational damage, can potentially trigger major accidents (e.g. the release of hazardous materials), as happened in 2008, when attackers over-pressurized a section of the BTC (Baku-Tbilisi-Ceyhan) pipeline, causing an explosion, the release of more than 30,000 barrels of oil in an area above a water aquifer, a fire lasting more than two days and outage losses of $5 million a day. In the present study, the cybersecurity-related incidents (CSIs) that occurred in the process industry and similar industrial sectors (chemical, petrochemical, energy production, and water/wastewater sectors) were investigated, pointing out ongoing patterns in cyber-attacks on the process industry, as well as the more effective security countermeasures. This information can be used to support the identification of vulnerabilities, threat scenarios, impacts and security countermeasures within the techniques commonly used to handle security risks in process facilities, such as Security Vulnerability Assessment (SVA) methodologies. The study is based on the development of a database of 82 cybersecurity-related incidents and the analysis of the overall dataset with Exploratory Data Analysis (EDA). 
The time trend (from 1975 to 2020), geographical distribution, distribution among the industrial sectors, impacts of the incidents, and nature of the cyber-attacks (intentional external / intentional internal / accidental) were investigated, yielding important findings. The attacks proved able not only to affect the company Information Technology (IT) system, a threat common to several business sectors, but also to manipulate the BPCS (Basic Process Control System) and the SIS (Safety Instrumented System), with consequently more severe impacts. Finally, the analysis of a subset of incidents with a more detailed description allowed for the identification of the general phases of a cyber-attack on the IT/OT systems of a process facility.

09:00-10:20 Session TH1J: Resilience Engineering
Location: Botanique
PRESENTER: Rundong Yan

ABSTRACT. Since the middle of the last century, the safety of nuclear power has been a long-standing concern of the academic community and society. In recent years, it has become even more challenging to ensure the safety of nuclear power plants due to accelerating climate change, because some existing safety systems in the plants are not able to cope with the new issues it introduces. For example, multiple reactor units in the Yangjiang Nuclear Power Plant in China were shut down on 24 and 25 March 2020. The reason for the two shutdowns was not equipment failure, but a situation that had not been given enough attention before: the invasion of a large number of marine organisms (acetes) into the circulating water system. The presence of so many acetes was due to the abnormal rise of the local seawater temperature near the nuclear power plant in March 2020, as warm seawater is more conducive to the reproduction of acetes. They gathered at the inlet of the seawater filtering system, causing a blockage that led to undercooling of the secondary heat transport system and eventually resulted in the shutdown of all 6 reactor units. From this example, it can be inferred that changes in local ecosystems due to climate change can bring more challenges to the safe operation of nuclear power plants in the future. It is therefore wise to take steps in advance to deal with issues of this kind. In response to this need, this paper will analyse the related reactor safety systems and propose and discuss a measure that can potentially improve the resilience of the reactor system. The study will consider the ability of the system to anticipate events, absorb their impact on the system, and recover from perturbations. 
To facilitate the research, a mathematical model will be developed using Petri nets (Yan et al., 2020) to simulate the reliability and health states of the related safety systems, the occurrence of disruptive events, the corresponding responses of the nuclear system, and the possible operation states and recovery of the system from the disruptive events. The research will lay a solid foundation for future nuclear power system design and the resilience assessment of nuclear reactor systems.

PRESENTER: Elena Stefana

ABSTRACT. Resilience is the ability of a system to adjust its functioning prior to, during, or following changes and perturbations. Resilience Engineering represents a new paradigm for improving safety, focusing on how to create resilience in systems. Measuring resilience supports decision-making processes, but it is not a trivial task. Therefore, the objectives of this paper are: (1) to critically analyze the literature on quantitative resilience assessment in the industrial safety domain, and (2) to propose a novel three-tier approach for measuring and assessing the resilience potential of any organization in the same domain. To achieve these objectives, we performed a narrative literature review of the existing approaches, frameworks, and methods that quantify and rank resilience indicators and/or estimate an overall resilience score. Multi-Criteria Decision Making and Bayesian Network approaches are frequently employed for such purposes. The results gathered through the narrative review represent a key source for developing a novel tiered approach. We propose an approach able to quantitatively assess the resilience potential in the industrial safety domain that consists of three tiers. A knowledge-driven tier assesses resilience by using the knowledge of decision makers through techniques involving judgements; a knowledge- and data-driven tier incorporates methods considering both expert knowledge under uncertainty and objective data; and a data-driven tier includes models performing resilience assessments entirely based on data provided by devices and information systems in the organization.

Formalization of questionnaire-based score card risk control and resilience assessment for Critical Infrastructure operators and companies countering Covid-19

ABSTRACT. Classical risk analysis and management, based on frequencies and immediate consequences of threat events, is now increasingly applied by critical infrastructure (CI) operators and within companies such as small and medium-sized enterprises (SMEs). However, there is a lack of consensus on how to efficiently extend it to the assessment of resilience against disruptive events. Accordingly, only a few established fast methods and practical tools assess resilience measures and capabilities before, during, and after disruptive events. This work presents a rapid questionnaire-based scoring approach covering system understanding, resilience and risk assessment, and the selection and monitoring of improvement methods. As opposed to classical risk control approaches, the questions cover resilience concepts, approaches, and dimensions relevant to CI and SME companies facing disruptions, while highlighting connections to classical standards: in particular, resilience cycle steps (with the attributes before, during, and after disruptions), system layers (technical-physical-cyber, organizational, social-societal, and economical-environmental, TOSE), and resilience capabilities (4Rs: robustness, redundancy, resourcefulness, rapidity). The focus of the paper is on the formalization of the approach. First, different types of questions are mapped onto a semi-quantitative scaling approach, so that qualitative discrete answers and quantitative questions with and without scales and thresholds can be treated in a similar way. As the risk and resilience concepts employed are not uniquely defined, it is shown how to attribute questions to several resilience dimensions and attributes. In this way they can also be presented in a logical sequence independent of their contributions to different risk control and resilience generation attributes.
To ensure that aggregated scales at different levels remain comparable, independent of the number of questions covering a specific topic of interest, a normalization approach is introduced that makes scores scale consistently with the number of questions. In addition, weighting of questions is supported. The paper shows which minimum inputs are sufficient for a question to be added to the scoring scheme and to contribute to the different scales in a well-defined way, and discusses which assessment quantities are accessible. These include comparisons of scales for resilience attributes across single (bar diagram), two (matrix assessment), and more resilience dimensions, aggregated scores at various levels, the level of coverage of questions for dimensions and attributes, and the relevancy of questions in terms of their applicability. This enables relative and absolute statistical comparison of results, including across different organizations. For the selected assessment options, examples are provided from the domains of CI operators and SMEs facing Covid-19.
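The normalization and weighting scheme described above can be sketched as follows. This is a minimal illustration of the general idea (mapping heterogeneous answers onto a common scale and dividing by the total weight so that aggregated scores do not depend on how many questions a dimension contains); the scale labels and weights are hypothetical, not taken from the paper.

```python
def norm_score(answer, scale):
    """Map a discrete qualitative answer to [0, 1] by its position on an ordered scale."""
    return scale.index(answer) / (len(scale) - 1)

def dimension_score(questions):
    """Weighted mean over one resilience dimension; dividing by the total
    weight keeps the result independent of the number of questions covered."""
    total_w = sum(q["weight"] for q in questions)
    return sum(q["weight"] * q["score"] for q in questions) / total_w

# Hypothetical ordered answer scale and two questions of one dimension
scale = ["none", "partial", "largely", "full"]
qs = [
    {"score": norm_score("partial", scale), "weight": 1.0},
    {"score": norm_score("full", scale), "weight": 2.0},
]
print(dimension_score(qs))
```

Because every dimension score lies in [0, 1] regardless of question count, bar-diagram and matrix comparisons across dimensions (and across organizations) stay on the same scale.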


ABSTRACT. The paper explores the common ground between cyber resilience and resilience engineering and reports on a literature study of the term cyber resilience as applied to critical infrastructures. The term cyber resilience has gained increasing usage during the last few years. In the safety field of research, resilience engineering has facilitated alternative approaches to safety management in order to cope with complexity. Along with their digitalization, critical infrastructures are becoming increasingly complex, through the introduction of information technology (IT) into already existing operational technologies (OT), and are at the same time exposed to new threats through their connection to the internet. The paper investigates to what extent cyber resilience, as it is defined and used today in the scientific literature on critical infrastructure, represents a symbiosis of the concepts of cybersecurity and resilience engineering, and possibly a new paradigm for cybersecurity practices, similar to how key ideas from resilience engineering have sparked a new paradigm for safety management, "Safety-II". The scope and applications of cyber resilience for critical infrastructure are examined based on a model describing the various levels of operationalization of cyber resilience. The paper also explores whether current interpretations of cyber resilience are, alternatively, a relabeling of existing cybersecurity (best) practices, and rather represent a form of "rebound" or "cyber robustness" that is more loosely connected to resilience engineering concepts. The paper concludes with the implications for further research within the field of cyber resilience for critical infrastructure applications.

09:00-10:20 Session TH1K: Innovative Computing Technologies in Reliability and Safety
Location: Atrium 1
Performance Management of Safety Instrumented Systems for Unmanned Facilities Using Machine Learning: Decision Support System for SIS

ABSTRACT. Performance management of safety instrumented systems (SIS) is a vital part of major accident risk management for oil and gas processing facilities. The requirements for performance management are provided in national regulations and governing standards for SIS, such as IEC 61508 and IEC 61511, and cover how and why performance is to be managed. Many of the tasks related to performance management of SIS are resource demanding, carried out manually, and dependent on the local presence of humans at the facilities. For some future offshore oil and gas facilities that are to be completely unmanned, with restricted access for humans, it is necessary to move to a higher level of automation and autonomy in performance management. This includes the utilization of artificial intelligence (AI), such as machine learning (ML), to determine the ability of the SIS to respond to demands under various operating conditions, based on historical or real-time process data and event data from multiple monitoring systems. The proposed decision support system for SIS (SIS advisor) is intended to indicate whether a given process system is operating normally according to its design intention. With this information, operators will have a better chance of understanding the current operating situation and can therefore make better decisions in response. In most cases, safety instrumented functions (SIFs) in an SIS are fully automated procedures combining sensing of the process condition, logic solving, and activation of the final elements. For specific SIF actions, however, operator intervention is necessary for the SIF to be activated, and the consequences accompanying some of these SIFs can be very severe, e.g., abandoning the ship or total blackout events.
An abnormal operating condition can also be noticed easily from alarm signals generated by the SIS, but these can be distorted by momentary manual overrides due to maintenance or particular events, or by permanent design changes made during the operation phase but not reflected in the SIS. The operator's awareness of such abnormal operating conditions becomes even more critical and essential if the process system is unmanned, or has limited access, both for SIFs that require the operator's manual activation and for ordinary daily operation. This paper proposes how ML can be used to enhance the operator's awareness of the system, and sets up a framework for processing input data into information that supports the operator's decisions. This research is carried out as a part of SFI SUBPRO, a Research-based Innovation Centre within Subsea Production and Processing in Norway.

PRESENTER: Nicholas Nechval

ABSTRACT. It is often desirable to have statistical prediction limits available for future outcomes from the distributions used to describe time-to-failure data in reliability problems [1-4]. For example, one might wish to know whether at least a certain proportion, say γ, of a manufactured product will operate for at least t hours. This question cannot usually be answered exactly, but it may be possible to determine a lower prediction limit L(X), based on a random sample X, such that one can say with confidence 1 − α that at least 100γ% of the product will operate longer than L(X). Reliability statements can then be made based on L(X), or decisions can be reached by comparing L(X) to t. Prediction limits of this type are considered in this paper. A new approach is used to construct unbiased prediction limits and shortest-length or equal-tails confidence intervals for future outcomes under parametric uncertainty of the underlying distributions, through pivot-based estimates of these distributions. The approach isolates and eliminates unknown parameters of the reliability problem and uses the past statistical data as completely as possible. Numerical examples illustrate the proposed approach.
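A limit with the property described above (covering at least 100γ% of the population with confidence 1 − α) is, in standard terminology, a one-sided lower tolerance limit. As an illustration only, here is the classical normal-theory construction via the noncentral t distribution; this is not the paper's pivot-based approach, and the data are simulated placeholders.

```python
import numpy as np
from scipy import stats

def lower_tolerance_limit(x, gamma=0.90, alpha=0.05):
    """Lower limit L such that, with confidence 1 - alpha, at least a
    proportion gamma of the (assumed normal) population exceeds L."""
    x = np.asarray(x, dtype=float)
    n = x.size
    delta = stats.norm.ppf(gamma) * np.sqrt(n)            # noncentrality parameter
    k = stats.nct.ppf(1 - alpha, df=n - 1, nc=delta) / np.sqrt(n)
    return x.mean() - k * x.std(ddof=1)

# Hypothetical time-to-failure sample (hours)
rng = np.random.default_rng(0)
hours = rng.normal(1000.0, 50.0, size=30)
print(lower_tolerance_limit(hours))
```

Comparing the returned L against a required lifetime t then supports statements of the "at least 90% survive beyond L, with 95% confidence" kind.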


ABSTRACT. Nowadays, the search for new sustainable drive alternatives is important to maintain transport and logistics systems and to integrate them into daily life. Electric vehicles (EVs) are thus a feasible alternative for diversifying the vehicular power source matrix and decreasing the impact of fossil fuels. These systems are composed of different equipment, e.g., motor, cooling system, battery. Batteries in particular are the core of an EV, and guaranteeing their safety is crucial to avoid accidents such as gas leakage, fires, and explosions. These batteries are mostly composed of lithium-ion cells, which have a high specific energy density, high cycle life, and low self-discharge. In the Prognostics and Health Management (PHM) context, estimating the State of Charge (SOC) is one of the main steps to ensure the system's reliability; this quantity is the remaining charge within the battery, defined as the ratio of the residual capacity of the battery to its nominal capacity. Such estimates can be obtained by electrochemical experiments, physicochemical models, or from data. Data-driven methods use data patterns to identify the behavior of the studied phenomenon; they can involve machine learning (ML) or deep learning (DL) models. First, we develop a data-driven model whose input consists of monitoring data (e.g., temperature, voltage, current) and whose output is the SOC. Then, we create a web application (WA) using the Streamlit library, which makes it possible to quickly turn data scripts coded in Python into a sharable WA, and which is open source, free, and licensed under the Apache 2.0 license. For deployment, management, and sharing, the user can resort to a free sharing platform provided by Streamlit. In this WA, the user can upload their own dataset, in a specified input format, which serves as additional data for the data-driven model.
In addition, the temperature and the initial SOC serve as pre-inputs that determine which model produces the output. As outputs, the WA offers the point SOC estimate (and, in future versions, an interval prediction), the predicted SOC graph, and the prediction metrics (e.g., root mean squared error and absolute error). The library also supports deploying mobile applications, making the tool more versatile and user-friendly.
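The SOC definition given above (residual over nominal capacity) has a simple physical baseline, Coulomb counting, against which data-driven estimators are commonly benchmarked. The sketch below is that generic baseline under assumed units and sign conventions; it is not the paper's ML model or its Streamlit code.

```python
import numpy as np

def soc_coulomb_counting(current_a, dt_s, nominal_ah, soc0=1.0):
    """Integrate discharge current (A, positive = discharging) over time and
    express the remaining charge as a fraction of the nominal capacity."""
    discharged_ah = np.cumsum(current_a) * dt_s / 3600.0
    return np.clip(soc0 - discharged_ah / nominal_ah, 0.0, 1.0)

# One hour of constant 2 A discharge from a full 4 Ah cell -> final SOC 0.5
soc = soc_coulomb_counting(np.full(3600, 2.0), dt_s=1.0, nominal_ah=4.0)
print(soc[-1])
```

In practice Coulomb counting drifts with sensor bias, which is one motivation for the data-driven estimators using temperature, voltage, and current that the abstract describes.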

10:20-10:35 Coffee Break
10:35-11:35 Session TH2A: Risk Management
Location: Auditorium

ABSTRACT. The distribution of natural gas has become an operation of great concern because of its hazardous conditions. In these systems, accidents can trigger human, environmental, and financial losses; together with operational conditions and failures, these consequences are essential elements of risk assessment and decision-making. Because of conflicting objectives and the resulting complexity, multidimensional risk analysis is fundamental to supporting decisions on pipeline operations. However, the large and dynamic amount of information involved can make it inefficient for managers to plan mitigating actions to avoid losses. In this regard, this paper develops a Decision Support System (DSS) based on a multidimensional risk decision model for natural gas pipeline sections. The DSS comprises two main modules. The first module quantifies and sorts risks, using Utility Theory and the ELECTRE TRI method to assess the level of risk in the sections. The second module conducts a sensitivity analysis that investigates the uncertainty of the input parameters and the resulting modification of the output assessment. The modules' interface allows managers to insert parameter values and obtain information about the multiple dimensions of risk for each section. This information is processed by the system, which provides visualizations and reports of the risk assessment. The DSS developed is useful for continuous operation, control, and risk monitoring, for example to cope with dynamic changes in the pipeline system's input and to allocate resources among pipeline sections. In general, the DSS contributes to structuring the decision-making process of risk assessment for natural gas distribution, respecting the organization's limitations, environmental regulations, and human interference.
Thus, this paper presents a possible way to overcome challenges in planning for imminent risk and to formulate uncertainty scenarios for managing risk in natural gas pipelines.


ABSTRACT. This paper proposes a method for identifying the high-level risks in the Magnetic Particle Inspection (MPI) of ferromagnetic material parts, based on the Analytic Hierarchy Process (AHP) and Bayesian Belief Networks (BBN). Combining probability and impact identifies the most significant risks, which need to be addressed to improve the quality management system and ensure organizational sustainability. The inspection of critical ferromagnetic parts with magnetic particles in the manufacturing and services industry is highly critical, and the correct selection and use of an adequate analysis method to ensure inspection process reliability can avoid part failures and costly accidents. As a methodological approach, the estimated probabilities of the risk factors are loaded into Bayesian Belief Network software to assess the probability of occurrence of undesirable events, and AHP is used to rank the relative importance (effect) of the risks. The combination of probabilities and effects identifies the most significant risks. No evidence of previous work could be found on the use of AHP and BBN for the risk assessment of MPI of critical hardware; as far as the authors are aware, this is the first time this method is being used for this specific process. The novelty of the paper is the combination of Bayesian Belief Networks with AHP to select the most significant risks, and the use of a Goal Tree dashboard to improve quality and sustainability in the inspection of critical parts. The application of the method revealed that the most significant risks in the inspection of critical hardware are related to operator failure, unfavourable control and environment, and negative organizational factors. The paper proposes responses to these risks aimed at preventing failures in the MPI inspection of critical hardware. This paper contributes to the literature in the field of non-destructive inspection of critical parts.
The proposed model also has practical implications and is a valuable source for non-destructive inspection professionals, safety engineers, quality managers, and decision makers in companies, helping them to augment their information and to identify critical risks in the non-destructive inspection of critical ferromagnetic parts. The identification and prioritization of risk factors makes it easier to allocate resources to prevent critical part failures, improve product quality, and ensure organizational sustainability.
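The AHP step used for ranking risk importance is a standard computation: a priority vector from the principal eigenvector of a pairwise comparison matrix, with a consistency check. The sketch below shows that generic step; the comparison values and the three risk factors are hypothetical, not the authors' actual judgments.

```python
import numpy as np

def ahp_weights(A):
    """Priority vector = principal eigenvector of the pairwise comparison
    matrix A, plus the consistency ratio (CR < 0.1 is conventionally OK)."""
    A = np.asarray(A, dtype=float)
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[i].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri

# Hypothetical comparison of three risk factors: operator failure judged
# 3x as important as control/environment, 5x as important as organizational factors.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(w, cr)
```

The resulting weights would then scale the BBN-derived event probabilities to rank the combined risk significance.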


ABSTRACT. Agile software development (ASD) represents a paradigm shift in how organizations manage their software development projects and collaborate with internal and external project stakeholders. ASD thus challenges the common waterfall approach to risk assessment, characterized by strict project specifications, working steps, and milestones, as well as the performability of the common risk management process. The paper addresses the challenges faced by a Swiss Federal Department (SFD) in implementing HERMES in ASD. HERMES is the Swiss open standard project management method for projects in the areas of IT, service and product development, and business organization adjustment, applied by public and private organizations. The paper starts with a literature review to classify challenges in ASD with regard to risk management and risk assessment. A case study concretizes this for SFD purposes. A SWOT analysis is used to evaluate the literature review and interviews. The results of the paper compile governance recommendations for the risk management of ASD projects, mainly in order to concretize the ASD aspect in HERMES.

10:35-11:35 Session TH2B: System Reliability
Location: Atrium 2
Development of a Cause-Effect Relationship Model to Identify Influences on Load Conditions that Cause Bearing Damage
PRESENTER: Thomas Gwosch

ABSTRACT. If damage occurs in a bearing, it generally cannot be clearly traced back to the influences that cause the damaging load, such as temperature, external forces, and the lubricant. The load conditions in technical systems often depend on many influences. This paper presents a model intended to support the identification of the relevant influences on the load conditions that cause bearing damage, in order to enable bearing durability tests. The development of the model is based on a case in which pitting as well as widening of the bearing raceway and edge throw-ups occurred; the influences on the load conditions, and thus on the occurrence of the damage, were unknown in this case. To build the model, findings from damage catalogs and theoretical system analysis were used in combination with experimental investigations of possible influences. The developed model links external influences on the bearing (and thus the influences on the load conditions), operation-dependent parameters in the bearing, physical and chemical mechanisms, and the appearance of the damage. The model can be used to identify influences that lead to main loads, parameters that reinforce damage mechanisms, cycles that are self-reinforcing, and parameters that contribute to multiple damage mechanisms. This supports the formulation of the relevant influences on load conditions for bearing durability tests.

PRESENTER: Hicham Boufkhed

ABSTRACT. Underground pressure pipes are subject to corrosion under various environmental conditions, such as the aggressiveness of the soil, which influences corrosion rates and varies from one region to another along the pipeline. This study focuses on evaluating the reliability of a corroded pipeline under pressure. Degradation of this asset is induced by localized corrosion, resulting in loss of wall thickness; it has a significant effect on the probability of bursting and is influenced by external factors (spatial variability), which leads to the study of the distribution of corrosion rates along the pipeline. The failure probability is computed using Monte Carlo simulation. The work is applied to a case study of an oil pipeline located in Algeria made of API 5L X52 steel. We study the effects of correlations between corrosion defect parameters on the probability of failure, as well as the sensitivity of the design variables in the performance function with respect to the probability of failure.
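The Monte Carlo step described above can be sketched generically: sample the uncertain variables (strength, defect depth, operating pressure), evaluate a burst limit state, and count failures. The limit state below is a crude thin-wall estimate reduced by relative defect depth; it is a stand-in for the corroded-pipe burst models used in practice, and all distribution parameters are assumed for illustration, not taken from the Algerian case study.

```python
import numpy as np

def burst_pressure(sigma_u, t, D, d):
    """Crude thin-wall burst estimate (MPa) reduced by relative defect depth d/t.
    Placeholder for the burst models (e.g. code-based) used in real assessments."""
    return 2.0 * t * sigma_u / D * (1.0 - d / t)

def failure_probability(n=100_000, seed=1):
    rng = np.random.default_rng(seed)
    sigma_u = rng.normal(455.0, 20.0, n)                       # MPa, X52-like strength
    d = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)     # defect depth, mm
    p_op = rng.normal(7.0, 0.5, n)                             # operating pressure, MPa
    p_b = burst_pressure(sigma_u, t=9.5, D=508.0, d=d)         # t, D in mm
    return np.mean(p_b < p_op)                                 # Monte Carlo Pf estimate

print(failure_probability())
```

Correlating the sampled defect parameters (e.g. via a joint distribution for depth and length) is precisely the refinement the abstract investigates.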

10:35-11:35 Session TH2C: Reliability and Maintenance of Networked Systems
Robust end-to-end reliability evaluation for industrial 5G communication systems

ABSTRACT. The end-to-end reliability evaluation of communication systems has long been a challenging task, due to the system's structural, protocol, logical, and physical complexities and the environmental uncertainties in wireless transmission. Nowadays, world-leading communication service providers are ambitiously planning to implement 5G communication systems in industrial settings, to fully replace the currently widespread industrial Wi-Fi. However, a new challenge arises, since the high demands of production quality require highly accurate real-time guarantees of communication reliability for Quality of Service (QoS) targets such as latency and data rate. In this work, we propose a robust framework to compute an end-to-end communication reliability index. The whole communication system is modelled as a queuing network with data-driven, epistemically uncertain stochastic processes. By assuming stochastic monotonicity in the structure of the uncertain distribution set, we can specify the worst-case distributions and then approximate the worst-case system reliabilities through Monte Carlo simulation (MCS). A detailed implementation of the MCS technique for the specified problem and benchmark testing are also presented.
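The worst-case idea above can be illustrated on a toy serial queueing model: if the service rate is only known to lie in an interval, stochastic monotonicity lets us evaluate reliability at the slowest rate and treat the result as a bound. This sketch uses textbook M/M/1 sojourn times (exponential with rate service minus arrival) over a few hops; the rates, hop count, and latency target are assumptions, not the paper's network.

```python
import numpy as np

def latency_reliability(arrival, service, target, hops=3, n=50_000, seed=7):
    """End-to-end latency over serial hops, each modelled as an M/M/1 queue
    whose sojourn time is exponential with rate (service - arrival);
    reliability = P(total latency <= target)."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n)
    for _ in range(hops):
        total += rng.exponential(1.0 / (service - arrival), n)
    return np.mean(total <= target)

# Epistemic uncertainty: the service rate is only known to lie in an interval.
# By stochastic monotonicity, the worst case is the slowest service rate.
worst = latency_reliability(arrival=0.8, service=1.0, target=20.0)
best = latency_reliability(arrival=0.8, service=1.2, target=20.0)
print(worst, best)
```

A real industrial-5G model would replace the M/M/1 stages with data-driven distribution sets, but the bounding logic is the same.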


ABSTRACT. Dynamic attributed networks (DANs) provide powerful means of representing complex systems, e.g., online social networks, financial networks, transactional networks, and wireless sensor networks. To facilitate situation awareness and critical decision-making, anomaly detection in DANs has become an increasingly active area of research in network science. However, most existing methods are only capable of detecting temporal outliers, neglecting the potential benefits of jointly detecting spatial outliers across the entire network. To address this issue, this paper presents a novel approach that is also efficient for large-scale networks. Specifically, we first develop a novel recurrent neural network structure to explore the spatio-temporal correlations of the DANs. Prediction residuals are then monitored through an exponentially weighted moving average (EWMA) control chart. Experiments on synthetic and real-world datasets demonstrate the properties and benefits of the method compared with existing methods in the literature.
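The EWMA monitoring stage described above follows the standard chart recursion. The sketch below applies it to synthetic residuals with an injected mean shift; the smoothing constant, control-limit multiplier, and in-control estimation window are assumed design choices, not those of the paper, and the recurrent-network stage is omitted.

```python
import numpy as np

def ewma_alarms(residuals, lam=0.2, L=3.0, n_ref=50):
    """EWMA chart on prediction residuals: z_t = lam*x_t + (1-lam)*z_{t-1},
    flagged when |z_t| exceeds the time-varying limit L*sigma_z(t).
    The in-control sigma is estimated from the first n_ref residuals (assumed clean)."""
    x = np.asarray(residuals, dtype=float)
    sigma = x[:n_ref].std(ddof=1)
    z, flags = 0.0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        flags.append(abs(z) > width)
    return np.array(flags)

rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0, 1, 100), rng.normal(2.0, 1, 20)])  # mean shift at t=100
alarms = ewma_alarms(res)
print(alarms[:100].sum(), alarms[100:].sum())
```

Small persistent shifts in the residual stream, the signature of a network anomaly under this scheme, accumulate in z and trigger sustained alarms a few steps after onset.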

A Set of System Reliability Metrics for Mobile Telecommunication Network

ABSTRACT. Telecommunication networks are among the most important critical infrastructures in our society, as they transmit information among different people and entities. Starting from the 1960s, telecommunication network operators began to rely on stable, non-stop service to fulfill customers' ever-rising demands for high-quality communication services. Telecommunication network reliability has since become an important research topic as well as a practical concern. After a few decades of research and development, there are several reliability metrics proposed in the literature and a set of Key Performance Indicators (KPIs) widely used by companies in practice. Any KPI anomaly triggers an alert, so operators are typically overwhelmed by KPI alerts every day. Moreover, operators need to invest substantial resources in maintenance to respond to these alerts, which is simply not sustainable. Therefore, there is a strong demand from industry for a system-level reliability metric that serves as the foundation for scientific management of the maintenance activities of telecom networks. However, such a metric is still lacking, in academia as well as in practice. In this work, we propose one metric based on the concept of service reliability to bridge this gap.

10:35-11:35 Session TH2D: Prognostics and System Health Management
Location: Panoramique
A ROC based model to maximize global detection power of a group of detectors
PRESENTER: Pierre Beauseroy

ABSTRACT. In many applications, detectors are implemented to monitor systems and their components [1]. In a complex system, many components must be monitored, so their detectors must be tuned, which always implies choosing a trade-off between false alarms and non-detections for each of them. One question that has not received much attention so far is the tuning of these detection systems. A naive approach is to tune each detector on its own, without taking the complete system into account. The global performance of the whole system (its false alarm rate and its detection rate) then results from tuning choices made without system-level concerns. In previous communications [2, 3] we considered cases where the global system performance can be described as the sum of individual sub-systems, each detector being characterized by its own theoretical [2] or estimated [3] ROC curve. In that case, we showed that the naive approach leads to sub-optimal global performance, which can be significantly improved using an adequate optimization method to tune each detector according to the chosen performance criterion. In our case, the performance criterion is defined in terms of a constraint [4], and the well-known Neyman-Pearson approach [5] has been chosen as the overall detection framework. The optimization is performed using only the ROC curves of the individual detectors in the system. In this communication we tackle the case of a system monitored by a set of independent detectors, each in charge of one component of the system. For example, one can picture a thermal engine with one detector for engine temperature, another for oil quantity, and a third for NOx released. In that case the overall performance is a conjunction of the individual performances, so the global performance expression differs from the previously tackled case [2, 3].
We introduce the analytical expression of the global performance based on the individual ones and propose an optimization approach to find the optimal solution. Here again we demonstrate, analyzing simulation results for two cases, that proper tuning can significantly improve the overall detection performance of a system without changing any of its components, but solely by tuning each detector properly. Based jointly on these new results and the previous ones, we outline a global optimization strategy to tackle detection optimization problems for a much wider set of systems. These results could yield significant benefits in two cases. First, to improve the detection performance of an existing system or to adapt it when performance constraints change over time. Second, when designing a new system, to help choose between detection sensors and technologies that have their own costs and offer different performance profiles. The proposed approach is a step toward enabling comparison between different design options. Future work will extend these results in order to specify, for a given system design option, the minimum performance requirements needed for each detection sub-system so that, put together, they reach a prescribed overall performance. REFERENCES [1] N. Matta, Y. Vandenboomgaerde, J. Arlat. Supervision and Safety of Complex Systems, ISTE Ltd and John Wiley & Sons Inc, 2012. [2] P. Beauseroy, E. Grall-Maës. "Joint optimization of detectors' fleet settings to maximize global detection power", ESREL 2018, Trondheim, Norway, June 2018. [3] P. Beauseroy, E. Grall-Maës. "A probabilistic model to maximize joint power detection of a group of trained detectors", ESREL 2019, Hannover, Germany, September 2019. [4] E. Grall-Maës, P. Beauseroy. "Optimal decision rule with class-selective rejection and performance constraints". IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 2073–2082, 2009. [5] J. Neyman, E.S. Pearson. "On the Problem of the Most Efficient Tests of Statistical Hypotheses". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 231 (694–706): 289–337, 1933.
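For independent detectors the "conjunction" structure has a simple closed form: under an assumed OR-fusion rule (the system alarms if any detector fires), the no-alarm event is the product of the individual silence probabilities. The sketch below computes the resulting global rates; the fusion rule and the three operating points are illustrative assumptions, not the paper's optimized solution.

```python
import numpy as np

def global_rates(pfa, pd):
    """OR-fusion over independent detectors: the system stays silent only if
    every detector is silent, so the no-alarm probability is a product."""
    pfa = np.asarray(pfa, dtype=float)
    pd = np.asarray(pd, dtype=float)
    global_pfa = 1.0 - np.prod(1.0 - pfa)   # healthy system, at least one false alarm
    global_pd = 1.0 - np.prod(1.0 - pd)     # all components faulty, at least one alarm
    return global_pfa, global_pd

# Hypothetical three-detector engine example (temperature, oil quantity, NOx)
gfa, gpd = global_rates([0.01, 0.02, 0.01], [0.90, 0.85, 0.95])
print(gfa, gpd)
```

Because the global false alarm rate grows with every detector added, tuning the individual thresholds jointly, e.g. along each ROC curve under a Neyman-Pearson constraint on the global rate, is exactly where the optimization described above pays off.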

Forecasting Components Failure Using ACO for Predictive Maintenance
PRESENTER: Ankit Gupta

ABSTRACT. Early detection of faults and prioritized maintenance is a necessity for vehicle manufacturers, as it enables them to reduce maintenance costs and increase customer satisfaction. In this study, we propose to use a type of Ant Colony Optimization (ACO) algorithm to diagnose vehicle faults. We explore the effectiveness of ACO for solving a classification problem in the form of fault detection and failure prediction, to be used for predictive maintenance by manufacturers. We show experimental evaluations on real data captured from heavy-duty trucks, illustrating how optimization algorithms can be used as a classification approach to forecasting component failures in the context of predictive maintenance.


ABSTRACT. The ability to accurately predict the remaining useful life (RUL) of rolling bearings plays an important role in the condition monitoring and maintenance of rotating machinery. Some practical challenges are related to the selection of optimal degradation features for effective and accurate RUL prediction, and few works are dedicated to selecting suitable features for RUL prediction based on physical modelling. This paper proposes a study for predicting the RUL of rolling bearings considering various features and prediction methods based on physical fault crack growth modelling. Three feature sets (RMS, level crossing, and multiple features (MFs)) are considered as degradation indicators. A nonlinear least squares method is used for initial parameter estimation, and the Bayesian method and particle filtering are applied to update the values of the parameters of the physical model. The proposed framework is demonstrated using real test data provided by the FEMTO-ST institute. The results of the two methods are compared for the three indicators. The MFs indicator has the lowest error in the RUL estimation compared to the other indicators. Particle filtering is found to perform best when data are collected in real time, whereas the Bayesian method is more suitable than particle filtering when batch data are available.
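The particle-filtering update mentioned above has a generic core: reweight parameter particles by the likelihood of each new degradation observation, then resample. The sketch below applies this to a toy exponential degradation model; the model form, prior, and noise level are illustrative assumptions, not the paper's crack-growth model or the FEMTO-ST data.

```python
import numpy as np

def pf_update(particles, weights, obs, obs_std, model, seed=0):
    """One particle-filter step: reweight degradation-parameter particles by
    the Gaussian likelihood of the new observation, then resample."""
    pred = model(particles)
    lik = np.exp(-0.5 * ((obs - pred) / obs_std) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy exponential degradation y = exp(theta * t), observed at t = 10
t, theta_true = 10.0, 0.05
particles = np.random.default_rng(1).uniform(0.0, 0.1, 2000)   # prior on theta
weights = np.full(2000, 1 / 2000)
obs = np.exp(theta_true * t) + 0.01                            # one noisy measurement
particles, weights = pf_update(particles, weights, obs, 0.05,
                               lambda th: np.exp(th * t))
print(particles.mean())
```

Repeating this step as measurements arrive concentrates the particles around the true degradation parameter, from which the RUL distribution is obtained by extrapolating each particle's trajectory to the failure threshold.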

10:35-11:35 Session TH2E: Organizational Factors and Safety Culture
Location: Amphi Jardin
Human Reliability Analysis as Pedagogical Tool
PRESENTER: Alaide Bayma

ABSTRACT. Chemical accidents involving explosions, large fires and leakages of hazardous substances occurring during transport, storage and industrial production of chemicals constitute a real challenge to health, environmental and industrial safety professionals. Human factor is one of main causes of fire and explosion accidents in petrochemical enterprises. Safety chemical industry process depends on many factors, one of them is the safety culture. Great efforts have been made for improving safety culture among operators and all agents. The technical education institution of chemical process that trains and prepares professionals are one of them fronts. The purpose of this article is to present an HRA (Human Reliability Analysis) as pedagogical tool to increase the safety culture of students and professionals from a technical education institution of chemical process, through routine maneuver in the prototype process unit and failure simulations, and to evaluate the effectiveness of the training given. The technical education institution with all attributes, including safety culture, and that it is willing to cooperate with this innovator project in formatting professionals and preparing workforce for Brazilian industry, it is National Service of Industrial Training - SENAI, sited in the São Paulo industrial and metropolitan region, Brazil. In order to evaluate students' and professionals' interface, it is proposed a method for analyzing the human interaction within the system to establish a generic causal framework aiming at the study of the human error mechanism. This analysis is proposed through the Bayesian Networks approach supported by Fuzzy Logic whose application is to model the performance shape factors and checking through a causal inference and diagnosis, which factors most influence in the performance of the students operation at prototype process unit. 
The results recommend a design reevaluation of the prototype process unit regarding the human interface and instruction procedures, the promotion of students' critical thinking about human errors, and more practical training.
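The Bayesian-network reasoning described above can be sketched in a few lines; the two performance shaping factors (training, stress), their discrete states, and every probability below are hypothetical placeholders rather than values from the SENAI study, and the fuzzy-logic layer is omitted:

```python
# Minimal discrete Bayesian network: two performance shaping factors
# (training, stress) influence the probability of an operator error.
# All probabilities are illustrative placeholders, not study data.

P_training_low = 0.3          # P(Training = low)
P_stress_high = 0.4           # P(Stress = high)

# Conditional probability table: P(Error = yes | Training, Stress)
P_error = {
    ("low", "high"): 0.50,
    ("low", "normal"): 0.25,
    ("adequate", "high"): 0.15,
    ("adequate", "normal"): 0.05,
}

def prior_error():
    """Marginal P(Error = yes) by enumerating the parent states."""
    total = 0.0
    for t, pt in (("low", P_training_low), ("adequate", 1 - P_training_low)):
        for s, ps in (("high", P_stress_high), ("normal", 1 - P_stress_high)):
            total += pt * ps * P_error[(t, s)]
    return total

def posterior_training_low_given_error():
    """Diagnostic inference: P(Training = low | Error = yes) via Bayes' rule."""
    joint = sum(
        P_training_low * ps * P_error[("low", s)]
        for s, ps in (("high", P_stress_high), ("normal", 1 - P_stress_high))
    )
    return joint / prior_error()
```

Forward inference gives P(Error = yes) = 0.168 for these numbers, and the diagnostic query raises P(Training = low) from the prior 0.3 to 0.625 once an error is observed, which is the kind of "which factor most influences performance" question the abstract refers to.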

PRESENTER: Trond Kongsvik

ABSTRACT. By means of a short literature review, this paper provides an overview of the current state of knowledge regarding the scope and causes of underreporting of adverse events, and possible measures to stimulate reporting. Underreporting is understood as the lack of reporting or registration of accidents, near-accidents or deviations that, according to government and/or company-internal requirements, should have been reported or registered. Reporting of adverse events is carried out at three levels: (1) individual workers report to their own company, (2) contractor companies report to operator/hiring companies, (3) operator/contractor companies report to the authorities. Underreporting can be an issue at all three levels and may also propagate from one level to the next; incidents that are not reported at the individual level will not reach companies or the authorities. Furthermore, there may be different sources of underreporting at the different levels, and also a need to differentiate measures. Therefore, the literature review addressed these three levels separately. The literature search was limited to scientific papers that could be obtained from acknowledged databases, and resulted in 35 articles deemed relevant to the topic. A majority of the articles considered individuals reporting to their own company (level 1). A general finding in different studies was that less than 50% of incidents were reported, with considerable variation. Nine sources of underreporting were identified at this level, including lack of feedback, fear of reprisals/negative attention, aspects of professional identity, and company size. Several of these were also relevant for levels 2 and 3. At level 2, trust and power relations were distinct sources. At level 3, bureaucratic reporting routines and underlying structural changes in working life were found to be relevant. A differentiated overview of proposed measures is presented and discussed.


ABSTRACT. Among the major risk reduction measures, basic knowledge and awareness of safety issues in various environments must be highlighted. Knowledge of the main concepts of safety underpins a good safety culture, and as this knowledge grows, operators become more willing to adopt the prescribed prevention and protection measures in the work environment. In addition, good knowledge can push operators to actively participate in data collection campaigns and proactive safety measures. In fact, the safety regulations of various countries prescribe training and information on safety in the workplace. To be effective, these training and information measures require operators to have a basic safety culture and knowledge, but little data is available on the level of basic knowledge possessed by the general population. Culture and basic knowledge about safety are acquired during the development phase of the individual. To evaluate how this knowledge is acquired, a data collection on the safety knowledge of children in primary and secondary schools was carried out. In particular, about 1600 students aged between 7 and 15 (around 800 from primary schools and around 800 from secondary schools) from 5 different schools were involved in the data collection. The schools where the data collection was carried out are located in the city of Turin, Italy. Data collection was carried out through a game-based approach that required answering a short series of multiple-choice questions, which varied according to the age of the participants. Based on the results obtained, this article presents how children's knowledge of safety varies with age, gender and the location of the school (and consequently the socio-economic conditions of its neighborhood).

10:35-11:35 Session TH2F: Critical Infrastructures

ABSTRACT. The logistics of a country is essential to the subsistence and development of its population. In a global market, the challenge is to move goods quickly, reliably, and economically. Maintenance is a major player in the operational performance of a country's infrastructure. The functionality, safety, productivity, comfort, image, and conservation throughout the entire life cycle of ports, airports, roads, and railway connectivity depend on the maintenance function. As a peculiarity, this kind of maintenance is generally carried out via concession contracts or operating subsidiaries. After conducting an exploratory analysis, the authors identified the main trends, methodologies, research opportunities, and risks, as well as recommendations to reduce hazards in contracts. We found that the number of publications about concession models and infrastructure contracts has been increasing; nevertheless, most of the works correspond to BOT-type (build-operate-transfer) contracts, mainly focused on roads. Therefore, there is a need to expand research on contracts that also involve the maintenance of ports, airports, and other infrastructure, particularly since we confirmed the evidence that PPP (public-private partnership) contracts offer great advantages for contractors to provide high-quality infrastructure. We also identified the four elements most cited as key factors for an adequate concession contract: 1) taking care of financial viability, 2) procuring collaborative work, 3) shielding against political instability, and 4) being careful in the estimation of the contract concession period.

How Corona Crisis Affects Critical Flows – a Swedish Perspective

ABSTRACT. The functioning of modern societies is deeply embedded in flow dependences, with high reliance on timely deliveries of goods and services. Some of these, termed critical flows, are particularly important (Lindström & Johansson, 2020), e.g. energy, food, water, pharmaceuticals, and digital services. The infrastructures needed to uphold flows often transgress national borders, and disruption of critical flows holds the potential to escalate into a wide-spread crisis. Continuity of critical flows is hence of utmost importance under various types of stresses (Aula et al., 2020). With the corona crisis, closed borders and altered demand patterns have followed from the worldwide strain. Here the aim is to explore what effect the ongoing corona pandemic has had on critical flows in Sweden to date. The corona crisis, strongly characterizing 2020, has dominated news and media. The crisis has in many respects had devastating consequences on people's lives and health, and the Swedish healthcare sector has been severely stressed. Many studies have hence focused on this sector, but less attention has been given to the impact on other critical sectors, which is the aim here. A grey-literature scoping study of the Swedish printed press - news agency, national and metropolitan press, and specialist and trade press - throughout 2020 was performed using the media database Retriever Research. The aim was to find evidence of impact on critical flows within four societal sectors: telecommunication, food & water, transportation, and energy. The search resulted in a collection of 4693 news articles, debate & opinion articles and reportages. These articles were then subjected to a step-wise selection process, leaving 790 articles relevant to the topic, of which 207 were deemed relevant for further study through a categorization process and a content analysis.
Tentative results point to many prophecies and predictions of shortages in supplies and impacts on critical flows with associated consequences, but few confirmed and reported effects within the studied sectors. Any temporary unavailability of certain goods was short-lived and seems mostly to have originated from short-term changes in demand patterns, rather than from e.g. non-functionality of transport systems, disrupted supply chains, or closed borders. This is possibly a bright spot in the otherwise dark times, indicating that Swedish critical flows are generally resilient to this type of pandemic.


ABSTRACT. The fast-growing occurrence of unexpected events affecting Critical Infrastructure (CI) systems in recent years has fostered a shift from a protection-focused approach to CI Resilience (CIR). In this context, the increasing number of interdependencies, which generate domino effects and cascading failures, has led to the call for establishing collaborative approaches and partnerships at the regional, national or international level. To support and implement CIR strategies, governments and CI operators often rely on Good Practices (GPs), generally defined as methods or techniques that are applied to solve existing problems, producing effective results and bringing benefits to the users. Despite the high number of GPs, they are often insufficient to cover the wide spectrum of capabilities required for effective Emergency Management (EM). In this study, the systematic analysis and review of scientific literature and European projects in the CIR domain led to the identification of 53 GPs that have proven to be effective in managing CIR. To enable comparison among the GPs, the study proceeds with the development of a framework for classifying and assessing GPs according to their application context, the activities and functionalities covered, and the EM capabilities they are able to support. From a research perspective, the framework offers a robust background for future assessment and benchmarking of CIR-related GPs; it is also useful for practitioners to assess and select the most suitable GPs under different institutional and operational contexts.

10:35-11:35 Session TH2G: Asset management
Location: Atrium 3
Optimizing condition monitoring retrofitting decisions for interdependent multi-unit systems under dynamic uncertainty

ABSTRACT. In many industries, the employed maintenance policies have contributed to the concentration of asset replacements in a short period of time. Thus, the number of O&M activities increases, leading to rising operational costs that are not compatible with the available resources. Moreover, these assets encompass multiple failure modes, which reduce asset availability and influence their longevity. Because asset degradation is stochastic, a considerable amount of uncertainty is associated with this problem. Recent advances in monitoring technology may foster a reduction in degradation uncertainty, but the corresponding investment plan must be carefully designed.

Bearing this in mind, we propose a methodology to determine the investments in the installation of monitoring equipment, accounting for their impact on the maintenance budget for O&M activities for a resource-dependent asset portfolio with multiple failure modes. The budget is shared between multiple assets and must be determined a priori and managed throughout an established time horizon. Since investing in monitoring equipment requires substantial capital due to the system size, decision makers have to define which monitoring technology will be installed on a given asset, and when. Hence, not every asset may have the same monitoring technology and, consequently, the same degradation uncertainty. We formulate the problem as a stochastic optimization problem to capture the dynamic uncertainty in the assets' condition. Due to its inherent complexity, we employ a meta-heuristic based on a co-evolutionary genetic algorithm to achieve high-quality solutions in reasonable computational time for real-world-sized systems. The approach is validated in a case study in electricity distribution in which a system operator has to manage a portfolio of power transformers operating under different operational conditions.
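As a rough sketch of the search machinery, assuming a toy six-asset portfolio and a plain single-population genetic algorithm (the paper's method is co-evolutionary and its stochastic cost model is far richer; all numbers here are invented):

```python
import random

random.seed(42)

# Hypothetical asset portfolio: (monitoring cost, expected O&M saving).
ASSETS = [(4, 9), (3, 5), (5, 12), (2, 2), (6, 10), (1, 3)]
BUDGET = 12  # total capital available for monitoring equipment

def fitness(bits):
    """Net saving of a monitoring plan; over-budget plans are infeasible."""
    cost = sum(a[0] for a, b in zip(ASSETS, bits) if b)
    saving = sum(a[1] for a, b in zip(ASSETS, bits) if b)
    return float("-inf") if cost > BUDGET else saving - cost

def evolve(pop_size=30, generations=60, p_mut=0.1):
    """Elitist GA over bitstrings: bit i = 1 means asset i gets monitoring."""
    pop = [[random.randint(0, 1) for _ in ASSETS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(ASSETS))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

On this instance the search selects a subset of assets whose monitoring cost fits the budget while maximizing the net expected saving; a co-evolutionary variant would additionally evolve, e.g., the installation schedule in a second, interacting population.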

A Global Approach to Life Management of Pressure Equipment
PRESENTER: Silvia Ansaldi

ABSTRACT. During the life cycle of pressure equipment it is necessary to investigate the structural integrity in order to avoid failures and outages which may affect safety and cause lost production. The fabrication phase is quite straightforward, as pressure equipment in Europe must comply with the PED Directive through the application of European or international standards. The following phase is the “putting into service”, which is aimed at assuring correct installation and operation according to the instructions. From this phase onward the equipment is operated and maintained according to the operating instructions, but unfortunately a detailed in-service inspection plan is seldom provided by the manufacturer. Therefore, in order to manage the integrity of the components during their whole life, it is necessary for the user to define a general in-service inspection plan (GIP). To achieve this goal, a specific procedure is under development by the Italian Thermotechnical Committee. This procedure suggests carrying out a preliminary inspection on the equipment at an early stage of service life, which is useful to identify the failure modes acting on the item under consideration. The results of this baseline inspection give indications to determine the life consumption and to develop an in-service inspection plan. The inspection plan can be defined according to various levels of efficiency related to specific NDT types and extension. After in-field inspections, the general plan can be implemented or modified as new failure modes may be detected. Obviously, the choice of the type of NDT must be made taking into account the detectable/allowable flaw size. The period between inspections becomes a function of life consumption and can be modulated according to the efficiency of inspection, in the sense that a higher efficiency allows lengthening of the inspection interval, therefore reducing maintenance costs and increasing operational profitability.
The role of standards is essential to draw up a consistent inspection plan. There are many international codes but no European standard concerning this matter. For this purpose, this paper describes the national standard which is under development by the Italian Standardization Body UNI/CTI. Finally, some case studies concerning life management are presented with reference to pressure equipment in industrial plants and in Seveso establishments, aimed at showing how the lack of a specific inspection plan may lead to severe accidents.
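As a purely illustrative rule of thumb for the last point, assuming a linear dependence on life consumption and NDT efficiency (the UNI/CTI procedure under development defines its own, likely different, relationships):

```python
def remaining_life(design_life_years, life_consumed_fraction):
    """Remaining life estimated from the baseline inspection's
    assessment of life consumption (both inputs are illustrative)."""
    return design_life_years * (1.0 - life_consumed_fraction)

def inspection_interval(remaining_years, ndt_efficiency, base_fraction=0.5):
    """Next inspection as a fraction of remaining life, scaled by the
    efficiency of the chosen NDT in (0, 1]: a higher-efficiency technique
    permits a longer interval, reducing maintenance costs."""
    return base_fraction * remaining_years * (0.5 + 0.5 * ndt_efficiency)
```

For a vessel with 10 years of residual life, a high-efficiency NDT (1.0) would allow a 5-year interval against 4 years for a medium-efficiency one (0.6) under this hypothetical rule.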

MASTERING SMART ASSET MANAGEMENT IN INDUSTRY 4.0 REVOLUTION - The missing link: Harness field power of Industry 4.0 through risk management.
PRESENTER: Remy Arbaoui

ABSTRACT. Asset management has become a significant issue in today’s industry 4.0 globalization. When asset management is coupled with industry 4.0, mastering it raises new transition challenges in implementation strategy, tactics, and execution, and it is gradually expanding to all areas of industry. This research paves the way for further in-depth research on comprehensive asset management on the dark side of the industry 4.0 transition. It is about achieving the best value from asset management implementation, through the right balance between quality, cost, delay, flexibility, and performance. In other words, it is a question of digging up the hatchet of war against the shadow mechanisms of power and influence, which covertly control the implementation of asset management within industry 4.0. This research aims to influence asset management policymakers in the industry 4.0 transition. Actors and organizations will use it to build strategic transition plans and to knowingly implement and manage sustainable asset networks. Based on a multiple-case study, this empirical study is explanatory intervention research. Its design is consistent with the epistemological paradigms of generic constructivism and ontological relativism. We use abductive reasoning combined with a qualimetric approach based on the socio-economic method, with priority given to extra-accounting methods. This research makes three main contributions. First, it provides a framework of critical mechanisms to drive asset management implementation in industry 4.0 with confidence based on system assurance. Second, it offers governance tools to organizations to master sustainable asset management strategies from the transition decision through to implementation. Finally, the study offers guidance to asset and risk managers as to which factors need to be considered to successfully manage assets in industry 4.0 through risk management.

10:35-11:35 Session TH2H: Safety and Reliability of Intelligent Transportation Systems
Location: Cointreau
PRESENTER: Ni-Asri Cheputeh

ABSTRACT. Crossings allow trains to change tracks and hence are an essential railway component. However, they are also bottleneck points on the rail network under increased rolling stock traffic. Crossings are subject to high loading conditions and therefore, their structural integrity is of critical importance for the safety and reliability of railway operations.

As the wheel switches from one track to another, a vertical impact on the crossing nose or wing rail occurs. Wheel impacts can lead to several forms of damage, such as wear, rolling contact fatigue and plastic deformation, among others. The crossing geometry is of utmost importance, as the wing rail angle or nose shape can act as a stress concentrator, further propagating damage and reducing the asset’s lifetime.

The UK railway network alone comprises around 6,000 crossings, which need to be assessed and maintained with a frequency that ranges from one to three years in busy routes [1]. Modelling techniques, such as finite element analysis, can provide a better understanding of the stress states involved in this complex dynamic problem, leading to the optimization of the crossing geometry for an extended lifetime.

In this paper, we evaluate changes in the design of railway crossings in a dynamic environment via finite element analysis. The effects of changes in the crossing length and angle, train speed, as well as nose design, are evaluated regarding impact and deformation. This paper provides advice on best design practices allowing for more reliable and long-lasting railway crossings.


1. Network Rail, 2021. Level Crossings. Retrieved from

Comparing macroscopic first-order models of regulated and unregulated road traffic intersections
PRESENTER: Ibrahima Ba

ABSTRACT. Road traffic models can help in understanding the properties of traffic and improving traffic control. To do so, the models must be realistic and also understandable (i.e. with few parameters that can be interpreted and calibrated). Macroscopic models are particularly useful for the simulation of large traffic networks. Yet intersection models are underrepresented in the literature compared to traffic models on roads (especially urban regulated intersection models). In this contribution, we analyze and compare four minimal regulated and unregulated first-order macroscopic intersection models. The two unregulated models are the FIFO model (first-in-first-out, i.e. roundabout-type intersection) and an optimal model for which the flows by direction are independent (i.e. highway-type intersection). The control (i.e. traffic light) operates upstream on the incoming flows for the first regulated intersection model, while the control takes place downstream on the directions for the second regulated model (see Fig. 1). We demonstrate mathematical relationships between the intersection models and analyze their performances using Monte-Carlo simulation. The numerical simulations are performed by assuming random demand upstream and supply downstream, and also random direction distributions. This approach allows us to account for average performances but also for standard deviations and, more generally, for the performance distribution. Indeed, reliable intersections should exhibit "regular" performances (small variations). We observe that the optimal regulated intersection models outperform the FIFO model, on average but also in terms of variability (i.e. reliability). Furthermore, bounds with respect to the optimal unregulated model are provided.
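The flux logic of the two unregulated node models can be sketched for a single diverge with two outgoing directions, using the standard first-order supply-demand (Godunov-type) rules; the node layout and all numbers in the note below are illustrative, not the paper's exact test cases:

```python
def fifo_outflow(demand, ratios, supplies):
    """FIFO diverge: all vehicles share one queue, so the most restrictive
    direction throttles every direction in proportion (first-in-first-out)."""
    q = min(demand, min(s / r for r, s in zip(ratios, supplies) if r > 0))
    return [r * q for r in ratios]

def independent_outflow(demand, ratios, supplies):
    """Non-FIFO 'optimal' diverge: each direction is limited only by its own
    downstream supply (highway-type lane separation)."""
    return [min(r * demand, s) for r, s in zip(ratios, supplies)]
```

For a demand of 1800 veh/h split 70/30 towards downstream supplies of 1500 and 200 veh/h, the FIFO node passes only about 667 veh/h in total (the congested direction throttles both), while the independent node passes 1460 veh/h, which is why the non-FIFO model bounds the FIFO one from above.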

A Dynamic Agent-based Transit System Disruption and Recovery Simulation Model
PRESENTER: Steffen Blume

ABSTRACT. Transit system disruptions can have severe system-wide effects on service delays, crowding levels, and the ability of millions of travelers to reach their destinations routinely and unencumbered. This work therefore develops an agent-based simulation model to understand and augment the resilience of transit systems. Unlike existing dynamic agent-based models that focus on small disturbances (Cats et al., 2011; Othman et al., 2015) or simplify the re-routing behavior and real-time information availability of passenger agents (Leng and Corman, 2020), the proposed simulation model concentrates on severe disruption scenarios and the resulting system-wide redistribution of passenger agents, which variably alter their decision behavior in response to delays and system alert information. Moreover, the simulation accounts for timetable rescheduling measures. The simulation model has been validated on a case study of the New York City (NYC) subway network with real-world passenger demand data and train vehicle schedules, and produces reliable predictions of passenger flows during both undisrupted and disrupted conditions. The versatility in testing diverse disruption and recovery scenarios (e.g., fig. 1) is a valuable addition to the preemptive assessment of disruptions and recovery schedules, as well as to contriving contingency plans, identifying potential bottlenecks, or preparing component and process redundancies to be swiftly engaged and dispatched when needed.

10:35-11:35 Session TH2I: Mathematical Models in Maintenance
Location: Giffard

ABSTRACT. Maintenance of technical systems is of particular importance in an era of growing competition and ever higher requirements on the quality, reliability, and productivity of organizations' functions and tasks. According to [1], maintenance for complex socio-technical systems can be defined as a combination of activities which ensures that physical assets continue to fulfill their intended tasks effectively (performing required functions), efficiently (at minimum use of resources), and safely (at minimum human and environmental risk). Following this, the main challenge for the maintenance manager is to structure the maintenance procedures and activities to be undertaken to achieve the strategic objectives associated with them. This requires the provision of technical skills, techniques, and methods to properly utilize assets like factories, power plants, vehicles, equipment, and machines. Therefore, it is important to find an optimal solution to ensure the continuity of operational processes in the technical system. One of the possible solutions is so-called resource sharing [2]. In the literature, in the context of planning maintenance processes, resource sharing primarily concerns the sharing of maintenance teams (e.g., [3]). At the same time, in the context of the PN-EN 17007 standard [4], such an approach is insufficient. Therefore, in this article the authors propose a method for maintenance resource sharing for production systems in the context of the physical asset maintenance management concept. The paper thus includes a short introduction to maintenance problems, followed by a short literature review of the analyzed research area. The next section introduces a method for maintenance resource sharing in the context of the physical asset maintenance management concept, whose implementation possibilities are discussed in a case study.
The presented paper gives the possibility to identify research gaps and possible future research directions connected with physical asset maintenance problems in industrial organizations.

References
1. L. Bukowski, S. Werbińska-Wojciechowska. Resilience based maintenance: a conceptual approach. In: Baraldi P., Di Maio F., Zio E. (eds): Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference, Research Publishing, Singapore, 2020: 3782-3789. doi:10.3850/978-981-14-8593-0.
2. M. J. Olsen, S. J. Kemp. Sharing Economy: An In-Depth Look At Its Evolution and Trajectory Across Industries. Piper Jaffray Investment Research (2015).
3. C. Z. Qu, R. Liu, J. R. Zhang, Y. Dong. Maintenance Process Simulation Model Considered Sharing Resource and its Application. Advanced Materials Research, 189-193, 2530-2534 (2011).
4. PN-EN 17007:2018-02 Maintenance process and associated indicators. European Committee for Standardization, Brussels.

Optimisation of maintenance policies for multiple deteriorating components in a system

ABSTRACT. Condition-based maintenance is a useful technique to assess the condition of a system for the scheduling of maintenance actions, aiming to reduce maintenance costs, improve the security of management, and ensure stable product quality. This paper uses the Wiener process to model the degradation processes of multiple components in a system. When the degradation level of a linear combination of the processes exceeds a pre-specified threshold, the block or age replacement policy is considered as the preventive maintenance for the system. Based on these two replacement policies, the optimized maintenance intervals are then sought. Besides, the paper also develops a cost process which considers the situation in which, when the maintenance cost is higher than an expected value, the decision-maker will prefer to replace the whole system rather than repair it. Numerical examples are given to illustrate the optimisation process.
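A Monte-Carlo sketch of the age-replacement side for a single Wiener-degrading component (the drift, volatility, threshold, and costs below are invented, and the paper's multi-component linear combination is omitted):

```python
import random

random.seed(0)

MU, SIGMA, LEVEL = 1.0, 0.3, 10.0   # drift, volatility, failure threshold
CP, CF, DT = 1.0, 5.0, 0.01         # preventive cost, failure cost, time step

def cycle_cost(T):
    """One renewal cycle under age replacement at age T: simulate the
    Wiener degradation path X(t) = MU*t + SIGMA*W(t) with an Euler step;
    hitting LEVEL before T costs CF, otherwise CP is paid at T."""
    x, t = 0.0, 0.0
    while t < T:
        x += MU * DT + SIGMA * random.gauss(0.0, 1.0) * DT ** 0.5
        t += DT
        if x >= LEVEL:
            return CF, t          # corrective replacement at failure time
    return CP, T                  # preventive replacement at age T

def cost_rate(T, n=400):
    """Renewal-reward estimate of the long-run cost per unit time:
    E[cycle cost] / E[cycle length]."""
    costs, lengths = zip(*(cycle_cost(T) for _ in range(n)))
    return sum(costs) / sum(lengths)

# Scan a few candidate intervals and keep the cheapest one.
best_T = min((cost_rate(T), T) for T in (5, 6, 7, 8, 9))[1]
```

With these parameters the mean first-passage time is LEVEL/MU = 10, so the scan settles on an interval just below it, trading the cost of frequent preventive replacements against the risk of the fivefold failure cost.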

Multiple deterioration processes with stochastic arrival intensity
PRESENTER: Inma T. Castro

ABSTRACT. Many systems, such as electronic products, heavy machine tools and piping systems, are subject to multiple degradation processes. For example, on a pavement network, several different degradation processes, such as fatigue cracking and pavement deformation, may develop simultaneously. For systems subject to multiple degradation processes there are two approaches to model the degradation mechanism. The first approach considers that all the degradation processes start to deteriorate at the same time. However, as Kuniewski et al. claim, it is unlikely that all degradation processes appear at the same time. The second approach assumes that the degradation processes initiate at random times and then grow depending on the environment and conditions of the system. In this approach, two stochastic processes have to be combined: the initiation process and the growth process. For some special cases, the combined process of initiation and growth has a particular mathematical structure. For example, if the degradation processes appear following a non-homogeneous Poisson process (NHPP) and all these processes degrade following the same degradation mechanism, the number of degradation processes that exceed a fixed degradation threshold at time t follows a non-homogeneous Poisson process. In this work, a system subject to different deterioration processes is analyzed. The novelty of this work is that the arrival of the degradation processes to the system is modeled using a Cox process, which generalizes the non-homogeneous Poisson process in that the intensity of arrivals is itself a stochastic process. Using the properties of a Cox process, the combined process of initiation and growth is modelled and the system reliability is obtained.
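The initiation mechanism can be sketched with the classical Lewis-Shedler thinning algorithm; the linear intensity and the uniform random level that turns the NHPP into a Cox process are illustrative choices, not the model of the paper:

```python
import random

random.seed(1)

def nhpp_times(rate_fn, rate_max, horizon):
    """Sample arrival times of a non-homogeneous Poisson process on
    [0, horizon] by thinning (Lewis-Shedler): candidates generated at the
    bounding rate rate_max are kept with probability rate_fn(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate_max)
        if t > horizon:
            return times
        if random.random() < rate_fn(t) / rate_max:
            times.append(t)

def cox_times(horizon):
    """A Cox process: the intensity itself is random. Here the NHPP
    intensity lambda(t) = A * t is scaled by a level A drawn once per
    realization, so rate_max = 1.5 * horizon bounds it on [0, horizon]."""
    A = random.uniform(0.5, 1.5)
    return nhpp_times(lambda t: A * t, 1.5 * horizon, horizon)
```

Each call to `cox_times` first draws the random intensity level and then thins a homogeneous candidate stream, so the ensemble over many calls shows the over-dispersion (extra variance relative to a Poisson count) that characterizes Cox processes.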

10:35-11:35 Session TH2J: Energy
Location: Botanique
PRESENTER: Marco Cinelli

ABSTRACT. Multiple Criteria Decision Analysis (MCDA) methods are widely used to aid the interpretation of results from energy systems analyses. This trend is justified by the capacity of MCDA methods to integrate a variety of factors, sometimes conflicting and measured on different scales, into an easily understandable decision recommendation, such as a ranking, a sorting into preference-ordered classes, or a selection of the most preferred alternatives. In order to obtain credible results from the application of MCDA methods, they should be suitable for the study they are used in. Limited research has been conducted on evaluating the match between the capabilities of MCDA methods and the characteristics of decision-making problems. This study explores this direction by posing the following question: “Was the chosen MCDA method the best fit for your case study?”. This question is answered by means of a new Decision Support System (DSS) that the authors developed to recommend MCDA methods based on a comprehensive set of features describing complex decision-making problems. It was tested on a set of peer-reviewed case studies from the literature on energy systems analysis. The first main finding is that the authors of the case studies explore only a limited set of the features that would be needed to accurately describe complex decision-making problems. Furthermore, a few erroneous applications of MCDA methods were identified, among which (i) the use of criteria weights with a certain meaning (e.g., importance coefficients) in methods that require weights of a different type (e.g., trade-offs), and (ii) the use of ordinal criteria in methods interpreting all criteria scales as quantitative.


ABSTRACT. Achieving greater levels of asset availability is the goal of many industries in diverse segments. This ambition is accompanied by actions aimed at increasing the reliability of these assets. Therefore, aiming at an operational campaign free from unexpected stoppages, one of the paths to success adopted by maintenance managers is the improvement of the equipment maintenance plan. This paper presents a strategy for developing a maintenance policy with a focus on reliability. Through multi-criteria decision-making methods (MCDMs), combining the Entropy method and Multi-Attribute Utility Theory (MAUT) approaches, criteria weights are established and alternatives are ranked in order to determine the critical components of a Kaplan hydropower unit, whose maintenance policy is used as a case study. The maintenance activities of these key assets are analyzed and improved with the input of specialists. This research obtained a ranking of the ten most critical items for the hydropower turbine and enabled an improvement in the robustness of the maintenance plan. The maintenance policy developed aims to improve the preventive maintenance and predictive monitoring plans of the hydro generator to support an increase in availability and in the quality of the electrical energy supplied to society.
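The entropy weighting step admits a compact sketch (the decision matrix in the note below is arbitrary, and the MAUT aggregation that follows it in the paper is omitted):

```python
import math

def entropy_weights(matrix):
    """Objective criteria weights from the entropy method: criteria whose
    normalized column values are more dispersed across the alternatives
    (lower Shannon entropy) receive a larger weight."""
    m = len(matrix)                       # number of alternatives (rows)
    n = len(matrix[0])                    # number of criteria (columns)
    k = 1.0 / math.log(m)                 # normalizes entropy into [0, 1]
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]          # column-normalized proportions
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - e)       # degree of divergence of criterion j
    total = sum(divergences)
    return [d / total for d in divergences]
```

For a matrix whose first criterion takes the same value for every alternative, the method assigns it zero weight: `entropy_weights([[1, 1], [1, 3]])` returns (up to rounding) [0.0, 1.0], since a criterion that does not discriminate between alternatives carries no decision information.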

Assessing electricity supply resilience based on a rational investigation of interacting evaluation criteria

ABSTRACT. Decision problems are often characterized by complex criteria dependencies, which can hamper the development of an efficient and technically sound decision model. In several cases, these dependencies, called criteria interactions, are preserved after the modelling procedure and handed down to the decision maker, who is required to articulate arduous and demanding preference statements for their quantification. This modelling rationale, albeit acceptable, passes to the decision maker the cognitive burden of estimating the criteria dependencies and can lead to confusion, context misunderstanding, and, in the end, degraded results and poor decision support (Roy, 2009). The problem of evaluating electricity supply resilience falls into this category, being highly complex and multifaceted and exhibiting several interactions among its criteria (Siskos and Burgherr, 2021). The assessment framework proposed in this research work attempts, wherever possible, to address and eliminate the criteria interactions right at the criteria modelling stage, with a view to building a consistent criteria family and significantly facilitating the decision maker during the preference articulation phase. More specifically, the resilience problem dimensions are categorized into four points of view: Infrastructures, Economy, Society and Governance, before forming the final evaluation criteria. In total, 35 European countries are evaluated and ranked using a system based on Multi-Attribute Utility Theory and a synergy of Multicriteria Decision Aid methods. The resilience results presented by this research work aim to support energy policy decision making in Europe in a tangible way and to provide guidelines and areas for improvement at the country level.

10:35-11:35 Session TH2K: Reliability and Availability Issues of the 5G Revolution
Location: Atrium 1

ABSTRACT. The fifth generation (5G) of mobile telecommunication networks is designed with the ambition of being faster, stronger, better and smarter than its predecessor. With the digital transformation, all industry sectors will develop new applications with new requirements on telecommunication networks that 5G should be able to meet. According to ITU-R, these applications fall into three general categories: enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable Low Latency Communications (URLLC). Detailed use case examples include, in the transport industry, communication on a high-speed moving train and, in the energy industry, the massive deployment of smart devices on the smart grid. Some of them have very strong requirements in terms of network resilience due to critical usage scenarios. To fulfill 5G's ambitious goals and to maximize the satisfaction of emerging end-users, new technologies are introduced: Network Function Virtualization (NFV), Software Defined Networking (SDN), and Mobile Edge Computing (MEC). The 5G network also introduces the concept of network slicing, a dedicated independent virtualized end-to-end network that better fits specific application requirements. Nevertheless, while these new technologies provide convenience, they also bring new challenges, as the network is becoming more complex than ever before: a real complex system of interdependent systems. To meet the requirements of future 5G use cases and applications, it is crucial to study the complexity of such a network system by finding suitable models and by applying fitted metrics to quantify the risk and resilience of the system. This paper addresses the main resilience challenges to anticipate in the 5G network from an end-to-end perspective (device, radio network, core network, service platform) and from a multi-layer perspective (slicing, orchestration, virtualization/containerization and infrastructure).
After a selection of complementary use cases in terms of resilience requirements, different modelling methodologies and resilience quantification metrics for the 5G network will be proposed. In this paper, we mainly intend to highlight 5G network complexity and open a discussion on methodologies for modelling such a network, with the hope of inspiring future research in this field.

Steady-state Availability Evaluation of Multi-Tenant Service Chains

ABSTRACT. Nowadays, many telecommunication service providers (or tenants) share the same service infrastructure for cost optimization purposes, in so-called multi-tenant Network Service Chains (NSCs). These novel infrastructures are enabled by the Network Function Virtualization (NFV) paradigm, which relies on decoupling the physical layer (i.e. hardware) from the service logic (i.e. software). NFV allows turning classic network appliances (e.g. routers, switches, etc.) into software instances often referred to as Virtualized Network Functions (VNFs). The composition (or chaining) of multiple VNFs results in NSCs, which represent the modern way of providing network services. NSCs are failure-prone structures, since failure events can occur both at the physical layer and at the service logic layer. In this paper, we propose a methodology to ease the computation of the steady-state availability of multi-tenant NSCs and to identify an NSC configuration respecting high steady-state availability constraints while minimizing deployment cost in terms of redundant subsystems. In our proposal, we: i) model an NSC as a Multi-State System, where the state is the delay introduced by the system, derived from queuing theory; ii) adopt an extended version of the Universal Generating Function (UGF) technique, dubbed Multidimensional UGF (MUGF), to efficiently compute the delay introduced by the interconnection of the various VNFs forming a service chain; iii) define and solve an optimization problem that allows retrieving, in a numerical setting, an optimal NSC deployment which minimizes the cost while guaranteeing high availability requirements (defined in terms of the delay metric). The whole assessment is supported by an experimental part relying on the IP Multimedia Subsystem, an NSC-like infrastructure widely adopted in modern 5G-based networks to manage multimedia content.
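The UGF composition idea can be illustrated on a toy two-VNF chain; the state probabilities, the delays and the 10 ms bound below are invented for illustration and are far simpler than the multidimensional MUGF of the paper:

```python
from itertools import product

def compose_series(*elements):
    """Compose UGFs of chained VNFs: probabilities multiply, delays add.
    Each element is a list of (probability, delay) states."""
    states = {}
    for combo in product(*elements):
        p, d = 1.0, 0.0
        for prob, delay in combo:
            p *= prob
            d += delay
        states[d] = states.get(d, 0.0) + p  # collect like terms of the UGF
    return sorted(states.items())

def availability(ugf, max_delay):
    """Delay-based steady-state availability: probability that the
    end-to-end delay meets the bound."""
    return sum(p for d, p in ugf if d <= max_delay)

# hypothetical two-VNF chain: (probability, delay in ms) per state,
# where the degraded state adds queuing delay
firewall = [(0.95, 2.0), (0.05, 20.0)]
router = [(0.90, 1.0), (0.10, 15.0)]
chain = compose_series(firewall, router)
A = availability(chain, max_delay=10.0)
```

Collecting like delay terms after each composition keeps the state space small, which is the computational advantage the UGF technique offers over brute-force state enumeration.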


ABSTRACT. Among the three types of services promoted by the 5G standard, ultra-reliable and low-latency communications (URLLC) will be a cornerstone of new mission-critical services, such as industrial automation, autonomous driving, smart energy, remote surgery, etc.

A key element of 5G systems is the radio link, the availability of which is crucial: in a number of applications, an availability exceeding 99.999% is expected, if not downright required. Data transmission must also be effective within an acceptable delay, from one to fifty milliseconds or more, depending on the application under consideration. Weather conditions may hamper transmission and lead to short-lived error bursts; radio links may actually be viewed as self-healing. If the transmission is successful within a "survival time" Δ, it is possible to neglect or omit the link's temporary failure.

We address the availability of a radio link in the framework of neglected/omitted failures. Using a new approach, we have been able to compute the variation with $\Delta$ of the modified steady-state availability, as well as the Mean Time To Failure (MTTF). After a detailed analysis of the case of exponential lifetime and repair distributions, we turn to arbitrary, more realistic distributions and provide exact, analytical results in the cases of the Gamma, Birnbaum-Saunders, and Inverse Gaussian repair distributions. These expressions could be helpful to assess the availability ``budget'' of the radio link in the whole the end-to-end 5G system. They may be also used in other domains in reliability and system safety where neglected failures are routine, especially in more complex systems.
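The exponential special case can be checked with a small Monte Carlo sketch; the failure and repair rates, the survival time and the cycle count below are arbitrary illustrative values, and the closed form shown is the elementary renewal-theoretic expression for this case, not the paper's general derivation:

```python
import math
import random

def modified_availability_mc(lam, mu, delta, n_cycles=20000, seed=42):
    """Monte Carlo estimate of the steady-state availability when downtimes
    shorter than the survival time `delta` are neglected (counted as uptime).
    Exponential lifetimes (rate `lam`) and repairs (rate `mu`) are assumed."""
    rng = random.Random(seed)
    up_counted, total = 0.0, 0.0
    for _ in range(n_cycles):
        u = rng.expovariate(lam)  # time to failure
        r = rng.expovariate(mu)   # repair duration
        up_counted += u + (r if r <= delta else 0.0)  # short outage neglected
        total += u + r
    return up_counted / total

def modified_availability_exact(lam, mu, delta):
    """Closed form for the exponential case:
    A(delta) = (E[U] + E[R; R <= delta]) / (E[U] + E[R])."""
    eu, er = 1.0 / lam, 1.0 / mu
    er_short = er - (delta + er) * math.exp(-mu * delta)
    return (eu + er_short) / (eu + er)

A_mc = modified_availability_mc(0.01, 1.0, 0.5)
A_th = modified_availability_exact(0.01, 1.0, 0.5)
```

As expected, the modified availability A(Δ) exceeds the classical availability 1/λ divided by (1/λ + 1/μ), since part of the downtime is forgiven.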

11:35-12:35 Session TH3A: Risk Assessment
Location: Auditorium
Understanding Wildfire Induced Risk on Interconnected Infrastructure Systems Using a Bow-Tie Model and Self Organizing Maps

ABSTRACT. With increased global warming and urbanization, the risk of wildfires is increasing in several parts of the world. Large-scale critical interdependent infrastructure systems like electricity distribution and transmission, telecommunication, and building systems are highly vulnerable to wildfires. Since modern infrastructure systems are becoming increasingly interdependent, where failures in one network may easily cascade to other dependent networks causing a severe, widespread, national-scale failure, the wildfire-induced risk on such infrastructure networks cannot be assessed using a siloed approach. Previous studies used traditional modeling techniques like the fault tree analysis method, which can only analyze the causes of failure while capturing the network behavior of such systems, or the event tree analysis method, which can only model the consequences of infrastructure failures in a systematic way. However, instead of analyzing the causes and consequences of failures in an isolated way, which often underestimates the risk, in this paper we propose a framework that integrates both the fault and event tree methods into a single bow-tie model to capture the wildfire-induced risk on multiple interdependent infrastructure systems. The proposed framework can capture most dimensions (i.e., physical, logical and geographical) of the intra-infrastructure (i.e., within a particular infrastructure) and inter-infrastructure (i.e., between two different infrastructure systems) dependencies in a pixel-based risk map. Using simulated burning probability data and the infrastructure network layout, we validated our proposed framework for the state of California. Furthermore, using self-organizing maps we created quantized dynamic risk maps for efficient risk zoning and risk communication to the respective stakeholders. From the clustered risk map, using image segmentation, we identified the areas belonging to different wildfire risk levels.
As the burning probability scenarios and infrastructure scenarios evolve with time, the dynamic risk maps will also be updated, which can be depicted using the novel interactive visualization technique proposed in this paper. Our proposed risk assessment framework will help federal/state governments and utilities make risk-informed decisions related to resource allocation and to plan wildfire risk mitigation intervention strategies through the efficient and interactive risk communication offered by the proposed dynamic risk map.
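The bow-tie integration of fault and event trees can be sketched in miniature; the basic events, the barriers and all probability values below are hypothetical and stand in for the paper's wildfire-specific, pixel-level model:

```python
from itertools import product as cartesian

def or_gate(probs):
    """P(top event) for independent basic events under an OR gate."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(probs):
    out = 1.0
    for p in probs:
        out *= p
    return out

# fault-tree side: annual probability of a wildfire reaching the asset,
# built from hypothetical basic-event probabilities
p_top = or_gate([
    and_gate([0.05, 0.4]),  # powerline fault AND dry vegetation
    0.01,                   # lightning ignition nearby
    0.02,                   # human-caused ignition nearby
])

# event-tree side: barriers after the top event (success probability each)
barriers = [0.7, 0.5]       # suppression succeeds; backup power available
consequences = {}           # outcome label (S = barrier works, F = fails) -> probability
for flags in cartesian([True, False], repeat=len(barriers)):
    p = p_top
    for works, pb in zip(flags, barriers):
        p *= pb if works else (1.0 - pb)
    label = "".join("S" if works else "F" for works in flags)
    consequences[label] = p
```

Evaluating this bow-tie per map pixel (with pixel-specific burning probabilities) would yield the kind of risk map the framework then clusters with self-organizing maps.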


ABSTRACT. Real-time damage assessment of building systems subject to hurricanes has attracted significant interest over the past few years owing to its potential to facilitate emergency response and management. The major difficulty in its application lies in the high computational demand stemming from the need to propagate uncertainty through systems that present significant complexity. In this paper, a Kriging-metamodel-based rapid damage assessment methodology for the building envelope systems of engineered buildings is developed to address this issue. Based on the recently proposed framework outlined in [1], envelope damage is characterized through progressive multi-demand coupled fragility models. Within this context, damage measures are defined for each coupled damage state of the system, and a full range of uncertainties in structural properties and capacities, as well as wind load stochasticity, is considered. By calibrating the metamodels for damage prediction, deterministic mappings are defined from the input space of the site-specific wind speed and direction to the output space of the mean and standard deviation of the damage measures of the envelope components. The calibrated metamodels can then be used to rapidly predict the expected envelope damage (with variability measured through the associated standard deviation) in terms of the predicted site-specific maximum wind speed and direction, where the latter are estimated in real time through parametric hurricane models [2]. To demonstrate the applicability of the approach, a case study consisting of a 45-story steel building located in Florida is presented. The accuracy and efficiency of the proposed framework, around five orders of magnitude faster than high-fidelity models, illustrate the capability of the approach to provide the real-time information necessary to facilitate emergency response decision making.

References
[1] Ouyang, Z. and Spence, S. M. J., 2020. A performance-based wind engineering framework for envelope systems of engineered buildings subject to directional wind and rain hazards. Journal of Structural Engineering, 146(5).
[2] Vickery, P.J., Skerlj, P.F., Steckley, A.C. and Twisdale Jr., L.A., 2000. Hurricane wind field model for use in hurricane simulations. Journal of Structural Engineering, 126(10).
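The metamodel mapping from wind input to expected damage can be illustrated with a minimal zero-mean Kriging (Gaussian-process) interpolator; the one-dimensional input, the training data, the kernel length scale and the nugget below are assumptions made for illustration, not the paper's calibrated wind-speed-and-direction metamodels:

```python
import math

def rbf(a, b, length=10.0):
    """Squared-exponential covariance between two wind speeds."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, x_new, nugget=1e-8):
    """Simple zero-mean Kriging predictor: y* = k(x*)^T K^{-1} y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (nugget if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return sum(rbf(x_new, xs[i]) * alpha[i] for i in range(n))

# hypothetical training data: peak wind speed (m/s) -> mean envelope damage ratio
speeds = [20.0, 30.0, 40.0, 50.0, 60.0]
damage = [0.01, 0.05, 0.15, 0.40, 0.70]
pred = kriging_predict(speeds, damage, 45.0)  # interpolated mean damage
```

Once calibrated offline from expensive high-fidelity runs, such a predictor answers a query in microseconds, which is the mechanism behind the reported speed-up.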

Exploring the nexus between organizational anticipation and adaptation in crisis management

ABSTRACT. Organizational anticipation involves the ability to foresee and analyze potential threats and disturbances as a means to minimize the likelihood of hazard occurrence and to reduce the potential impacts. Common methods include Risk and Vulnerability Assessments (RVAs) and contingency planning, where potentially harmful events are identified and analyzed, and where measures to prevent, respond to and recover from these events are suggested. This includes the development of plans and procedures for what actions to take in case calamities identified in the assessment occur. While highly important as a risk reduction strategy, these anticipatory efforts will never be sufficient for eliminating and treating all potential threats, especially in situations characterized by large uncertainties and high complexities. In the last decade, the dangers of black swan events, i.e. surprising events that have not been anticipated, have gained increased attention, illuminating the limits of the anticipatory approach. As a complement, many scholars have therefore highlighted the value of promoting adaptive capacities as a means to perform resiliently and reduce risks in the face of sudden disturbances. Despite clear interconnections, the anticipatory and adaptive perspectives have been studied in partly disparate strands of research. The purpose of this paper is to explore the nexus between these areas to provide ideas on how they can be combined in a proactive crisis management setting. The paper is a continuation of a three-year researcher-practitioner collaboration in the municipality of Malmö, Sweden, where a method for RVA has previously been developed. The method relies strongly on an anticipatory perspective, but the occurrence of Covid-19 has highlighted the need to integrate or complement it with efforts that facilitate adaptive behavior in the face of sudden shocks and disturbances.
The paper draws on a literature review of the anticipatory and adaptive perspectives, focusing on how the anticipatory perspective can be complemented with actions that promote adaptive capacity. The applicability of the approaches identified in the literature to the context of municipal RVA is also assessed.

11:35-12:35 Session TH3B: Risk Management
Location: Atrium 2
Community resilience: How to measure interactions among society and authorities?
PRESENTER: Sahar Elkady

ABSTRACT. Economic disaster losses increased by 82% from 1.63 trillion USD in 1980-1999 to 2.97 trillion USD in 2000-2019 [1]. This poses a threat to societal wellbeing and has led to increased attention to the concept of community resilience, as it provides the means to prepare for hazards, cope with their impacts, and recover from them. One of the greatest challenges of community resilience conceptualization is the ability to assess it: identifying the variables that reflect the resilience level and how to measure them. In this paper, we are interested in how to measure the interaction and collaboration between all stakeholders in a community (individuals, authorities, and emergency organizations) and in which direction this collaboration affects the different aspects of community resilience. This is especially relevant since resilience is not a passive concept; it is an intrinsic ability of a community that influences and is influenced by the actions of all stakeholders in society. Several studies surveyed the literature covering the assessment and quantification of community resilience [2], [3], but without an emphasis on how the interaction of a community's stakeholders could impact its resilience. Building upon the literature [4], [5], we identified seven dimensions that unfold the interaction between stakeholders in a community: 1) communication, 2) risk awareness, 3) resource allocation, 4) knowledge sharing, 5) preparedness, 6) governance and leadership, 7) community networks. Afterward, we identified factors and indicators to measure the stakeholders' interaction and matched them to their corresponding community resilience dimension. For example, the number of local disaster training programs could be used as an indicator of the level of improvement needed in the preparedness dimension.
By providing this categorization of dimensions and indicators/factors, we can identify in which dimensions we lack indicators and how we can measure them, especially in the aspects that tend to be more subjective, such as community networks. This will help authorities and policymakers identify areas of improvement in the interaction with society and consequently enhance the community's resilience throughout the preparation, coping, and recovery phases.

PRESENTER: Coralie Esnoul

ABSTRACT. New technology and access to renewable energy sources enable the production of electricity by new actors outside of traditional energy suppliers. One such actor is the energy island: an isolated community that produces and consumes part or all of its energy. To support energy islands entering the energy market, the E-LAND project (E-LAND) is developing a toolbox that uses consumer data and external data (weather, energy prices) to deliver an optimal energy schedule that minimizes the cost of energy production and consumption. The E-LAND project, started in 2019, has reached its halfway point, and the toolbox is currently being installed and tested at pilot sites. The E-LAND toolbox consists of new technologies, new functionalities, and new ways of thinking about both operational and business processes for the energy islands, and with these changes come new risks. This paper presents how risks are understood and managed, first by the project with regard to the pilot sites, and second by the pilot sites themselves. We describe the development of the risks so far, and systematically present which risks have increased and which have been sufficiently mitigated. Some risks have even been closed at the halfway point of the project. For the pilot sites and future users of the developed solutions, trust in a "safe and secure" toolbox and functionality implementation is addressed. A risk analysis has been performed for the project and the E-LAND toolbox, based on technical requirements. The analysis identified risks with a top-down approach, based on different use cases designed for each pilot site (E-LAND D4.7). While the detailed methodology and mitigation results have already been presented (C. Esnoul et al.), this article focuses on the development of the risks and the accuracy foreseen for the pilot sites.
The introduction of the toolbox presents the pilot sites with new functionality and new risks: where the sites previously had locally stored data and local processes to manage their risks, the toolbox challenges the sites to manage their non-local data and to develop new processes to handle novel risks. Another example is that the connectivity of the new solutions requires pilot sites to have better control of their own information assets by performing stricter risk management and introducing new processes to sufficiently handle distributed risks. The E-LAND project has received funding from the European Union's Horizon 2020 Research and Innovation program under Grant Agreement No 824388. This document reflects only the author's views and the Commission is not responsible for any use that may be made of the information contained therein.

Decision Making for the Prevention of Intentional Third-Party Damage: An Evolutionary Game Perspective
PRESENTER: Xiaoyan Guo

ABSTRACT. Traditional risk decision-making methods cannot simulate the strategic interaction between the pipeline company (PLM) and the intentional third party (iTP). To overcome this, evolutionary game theory is adopted to analyze the long-term dynamic imitation and learning behavior between the PLM and the iTP, under the hypothesis of bounded rationality and incomplete knowledge. First, a mental model is used to simplify the complex analysis process of the traditional Wright manifold theory, following cognitive rules. A threshold value, representing the proportion of the iTP group adopting the damage strategy, is obtained; it can guide the PLM in adjusting its defense strategy flexibly. Then, prospect theory is adopted to turn the traditional expected return matrix into an income perception matrix. Four equilibrium conditions under which both parties actively protect the pipeline are obtained. Owing to limited cognitive ability, wishful thinking and risk-taking psychology, the two parties of the game are often unable to fully satisfy these conditions, which causes frequent accidents. The results of the evolutionary game analysis show that increasing the awareness of management/learning costs, the probability of occurrence, the severity of consequences and the punishment of the PLM and the iTP can reduce third-party damage (TPD) accidents and enhance pipeline risk management.
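The evolutionary dynamics underlying such an analysis can be sketched with standard two-population replicator dynamics; the payoff matrices below are illustrative (they encode costly defense and heavily punished damage) and are not the paper's prospect-theory income perception matrices:

```python
def replicator_step(x, y, dt, payoff_plm, payoff_itp):
    """One Euler step of two-population replicator dynamics.
    x: share of pipeline companies playing 'defend';
    y: share of third parties playing 'damage'."""
    # expected payoff of each pure strategy against the opposing population
    f_def = y * payoff_plm[0][0] + (1 - y) * payoff_plm[0][1]
    f_not = y * payoff_plm[1][0] + (1 - y) * payoff_plm[1][1]
    g_dmg = x * payoff_itp[0][0] + (1 - x) * payoff_itp[0][1]
    g_not = x * payoff_itp[1][0] + (1 - x) * payoff_itp[1][1]
    avg_f = x * f_def + (1 - x) * f_not
    avg_g = y * g_dmg + (1 - y) * g_not
    # strategies growing faster than the population average gain share
    x += dt * x * (f_def - avg_f)
    y += dt * y * (g_dmg - avg_g)
    return x, y

# illustrative payoffs (rows: own strategy, cols: opponent strategy):
# defending costs 1 either way; an undefended damaged pipeline loses 6;
# damaging is punished (-4) against a defended pipeline, mildly costly otherwise
plm = [[-1.0, -1.0],
       [-6.0,  0.0]]
itp = [[-4.0, -1.0],
       [ 0.0,  0.0]]

x, y = 0.5, 0.5
for _ in range(2000):
    x, y = replicator_step(x, y, 0.01, plm, itp)
```

With punishment high enough that damaging never pays, the damage strategy dies out over the iterations, consistent with the abstract's conclusion that stronger punishment reduces TPD accidents.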

11:35-12:35 Session TH3D: Prognostics and System Health Management
Location: Panoramique
Establishment of EHA Performance Degradation Model Based on PMSM and Its Active Fault Tolerant Control
PRESENTER: Zhaozhou Xin

ABSTRACT. The electro-hydrostatic actuator (EHA) consists of a controller, an electric motor and the hydraulic pump it drives, hydraulic actuator components, and a high-pressure hydraulic oil tank. As the power source of the hydraulic pump in the EHA, the motor can drive the rudder surface with or without load through positive and negative rotation. However, the traditional brushless DC motor suffers from low reliability, a large locked-rotor current, and irreparability after damage, making it costly for large-scale complex systems, whereas a permanent-magnet synchronous motor (PMSM) can ensure the stable and orderly operation of the EHA system. During the operation of the system, the performance of the product gradually declines due to internal or external factors, and a wealth of characterization data is generated. Based on these characterization data, it is possible to study how to complete the set goals and achieve fault-tolerant control despite system performance degradation. Therefore, this paper takes the rudder control system of an underwater vehicle as an example. First, it analyzes the working principle of the EHA and proposes an EHA model based on a PMSM. By analyzing the different degrees of performance degradation for different motor parameters, and comparing with the traditional brushless DC motor model and its performance degradation, the validity and accuracy of the model are verified. Combined with adaptive active fault-tolerant control based on an RBF neural network, this provides a basis for reliability analysis based on the EHA.

Research on performance degradation of single random diffusion coefficient inverse Gaussian process based on BPNN data screening
PRESENTER: Zhaozhou Xin

ABSTRACT. For systems with high reliability and long life spans, such as the bow, stern and rudder control systems of an underwater vehicle, the performance degradation data obtained within a certain period of time can predict the performance degradation of the system well. However, due to the complex working environment and high coupling of submarines, the performance degradation is often non-linear and highly uncertain. Common degradation models such as the Wiener process and the Gamma process cannot describe the performance degradation of the system well. Based on this, this paper adopts the inverse Gaussian process, applied to the performance degradation data of the pressure change of the pressure sensor at the centrifugal pump of the underwater vehicle, and proposes introducing a difference mean function between different individuals to describe the degradation process. Instead of a single set of degradation data, multiple sets of data are introduced, and a BP neural network (BPNN) is used to screen the data to obtain a set of reliable performance degradation data. Bayesian theory and the EM algorithm are then used jointly to obtain effective estimates of the model parameters. Through continuous updating and iteration, the estimation accuracy improves, and the system's performance degradation trajectory, remaining useful life, and the probability density distribution of the remaining life can be obtained. Verification with the alpha-lambda metric shows that the agreement between the actual and the estimated ranges is above 85%. In addition, an improved EM algorithm is proposed that gives an explicit expression in the M-step, which is conducive to real-time calculation in engineering.
The experimental results verify the effectiveness of the data screening and the rationality of the described degradation trajectory, which provides a direction for future research on fault-tolerant control based on performance degradation.
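A minimal simulation of an inverse Gaussian degradation process illustrates why it suits monotone yet uncertain degradation; the linear mean function, the parameters and the failure threshold below are assumptions, and the sketch omits the paper's BPNN screening, difference mean function, Bayesian updating and EM estimation:

```python
import math
import random

def sample_ig(mu, lam, rng):
    """Inverse Gaussian variate via the Michael-Schucany-Haas transformation."""
    nu = rng.gauss(0.0, 1.0)
    y = nu * nu
    x = mu + (mu * mu * y) / (2 * lam) - (mu / (2 * lam)) * math.sqrt(
        4 * mu * lam * y + mu * mu * y * y)
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def simulate_path(rate, shape, dt, threshold, rng, max_t=1000.0):
    """IG degradation with mean path rate*t: the increment over dt is
    IG(rate*dt, shape*(rate*dt)^2), hence strictly positive (monotone wear).
    Returns the first passage time to the failure threshold."""
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        dmean = rate * dt
        x += sample_ig(dmean, shape * dmean * dmean, rng)
        t += dt
    return t

rng = random.Random(7)
times = [simulate_path(1.0, 100.0, 0.1, 10.0, rng) for _ in range(200)]
# mean first passage time; close to threshold/rate when increment variance is small
mean_fp = sum(times) / len(times)
```

Unlike the Wiener process, every increment here is positive, so simulated paths never "heal", matching the physics of wear-type degradation the abstract targets.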

State Set Sequential Pattern Mining Based on Improved-Apriori Algorithm
PRESENTER: Houxiang Liu

ABSTRACT. Sequential pattern mining (SPM) has been a hot topic in recent years, but given today's increasingly diverse customer needs, especially in industries that need to take the status attributes of items into account, it still faces significant application limitations. Motivated by this problem, this paper studies state set sequential pattern mining (SSPM) and explains the related concepts and constraints. In addition, aiming at the shortcomings of the traditional Apriori algorithm, such as high computational cost and low computational efficiency, this paper introduces a Boolean matrix to improve the algorithm, proposes the Improved-Apriori algorithm, and explains its basic idea. Finally, small-scale and large-scale examples are used to verify the proposed method and algorithm. The results show that the proposed method and algorithm are feasible and efficient, that SSPM can mine more rules than SPM, and that the Improved-Apriori algorithm has higher computational efficiency than the traditional algorithm.
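The Boolean-matrix idea, counting candidate support by intersecting bitmasks instead of re-scanning the database, can be sketched as follows; this is plain frequent-itemset mining without the state-set extension, and the baskets are invented:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori with a Boolean (bitset) matrix: each item maps to a bitmask of
    the transactions containing it, so candidate support is a popcount of ANDs."""
    items = sorted({i for t in transactions for i in t})
    masks = {}
    for i in items:
        m = 0
        for idx, t in enumerate(transactions):
            if i in t:
                m |= 1 << idx            # row idx of the Boolean matrix
        masks[i] = m
    n = len(transactions)
    result = {}
    current = [((i,), masks[i]) for i in items
               if bin(masks[i]).count("1") / n >= min_support]
    while current:
        for itemset, m in current:
            result[itemset] = bin(m).count("1") / n
        nxt, seen = [], set()
        for (a, ma), (b, mb) in combinations(current, 2):
            if a[:-1] == b[:-1]:         # join step on a common prefix
                cand = a + (b[-1],)
                if cand in seen:
                    continue
                seen.add(cand)
                m = ma & mb              # intersection instead of a DB re-scan
                if bin(m).count("1") / n >= min_support:
                    nxt.append((cand, m))
        current = nxt
    return result

baskets = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = frequent_itemsets(baskets, min_support=0.6)
```

The bitwise AND replaces one full pass over the transaction database per candidate, which is where the claimed efficiency gain over the traditional algorithm comes from.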

11:35-12:35 Session TH3E: Organizational Factors and Safety Culture
Location: Amphi Jardin
PRESENTER: Marja Ylonen

ABSTRACT. The objective of this paper is to discuss learning from accidents and incidents in the high-risk industries. The theoretical framework consists of theories of learning, institutions, organisations and leadership. The paper is motivated by the principle of continuous improvement based on learning from accidents, incidents and near-misses in the high-risk industries. There is a growing tendency to learn from the less important incidents. However, incident reports often do not contribute to learning from an organizational viewpoint. The specific goal of the paper is to examine the patterns and assumptions regarding learning from accidents and incidents. The research questions are the following: 1) What kind of rationality guides the identification, handling of and learning from accidents and incidents? 2) What do the identification, handling and learning of accidents and incidents tell about the institutional strength-in-depth? 3) How could organizations learn more? The study consists of the following data and methodologies. First, fundamental features of learning from accidents and incidents are researched via cases, such as the Boeing 737 MAX aircraft accidents in 2018 and 2019, which together led to 346 fatalities, and the Deepwater Horizon accident in 2010 in the Gulf of Mexico, which led to 11 fatalities, and the related accident investigation reports. Second, 19 incident reports and interviews with 3 experts from a nuclear power company regarding incidents are analysed. Interviews and reports are studied based on qualitative content analysis. The main findings are discussed in terms of learning, organizations and leadership. Furthermore, suggestions for better learning are made.

Public Procurement of Critical Services - Reliability Issues in the Transition Between Service Providers
PRESENTER: Tone Slotsvik

ABSTRACT. In line with (neo)liberal forms of governing, many services critical for covering the population’s basic needs are procured through competitive tendering processes. The effects of public procurement on the reliability of such services and the organizations providing them have been discussed in several studies building on high reliability theories. However, one aspect that remains underexplored is how the transfer of service provision between two contractors influences output reliability and why this is so. In this paper, the case of the latest procurement of Norwegian fixed-wing ambulance services is examined to show how service output was affected. Inter-organizational challenges and the process of transferring sharp-end personnel (pilots) to the incoming operator are discussed as two factors influencing output reliability. In terms of the change of operators, coordination and cooperation became challenging, partly due to conflicting views of the procurement process and because of the contractual arrangement. The transfer of pilots resulted in a lack of employer–management trust, in turn affecting pilots’ completion of mandatory training programs. Overall, the study shows how critical service reliability can be affected when involved organizations and occupational groups pull in different directions. The transition phase represents a discrete and potentially vulnerable aspect of critical service tendering, as the splitting of service supply into contract periods leads to a temporal fragmentation of service supply.

Teaching of Safety Engineering in COVID Times
PRESENTER: Zdeněk Tůma

ABSTRACT. In this article, the authors describe the evaluation of safety-oriented teaching at a technical university during the COVID pandemic under the long-term closure of universities in the Czech Republic. Three practical case studies are presented to illustrate the current level of sophistication of virtual reality technology for industrial safety teaching. The case studies are focused on the environment of small and medium-sized enterprises (SMEs) and cooperation with them. The proposed affordable and simple approaches describe virtual teaching at universities providing education in the field of safety engineering.

11:35-12:35 Session TH3F: Critical Infrastructures
PRESENTER: Corinna Köpke

ABSTRACT. Due to the current pandemic situation, distance rules have been implemented in most countries to reduce contact and thus infection risk. The impact of these distance rules can be experienced in everyday life, but it also influences infrastructure processes. In this work, the quantified impact of distance rules on infrastructure performance is investigated. The example infrastructure considered here is an international airport, and the passenger behavior is represented using an Agent-Based Model (ABM) which has been developed in the EU-H2020 project SATIE (Security of Air Transport Infrastructure of Europe). Varying the distance rules in the ABM makes it possible to quantify the impact on the airport's performance during normal operation but also under specific cyber-physical threat scenarios, and to estimate the infrastructure resilience. Further, the simulation environment can be employed to analyze layout and process changes taking distance rules into account, and thus to optimize existing infrastructure management and future infrastructure design.
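The performance effect of a distance rule can be illustrated with a far simpler queueing sketch than the SATIE ABM; the arrival rate, service time and spacing gap below are arbitrary illustrative values:

```python
import random

def simulate_checkpoint(n_passengers, arrival_rate, service_time,
                        spacing_gap, seed=1):
    """Single-lane checkpoint: each passenger needs `service_time` seconds,
    plus a `spacing_gap` before the next one may step up (the distance rule).
    Returns the mean waiting time in the queue, in seconds."""
    rng = random.Random(seed)
    t_arrival = 0.0
    free_at = 0.0        # time the lane next becomes available
    total_wait = 0.0
    for _ in range(n_passengers):
        t_arrival += rng.expovariate(arrival_rate)  # Poisson arrivals
        start = max(t_arrival, free_at)
        total_wait += start - t_arrival
        free_at = start + service_time + spacing_gap
    return total_wait / n_passengers

# same arrival stream (same seed), with and without a 2 s spacing gap
no_rule = simulate_checkpoint(5000, arrival_rate=1 / 12.0,
                              service_time=8.0, spacing_gap=0.0)
with_rule = simulate_checkpoint(5000, arrival_rate=1 / 12.0,
                                service_time=8.0, spacing_gap=2.0)
```

Even a small spacing gap raises the effective utilization of the lane and hence the mean waiting time, which is the kind of performance degradation the ABM quantifies at full airport scale.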

ABM-Based Emergency Evacuation Simulation considering Dynamic Dependency in Infrastructures

ABSTRACT. A nuclear or radiological accident causes significant consequences to people and the environment and leads to a massive evacuation. Therefore, a radiological emergency preparedness and response plan is prepared to minimize the consequences. However, the assessment of the evacuation is based on various simplifying assumptions, and the effect of infrastructure systems is not considered properly. In the event of a major disaster, the infrastructures' availabilities are highly uncertain and have temporal and spatial dependencies on each other, that is, dynamic dependency. In this study, we propose a method to find an effective and efficient scenario for emergency response and evacuation. We present a method for simulating emergency response and evacuation using an agent-based model. Within the simulation, the dynamic dependency among infrastructures is modelled by constructing loading-dependent state transition probabilities. Moreover, we demonstrate how to find the major elements in an evacuation using importance measures.

11:35-12:35 Session TH3G: Mechanical and Structural Reliability
Location: Atrium 3
Analysis over detection of areas responsible for failure to crude oil transportation line at down stream

ABSTRACT. Safety and reliability are essential requirements for the smooth transportation of crude oil and are considered the main concern. The problems associated with the supply of the crude pipeline, and the losses at downstream caused by abrupt increases in pressure and temperature and by choking of the crude production line, affect the smooth and safe supply of crude oil to the process plant. The focus of this study is a root cause analysis of such abruptions witnessed at the pipeline, identifying the problems and their associated solutions. A damaged line costs time, money and crude product, losses which can be avoided by developing a reliable solution of adding bypass lines on the critical routes, identifying such points and areas for rectification to sustain a safe and smooth process. After the installation of PSVs and bypass lines to ensure the reliability of the system, such failures should no longer occur. This technique not only saves time and reduces losses but also maintains the flow rate of crude oil.

PRESENTER: Philipp Heß

ABSTRACT. Today's products, especially technically complex machines and systems, have to meet higher demands on safety and reliability than past generations of products. In addition, customers desire individually customized products that suit their personal requirements. One problem is the contradiction between cost-optimized mass production and the enabling of mass customization, since producing the same quantities of a certain individual component is much more complex than producing uniform products. Another major challenge is the design of custom products (e.g. more complex mechanisms and functionality). With less permissible installation space and weight, more and more functions must be realizable. Shape memory technology is one innovative possibility to replace conventional technologies and components such as actuators, drives and valves in order to reduce space, realize lightweight construction and achieve new creative solutions that would not be possible with conventional methods. Shape memory alloys (SMAs) have the ability to restore an initially imprinted shape after large deformations. The shape memory effect can be used in actuators to provide displacements and mechanical forces. Further advantages are high working capacity, noiseless operation and an integrated sensory function. For each specific application, however, it must be checked whether this technology is capable of fulfilling all needed functions while still meeting the requirements of quality, reliability and demanded service life. A major issue is the lack of standardized test programs (including test rigs as well as test plans). It is not yet possible to achieve reproducible test results when testing different actuators. In addition, very long test times are required. Uniform test stands, test procedures and accelerated testing are required to optimally design the testing program.
Additionally, the development of a prediction algorithm to determine the (remaining) lifetime during a fatigue test can lower the testing time even further. This paper describes the fundamentals of SMAs before outlining a case study of SMA wires that are tested with an endurance test rig using different loadings. Finally, the measurement series of the fatigue tests are analyzed using several (statistical) methods and techniques. The results, focusing on the detection of impending failures and possibilities to predict the (remaining) lifetime, are discussed in detail. These analyses of failure behavior and long-term reliability are the basis for the development of different SMA applications.

Predicting the Reliability of Bolted Structures Using Monte Carlo Simulation
PRESENTER: Mohammed Haiek

ABSTRACT. Bolted structures are commonly used in the automotive and aerospace industries due to their easy assembly and low cost. They involve different sources of uncertainty and non-linear characteristics. The aim of this communication is to model the predictive reliability of these bolted structures using Monte Carlo simulation. First, a stochastic finite element method (SFEM) analysis is carried out using the Abaqus software in order to evaluate the dynamic behavior while taking into account many stochastic variables (applied load, plate geometries, material…). Furthermore, we integrate these results with an effective method for reliability assessment based on Monte Carlo simulation. A correlation between these parameters and the structural failure probability (Ps) is carried out. We also estimate the model parameters by the Euler maximum-likelihood estimation method. Finally, the proposed model is applied to a real case, and results and conclusions are highlighted.
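A basic Monte Carlo reliability estimate of the kind this abstract builds on can be sketched in a few lines. The limit state, distributions and parameter values below are illustrative assumptions, not the SFEM outputs of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative stochastic inputs (assumed, not from the paper):
# applied load and joint capacity, both normally distributed.
load = rng.normal(300.0, 30.0, n)       # e.g. axial load on the joint
capacity = rng.normal(400.0, 20.0, n)   # e.g. bolt/joint strength

# Limit-state function: g < 0 means failure.
g = capacity - load
pf = np.mean(g < 0.0)                   # Monte Carlo failure probability
```

For these normal inputs the exact answer is Phi(-100/sqrt(30^2 + 20^2)) ≈ 0.0028, so the estimate can be checked analytically — a luxury the real SFEM model would not offer.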

11:35-12:35 Session TH3H: Safety and Reliability of Intelligent Transportation Systems
Location: Cointreau
Degradation Assessment of Railway Bearing Based on A Deep Transfer Learning
PRESENTER: Dingcheng Zhang

ABSTRACT. Railway axle bearings, which support the whole vehicle weight and transmit speed, are key components of the railway system. Degradation assessment of railway axle bearings is significant for ensuring the safety of train operation and for scheduling maintenance. Deep learning algorithms have been widely applied to machinery degradation assessment using run-to-failure datasets. However, it is hard to collect run-to-failure datasets of railway train bearings under real operating conditions. In this paper, a deep transfer learning algorithm is proposed to address this problem. In the proposed method, the deep convolution inner-ensemble learning (DCIEL) model is first trained using source-domain data and labels. The labelled data in the target domain are then fed into the DCIEL model. The trained model is used to obtain pseudo-labels for unlabelled data in the target domain. Finally, the model for health indicator construction is obtained by minimising the loss function. Experiments are conducted to test the proposed method, and the results verify its effectiveness.

Certification of Deep Reinforcement Learning with Multiple Outputs Using Abstract Interpretation and Safety Critical Systems
PRESENTER: Faouzi Adjed

ABSTRACT. The certification of machine learning models based on deep learning techniques has become a major research topic in the AI scientific community. In the current decade, several approaches have been explored to certify deep learning outputs by evaluating their robustness. Deep reinforcement learning, to our knowledge, is less studied due to the complexity of evaluating its outputs. Indeed, the output of a classification is binary, and it is obvious when the model misses the correct classification by giving non-expected results. However, for deep reinforcement learning, the decision of the agent could be modified without any safety-critical effect. Therefore, a new approach considering these variabilities of decision needs to be explored. In the current work, we propose a new approach combining robustness and safety requirements. The approach is applied to autonomous driving implemented in open-source environments using the PPO algorithm [1] for deep reinforcement learning. The implementation uses abstract interpretation theory for robustness [2], whereas the safety requirements are based on SOTIF norms [3, 4]. The approach shows promising results and an important contribution of safety-critical systems to AI certification.

References: [1] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. [2] Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., & Vechev, M. (2018). AI2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 3–18. IEEE. [3] Jenn, E., Albore, A., Mamalet, F., Flandin, G., Gabreau, C., Delseny, H., ... & Pagetti, C. (2020, January). Identifying challenges to the certification of machine learning for safety critical systems. In Proceedings of the 10th European Congress on Embedded Real Time Systems (ERTS), Toulouse, France, pp. 29–31. [4] Schwalb, E. Analysis of Safety of the Intended Use (SOTIF).

Functional safety of railway signaling systems: performance requirements and evaluation methods

ABSTRACT. Signaling is fundamental to the safe operation of the railway, ensuring that trains are spaced safely apart and that conflicting movements are avoided. Railway signals are 'traffic light' devices which tell a train driver whether it is safe to proceed along the track. A railway signaling system consists of several complex subsystems, e.g. trackside and onboard signaling systems, which cooperate to ensure the safe operation of railway traffic. The failure of the signaling system weakens both the capacity and the safety of the railway. It is therefore important to keep the railway signaling system compliant with the defined performance requirements. This study starts with a summary of railway RAMS, focusing on railway signaling systems. The tolerable hazard rate (THR), which is an indicator of signaling system performance in EN 50129 (2018), is compared with the similar indicator PFH (probability of failure per hour) for safety-related systems in IEC 61508 (2010). Based on the methods commonly used for safety-related systems in IEC 61508 (2010), several reliability modeling and analysis methods are listed and reviewed for this specific system. This paper aims to provide clues for engineers and analysts in the performance evaluation of railway signaling systems.
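The PFH bands referred to above can be expressed as a small lookup. The bands follow the high-demand/continuous-mode table of IEC 61508-1; the helper function itself is ours:

```python
def pfh_to_sil(pfh):
    """Map an average frequency of dangerous failure per hour (PFH)
    to a Safety Integrity Level, following the high-demand/continuous
    mode bands of IEC 61508-1 (SIL 4: < 1e-8, SIL 3: < 1e-7,
    SIL 2: < 1e-6, SIL 1: < 1e-5).  Values below 1e-9 are treated
    as SIL 4 here.  Returns None if the PFH exceeds every band."""
    bands = [(1e-8, 4), (1e-7, 3), (1e-6, 2), (1e-5, 1)]
    for upper, sil in bands:
        if pfh < upper:
            return sil
    return None
```

For example, a subsystem with an estimated PFH of 5e-8 per hour would fall into the SIL 3 band; a THR requirement would be compared against the same kind of threshold.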

11:35-12:35 Session TH3I: Cyber Physical Systems
Location: Giffard
PRESENTER: Jan Prochazka

ABSTRACT. The necessity to protect critical infrastructure as a cyber-physical system (CPS) is growing with the development of communication and control technologies. One elementary approach to protection is to enclose critical elements in a protected area with secure access. This principle is used in both spaces, the physical and the cyber. Access to these protected areas is then through gateways. Gateways shall be able to identify and authenticate persons or processes with authorized access and to prevent the access of unauthorized ones. A specific problem of transport infrastructures such as railways is the presence of many moving elements (for example, trains). The security of moving elements within the CPS must therefore be ensured against both physical and cyber intrusion. In this article we deal with the cyber gateway of the train, which is called the mobile communication gateway (MCG). The MCG is associated with the problems of a standard cyber gateway as well as problems specific to moving systems. It is impossible to secure communication between the train and the control centre through a closed communication system only; because of the extensive infrastructure, it must take place through open space with the assistance of a ground communication gateway (GCG). The MCG design shall ensure the security functions of the gateway as well as sufficient communication capacity. Our control over the environmental conditions of the MCG is limited because it is in open space, both physical and cyber, and often in motion. The MCG therefore needs to be able to respond dynamically to environmental changes caused by deliberate attacks or unintentional changes in the system. This adaptability must be given to the MCG by design. The MCG example uses system segmentation based on Multiple Independent Levels of Security (MILS). The MILS approach enables a response to problems at the MCG through redundancy and redistribution of resources, or changes in software processes.

Resilience Assessment Framework for Cyber-Physical Systems

ABSTRACT. Automation and digitization trends have caused cyber-physical systems (CPSs) to be deployed at an extremely fast rate. Owing to their complexity and to the heterogeneity of their components, CPSs face significant challenges to their security and privacy protection. In recent years, several standards and guidelines have been published to ensure the cyber-security of these systems. However, this recent stream of literature adopts a risk-based approach and assumes that attacks are identifiable and quantifiable. Moreover, it often focuses on security issues related to the cyber system, without consideration of the direct effects of cyber attacks on the dependent physical system. Finally, existing cyber-security standards and guidelines are mostly qualitative, and a common framework for their assessment is still lacking. In order to fill these gaps, we propose a resilience assessment framework for CPSs. Specifically, the framework addresses the problem of cyber-security from a resilience perspective, in which cyber threats might be unknown and unforeseeable. In this context, the framework provides quantitative methods for the generation and assessment of both known and unknown threats, e.g., attack paths, and integrates the analysis of the recovery phase following a disruption. Moreover, the framework is constructed around the three subsystems constituting a CPS, namely the physical, control, and cyber subsystems. The physical subsystem is controlled by the control subsystem, which processes physical system state data and returns action commands by receiving data and transmitting action commands through the cyber subsystem. The framework proposes a standardized workflow to assess the resilience of CPSs before and after the occurrence of a disruption (Fig. 1). 
Accordingly, established methods are deployed for (1) modeling the CPS, (2) identifying disruption scenarios that may impact CPS performance, and (3) assessing the benefits of resilience strategies to prevent and react to disruption scenarios. The proposed framework is demonstrated with reference to a power substation and associated communication network and is a first step towards quantifying the ‘value of resilience’ in CPSs.

Using Decision Trees to Select Effective Response Strategies in Industrial Control Systems

ABSTRACT. Critical Infrastructures (CIs) are essential to ensure the smooth functioning of contemporary society. CIs are increasingly dependent on Industrial Control Systems (ICS). These systems are becoming more connected to the internet, either directly or through corporate networks. Therefore, abnormal behaviour in CIs operated by ICS could be caused by attacks in addition to technical failures. There is a need to respond effectively to such problems observed by operators in infrastructures operated by control systems, in order to recover the system from adversaries in a timely manner and to limit negative consequences.

In our previous work, we developed the attack-failure distinguisher framework, which helps to construct BN models for distinguishing attacks from technical failures [1, 2]. Furthermore, we also developed the root-cause analysis framework, which helps to develop BN models for determining the most likely root cause [3]. However, decision support that enables operators to choose effective response strategies based on the inputs from the above-mentioned BN models is missing; providing it is the aim of the present study.

Decision Trees have the capability to tackle this challenge, especially given existing applications in domains like medicine. In this study, the structure of a decision tree is used to visualise the effective response strategies. Once the BN model developed using the attack-failure distinguisher framework determines whether the problem is caused by an attack or a technical failure, this is used as an input to the BN model developed using the root-cause analysis framework. Based on the input from the latter BN model regarding the specific attack vector or failure mode, the decision tree visualisation can support operators in choosing effective response strategies. This can also help to consider safety and security interdependencies, including mutual reinforcement and antagonism. Finally, we demonstrate the proposed approach using an example from the energy domain.
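The selection step can be sketched as a simple lookup structure; the cause classes, attack vectors, failure modes and response strategies below are entirely hypothetical, invented to illustrate the two-stage flow described in the abstract:

```python
# Hypothetical response-strategy tree: first level is the verdict of the
# attack-failure distinguisher, second level the root cause.
RESPONSE_TREE = {
    "attack": {
        "phishing": "isolate operator workstation; reset credentials",
        "malware_on_plc": "switch PLC to manual control; restore firmware",
    },
    "technical_failure": {
        "sensor_drift": "recalibrate sensor; cross-check redundant channel",
        "actuator_jam": "dispatch maintenance; engage backup actuator",
    },
}

def select_response(cause, root_cause):
    """Given the attack/failure verdict of the first BN model and the
    root cause from the second, walk the tree to a response strategy;
    fall back to escalation when the case is not covered."""
    return RESPONSE_TREE.get(cause, {}).get(root_cause,
                                            "escalate to expert review")
```

In practice each leaf would encode a vetted procedure, and the tree would also record where safety and security responses reinforce or contradict each other.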

11:35-12:35 Session TH3J: Renewable Energy Industry
Location: Botanique
PRESENTER: Olivian Savin

ABSTRACT. Due to the deregulation of the energy market and the integration of renewable energies, hydropower plant operators are faced with an increasing number of starts, stops and load changes that can reduce the life of equipment. This paper proposes to assess the influence of start and stop cycles on the ageing of stators in hydroelectric generators. In a first step, different modes of generator degradation are identified. The component most affected by start and stop cycles is the stator insulation. Insulation defects are considered the first cause of end-of-life. An estimate of stator lifetime is made from the winding replacement dates recorded on a large number of units subjected to a variable number of start and stop cycles per day, using a Weibull reliability law. The results show that there is a significant difference in lifetime between installations subject to a high number of start and stop cycles and installations subject to a lower number. As the degradation of the generators' insulation is mainly due to thermal stress, a model using the Coffin-Manson law is used to determine the acceleration factor that predicts the reduction of the stator's lifetime due to thermal cycling. Temperature measurements recorded every 10 minutes for one year on a pumped-storage hydropower plant are used to quantify the effect of start and stop cycles on the generator temperature. The results show that the value of the acceleration factor is greater than one and increases with the number of cycles per day, which means that the life of the generator stator decreases as the number of starts and stops per day increases. The acceleration factor is higher when the power plant is operated in pump mode than in turbine mode.
The study is completed by an analysis using different simulated temperature cycles, which allows us to study the sensitivity of the estimated acceleration factor to the parameters of the Coffin Manson model and to evaluate the contributions of the main factors (mean temperature, temperature amplitude and cycle frequency) on the acceleration factor.
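The Coffin-Manson reasoning above reduces to a one-line acceleration factor. The functional form is the standard Coffin-Manson relation for thermal cycling; the exponent value is a placeholder, not the one fitted in the study:

```python
def coffin_manson_af(delta_t_ref, delta_t_op, exponent=2.0):
    """Coffin-Manson acceleration factor between two thermal-cycling
    regimes: cycles-to-failure N_f ~ (dT)^(-m), hence
    AF = (dT_op / dT_ref)^m.  The exponent m is material-dependent;
    2.0 is only a placeholder value."""
    return (delta_t_op / delta_t_ref) ** exponent

# A stator cycled with twice the reference temperature swing ages
# four times faster under the assumed exponent m = 2.
af = coffin_manson_af(20.0, 40.0)
```

This is why AF stays above one and grows with the number (and amplitude) of daily start/stop cycles: each extra cycle adds a temperature swing larger than steady operation would produce.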

A review on mathematical algorithms for predictive maintenance techniques and anomaly detection in PV systems
PRESENTER: Khaled Osmani

ABSTRACT. The applicability of photovoltaic (PV) systems as an efficient renewable energy supply over an average lifespan of twenty-five years is set back when they are confronted with fault events. Like all outdoor systems, PVs are often subject to various types of faults and undesired working conditions. At any instant, PV modules can experience different electrical faults. On the other hand, harsh external conditions can also affect proper PV system functioning. Efficiency reduction and output deficits are the most common results of a PV system's interaction with faulty events, reflected as improper behaviour of the system. From this arises the need for a diagnostic and prediction system, which estimates the possibility of a future, potentially only partially observable fault among a large set of possible failure modes before it happens, regardless of the raw materials of the PV cells. In the interest of sustainable and reliable PV system design, this paper explores different mathematical models of fault prediction techniques. Building on artificial intelligence and algorithm-based decision making, various predictive algorithms are surveyed and compared with reference to their event-risk accuracy. For instance, Markov chain based probabilistic models compute failure rates, convolutional neural networks indicate a malfunctioning panel, and supervised machine learning based automated barcode detection algorithms detect PV module defects. The critical assessment of the different models serves as an informative background when choosing a PV fault prediction technique.
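As a minimal example of the Markov-chain class of models surveyed, a two-state working/failed chain yields the long-run availability of a PV component; the transition rates below are invented for illustration:

```python
import numpy as np

# Two-state availability model: state 0 = working, state 1 = failed.
# Rates are per day and purely illustrative.
lam, mu = 0.01, 0.10            # failure rate, repair rate
P = np.array([[1 - lam, lam],
              [mu, 1 - mu]])    # one-day transition matrix

# The stationary distribution pi solves pi = pi P with sum(pi) = 1,
# i.e. it is the eigenvector of P^T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
availability = pi[0]            # long-run fraction of time working
```

The closed form for this chain is mu / (lam + mu) ≈ 0.909, which the eigenvector computation reproduces; richer multi-state chains of the same kind underlie the failure-rate models mentioned in the review.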

A reliability and durability study of Alkaline and Polymer Electrolyte Membrane electrolyzers

ABSTRACT. GRTgaz is the French owner and operator of the longest high-pressure natural gas transmission network in Europe. GRTgaz has its own research center, the Research and Innovation Center for Energy (RICE). For several years, RICE has been working to transform gas infrastructures and to accelerate the integration of renewable gases and hydrogen into the gas networks. The current energy infrastructure must therefore evolve towards new competitive large-scale storage solutions. One solution currently studied by GRTgaz is chemical storage in hydrogen. In this context, the Jupiter 1000 project was launched, with the contribution of engineering, technical and research teams from GRTgaz and its industrial partners. The main goal is to conduct an industrial-scale experiment on a Power-to-Gas pilot installation with injection into the natural gas transmission network. Jupiter 1000 is the first industrial demonstrator of Power-to-Gas with a power rating of 1 MWe for electrolysis and a methanation process with carbon capture. Green hydrogen will be produced from 100% renewable energy using two different electrolyzer technologies: an alkaline electrolyzer and a Polymer Electrolyte Membrane (PEM) electrolyzer. This will allow RICE's research team to compare the two technologies from economic, performance, and safety points of view. The aim of this paper is to provide an overview of the current development status of the reliability study conducted on both types of electrolyzer of Jupiter 1000. The project will be conducted in two phases: - The first will focus on understanding the reliability of both technologies, meaning their ability to perform as required, without failure, for a given time interval. This part breaks down the mechanisms of both technologies into simple elementary functions and/or components, which then allows a Failure Modes and Effects Analysis (FMEA) to be conducted.
The FMEA evaluates the severity and probability of occurrence of failures that may occur on these electrolyzers, in order to prioritize the most critical ones. It also documents current knowledge and actions about the risks of failure, and how to use them to improve the system. - The second phase will focus on durability, i.e. the ability to perform as required, under given conditions of use and maintenance, until the end of service life. It will aim to find the degradation phenomena that affect the durability and safety of alkaline and PEM electrolyzers. Based on this, it will be possible to identify the critical elements in terms of reliability, safety and durability and to examine them thoroughly in order to advance the understanding of phenomena that can reduce the service life or degrade the safety of alkaline and PEM electrolyzers, and thereby provide recommendations in terms of safety-related functions, monitoring functions, maintenance policy and operational conditions for each electrolyzer technology.

11:35-12:35 Session TH3K: Reliability and Availability Issues of the 5G Revolution
Location: Atrium 1

ABSTRACT. Successful operation of a mobile network means preserving the active call when a customer moves from one cell to a neighboring one. Switching from one base station (antenna) to the next necessitates a procedure called handover or handoff, which involves resources that could have been used for establishing calls for new clients. The knowledge of the handoff rate --- the number of crossings of the boundaries between cells per unit of time --- is therefore important for the assessment of the probability of cut or rejected calls (bad news for a telecommunications company) and/or for the adequate provisioning of network resources. It becomes even more important in 5G communications\cite{Siddiqi2019,Slalmi2020}, as handover failures hamper the end-to-end Quality of Service (QoS).

The number of handovers depends on the mobility pattern of customers. The Random WayPoint (RWP) model provides a commonly used and reasonable description of realistic behavior, and lies at the heart of a general framework proposed by Hytti{\"a} and Virtamo\cite{Hyytia2007}. In previous works\cite{Tanguy2015,Tanguy2016}, we were able to compute the probability of handovers for circular and triangular RWP domains, with polygonal or circular cells. We demonstrated a quasi-linear dependence of the handover rate on the cell perimeter, the linear dependence being exact for isotropic domain or cell.

We address here the case of an anisotropic RWP domain, namely an ellipse, in order to quantify the influence of the anisotropy on the handoff rate, i.e., the error made when using the expression proportional to the cell perimeter. Analytical expressions have been found, which could be useful for other practitioners in the reliability of mobile communications.
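The boundary-crossing mechanism behind the handoff rate can be checked with a small Monte Carlo sketch (our own simplification, not the authors' analytical treatment): an RWP leg is a segment between two uniform waypoints, and a handover is counted when the endpoints fall on opposite sides of a cell boundary. Legs that cross the cell twice with both endpoints outside are ignored here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def sample_disk(n, rng):
    """Uniform points in the unit disk (inverse-CDF in the radius)."""
    r = np.sqrt(rng.random(n))
    th = 2 * np.pi * rng.random(n)
    return r * np.cos(th), r * np.sin(th)

# One RWP leg = a segment between two uniform waypoints in the domain.
x0, y0 = sample_disk(n, rng)
x1, y1 = sample_disk(n, rng)

# Concentric circular cell of radius 0.5: a leg whose endpoints lie on
# opposite sides of the cell boundary produces a handover.
cell_r = 0.5
inside0 = x0**2 + y0**2 < cell_r**2
inside1 = x1**2 + y1**2 < cell_r**2
handover_frac = np.mean(inside0 != inside1)
```

For this isotropic disk domain the endpoints are independent, so the crossing fraction is 2p(1-p) with p the cell's area fraction (here 2·0.25·0.75 = 0.375); the anisotropic elliptical domain treated in the paper is precisely where such simple proportionality breaks down.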


ABSTRACT. Along with enhanced Mobile Broadband (eMBB) and massive Machine Type Communication (mMTC), Ultra-Reliable Low-Latency Communication (URLLC) will bring new types of 5G services that may profoundly affect current industries and develop new ones. URLLC applications require end-to-end data transmissions that are very reliable (with error rates lower than $10^{-5}$) while exhibiting a small latency (from one millisecond for automated industry to ten milliseconds for virtual reality).

Assessing a system's reliability or availability due to equipment failure may become a challenging task, when its architecture cannot be described by a simple series-parallel, underlying graph\cite{Beichelt2012,Trivedi2017}. Likewise, calculating the probability that the latency of the service remains below an acceptable upper bound is no easy task\cite{Westmijze2014,Ma2019}, even when limited to a radio link, all the more so because of the usual heavy tail of the latency distribution. The probability of a successful operation is expected to decrease because of the introduction of latency constraints.

A proper description of the Quality of Service (QoS) of URLLC should take the two facets of the problem, namely reliability and latency, simultaneously into account. In the present study, we propose such a general framework in order to calculate the probability of successful end-to-end data transmission, where reliability and latency are assigned to {\em each} link and node of the system. We then apply our model to a few simple architectures that may be implemented in industrial applications. We also address the delicate issue of numerically estimating the Quality of Service.
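The joint reliability-latency view can be sketched with a toy series model (our own illustration, not the framework of the paper): each link must be up, and the summed latency must stay within the end-to-end budget. All numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Three links in series; each has an up-probability and a random latency.
p_up = np.array([0.99, 0.99, 0.99])
mean_latency_ms = np.array([1.0, 1.0, 1.0])
budget_ms = 5.0                                 # end-to-end latency bound

up = rng.random((n, 3)) < p_up                  # per-link availability
lat = rng.exponential(mean_latency_ms, (n, 3))  # per-link latency draws

# A transmission succeeds iff every link is up AND the total latency
# stays within the end-to-end budget.
success = up.all(axis=1) & (lat.sum(axis=1) <= budget_ms)
qos = success.mean()
```

Note how the latency constraint pulls the success probability (≈0.85 here) below the pure availability product 0.99³ ≈ 0.97 — the effect the abstract anticipates when latency requirements are added to reliability.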

A Fast Method to Compute the Reliability of a Connected (r,s)-out-of-(m,n):F Lattice System

ABSTRACT. A rectangular matrix of mn binary components roughly mirrors the arrangement of base stations in a 5G mobile network. As follows from geometrical considerations, if all four stations in any 2-by-2 submatrix fail, then an area with no signal coverage occurs within the network range. Therefore, a connected (2,2)-out-of-(m,n):F lattice system can serve as a fairly adequate reliability model for a grid of base stations. The latter system is a special case of a connected (r,s)-out-of-(m,n) lattice system, defined, inter alia, in Zhao et al. (2011). The system fails when the m-by-n matrix of binary components contains an r-by-s (or larger) submatrix of failed ones; its reliability is the probability that no such submatrix occurs. Computing the exact value of this probability is a numerically complex task, as can be observed in Nashwan (2018) and Zhao et al. (2011). This task can be accomplished with less effort only in some special cases, as shown in Nakamura et al. (2018). It is therefore useful, in the context of mobile networks, to have an efficient near-exact method for computing the reliability of a connected (2,2)-out-of-(m,n):F lattice system. In this paper a fast algorithm is presented that calculates the reliability of such a system with good accuracy. The proposed method employs a recursive procedure based on Markov chain analysis. The obtained results are particularly useful for estimating the service availability of a 5G mobile network, where the required value of this parameter is close to one. Thus, the network designer and/or operator should be able to estimate it with the high accuracy provided by the presented algorithm.
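For small grids, the reliability of a connected (2,2)-out-of-(m,n):F system can be computed by brute-force enumeration, which is handy as a ground truth when validating a fast approximate algorithm like the one proposed. This enumeration is our own illustration, not the paper's recursive Markov-chain method:

```python
import itertools
import numpy as np

def lattice_reliability(m, n, q):
    """Exact reliability of a connected (2,2)-out-of-(m,n):F system by
    enumeration: the system fails iff some 2x2 block of adjacent
    components has all four failed (q = component failure probability).
    Only feasible for small m, n (2^(m*n) states)."""
    rel = 0.0
    for states in itertools.product([0, 1], repeat=m * n):  # 1 = failed
        a = np.array(states).reshape(m, n)
        failed_2x2 = any(
            a[i:i + 2, j:j + 2].sum() == 4
            for i in range(m - 1) for j in range(n - 1)
        )
        if not failed_2x2:
            k = a.sum()
            rel += (q ** k) * ((1 - q) ** (m * n - k))
    return rel

r_example = lattice_reliability(3, 3, 0.3)
```

The exponential state count (2^(mn)) is exactly why the near-exact recursive method of the paper is needed for realistic grid sizes.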

12:35-14:00 Lunch Break
14:00-15:20 Session TH4A: Risk Assessment
Location: Auditorium
Resilience of the European Natural Gas Network to Hybrid Threats
PRESENTER: Peter Burgherr

ABSTRACT. There is no commonly used definition of the term hybrid threat, but diverse characteristics and traits are often considered. These include: (1) a combination of coercive and subversive activity, (2) conventional and unconventional methods, (3) state or non-state actors, and (4) activities that remain below the threshold of detection and attribution. In general, hybrid threats affect multiple domains, create ambiguity, and are likely to exploit the vulnerabilities caused by systemic risks. In recent years, the exposure of energy assets, such as the natural gas transmission system, to hybrid threats has significantly increased [1]. The conceptual framework proposed by the European Commission's Joint Research Centre and Hybrid CoE [2] comprises 13 domains, including infrastructure, cyber, economy, society, public administration, political and information, among others. The current study aims to develop a composite index that measures, at the country level, the resilience of the European natural gas network against hybrid threats. For this purpose, a comprehensive set of indicators is established and quantified. Specific network indicators are derived from a complex network analysis [3], whereas the domain-specific indicators are based on data from reliable international organizations, such as the World Bank. In the preference elicitation phase, selected experts in the field of hybrid threats articulate their viewpoints on the indicators with the aid of a tailor-made procedure based on the Simos Multi-Criteria Decision Analysis (MCDA) method [4]. Finally, the constructed decision model is applied in conjunction with an MCDA aggregation framework to calculate a composite hybrid threat resilience (HTR) index, measuring the performance of the individual countries and ultimately providing insights and recommendations to support policy makers.
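The final aggregation step — normalised indicators combined with elicited weights into a composite index — can be sketched as follows. The indicator values and weights are invented; in the study, the weights would come from the Simos elicitation procedure:

```python
import numpy as np

# Hypothetical indicator matrix: rows = countries, columns = indicators
# (e.g. network redundancy, cyber maturity, governance quality).
scores = np.array([[0.8, 0.6, 0.9],
                   [0.5, 0.7, 0.4],
                   [0.9, 0.8, 0.7]])
weights = np.array([0.5, 0.3, 0.2])   # would come from Simos elicitation

# Min-max normalise each indicator, then aggregate with a weighted sum.
lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)
htr_index = norm @ weights            # composite HTR index per country
ranking = np.argsort(-htr_index)      # best-performing country first
```

A weighted sum is only the simplest MCDA aggregation; the study's framework could equally plug in an outranking or non-compensatory rule at this step.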

A multistate Bayesian Network integrating MISOF and probit modelling for risk assessment of oil and gas plants

ABSTRACT. In this work, we integrate in the multistate Bayesian Network (BN) modelling approach developed in (Di Maio et al., 2020a) i) the Modelling of Ignition Sources on Offshore oil and gas Facilities (MISOF) for characterizing the mitigative safety barriers and ii) a probit modelling for ultimately evaluating the severity of the accident scenarios (namely, Flash Fire (FF), Jet Fire (JF), Pool Fire (PF), Explosion (EX) or Toxic Dispersion (TX)) and properly assessing the probability of fatality following an accident by considering the actual effects of the mitigative safety barriers in place. The proposed approach is applied to a case study concerning a Loss of Primary Containment (LOPC) accident in the slug catcher of a representative onshore Oil & Gas (O&G) plant.

Risk Assessment of Fires in Residential Buildings - A Case Study in Norway
PRESENTER: Bahareh Tajiani

ABSTRACT. Every year, fatal and non-fatal residential fires pose a real threat in many countries, such as the U.S., Norway, Denmark, and Sweden. These fires cause a large number of fatalities and injuries and huge property damage, depending on the fire detection time, the building design, the response time of the occupants, etc. Thus, risk assessment of residential fires is of great importance for raising home fire safety to an acceptable level for everyone. Statistical analyses of Norwegian residential fire data have mostly covered the period from 1990 to 2014, while there has not been much research on the last five years. This paper analyses real data on dwelling fires in Norway from 2015 to 2020 in order to develop a fire risk assessment. For this purpose, two main fire scenario clusters were adopted, covering both measures to prevent fire from occurring and measures to control fire growth and smoke spread. In the fire extinction scenario, a basic residential sprinkler system was designed and investigated in detail to calculate its probability of failure on demand and its reliability at different time intervals. Furthermore, some additional measures were introduced to increase the building's fire safety grading and to evaluate how they can affect the fatality and injury rates.
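The two sprinkler figures mentioned — probability of failure on demand and reliability over time — follow from two standard formulas; the failure rate and test interval below are illustrative assumptions, not the Norwegian data:

```python
import math

def pfd_avg(lam, tau):
    """Average probability of failure on demand for a periodically
    proof-tested component (rare-event approximation): lam * tau / 2,
    with lam the dangerous failure rate and tau the test interval."""
    return lam * tau / 2.0

def reliability(lam, t):
    """Survival probability at time t under a constant failure rate."""
    return math.exp(-lam * t)

# Illustrative numbers: a sprinkler valve with a dangerous-failure rate
# of 1e-5 per hour, proof-tested once a year (8760 h).
lam = 1e-5
pfd = pfd_avg(lam, 8760.0)              # ~0.044 average unavailability
r_one_year = reliability(lam, 8760.0)   # ~0.92 survival over one year
```

Evaluating `reliability` at several intermediate times gives exactly the "reliability at different time intervals" curve the abstract refers to.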

14:00-15:20 Session TH4B: Mathematical Methods in Reliability and Safety
Location: Atrium 2
An Analytical Variance Estimator for Separable Importance Sampling with Applications to Structural Reliability
PRESENTER: Gabriele Capasso

ABSTRACT. In recent years, reliability-based designs have gained growing interest in the scientific community. The effects of uncertainties in input variables are often accounted for in terms of a failure probability. In structural design applications, this is understood as the probability of exceeding the limit capacity of a structure. To estimate the failure probability, several Monte Carlo based methods are widely applied [1]. In the Classical (or Crude) Monte Carlo procedure (CMC), samples are drawn from the input distributions, making the number of required simulations inversely proportional to the target probability (for a fixed coefficient of variation). More advanced techniques thus seek to evaluate this probability with fewer samples. Importance Sampling (IS), for example, only considers samples generated from an auxiliary distribution and has proved quite useful for estimating small probabilities with a reduced number of simulations.

In a number of applications, the stress and strength of a structure - or more generally its response and capacity - are actually independent. This was exploited in [2], where classical Monte Carlo was applied to sample stress and strength separately - hence the name Separable Monte Carlo (SMC) - leading to variance reduction and consequently a decrease in the number of required analyses. In practice, every stress sample is compared to all strength samples, greatly increasing the number of combinations generated from few simulations. This technique was also combined with Importance Sampling [3] (Importance SMC, or ImpSMC). However, no analytical variance estimate was provided therein, making the gains in terms of the number of required simulations difficult to evaluate.

In the present work we build an analytical variance estimator intended to demonstrate the power of the ImpSMC procedure. The analytical estimator itself is used to stop the simulations when the required coefficient of variation is reached.

Applications of ImpSMC to two academic examples are presented in this work. Unbalanced sample sets are allowed, opening the possibility of further reducing the number of the more expensive simulations (often arising from stress sample generation). In the applications considered herein, the number of required runs is reduced by a factor of 5 with respect to IS, 6.5 with respect to SMC, and as much as 320 compared to CMC. The gains with respect to the CMC and SMC approaches increase as failure targets decrease.
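The separable sampling idea, comparing every stress sample with every strength sample, can be sketched as follows (the distributions are illustrative, not those of the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed independent stress and strength distributions (illustrative)
n_stress, n_strength = 200, 200
stress   = rng.normal(loc=300.0, scale=40.0, size=n_stress)     # MPa
strength = rng.normal(loc=500.0, scale=50.0, size=n_strength)   # MPa

# Crude MC would pair samples one-to-one (200 comparisons); Separable MC
# compares every stress sample with every strength sample, yielding
# 200 * 200 = 40000 comparisons from the same simulation budget.
failures = stress[:, None] > strength[None, :]   # broadcasted comparison
p_f_smc = failures.mean()
print(f"SMC failure probability estimate: {p_f_smc:.4e}")
```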

References
[1] J. Hammersley, Monte Carlo Methods. Springer Science & Business Media, 2013.
[2] B. P. Smarslok, R. T. Haftka, L. Carraro, and D. Ginsbourger, “Improving accuracy of failure probability estimates with separable Monte Carlo,” International Journal of Reliability and Safety, vol. 4, no. 4, pp. 393–414, 2010.
[3] A. Chaudhuri and R. T. Haftka, “Separable Monte Carlo combined with importance sampling for variance reduction,” International Journal of Reliability and Safety, vol. 7, no. 3, pp. 201–215, 2013.

PRESENTER: María L. Jalón

ABSTRACT. The deterioration of Cultural Heritage assets due to climate change and natural hazards is a pressing issue in many countries. The assessment of their actual structural integrity based on higher-scale structural responses is therefore key to assessing the resilience of these important assets. This paper proposes a rational methodology to integrate modal vibration data into structural FE models using probabilistic tools. The methodology is grounded in Bayesian probabilistic principles, thus allowing uncertainty quantification in the assessment. A real case study of a sixteenth-century heritage building in Granada (Spain) is presented. The results show the efficiency of the proposed methodology in identifying the probability density functions of basic material parameters, such as the bulk modulus of the building stones and the modulus of soil reaction, among others.

A New Model of the Network Design Problem with Relays for Maritime Rescuing with Uncertainties

ABSTRACT. In this paper, we present a mixed-integer linear programming (MILP) model for a new variant of the network design problem with relays (NDPR) and introduce its application to maritime rescue, where aircraft and vessels are routed as commodities on the network. With the fast development of the marine economy all over the world, security assurance and disaster relief in open-sea areas are becoming more important and challenging tasks. Given that uncertain events may occur in specified open-sea areas with estimated probabilities, our proposed model is able to determine, in advance, optimal rescue routes and optimal relay station locations for a fleet of heterogeneous rescue aircraft and vessels. We consider multiple practical factors of today's maritime rescue problem, such as the uncertainty of events, concurrent missions, heterogeneous types of rescue equipment, and the return trip after rescue. Economic efficiency is also included in the model as an important evaluation index of the rescue operation plans. Computational experiments on a randomly generated data set were carried out to simulate various types of random multi-mission scenarios with uncertainties and verified the validity of the proposed model. These experiments demonstrate that the model can obtain practical maritime rescue solutions with a lower total cost.


ABSTRACT. A natural way to assess the reliability of a complex industrial system is to carry out numerical simulations that reproduce the behavior of the system. The PyCATSHOO tool developed by Electricité De France (EDF R&D) allows the modeling of such systems through the framework of piecewise deterministic Markov processes (PDMP). These processes have a discrete stochastic behavior (failures, reconfigurations, control mechanisms, repairs, etc.) in interaction with continuous deterministic physical phenomena.

It is well known that for sufficiently rare events, crude Monte Carlo methods require a very large number of simulations to accurately estimate their probability of occurrence. We propose an adaptive importance sampling strategy based on a cross-entropy method to reduce the cost of estimating the probability of system failure. The success of this method depends crucially on the family of instrumental laws used to approximate the optimal law. We construct this family according to the PDMP structure of the system, in particular according to the configuration of its minimal failure groups. Finally, we propose different sensitivity analysis techniques to reduce the dimension of the problem and to determine the respective contributions of different component failure modes to the probability of system mission loss.

We present an application of this strategy on a test case from the nuclear industry: the spent fuel pool.
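The adaptive cross-entropy idea, shifting an instrumental density toward the failure region level by level and then reweighting with the likelihood ratio, can be sketched on a toy one-dimensional rare event. A Gaussian instrumental family with fixed variance is an illustrative simplification here; the paper constructs its family from the PDMP structure instead.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
gamma = 4.0            # failure threshold for the toy event {X >= gamma}
n, rho = 2000, 0.1     # samples per level, elite fraction

# Adaptive CE: shift the mean of a Gaussian instrumental density toward
# the failure region (variance kept fixed for stability).
mu = 0.0
for _ in range(20):
    x = rng.normal(mu, 1.0, n)
    level = np.quantile(x, 1.0 - rho)
    if level >= gamma:
        break
    mu = x[x >= level].mean()          # CE update of the mean

# Final importance sampling estimate with likelihood ratio f(x)/q(x)
x = rng.normal(mu, 1.0, n)
w = norm.pdf(x) / norm.pdf(x, loc=mu)
p_hat = np.mean((x >= gamma) * w)
print(f"CE-IS estimate: {p_hat:.2e}   exact: {norm.sf(gamma):.2e}")
```

For this toy event the exact tail probability is about 3.2e-5, so a crude Monte Carlo run of the same size would typically see no failures at all.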

14:00-15:20 Session TH4C: Maintenance Modeling and Applications
Selective Maintenance Optimization for Multi-State-Unit System with Multiple Repair Channels
PRESENTER: Mingang Yin

ABSTRACT. In many real industrial maintenance applications, maintenance activities can be performed simultaneously by multiple repair channels. In this paper, a new selective maintenance model for multi-state systems with multiple repair channels is developed. First, based on a homogeneous continuous-time Markov chain (HCTMC), multiple repair channels and the stochastic durations of breaks and maintenance actions are integrated into a multi-state system (MSS), and the system state distribution for the next mission is derived. Next, considering the multi-state nature of the units, the transient performance rate and the expected cumulative performance rate of the system are calculated. The computational burden of the resulting complex expressions is alleviated by Gauss-Laguerre quadrature. Then, based on the expected cumulative performance rate over a limited time, the concept of system mission reliability is redefined. Finally, the established selective maintenance problem is systematically verified with specific cases.
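The transient state distribution of an HCTMC-modelled multi-state unit follows from the matrix exponential of its generator; a minimal sketch, where the states, rates and performance levels are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state unit (index 0: nominal, 1: degraded, 2: failed).
# Generator matrix Q of the homogeneous CTMC: rows sum to zero.
Q = np.array([
    [-0.10,  0.08,  0.02],   # nominal: degrade at 0.08/h, fail at 0.02/h
    [ 0.00, -0.15,  0.15],   # degraded: fail at 0.15/h
    [ 0.30,  0.00, -0.30],   # failed: repaired to nominal at 0.30/h
])

p0 = np.array([1.0, 0.0, 0.0])        # start in the nominal state
t = 10.0                              # mission time [h]
pt = p0 @ expm(Q * t)                 # transient distribution p(t) = p0 exp(Qt)

performance = np.array([100.0, 60.0, 0.0])   # performance rate per state
print("state distribution:", np.round(pt, 4))
print("expected performance rate:", pt @ performance)
```

Integrating the expected performance rate over the mission time gives the expected cumulative performance the abstract builds its reliability definition on.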

Optimal heuristics for reliability-based inspection and maintenance planning
PRESENTER: Elizabeth Bismut

ABSTRACT. Inspection and maintenance planning of most engineering systems is based on prescriptive rules and ad-hoc planning. There is hence a significant potential for savings or improved performance through smarter inspection and maintenance (I&M) planning. In general, I&M planning belongs to the class of sequential decision processes. Finding the theoretically optimal solution for such processes in realistic engineering systems is not possible at present, due to the complexity of these systems and of the involved maintenance processes, which lead to intractably large state and policy spaces. Heuristics, which parametrize the policies, offer a computationally tractable alternative. If chosen well, these heuristics can lead to near-optimal I&M policies. In addition, they have the advantage of being easily interpretable, which is important in practical implementations. In this contribution, we look at two example systems: offshore steel structures and feeder pipes in nuclear power plants. We utilize physics-based stochastic models to describe the system performance and to assess the effect of inspections and maintenance on the system reliability. We discuss the formulation of possible heuristics for inspection and maintenance policies. On this basis, we calculate the benefit of using advanced reliability-based I&M planning over existing rule-based I&M planning in terms of the I&M costs and the resulting risks.

References: Bismut E., Pandey M., Straub D.: Predictive inspection and maintenance planning of a feeder piping system. In preparation. Bismut E., Straub D.: Optimal Adaptive Inspection and Maintenance Planning for Deteriorating Structural System, Reliability Engineering & System Safety, under review.

How to use prescriptive maintenance to construct robust Master production schedules
PRESENTER: David Lemoine

ABSTRACT. Current contributions in the field of maintenance optimization focus, for the most part, on defining a maintenance decision-making framework based on the current estimate of the system health state and its prediction over time. This is all the more true in the context of predictive maintenance, for which one of the challenges is to improve prognostic models by integrating current information on the different failure modes. Prescriptive maintenance is positioned as the highest level of maturity and complexity of knowledge-based maintenance (KBM) [1, 2]. It seeks in part to integrate actions on future constraints, especially operational constraints, into decision making. In an industrial production context, it is easy to imagine the strong interaction with other processes such as quality management and production planning [3, 4]. Our work is positioned in this production management context, with planned production defining the operational constraint that can be modulated to a certain extent. It should be noted, however, that although agility in production is sought, the master production schedule remains the production steering tool.

The goal of this paper is to elaborate a methodology for providing optimized tactical production plans that remain robust to dynamic maintenance decisions based on the overall production system health state. The mutual dependencies are modeled here through production capacity and production efficiency. We assume that the production system health is a function of the planned production and that degradation impacts efficiency in terms of production duration: the more degraded the system is, the higher the allocated production capacity should be. A robust approach based on a feasibility criterion will be proposed to ensure the operational feasibility of the plans in their manufacturing phase against the hazards of degradation and failure of the production system.

Improvement, application and verification of a new multivariate forecasting model for real industry related issues
PRESENTER: Abderrahim Krini

ABSTRACT. This work makes a research contribution in the field of reliability theory that enables a realistic prognosis of the reliability parameters of technical systems. The motivation to address this topic stems from the realization that the prognosis quality of established prognosis models must be improved. An early and realistic prognosis of reliability parameters contributes to the success of a company, mainly through the early implementation of quality measures. The work focuses on the development of a new multivariate prognosis model which uses multivariate stress parameters as reference variables. Its application enables the prediction of reliability parameters for electronic control units. The predicted reliability parameters can be specified as stress-dependent (bivariate/multivariate) or time-dependent variables. While univariate reference variables usually use the dwell time of a technical system, the prognosis model newly presented here can process multivariate reference variables. During their time in the field, technical systems are exposed not only to different usage behavior but also to other stresses and influences that make a considerable contribution to failure. Using time in the field as a univariate reference variable does not allow this differentiated consideration and neglects relevant information in the reliability analysis. All existing prediction models have in common that only univariate reference parameters can be processed. For a fully comprehensive reliability analysis, all stress variables that lead to a failure must be considered, which is not sufficiently possible with a simple univariate approach. With the new approaches, it is now possible for the first time to consider different stress variables, their changes, and their effects on the technical system under investigation in a field data analysis.
The presented approach for the multivariate prognosis model is, in its general idea, based on the prognosis of stress-dependent reliability parameters. In practice, however, time-dependent reliability metrics are usually specified. A new approach is therefore presented that transforms stress-dependent reliability metrics into time-dependent reliability metrics using the multivariate annual stress distribution. Furthermore, model corrections are introduced that provide significant improvements in prognosis quality.

14:00-15:20 Session TH4D: Uncertainty Analysis
Location: Panoramique
PRESENTER: Andreas Hafver

ABSTRACT. In this paper, the concept of assurance is discussed in relation to knowledge, uncertainty, risk and complexity. It is argued that assurance may be understood as a tool to manage risk related to knowledge-based claims, in contexts where trust in the claims is consequential. Knowledge is here understood as justified beliefs derived from information and scientific reasoning. Since both information and reasoning will in general be incomplete and imperfect, a degree of uncertainty is associated with the validity and veracity of a claim. The association of uncertainty and consequence with claims in assurance indicates a relationship between assurance and the concept of risk, with risk defined as a combination of consequences and uncertainty. In relation to trust, we discuss the role of trust providers, and trust enablers such as independence, transparency, and governance.

Complexity is discussed as a source of risk which drives assurance needs, especially the need for validation. It is suggested that an uncertainty-based risk perspective may help tackle the complexity faced in many disciplines and can help shape the future of assurance services across industries.

Uncertainty in a Hurricane Vulnerability Model

ABSTRACT. This paper deals with the treatment and effect of uncertainty from stochastic variables in the Low-Rise Commercial Residential (CR-LR) component of the Florida Public Hurricane Loss Model (FPHLM), for buildings with 1 to 3 stories. The FPHLM is a probabilistic risk model sponsored by the Florida Office of Insurance Regulation to estimate insured losses in residential buildings due to hurricane-induced wind and rain in the State of Florida. The FPHLM uses Monte Carlo simulations to model the wind, wind debris, and wind-driven rain induced damage to the components of the building envelope and the building interior. The final output is a library of vulnerability functions that provide the monetary damage ratio (repair cost over building value) as a function of wind speed for different building classes. This paper discusses the most recent model updates (version 8.1), with a focus on the quantification of interior and contents damage from water ingress, and the corresponding uncertainties. In version 8.1, the model adopts a component approach to explicitly propagate the water ingress among interior and contents components of the building. The resulting moisture content level of each interior component defines its damage, while the volume of water reaching each component defines the damage to contents. In this new approach, many of the variables involved are stochastic, including the water absorption capacity of each interior component. A variety of sources, including laboratory tests, industry standards, and manufacturer catalogs, informed the probability distribution functions (pdfs) of these variables. The paper describes how the FPHLM team characterized the pdfs of these variables and investigates the relationship between the variables and the non-linear processes leading to the vulnerability functions. The types and sources of uncertainty are identified, and strategies to quantify and reduce the uncertainty are proposed.


ABSTRACT. Calibration of model parameters plays an increasingly key role in accurately predicting the responses of full-scale dynamical systems. Such systems often exhibit complexities arising from the assembly process and from nonlinearities manifested at various modelling levels - from material to component to sub-system to system level - during operation in harsh environments. Recent advances [1-3] have made it possible to calibrate model parameters, quantify the uncertainties and propagate them to output quantities of interest using data obtained at the system level. However, system-level data may be lacking or expensive to obtain and are usually not adequate to reliably calibrate material, component or sub-system parameters. In this context, we extend the framework in [2, 3] and present a systematic approach to calibrate the system model parameters using information and data from lower system levels that share common parameters with the higher system level. The proposed approach properly accounts for the uncertainty in the component model parameters due to variability in experimental data, environmental conditions, material properties, manufacturing and assembly processes, as well as nonlinear mechanisms activated under different loading conditions. To this end, the uncertainty is embedded within the structural model parameters by postulating a probability model for these parameters that depends on hyper-parameters. Sampling techniques as well as asymptotic approximations are used to carry out, or reduce the burden of, the computations in the proposed Bayesian multi-level modelling framework. Selected applications in structural dynamics demonstrate the effectiveness of the proposed framework.

A probabilistic approach for the consideration of measurement errors in metrology

ABSTRACT. Metrology is a key stage in industry, as it validates quality requirements at different steps of the production process. In dimensional metrology in particular, validation consists, in its classic form, in measuring dimensions of interest - dimensions intended for an assembly, for example. Two sources of uncertainty can be associated with measured dimensions: (i) uncertainties in the true value of the dimension, caused e.g. by vibrations of the machines during the manufacturing process, wear and tear of the tools, etc.; (ii) measurement errors, which cannot be avoided and are caused e.g. by thermal expansion of the parts or inaccuracies of the measurement tools.

The probabilistic approach is used and these uncertainties are modeled as random variables. The distribution associated with the measurement error is assumed to be known.

The objective of this work is the “correction” of the measurement errors. First, the probability density function associated with the measurements of a batch of parts is considered. A deconvolution procedure is applied to identify the distribution of the true dimension; the maximum likelihood method is used here. An arbitrary distribution (Gaussian, lognormal, Weibull, etc.) is selected for the true dimension and its parameters are identified so as to give the best fit to the measurement data.

The correction is then applied for each measured value using a Bayesian method. The probability density function identified at the previous step is used as the prior distribution and the measurement error defines the likelihood function. The posterior distribution is then associated with the true value of the dimension. This provides the engineers with the best information available regarding the true dimension; its exact value cannot be identified and is therefore modeled as a random variable.
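When both the identified distribution of the true dimension and the measurement error are Gaussian, the Bayesian correction step has a closed form; the sketch below uses illustrative numbers only.

```python
import numpy as np

# Assumed Gaussian prior for the true dimension (from the deconvolution
# step) and Gaussian measurement error -- a conjugate special case.
mu0, s0 = 25.00, 0.02    # prior mean/std of the true dimension [mm]
se = 0.01                # measurement error std [mm]
y = 25.03                # one measured value [mm]

# Closed-form posterior for the true dimension given the measurement:
# precisions add, and the posterior mean is a precision-weighted average.
post_var = 1.0 / (1.0 / s0**2 + 1.0 / se**2)
post_mean = post_var * (mu0 / s0**2 + y / se**2)
print(f"posterior: mean = {post_mean:.4f} mm, std = {np.sqrt(post_var):.4f} mm")
```

The corrected value is pulled from the raw measurement toward the batch distribution, and the posterior spread is smaller than both the prior spread and the measurement error, reflecting the combined information.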

14:00-15:20 Session TH4E: Human Factors and Human Reliability
Location: Amphi Jardin
Examining the effect of a proposed operator support system on human error probability estimation
PRESENTER: Awwal Arigi

ABSTRACT. As the name implies, an Operator Support System (OSS) is meant to ease the work of operators as they manage the operations of a complex system - in this case, a nuclear power plant (NPP). The aim is to ensure that, from a human reliability analysis (HRA) perspective, the human error probabilities (HEPs) are reduced while keeping the same level of operational efficiency. However, it is necessary to confirm that the intended goals of the OSS are being achieved. The Advanced Power Reactor 1400 (APR-1400) can be regarded as an evolutionary nuclear power plant and has a fully digitalized main control room (MCR). Recently, further improvements have been planned to support the operator tasks within its MCR. The OSS features currently undergoing testing are introduced in this paper. This work evaluates the effects of some proposed operator support features in the MCR of the APR-1400 NPP by comparing the human error probabilities estimated via the cause-based decision tree (CBDT)/THERP method for both general and abnormal operations. The results show that the effect of the OSS on the HEPs depends on the type of operation and scenario. The limitations of current HRA methods in general, including the CBDT/THERP method, for estimating HEPs in such MCRs are discussed. The framework presented in this paper will be useful in future efforts to analyze OSS effects with other human reliability analysis methods.
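The THERP side of CBDT/THERP adjusts a nominal HEP for dependency on preceding tasks using the standard five-level conditional-probability equations; a minimal sketch (the nominal HEP value is illustrative, not from the paper):

```python
def conditional_hep(p, level):
    """THERP conditional HEP given failure of the preceding task,
    for the five standard dependency levels."""
    formulas = {
        "zero":     p,
        "low":      (1 + 19 * p) / 20,
        "moderate": (1 + 6 * p) / 7,
        "high":     (1 + p) / 2,
        "complete": 1.0,
    }
    return formulas[level]

p = 1e-3   # nominal HEP of the dependent action (illustrative)
for lvl in ("zero", "low", "moderate", "high", "complete"):
    print(f"{lvl:9s}: {conditional_hep(p, lvl):.3e}")
```

Even low dependency raises a 1e-3 nominal HEP by more than an order of magnitude, which is why dependency treatment dominates multi-action scenarios.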

Handling the uncertainty with confidence in human reliability analysis
PRESENTER: Caroline Morais

ABSTRACT. Most attempts to substitute expert-driven human reliability assessment methods with empirical data-driven techniques have failed due to the high uncertainty of human reliability databases and the limitations of traditional probabilistic tools in dealing with it. Although recent research suggests Bayesian and credal networks could be a more suitable approach to model human reliability data, such analyses imply the assessment of a conditional probability distribution for each variable, requiring a much larger amount of data than other traditional tools. Therefore, the problem of sparse data continues to hinder the feasibility and credibility of human reliability analysis. This has fuelled research aimed at tackling data scarcity through expert elicitation and, more recently, imprecise probability. In addition to issues inherent in the nature of the available data, some modelling procedures, such as normalisation, can implicitly affect the degree of knowledge carried by the data, resulting in a loss of reliability. For instance, our confidence about the probability of an event that has been observed in only one of ten trials (1/10) is not the same as that of an event observed to occur ten times in one hundred trials (10/100), yet the output of a normalisation procedure does not carry any information regarding the unevenness of sample sizes. In this paper, we propose to tackle these limitations by using confidence boxes (c-boxes) with credal networks, aiming to provide risk assessors with a rigorous framework for handling data uncertainty and guiding them towards more efficient and robust modelling solutions. The approach is tested with a simple model of the causes of fatigue in the work environment.
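The 1/10 versus 10/100 example can be made concrete with the binomial c-box, whose fixed-confidence slices are Clopper-Pearson intervals; a scipy-based sketch:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion -- a fixed-confidence slice of the binomial c-box."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Same point estimate (0.1), very different confidence:
print(clopper_pearson(1, 10))     # wide interval
print(clopper_pearson(10, 100))   # much narrower interval
```

The interval for 1/10 is several times wider than that for 10/100, exactly the sample-size information that normalisation to a point estimate discards.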

Analyzing the validity of a systematic Human-HAZOP method for human error identification in the process industries

ABSTRACT. The rate of process-industry accidents has risen considerably in recent years. These accidents often cause human casualties, economic loss, and environmental pollution. Statistics show that the majority of process-industry accidents (over 80%) result from unsafe behaviors. Identifying human errors allows the development of appropriate prevention and mitigation strategies. Therefore, a systematic Human-HAZOP method is proposed in this paper. To illustrate its validity, the “7·12” major explosion and fire accident at Yibin Hengda Technology Co., Ltd. is selected as a case study. The effectiveness and rationality of the proposed method are also verified by comparison with similar identification results from SHERPA. In conclusion, the systematic Human-HAZOP method can be adopted for a thorough and consistent identification of human errors in the process industries.

14:00-15:20 Session TH4F: Civil Engineering
PRESENTER: Robert Lanzafame

ABSTRACT. In practice, when it comes to the reliability-based design of infrastructure, it is most common to treat random variables as independent. While this assumption simplifies the probabilistic analysis considerably, it leaves out useful information that may result in a safer and more efficiently designed structure. Often there are suitable data to quantify the dependence between random variables in a particular problem, but appropriate numerical methods are not readily available or known to the analyst. Moreover, design codes and assessment protocols generally do not provide guidance on how to include multivariate analysis in the design of a structure, especially in the context of a univariate return period (or exceedance probability) based procedure. Thus, many design decisions are made after assessing multiple failure mechanisms independently, despite their being influenced by the same random variables, or by relying on well-informed but subjective judgments.

As such, the need for incorporating multivariate analysis into the reliability-based design approach is becoming increasingly recognized. Three example cases from our experience at the Delft University of Technology are provided in which vine-copulas have been used to improve the design of a structure. The case studies consider open-sea waves or ship berthing loads, with up to six variables in the vine-copula model. Apart from vine-copulas, the probabilistic methods include extreme value analysis, Monte Carlo simulation and the first-order reliability method. Compared to a conventional reliability-based design approach, a more efficient final design was produced in some case studies, in terms of size and/or cost, for the same safety level. In addition, for a design methodology based on load scenarios with a specified exceedance probability, a higher confidence in the final design reliability is obtained when information from the dependent multivariate analysis is considered. The approach described herein is based on open-source computational tools and can be used immediately to improve the insight and decision-making capability of infrastructure designers and owners. Future work should consider design cases where a vine-copula can be applied to situations with strong dependence between up to seven variables.
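The effect of modelling dependence can be sketched with a single bivariate Gaussian copula pair (a vine-copula chains such pair-copulas hierarchically); the marginals, correlation and thresholds below are illustrative assumptions, not the case-study data:

```python
import numpy as np
from scipy.stats import norm, gumbel_r, lognorm

rng = np.random.default_rng(1)
n = 10_000

# Bivariate Gaussian copula: correlated standard normals mapped to
# uniforms, then to arbitrary marginals (wave-height-like and load-like).
rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = norm.cdf(z)                         # dependent uniform marginals

wave_height = gumbel_r.ppf(u[:, 0], loc=2.0, scale=0.5)     # [m]
berth_load = lognorm.ppf(u[:, 1], s=0.3, scale=100.0)       # [kN]

# Joint exceedance probability -- underestimated if independence is assumed
p_joint = np.mean((wave_height > 3.0) & (berth_load > 150.0))
p_indep = np.mean(wave_height > 3.0) * np.mean(berth_load > 150.0)
print(f"joint model: {p_joint:.4f}   independence assumption: {p_indep:.4f}")
```

With positive dependence, the joint exceedance probability is several times larger than the product of the marginal exceedance probabilities, which is the kind of information a univariate design procedure leaves out.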

Surrogate-assisted versus subset simulation-based stochastic comparison between running safety and passenger comfort design criteria of high-speed railway bridges

ABSTRACT. Limiting the maximum vertical acceleration and deflection of the deck are two principal design criteria for high-speed railway bridges. The former prevents ballast instability to ensure running safety, and the latter limits the acceleration of the car body below the level at which passenger comfort is disturbed. Previous studies are mainly concerned with the destabilization of the ballast; nevertheless, the possibility of the maximum deflection being reached should not be underestimated. Moreover, the literature indicates the need to improve current design requirements, including the minimum allowable mass and frequency of bridges, which requires solving optimization problems based on modern requirements. Therefore, a probabilistic framework with simulation-based techniques is used to evaluate the violation probability of the above limit states and to distinguish the dominant criteria under different conditions, i.e., bridge span length and operational train speed. First, the performance of the subset simulation method is compared with a Latin Hypercube sampling-based Monte Carlo approach supported by surrogate models; polynomial chaos expansion (PCE) surrogate models are trained for this purpose. Then, the resulting violation probabilities are evaluated for the two considered limit states using the approach with the better performance.
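The Latin Hypercube sampling-based Monte Carlo step can be sketched on a toy limit state with a known violation probability; the limit-state function here is a stand-in for the bridge-response surrogate, not the paper's model.

```python
import numpy as np
from scipy.stats import qmc, norm

# LHS-based Monte Carlo estimate of a violation probability.
# Toy limit state g(x) = 4 - x1 - x2 with standard normal inputs,
# so P(g < 0) = Phi(-4/sqrt(2)) is known exactly for comparison.
n = 20_000
sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n)                 # stratified uniforms in [0, 1)^2
x = norm.ppf(u)                       # map to standard normal inputs

g = 4.0 - x[:, 0] - x[:, 1]
p_lhs = np.mean(g < 0.0)
p_exact = norm.sf(4.0 / np.sqrt(2.0))
print(f"LHS estimate: {p_lhs:.2e}   exact: {p_exact:.2e}")
```

The stratification of LHS reduces the estimator variance relative to plain random sampling at the same budget, which is why it pairs well with expensive surrogate-evaluated limit states.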

From a microscopic model to the determination at the structure scale of the reliability of an alkali-silica reaction affected dam

ABSTRACT. The Song Loulou hydropower dam in Cameroon is affected by alkali-silica reaction (ASR), as are many other dams on all five continents. ASR occurs between non-crystalline silica in aggregates and the highly alkaline pore solution in concrete. It produces an alkali-silicate gel which absorbs water; the gels expand and cause cracking of the concrete at the micro level, thus causing deformations and structural cracks at the macro level. Macroscopic ASR models have generally been developed by combining approximate chemical reaction kinetics with linear or nonlinear mechanical constitutive laws, attempting to reproduce and predict the long-term behavior of affected structures. Microscopic chemo-mechanical models, on the other hand, go to the heart of the reaction, as they simulate both the diffusion and the expansion at the micro level. Even though they may give better predictions, they are not used at the structural scale because such calculations are not cost effective. This study proposes to use the microscopic ASR model developed at the LMDC to estimate the reliability of the Song Loulou hydropower dam. First, we present the Song Loulou dam and the modeling (geometry, material properties, loads, etc.) of its concrete spillway pier in an FE code. Second, we describe the LMDC ASR micro-model and the ranges of its variables corresponding to the Song Loulou case. Third, surrogate models based on the polynomial chaos expansion of the parameters of a sigmoid were constructed at several scales, in particular to reduce the computation time. At the scale of the structure, they helped to obtain displacements at the points of interest, related to the operating limit states of the spillways, and thus to estimate the residual reliability of the dam.

14:00-15:20 Session TH4G: Nuclear Industry
Location: Atrium 3

ABSTRACT. The safety analysis of nuclear power plants is moving toward a realistic approach in which the simulations performed using best-estimate computer codes must be accompanied by an uncertainty analysis, known as the Best Estimate Plus Uncertainties (BEPU) approach [1]. The most popular statistical method used in these analyses is Wilks' method, which allows a one-sided tolerance limit to be estimated. Wilks' method is based on the principle of order statistics for determining a certain coverage of the figures of merit with an appropriate degree of confidence. However, other statistical techniques could be used in this context. Quantiles are a useful statistical tool for describing the distribution of results and deriving a one-sided confidence limit for a quantile, i.e., a one-sided tolerance interval, in BEPU analysis. In the literature, different non-parametric methods for quantile estimation [2-4] and for confidence interval estimation [5-6] have been proposed. In this study, ten quantile estimation methods and two confidence interval estimation methods are analyzed for sample sizes between 59 and 153. The different approaches are used in the uncertainty analysis of a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a Pressurized Water Reactor using the thermal-hydraulic code TRACE. The results obtained with the different methods are compared with Wilks' method with respect to (a) the average (i.e., median and mean) coverage probability and the closeness to the nominal coverage level and (b) their variability.
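The first-order, one-sided Wilks formula behind the familiar sample sizes can be checked in a few lines; this reproduces the well-known n = 59 for a 95%/95% tolerance limit, matching the lower end of the sample-size range studied (the ten quantile estimators compared in the paper are not reproduced here):

```python
def wilks_confidence(n: int, gamma: float = 0.95) -> float:
    """Confidence that the largest of n independent runs bounds the
    gamma-quantile of the output (first-order, one-sided Wilks formula):
    beta = 1 - gamma**n."""
    return 1.0 - gamma ** n

# Smallest sample size giving at least 95% confidence on the 95th percentile.
n = 1
while wilks_confidence(n) < 0.95:
    n += 1
print(n, round(wilks_confidence(n), 4))
```

The one-sided tolerance limit is then simply the sample maximum of the n code runs; higher-order variants trade larger n for a less conservative limit.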

Characterizing Previously Unknown Dependencies in Probabilistic Risk Assessment Models of Nuclear Power Plants

ABSTRACT. The US Nuclear Regulatory Commission (NRC) maintains a set of Level-1 probabilistic risk assessment (PRA) models, called standardized plant analysis risk (SPAR) models, which are the analytical tools used by the agency to perform risk assessments. The SPAR models include elements of the initiating events (IE), mitigating systems (MS) and to a limited extent barrier integrity (BI) cornerstones of the NRC’s Reactor Oversight Process.

Over the last 10 to 15 years, several events have occurred at nuclear power plants (NPPs) in the US which posed substantial risk and in which multiple cornerstones were simultaneously affected. The risk insights from these domestic events may indicate an existing completeness uncertainty, specifically that there are 'dependencies' between certain initiating events and the availability/reliability of mitigating systems which are not currently captured in the PRA models.

These previously unrecognized dependencies can be included in the SPAR models and thus captured in subsequent risk assessments. This paper will review several examples from US commercial NPPs where these dependencies manifested themselves and demonstrate that the risk of lower intensity events (far less than a beyond design basis event) can be significant. Further, this paper will describe potential PRA modeling improvements and provide insights that may lead to modifications to existing procedures, plant structures, systems & components such that the previously unmeasured risk might be lowered, providing a benefit to public health and safety.

PRESENTER: Clement Hardy

ABSTRACT. Infrared spectroscopy is a widely used technology for non-destructive testing of materials. We propose a novel approach to automatically and simultaneously analyse a data set of infrared spectra. In this approach, the spectra are modeled by linear combinations of spikes whose shape and position are parametrized. The observed data consist of the discretized linear combinations of spikes to which noise is added. In order to recover the spike parameters, common to the whole data set, and the associated amplitudes, which are specific to each spectrum, we formulate a penalized optimization problem whose linear and nonlinear variables separate. In this work, a group-Lasso penalization ensures that the spectra can be decomposed into a reasonable number of constituent spikes. In addition, it brings out patterns among all the analyzed spectra. Due to the highly nonconvex nature of the problem, a resolution via standard procedures is out of reach. A way to make the problem convex would be to discretize the space of spike parameters. However, such a method would face a resolution limit imposed by the grid. Instead of discretizing the parameter space, we propose an algorithm which interleaves convex optimization updates (to determine the amplitudes of the spikes) and non-convex steps (to locate and scale the spikes). We show that in practice our off-the-grid algorithm gives satisfactory results, returning sparse solutions which decompose spectra into a reasonable number of spikes. On the numerical side, we provide a study of the practical performance of the algorithm on real infrared spectra as well as on simulated spike-train data. The infrared spectra used in our study come from neoprene samples at different ageing levels. Finally, we define a procedure to detect abnormal ageing processes from the data set.
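The convex amplitude-update step can be illustrated in isolation. The sketch below is an on-grid, plain l1-penalized (Lasso-type, not group-Lasso) decomposition via proximal gradient (ISTA) on a synthetic spike-train signal; the paper's actual algorithm works off the grid and alternates this kind of convex update with non-convex position/scale refinements:

```python
import math, random

def gaussian_spike(x, pos, width):
    return math.exp(-0.5 * ((x - pos) / width) ** 2)

# Dictionary of candidate spikes on a coarse grid (the convexified sub-problem).
grid_x = [i * 0.1 for i in range(100)]
positions = [1.0, 3.0, 5.0, 7.0, 9.0]
atoms = [[gaussian_spike(x, p, 0.3) for x in grid_x] for p in positions]

# Synthetic "spectrum": two active spikes plus small noise.
rng = random.Random(1)
truth = [0.0, 2.0, 0.0, 1.0, 0.0]
signal = [sum(a * atom[i] for a, atom in zip(truth, atoms)) + rng.gauss(0, 0.01)
          for i in range(len(grid_x))]

def ista(signal, atoms, lam=0.05, steps=500, lr=0.01):
    """Proximal gradient (ISTA) for l1-penalized least-squares amplitudes."""
    amps = [0.0] * len(atoms)
    for _ in range(steps):
        resid = [sum(a * atom[i] for a, atom in zip(amps, atoms)) - signal[i]
                 for i in range(len(signal))]
        for k, atom in enumerate(atoms):
            grad = sum(r * atom[i] for i, r in enumerate(resid))
            z = amps[k] - lr * grad
            amps[k] = math.copysign(max(abs(z) - lr * lam, 0.0), z)  # soft threshold
    return amps

amps = ista(signal, atoms)
print([round(a, 2) for a in amps])
```

The l1 penalty drives inactive amplitudes to exactly zero, which is the sparsity property the abstract relies on; the grid spacing here is precisely the resolution limit the off-the-grid formulation avoids.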

14:00-15:20 Session TH4H: Maritime and Offshore Technology
Location: Cointreau
Future Risk Scenarios regarding the use of the Northern Sea Route

ABSTRACT. The Northern Sea Route (NSR) is the North-Eastern Passage going from the Barents Sea to the Bering Strait. Due to the effects of global climate change, the Arctic ice sheet is melting, which has caused numerous stakeholders to take an interest in its future prospects. Not only can the route serve as a viable alternative to the Malacca Strait and the Panama and Suez Canals in support of global trade infrastructure, but it also holds prospects for the petroleum, liquefied natural gas, and tourism industries. Consequently, this spur of interest has given rise to new challenges within polar maritime and environmental safety, as well as in the development of the corresponding international and national legal frameworks. The purpose of this paper is therefore to uncover the possible futures of the Northern Sea Route and their corresponding risks, by taking into consideration the involved stakeholders and the development of the technologies, legislation, regulation, and global geopolitics that shape it. Studying future risk scenarios of the Northern Sea Route revolves around the operationalization of wicked problems into analyzable scenarios. Wicked problems are problems that are not predefined, nor do they have a single solution. They exist because of the opinions and wants of the stakeholders involved in the formation of future problems. The main issue in analyzing wicked problems is that they are subjective, non-quantifiable, non-linear, and non-delineated. In other terms, they are the products of different forms of analytic and heuristic thinking by different individuals in different networks across a timespan. As a cumulative effect, one cannot objectively assign them ontic quantities such as risk or uncertainty. Thus, one cannot make causal deterministic inferences about the possible manifestation of futures, but one can use discrete models to fix futures.
In this paper, the general morphological analysis framework was employed to create an interactive inference model that allowed for the investigation of future risk scenarios of the Northern Sea Route. Following this methodology, a model was constructed containing the main dimensions that influence the problem and the corresponding conditions each dimension can take. Pairwise cross-assessment was then used to determine whether the conditions of different dimensions can coexist, with the criterion of pairwise coexistence constrained by logical, empirical, and normative assessments. This left an interactive inference model in which one can investigate scenarios and scenario clusters that can realistically occur by selecting independent variables. Through the interactive inference model, it was deduced that the dimensions East/West Relations, Global Environmental Politics, and Technical and Navigational Requirements were the most pivotal for the formation of different futures. The interplay between these dimensions as parameters clearly formed distinct opposing scenario clusters for possible futures, effectively identifying relations, technology, and environmental politics as the strategic areas to target in order to shape the future of the Northern Sea Route.
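The morphological-field-plus-cross-consistency mechanism described above can be sketched as a filter over the Cartesian product of conditions. The dimensions, conditions, and incompatibility pairs below are invented placeholders for illustration, not the paper's actual morphological field:

```python
from itertools import product

# Toy morphological field: each dimension takes one of a few conditions.
dimensions = {
    "east_west_relations": ["cooperative", "tense"],
    "environmental_politics": ["strict", "lenient"],
    "technical_requirements": ["met", "unmet"],
}

# Pairwise incompatibilities from a (hypothetical) logical/empirical/normative
# cross-consistency assessment.
incompatible = {
    ("tense", "strict"),   # assumed: tense relations block strict common rules
    ("lenient", "met"),    # assumed: lenient politics remove the incentive
}

def consistent(scenario):
    """A scenario survives if no pair of its conditions is incompatible."""
    pairs = {(a, b) for a in scenario for b in scenario if a != b}
    return not (pairs & incompatible)

scenarios = [s for s in product(*dimensions.values()) if consistent(s)]
print(len(scenarios), scenarios)
```

Cross-consistency assessment typically prunes the combinatorial space dramatically, and the surviving combinations group naturally into the scenario clusters the paper analyzes.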


ABSTRACT. Maritime transportation plays a crucial role in global trade. The environmental issues caused by the shipping industry mean that certain measures must be taken to deal with the problem of gas emissions, in line with the IMO's directives. Reducing emissions is a complicated challenge that is likely difficult to meet without intervention in the exhaust system of the ship. Ship scrubber systems are an alternative solution for managing the problem of gas emissions in the maritime shipping industry; they are therefore important for the normal operation of the ship. Since there are significant costs associated with the use of these systems, the evaluation of their operational performance and availability is a major tool contributing to the decision-making process of maritime industry stakeholders. This paper studies the use of ship scrubber systems with the aim of evaluating their availability. Using different scenarios for the layout of the system, its components, and the maintenance intervals, combined with a stochastic modeling approach, the authors attempt to identify the combination that maximizes the availability of the system, proposing a set of operational and maintenance parameters that improve its environment-related performance.
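Availability evaluation of this kind can be illustrated by a minimal alternating-renewal Monte Carlo model of a single repairable scrubber unit, checked against the steady-state formula A = MTBF / (MTBF + MTTR). The failure and repair figures are illustrative assumptions, not values from the paper:

```python
import random

def simulate_availability(mtbf, mttr, horizon, seed=0):
    """Alternating-renewal Monte Carlo: exponential up times (mean mtbf)
    alternating with exponential repair times (mean mttr)."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon:
        up = rng.expovariate(1.0 / mtbf)
        up_time += min(up, horizon - t)   # clip the last cycle at the horizon
        t += up
        if t >= horizon:
            break
        t += rng.expovariate(1.0 / mttr)  # system down while under repair
    return up_time / horizon

mtbf, mttr = 2000.0, 48.0                 # hours; illustrative scrubber figures
a_sim = simulate_availability(mtbf, mttr, horizon=2_000_000.0)
a_exact = mtbf / (mtbf + mttr)
print(round(a_sim, 4), round(a_exact, 4))
```

A layout study like the paper's would extend this to several units in series/parallel and to scheduled maintenance intervals, then search the scenario space for the configuration maximizing availability.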

Classification of subsea Christmas tree components by an FMECA

ABSTRACT. Offshore installations spend millions annually trying to guarantee the integrity of their equipment. The challenge lies in determining where to apply the industry's always finite and limited resources to provide the greatest benefit. Risk-based inspection (RBI) was developed in the oil industry to assist in identifying the highest-risk equipment (together with the respective failure modes) and to design an inspection program that not only identifies the most relevant failure modes but also reduces their chances of occurrence. This study is part of a project whose objective is to develop a methodology to monitor the integrity of equipment and optimize inspection policies, based on the risk associated with the operation of Christmas trees in subsea operations. The results of this phase of the project include the elaboration of two typical configurations of Christmas trees and the construction of their FMECAs, which were based on the OREDA database [1], on information available in the literature, and on information from equipment suppliers. Finally, a classification of Christmas tree components is provided based on their risk indices. These results contribute to more effective risk analyses in the offshore industry, since data on this equipment are scarce in the literature [2], especially data related to configurations, failure probabilities, inspection methods, and failure detection probabilities.
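A risk-index classification of this kind can be sketched with the classic severity x occurrence x detection Risk Priority Number used in common FMECA practice. The components, failure modes, and 1-10 scores below are hypothetical examples, not data from the paper's FMECAs:

```python
# Hypothetical FMECA records: (component, failure mode, severity, occurrence,
# detection), each factor scored on the usual 1-10 scale.
fmeca = [
    ("production master valve", "fail to close",  9, 3, 4),
    ("annulus wing valve",      "external leak",  8, 2, 5),
    ("choke valve",             "erosion",        6, 5, 3),
    ("control pod",             "loss of signal", 7, 4, 2),
]

def risk_index(row):
    """Classic Risk Priority Number: severity * occurrence * detection."""
    _, _, sev, occ, det = row
    return sev * occ * det

ranked = sorted(fmeca, key=risk_index, reverse=True)
for comp, mode, *_ in ranked:
    print(comp, "|", mode, "| RPN =", risk_index(next(r for r in fmeca if r[0] == comp)))
```

Ranking failure modes by such an index is what lets an RBI program concentrate inspection effort on the few component/failure-mode pairs that dominate the risk.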

An Artificial Neural Network based decision support system for cargo vessel operations

ABSTRACT. There is increasing interest in understanding fuel consumption from the perspective of increasing energy efficiency on a vessel. Thus, the aim of this paper is to present a new framework for data-driven estimation of fuel consumption employing a combination of (i) traditional statistical analysis and (ii) Artificial Neural Networks. The output of the analysis is the most frequently occurring fuel-speed curves corresponding to the respective operational profile. The inputs to the model are important explanatory variables such as draft, sea current, and wind. The methodology is applied to a case study of a fleet of 9000 TEU vessels, in which telemetry data on fuel consumption, vessel speed, current, and wind direction and strength were analysed. The performance of the method is validated in terms of error estimation criteria such as R2 values and against physical phenomena observed in the data. The results can be used to study the economic and environmental benefits of slow steaming and/or fuel levies, or to extend this part of the model into exergy analysis for a more holistic review of energy-saving initiatives.
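While the paper fits ANNs, the notion of a fuel-speed curve can be illustrated with a simple one-parameter cubic-law regression on synthetic telemetry; the cubic relation is a common first-order assumption, and the coefficient, noise level, and data here are invented:

```python
import random

# Synthetic telemetry: fuel consumption roughly follows fc ~ a * v^3, with
# noise standing in for unmodeled draft, current, and wind effects.
rng = random.Random(7)
a_true = 0.0008                       # tonnes/day per knot^3, illustrative
telemetry = [(v, a_true * v**3 + rng.gauss(0, 1.0))
             for v in [rng.uniform(12.0, 22.0) for _ in range(500)]]

# Closed-form least squares for the one-parameter model fc = a * v^3:
# a_hat = sum(v^3 * fc) / sum(v^6).
num = sum(v**3 * fc for v, fc in telemetry)
den = sum(v**6 for v, _ in telemetry)
a_hat = num / den

def fuel_speed_curve(v):
    """Estimated fuel consumption (tonnes/day) at speed v (knots)."""
    return a_hat * v**3

print(round(a_hat, 5), round(fuel_speed_curve(18.0), 2))
```

An ANN generalizes this baseline by letting draft, current, and wind shift the curve, which is how the paper extracts distinct fuel-speed curves per operational profile.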

14:00-15:20 Session TH4I: Human factor in the smart industry
Location: Giffard
PRESENTER: Silvia Carra

ABSTRACT. Human-Machine Interaction (HMI) appeared during the first Industrial Revolution, when machines were introduced as tools serving workers' needs. With the arrival of automation systems in the 1970s, workers discovered the need to be re-skilled to work alongside machinery. The recent advent of Industry 4.0 has also led to Human-Technology Interaction (HTI), since new technologies provide the connections and algorithms for safe and effective interaction between humans and autonomous systems. Today a transition from "interaction" to real "collaboration" between humans and machines can be observed, and companies need to adopt adequate decision-making approaches. A careful assessment of both the advantages and disadvantages of using these technologies is needed, since they can improve safety but can also add new industrial and occupational risks. Based on an analysis of the international literature and European technical standards, the present study aims to lay the foundation for a methodological framework supporting companies in the decision process of establishing the safety benefit of integrating human and technological components. Possible models and decisional approaches are extracted from the literature and used to identify useful criteria to be adapted to the specific topic of safety. The study is also intended to support the possible future introduction of new safety standards and validation procedures for the safety capabilities of industrial collaborative technologies.

PRESENTER: Marianna Madonna

ABSTRACT. Recent studies have highlighted the need to investigate the new human-machine interaction in smart factories. The enabling technologies of the fourth industrial revolution allow new modes of interaction between operators and machines, which cooperate with each other to execute numerous complex, high-precision tasks while sharing the same workspace. According to the concept of Operator 4.0 proposed by Romero [1], the collaboration between human and machine includes both "cooperative work" with robots and "work aided" by machines. In particular, among the types of Operator 4.0, the collaborative operator works together with collaborative robots, while the super-strength operator uses wearable exoskeletons to increase his or her strength and perform manual activities without effort. From this perspective, this work highlights the regulatory aspects concerning the safety of human interaction with collaborative robots, autonomous mobile robots, and exoskeletons. The paper focuses on emerging safety aspects related to these collaborative machines and their impact in terms of the Essential Health and Safety Requirements (EHSRs) of the Machinery Directive. In this regard, it is necessary to underline the gap between the increasing maturity of the technology and the lack of analysis and guidance for this specific field in current legislation and standardization.


ABSTRACT. The current European regulatory framework for occupational health and safety has specifically identified work-related stress as one of the risks to be considered and managed properly. In particular, the new legislation has increased interest in the topic of work-related stress. The proposed paper aims to offer theoretical principles and operational information about how to assess work-related stress risk. The study proceeds from the assumption that the hazard originates in the human being, and consequently that a given "event" depends on the combined disposition of the industrial process to which the subject is assigned and, above all, on the intrinsically individual characteristics of the human resources considered. The paper is meant to be a contribution to the identification of methodologies applicable to the assessment of work-related stress in the workplace, for the identification of shared diagnostic paths in the field of psychosocial risk. In this context, the allostatic load of the subject under examination is used to measure, monitor, and observe resilience, or the risk of developing a mood, depressive, anxiety, psychosomatic, or somatoform disorder (overt or potential), through the subject's vital signs and personality. It is a way to observe the consequences (stress) of factors that seem apparently unrelated to each other, and it allows us to relate a series of behavioral profiles, correlated through known pathophysiological links with industrial processes, both in terms of probability of occurrence and of consequences, that is, the severity of the impact. The proposed procedure, based on a semi-quantitative approach, considers two hyper-matrices that allow the definition of a risk matrix based on an objective risk assessment indicator.
Subsequently, the work suggests a series of preventive and protective measures aimed at risk management itself, completed by an applicative approach in a partially simulated form. These matrices indicate the tendency to develop a somatoform or psychosomatic disorder, as well as a mood, depressive, or anxiety disorder, relative to a specific activity; considering the different hazards present in the company, it is then possible to reconstruct the overall level of risk for the plant under examination.

Situation Awareness to be set free from the Collaborative Robot
PRESENTER: Nicole Berx

ABSTRACT. Human Factors (HF) is the scientific discipline concerned with understanding the interactions among humans and other elements of a system, with the aim of optimizing system performance. One important concept within HF is Situation Awareness (SA). Since the early 1990s, the original models of individual SA have been complemented by additional models of shared and distributed SA. Although SA has already been the subject of investigation in many complex socio-technical systems, there is not a great body of scientific work on SA in relation to collaborative robots (cobots). This paper explains and reviews the most relevant models and evolutions in SA thinking and examines to what extent the concept of SA, and more particularly Distributed Situation Awareness (DSA), can be applied to cobots. Individual models of cobot SA that detect human presence and predict future human activity from a cobot-centered perspective are technologically feasible and are currently applied in industrial applications. We argue that the SA within the cobot has to be of a higher order than the level of the individual human or technological agent, and needs to be 'set free' in order not only to be shared with human team members, but also to understand how SA is distributed throughout the system as a whole. This paper proposes to inform future cobot design with DSA analysis from a socio-technical perspective.

14:00-15:20 Session TH4J: Resilience Engineering
Location: Botanique
An innovative approach for ongoing assessment of critical infrastructures’ resilience based on a nonfunctional requirement ecosystem
PRESENTER: Alexandre Weppe

ABSTRACT. The geopolitical context and climate change have induced more and more disasters over the last two decades. In particular, the Critical Infrastructures (CI, e.g., water distribution, health care) that support the daily life of societies are impacted by these disasters. These CI are essential; through their various interactions and links, they become more fragile when facing complex situations. For instance, a local event occurring in one CI (e.g., an accident) can propagate through these interactions, impacting other CI and leading to higher intensity and global impact. Classical risk analysis is limited in terms of a global and dynamic vision of these CI that would allow such events to be managed efficiently and an acceptable functioning state to be recovered for the end users. For this purpose, resilience is a useful concept, highlighted by numerous research works and organizations, to characterize the best way for a CI to react to an undesirable event and avoid, if possible, its propagation. The purpose of this paper is to present the main principles of a methodology to assess and analyze the resilience of a CI based on a multi-view, systemic model formalized as a digital twin. This work is done in the frame of the RESIIST project supported by the French research agency ANR (Résilience des infrastructures et systèmes interconnectés, 18-CE39-0018-05), which provides scenarios to test and evaluate the proposed methodology.

Investigating resilience

ABSTRACT. The notion of resilience is an interdisciplinary notion whose development and use have accelerated significantly over the past twenty years. Since its first uses in materials science, ecology, and psychology, many disciplines have adopted it, particularly the sciences of safety and security. More recently, resilience has been at the center of public policies for preventing and managing crises globally, nationally, and locally. The diversity of theoretical and empirical perspectives on resilience allows this concept to integrate: i) objects of different scales (from the technical object to the socio-ecological system); ii) different temporalities (before, during, and after the occurrence of a disturbance); iii) disturbances of various kinds (of internal or external origin, anticipated or unanticipated, positive (opportunity, innovation) or negative (accident, crisis, disaster)); iv) different perspectives concerning the disturbance (return to a normal state, robustness, management of the limit of the object's control capacity). The design of definitions, models, methods, and indicators of resilience creates a potential for enhancing the safety and security of objects. Nevertheless, the absence of a unified resilience culture creates confusion, ambiguities, misunderstandings, or contradictions between stakeholders. The paper's contribution is an interdisciplinary conceptual and methodological framework aiming to support resilience studies. John Dewey's theory of inquiry and the methodological literature on qualitative analysis ground the framework. The diversity of perspectives on resilience guides the application of the four phases of the framework: 1) defining the resilience problem; 2) collecting data; 3) analyzing data; and 4) restituting results. The first part of the article presents the diversity and complexity of resilience theories. The second part describes the basis of the framework. The third part presents and illustrates the framework.

Insights about what authorities and emergency services need and expect from society
PRESENTER: Leire Labaka

ABSTRACT. Society plays a significant role in the face of a crisis. This role appears in many areas, such as assisting victims, helping vulnerable groups, sharing information, and allocating resources. To fully utilize this kind of effort, it needs to be aligned with the needs and efforts of the authorities and emergency services. This is one of the main purposes of ENGAGE, a European project that aims to identify a set of solutions that foster interaction and collaboration between society, emergency services, and authorities. To identify the characteristics these solutions need to fulfill, we designed a survey asking authorities, emergency services, and volunteers about their needs and expectations from society for better handling a crisis. The questionnaire is divided into two main parts: the first part covers the respondents' risk perception, and the second gathers the needs and expectations they have from society along the following dimensions: improving communication, building community networks, sharing knowledge, building trust, allocating resources, improving preparedness, and involving and empowering society in decision-making processes. We launched the survey in 7 European countries of different natures (Spain, Norway, Italy, France, Sweden, Romania, and Israel). The differences between the characteristics of each country enrich the findings and allow for developing solutions that consider different contextual aspects. In this paper, we present the results gathered from the survey in Spain, aligning the results with the contextual characteristics of the country. In total, 103 responses were collected; most of the respondents work in the civil protection area, followed by health services and the police. The results show that the respondents are most aware of pandemics, followed by social disruptions. We think this aligns with the current coronavirus pandemic.
Regarding the needs and expectations from society, in the case of preparedness, for example, the authorities and emergency services believe that society should have self-adapting capacities to deal with crises, followed by the ability to share information with other individuals. With this kind of information, the aim is to provide guidelines for defining solutions that improve the interaction between the authorities and emergency services and society, in order to better manage crises.

A Comparative Analysis of Dynamic vs. Quasi-static Approaches for Resilience Assessment of a Bulk Power System Against Severe Wind Events
PRESENTER: Farshid Faghihi

ABSTRACT. Severe weather-related events, e.g., windstorms, are one of the main causes of large-scale electric power outages worldwide. Although the probability of occurrence of these events is low, they fall into the high-risk category due to their significant consequences. The intensity and frequency of these events have gradually increased in the last decades and are expected to keep increasing in the future due to climate change. To this end, power grid resilience is critical to reduce the risk and vulnerability associated with these events. In this context, the objective of this research is to apply a probabilistic approach to quantify the resilience of a bulk power system, in terms of associated resilience metrics, against severe windstorms. The probabilistic methodology applied for resilience assessment is as follows. In the first step, probabilistic methods are used to model the wind speed in severe windstorms. The next step is dedicated to modelling the failure of the system elements during the windstorms. A Monte Carlo sampling method is applied to generate scenarios of multiple failures. An important step of the methodology is the simulation of the behaviour of the power system, which can be performed by either quasi-static or dynamic approaches. The quasi-static approach refers to applying Optimal Power Flow (OPF) analysis, while in a dynamic approach the behaviour of the power system is simulated by tracking the evolution of the system variables (e.g., voltages and frequency) and possible interventions of the protection systems. Finally, the resilience metrics (e.g., the expected energy not served) are estimated after the restoration phase. The methodology is applied to the IEEE 39-bus system, and a comparative analysis is provided to highlight the difference in the estimated metrics for the quasi-static versus the dynamic approach.
This comparison is meaningful because it reveals the effect of system element trips caused by electrical instability on the propagation of the disturbance through the power system.
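The element-failure sampling step can be sketched with a lognormal fragility curve and independent Bernoulli draws per element. The fragility parameters, number of lines, and wind profile below are illustrative assumptions, not the paper's data or its IEEE 39-bus model:

```python
import math, random

def failure_probability(wind_speed, v50=45.0, beta=0.15):
    """Lognormal fragility: P(failure | wind) with median capacity v50 (m/s)
    and log-standard deviation beta (illustrative parameters)."""
    if wind_speed <= 0:
        return 0.0
    z = (math.log(wind_speed) - math.log(v50)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_failure_scenarios(wind_speeds, n_lines=20, n_samples=5000, seed=3):
    """Monte Carlo sampling of multiple-failure scenarios during a windstorm:
    each line fails independently with its wind-dependent probability."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_samples):
        failed = sum(1 for line in range(n_lines)
                     if rng.random() < failure_probability(wind_speeds[line % len(wind_speeds)]))
        counts.append(failed)
    return counts

storm = [38.0, 42.0, 47.0, 51.0]   # m/s along the storm track (synthetic)
counts = sample_failure_scenarios(storm)
mean_failed = sum(counts) / len(counts)
print(round(mean_failed, 2))
```

Each sampled failure set would then be handed to the power-system simulation (OPF in the quasi-static case, time-domain simulation in the dynamic case) to evaluate the resilience metrics.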

14:00-15:20 Session TH4K: Software Reliability
Location: Atrium 1
Improving the Reliability of Autonomous Software Systems through Metamorphic Testing
PRESENTER: Arnaud Gotlieb

ABSTRACT. See the joint 1-page paper (in PDF)

A Comparison of Different Approaches for Verification and Validation of Software in Safety-critical Systems

ABSTRACT. The increased reliance on software in safety-critical systems requires an added emphasis on software verification and validation. To ensure, with sufficient confidence, compliance with functional safety requirements and avoidance of hazardous states, the effectiveness and coverage of approaches to software verification and validation are crucial. This paper presents an all-electric control system in subsea oil production that faces this challenge, a case study for our research into a digital twin for safety demonstrations. The paper highlights some promising ideas and approaches from academia and industry seen as beneficial for cross-domain knowledge transfer, set in the context of recommended tools, methods, and techniques from standards, e.g., IEC 61508 and ISO 26262. Beyond industrial insights, the article collects knowledge and ideas adaptable to our intended approach of verifying and validating safety-critical software using a digital twin. Based on the literature survey, the ideas recommended for use in our approach are presented in this paper.

PRESENTER: Albin Tarrisse

ABSTRACT. In the past few years, Artificial Intelligence (AI), and more precisely Machine Learning (ML), has been developing quickly and has found applications in many fields, including industry. The process industry and the use of machinery are critical because they represent a high risk for the surroundings: humans, goods, and the environment. To mitigate those risks, safety devices carrying out safety functions are used to monitor and control processes and machinery in real time. IEC 61508, which gives a framework to certify the Safety Integrity Level (SIL) of such systems, is not suitable for complex algorithms such as ML algorithms can be. In this paper we discuss the general concepts of the standard that hinder the development and assessment of a safety device using AI in the current state of knowledge. Firstly, the AI development should be integrated in the lifecycle development of the device: IEC 61508 imposes the definition of a software lifecycle, which is usually based on a V-model. The lifecycle of ML software differs from that of critical embedded software as implicitly defined by the standard. A safety lifecycle must therefore be adapted to include, among other things, the collection, sampling, and formatting of the data as well as the learning process and the testing of the software. Secondly, the standard is largely based on the traceability of specifications through the different stages of the lifecycle and on controlling the complexity of the software. The means recommended by the standard are not adapted to ML; however, a parallel can be made with current work on ML explainability, trustworthiness, and robustness. Several options can be considered to demonstrate the robustness of a device, such as formal verification and white-box and black-box testing. Furthermore, ML relies on the formation of a learning database.
Requirements have to be defined for this database and the related process (sampling, labeling, etc.) in order to prevent the introduction of systematic faults in the learning process and to be able to determine the limits of validation, and therefore of use, of the safety device. This may lead to the qualification of the learning base with respect to a specification and intended use. Finally, from the perspective of the IEC 61508 standard, the software of a safety device has deterministic behavior. Because this paradigm does not necessarily apply to ML algorithms, new metrics could be devised to assess and quantify this property.

Review of Reliability Modelling Methods for Safety-critical Software in Nuclear Power Plant

ABSTRACT. Safety-critical software plays an important role in the safe operation of a nuclear power plant (NPP). However, it also brings challenges both to the reliability analysis of the safety system and to the Probabilistic Safety Assessment of NPPs. Reliability analysis of safety-critical software is also expected by nuclear regulation agencies and software development groups for test evaluation and optimization. It is therefore essential to carry out reliability modelling research on nuclear safety-critical software. The faults detected during the software test process are regarded as closely connected with software reliability, and hundreds of test-based software reliability models have been proposed. Software reliability growth models are very commonly used in software reliability evaluation. To incorporate more information and provide more accurate analysis, modelling the software fault detection and correction processes has attracted widespread research attention recently. This paper reviews the research progress on software reliability in the field of nuclear power in recent years. In combination with the characteristics of safety-critical software, the article provides a detailed analysis of software reliability growth models, which helps to analyse the reliability of safety-critical software.
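A classic instance of the software reliability growth models discussed here is the Goel-Okumoto NHPP model, whose mean value function and reliability prediction fit in a few lines; the parameter values below are illustrative only, not estimates for any actual nuclear software:

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative faults detected by test time t under the
    Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def reliability(t, x, a, b):
    """P(no failure in (t, t+x]) given test time t, under the NHPP
    assumption: exp(-(m(t+x) - m(t)))."""
    return math.exp(-(goel_okumoto_mean(t + x, a, b) - goel_okumoto_mean(t, a, b)))

a, b = 120.0, 0.05   # illustrative: 120 eventual faults, detection rate 0.05/week
print(round(goel_okumoto_mean(40, a, b), 1), round(reliability(40, 1, a, b), 3))
```

In practice a and b are estimated from observed fault-count data (e.g., by maximum likelihood), and fault detection/correction process models extend this picture by treating detection and repair as separate, coupled processes.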