
08:15-09:00 Coffee Break
09:30-10:10 Session Plenary I
Location: Auditorium
Artificial Intelligence, Safety and Reliability: an old story or a new age?

ABSTRACT. The new proposals of artificial intelligence, and especially of data science, are now flooding the scientific field, the industrial field and, more generally, society. These proposals are not limited to progress in machine learning itself, especially deep learning. Indeed, many scientific areas are impacted and are facing significant evolutions: image recognition, scientific computation aggregating physical models and data-based models, data augmentation, natural language processing, optimization and reinforcement learning, new digital twins, cyber-detection by machine learning, supervision of large industrial systems, bidirectional digital assistants (learning from and to the expert), hybridization between symbolic AI and connectionist AI, and more. The fields of safety, maintenance and reliability are not exempt from this surge and these promises. But is this safety-AI hybridization new? One could say that, until now, data science has come to the rescue of safety to help describe aging processes and the stochasticity of states and their transitions. In its unsupervised versions, it can also be of great help in detecting failure cases not yet observed in real life. Today, with the explosion in the number of AI algorithms integrated into industrial and service systems, the question arises of their validation and of the confidence placed in them. Safety sees before it a new field of application of exponential size, where the usual tools are helpless in the face of the combinatorial nature of states and of operational domains with an increasingly wide spectrum. The talk proposes response elements to these dual-entry questions. It will illustrate them through current applications such as autonomous vehicle validation, fault diagnosis of large critical systems, and rail maintenance.

10:10-10:25 Coffee Break
10:25-11:25 Session MO1A: Risk Assessment
Location: Auditorium
PRESENTER: Marcelo Póvoas

ABSTRACT. This study proposes a method for the strategic decision-making process that takes into account the identification and prioritization of the potential risks of Management of Change (MOC) in an industrial environment. The Analytic Hierarchy Process (AHP) and Bayesian Belief Networks (BBN) were used to assess the risks that could affect regular operations, in order to generate data for an effective decision-making process. In addition, concepts of machine learning and artificial intelligence (AI) were introduced so that the analyses can be performed in a more automated way, generating reports that can assist decision making. No previous work was found dealing with the prioritization of risks arising from MOC in any type of industry. As a result of this study, a global risk matrix was proposed. The factors with the greatest impact are lack of stakeholder involvement, lack of risk assessment in the MOC process, and lack of knowledge of the employees involved in the MOC process. Finally, 12 steps were defined to implement a risk-free MOC process. The study provides a method that professionals, engineers and decision makers can use to identify risk factors that could affect company operations.


ABSTRACT. The classical approach to design is based on a deterministic perspective, where the assumption is that the system and its environment are fully predictable and their behavior is completely known to the designer. Although this approach may work fairly well for regular design problems—given, of course, the possible extra costs of over-designing for the sake of safety and reliability—it is not satisfactory for the design of highly sensitive and complex systems where significant resources and even lives are at risk. In this paper, a risk-based design framework using the Simulation-Based Probabilistic Risk Assessment (SIMPRA) methodology is proposed. SIMPRA allows the designer to use the knowledge that can be expected to exist at the design stage to identify how deviations can occur, and then to apply these high-level scenarios to a rich simulation model of the system to generate detailed scenarios and identify their probability and consequences. The SIMPRA approach is much more efficient in covering the large space of possible scenarios than, for example, biased Monte Carlo simulations, because of the planner element, which uses engineering knowledge to guide the simulation process. The value added by this approach is that it enables the designer to observe system behavior under many different conditions. The designer can also modify the design to compare the results of the risk assessment under different design specifications. This process leads to a risk-informed design in which the risk of negative consequences is either eliminated entirely or reduced to an acceptable range. For illustrative purposes, an earth observation satellite system example is introduced. The goal is to consider the early stages of the satellite's design as an example of how the proposed methodology improves the design by making it risk-informed.
This example highlights the flexibility of the risk scenario planning process in adapting itself to the new system settings after applying design risk management tools.

Margins Assessment using Dynamic PSA

ABSTRACT. The purpose of this communication is to present an innovative method to assess Initiating Event frequencies that are usually evaluated with "static modeling" by Fault Trees.

Probabilistic Safety Assessment (PSA) models developed for the nuclear industry consider a list, as exhaustive as possible, of Initiating Events (IE) for all reactor states. Initiating Events induced by the failure of a system used in normal operation are commonly modeled using Fault Trees. In some cases, this method can lead to a large overestimation of the theoretically built Initiating Event frequency (using Fault Trees) compared with the practically observed one (from Operating Experience).

For several years, Framatome has been developing dynamic PSA based on Petri nets (see Dosda et al. 2021). Compared to "static modeling" by Fault Trees, the use of dynamic PSA allows more realistic modelling of plant behavior, reducing conservatisms and leading to more representative results. This can be of particular interest for slow degradation scenarios, for which it may be possible to take credit for repairs and grace periods.

In this context, and with development perspectives in mind, Framatome is experimenting with a method for a dynamic PSA model of the loss of Heating, Ventilation and Air Conditioning (HVAC) Initiating Event. The use of dynamic PSA on this study case leads to a large reduction of the calculated Initiating Event frequency. As Initiating Events induced by the failure of systems used in normal operation contribute significantly to the Core Damage Frequency (CDF) of reactors, the application of this method could lead to non-negligible reductions of this CDF. In this way, a dynamic PSA model is a well-suited tool to highlight the margins available compared with a static PSA model.

The communication presents the main concepts of this approach, its implementation with Petri nets, and the first results, and concludes with perspectives.
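The effect described here, crediting repairs within a grace period, can be sketched with a deliberately naive toy model (this is not Framatome's Petri-net model; the failure rate, repair rate and grace period below are invented for illustration):

```python
import random

random.seed(11)

def ie_frequency(lam, mu, grace, horizon=200_000.0):
    """Simulate failures at rate lam with exponential repair times
    (rate mu). A failure only counts as an initiating event if it is
    not repaired within the grace period."""
    t, events = 0.0, 0
    while t < horizon:
        t += random.expovariate(lam)    # time to next failure
        repair = random.expovariate(mu) # time needed to repair it
        if repair > grace:
            events += 1                 # grace period exceeded: IE
        t += repair                     # system unavailable while repaired
    return events / t
```

With lam = 0.01/h, mu = 0.5/h and a 4 h grace period, the simulated IE frequency falls well below the static estimate lam, which counts every failure as an IE; this is the kind of margin the abstract refers to.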

10:25-11:25 Session MO1B: Mathematical Methods in Reliability and Safety
Location: Atrium 2
Reliability assessment with a compound Poisson process-based shock deterioration model

ABSTRACT. Existing structures may suffer from resistance deterioration due to repeated attacks. The modeling of resistance deterioration is a crucial ingredient in the reliability assessment and service life prediction of these degraded structures. In this paper, an explicit compound Poisson process-based model is developed to describe the shock deterioration of structural resistance, where the magnitude of each shock deterioration increment is modeled by a Gamma-distributed random variable. The moments (mean value and variance) and the distribution function of the cumulative shock deterioration are derived in a closed form. Subsequently, the overall resistance deterioration is modeled as the linear combination of the gradual and shock deteriorations. The proposed model can be used in the time-dependent reliability assessment of aging structures efficiently. A numerical example is presented to demonstrate the applicability of the proposed deterioration model by considering the time-dependent reliability of an aging bridge.
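As a rough illustration of such a shock model (not the authors' closed-form derivation; all parameter values are invented), the compound Poisson process with Gamma-distributed jumps can be simulated directly. For Gamma(k, θ) jumps arriving at rate λ, the standard compound-Poisson moments give E[D(t)] = λt·kθ:

```python
import math, random

random.seed(42)

def poisson(lam):
    """Knuth's method for a Poisson draw (fine for moderate rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def shock_deterioration(t, rate, shape, scale):
    """Cumulative shock deterioration D(t): compound Poisson process
    with Gamma(shape, scale) jump sizes."""
    return sum(random.gammavariate(shape, scale)
               for _ in range(poisson(rate * t)))

def reliability(t, r0, gradual, load, rate, shape, scale, n=5000):
    """Monte Carlo estimate of P(resistance at time t > load),
    combining linear gradual deterioration with the shock process."""
    ok = sum(r0 - gradual * t - shock_deterioration(t, rate, shape, scale) > load
             for _ in range(n))
    return ok / n
```

The simulated mean of D(t) can be checked against λt·kθ, and the time-dependent reliability decreases with t, mirroring the aging-bridge example of the abstract.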

A Comprehensive Probabilistic Assessment Method of UAS Ground Collision Risk

ABSTRACT. Context. Unmanned Aircraft Systems (UAS) are being widely experimented with in domains such as transportation, delivery and infrastructure surveillance. However, using these systems for missions near populated areas presents new safety challenges. To address these challenges, the European Aviation Safety Agency has published safety assessment guidelines for unmanned operations. This document requires assessing, for a given operational profile, the likelihood of on-ground collision with critical infrastructure or people.

Problem statement. Despite these regulatory requirements, the probabilistic assessment of on-ground collision is only partially addressed by existing works. On the one hand, various works promote Model-Based Safety Assessment to identify the failures contributing to the crash. On the other hand, some works provide methods for the probabilistic estimation of an on-ground collision given that the drone is unable to ensure flight continuation. Moreover, in these methods the assessment is performed by means of Monte Carlo simulation. However, with the growing complexity of UAS, the computational effort to estimate the probability of rare events with the standard Monte Carlo method becomes intractable for modern UAS.

Contributions. The contribution of this paper is thus to provide a comprehensive, tool-supported method to estimate the on-ground collision probability by considering the contribution of on-board failures, tolerance mechanisms and operational specificities. To tackle the limitations of Monte Carlo, variance reduction methods, more specifically importance sampling, are used to obtain a quicker and tighter estimation of the probability than the standard Monte Carlo method. The paper provides a detailed presentation of the method and a demonstration of the benefits of importance sampling over Monte Carlo through a comparative study on a UAS case study. The experiments are based on a safety model formalized with the Open AltaRica platform and on a custom simulator that performs both Monte Carlo and importance sampling simulations.
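The variance-reduction argument can be illustrated on a classic toy problem, estimating the Gaussian tail probability P(X > 4). This is far simpler than the UAS safety model in the paper, but it shows why importance sampling gives tighter estimates for rare events than crude Monte Carlo:

```python
import math, random

random.seed(0)

def crude_mc(c, n):
    """Crude Monte Carlo estimate of p = P(X > c) for X ~ N(0, 1)."""
    return sum(random.gauss(0.0, 1.0) > c for _ in range(n)) / n

def importance_sampling(c, n):
    """Sample from the shifted proposal N(c, 1), where the rare event
    is common, and reweight each draw by the likelihood ratio
    phi(x) / phi(x - c) = exp(-c*x + c*c/2)."""
    vals = []
    for _ in range(n):
        x = random.gauss(c, 1.0)
        vals.append(math.exp(-c * x + c * c / 2) if x > c else 0.0)
    est = sum(vals) / n
    var = sum((v - est) ** 2 for v in vals) / (n - 1)
    return est, math.sqrt(var / n)  # estimate and its standard error
```

With c = 4 (true p ≈ 3.2e-5), crude Monte Carlo with 100,000 samples sees only a handful of hits, while the importance sampling estimator achieves a relative standard error below one percent with the same budget.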

PRESENTER: Julien Beaucourt

ABSTRACT. Common cause failures (CCF) are known to contribute significantly to risk, as shown by probabilistic safety assessments (PSA). They have been taken into consideration since the early ages of PSA developments in the nuclear industry [1], as in other fields [2,3]. Nevertheless, the modeling of CCF is also widely recognized to be a challenging part of PSA, especially when the systems' failure is described through a fault tree (FT) structure. Among the most commonly used models for the assessment of CCF parameters, the alpha factor model is generally considered one of the most relevant for the integration of operating experience feedback (OEF). This parametrization of CCF is based on two sets of parameters: the total failure rate of each item of the CCF group (due to CCF and independent failures), and the alpha-k parameters, which are the fractions of the total failure rate associated with a CCF of k components in a group of size m >= k. The evaluation of these parameters based on operating experience is generally difficult: for highly reliable systems such as those used in the nuclear industry, the number of failures is generally very low, and it is therefore difficult to rely on classical frequentist approaches. In this context, the Bayesian approach is generally recognized as a good alternative to the frequentist approach, since it allows the introduction of exogenous data (such as expert judgment, generic data, etc.) into the evaluation [4]. Moreover, uncertainties are naturally considered and evaluated in Bayesian modeling. The difficulty is that an analytical determination of the posterior distribution is generally not possible, except in the very specific case of conjugate prior and likelihood. In this paper, we present a Bayesian quantification of the CCF parameters of the alpha factor model. The Bayesian computation is performed using the Stan software [5].
The Stan software is a state-of-the-art tool for statistical modeling, especially Bayesian analysis, based on a Markov Chain Monte Carlo (MCMC) algorithm for posterior sampling. As a first step, the posterior distribution is sampled from a conjugate prior/likelihood model (Dirichlet/multinomial), using controlled data. Then the alpha factor CCF model parameters are assessed in a two-stage (non-analytical) Bayesian model, using OEF data.
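The first step, the conjugate Dirichlet/multinomial model, can be sketched without Stan: with a Dirichlet prior on the alpha factors and multinomial event counts, the posterior is again Dirichlet and can be sampled directly. The event counts below are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical OEF counts: n_k = number of failure events involving
# exactly k components in a CCF group of size m = 3 (illustrative only).
counts = [120, 6, 2]
prior = [1.0, 1.0, 1.0]  # flat Dirichlet prior on (alpha_1, ..., alpha_3)

# Conjugacy: Dirichlet prior + multinomial likelihood -> Dirichlet posterior
posterior = [a + n for a, n in zip(prior, counts)]

def dirichlet(alphas):
    """One Dirichlet draw via normalized Gamma variates."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

draws = [dirichlet(posterior) for _ in range(50_000)]
post_mean = [sum(d[k] for d in draws) / len(draws) for k in range(3)]
```

The sampled posterior mean matches the analytical Dirichlet mean (each updated concentration divided by their sum), which is exactly the controlled check the abstract describes before moving to the non-conjugate two-stage model.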

10:25-11:25 Session MO1C: Maintenance Modeling and Applications
Research on Concept Modeling of Mission-based Aviation Equipment Support System of System

ABSTRACT. In view of the operational characteristics of aviation troops under informationized conditions, the concept of the aviation equipment support system of system (SoS) and its conceptual models are defined. The concept model of the mission-based aviation equipment support SoS is established along the logical main line of "task-function-entity-relationship". Task, function, entity and relationship models of the aviation equipment support SoS are constructed. Through four categories and seventeen views, the overall conceptual model is described in a complete and detailed way. The research work in this paper can lay a foundation for the follow-up simulation, evaluation and decision-making of support system operation, and provide help for aviation combat intelligence support and decision-making.

Criticality-based predictive maintenance scheduling for aircraft components with limited spares

ABSTRACT. In this paper we propose a criticality-based scheduling model for aircraft component replacements. We schedule maintenance for a fleet of aircraft, each equipped with a multi-component system. The maintenance schedule should take into account the limited stock of spare components and the Remaining-Useful-Life prognostics for the components in this system. We propose a component replacement scheduling model with three stages of maintenance criticality: i) critical aircraft that are not airworthy due to a lack of sufficient operational components, ii) predictive alerts for expected component failures, and iii) non-critical aircraft with some failed components. An Adaptive Large Neighborhood Search (ALNS) algorithm is developed to solve this criticality-based aircraft maintenance planning problem. The ALNS algorithm has two sub-heuristics: a constructive heuristic, which provides an initial feasible solution, and an improvement heuristic, which iteratively improves this initial solution with an adaptive destroy-and-repair approach. The framework is illustrated for a fleet of 50 aircraft, each equipped with a $k$-out-of-$N$ system of components. The results show that we can obtain a predictive maintenance plan for a fleet of aircraft with outstanding computational performance (less than 6 seconds for a fleet of 50 aircraft). Moreover, our planning with three levels of criticality ensures aircraft airworthiness while making use of less expensive maintenance slots, compared with a planning with two levels of criticality.
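The destroy-and-repair mechanics of ALNS can be sketched on a toy routing problem (this is not the paper's aircraft maintenance model; the operators, the weight update rule, and all data are deliberately minimal and invented):

```python
import math, random

random.seed(7)

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[i - 1]])
               for i in range(len(tour)))

def destroy_random(tour, pts, k):
    """Destroy operator 1: remove k random cities."""
    removed = random.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def destroy_worst(tour, pts, k):
    """Destroy operator 2: remove the k cities with the largest detour
    (a one-shot simplification of classical worst removal)."""
    def detour(i):
        p, c, n = pts[tour[i - 1]], pts[tour[i]], pts[tour[(i + 1) % len(tour)]]
        return math.dist(p, c) + math.dist(c, n) - math.dist(p, n)
    worst = sorted(range(len(tour)), key=detour, reverse=True)[:k]
    removed = [tour[i] for i in worst]
    return [c for c in tour if c not in removed], removed

def repair_greedy(partial, removed, pts):
    """Repair operator: cheapest insertion of each removed city."""
    for c in removed:
        best = min(range(len(partial) + 1),
                   key=lambda i: tour_length(partial[:i] + [c] + partial[i:], pts))
        partial.insert(best, c)
    return partial

def alns(pts, iters=400, k=3):
    tour = list(range(len(pts)))     # constructive heuristic: trivial tour
    best = tour[:]
    ops = [destroy_random, destroy_worst]
    weights = [1.0, 1.0]             # adaptive operator weights
    for _ in range(iters):
        j = random.choices(range(len(ops)), weights)[0]
        cand = repair_greedy(*ops[j](tour, pts, k), pts)
        if tour_length(cand, pts) < tour_length(tour, pts):
            tour = cand
            weights[j] += 0.2        # reward the successful operator
        if tour_length(tour, pts) < tour_length(best, pts):
            best = tour[:]
    return best
```

The same skeleton (initial feasible solution, weighted choice of destroy operators, greedy repair, reward on improvement) underlies ALNS regardless of the application domain.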

PRESENTER: Mohamed Jmel

ABSTRACT. Nowadays, the warranty of a product is clearly a competitive marketing tool and a commercial instrument that allows the contractualization of the relationship between customers and companies while balancing all the risks and benefits for the sake of the manufacturer. Classical approaches are based on determining a warranty policy from stationary characteristics [1]. They generally couple reliability and cost specific to the product. The recent industrial interest in collecting and storing operating data on machines motivates a move towards more specific warranty contracts. Instead of designing contracts based on expertise and gut feeling, we want to tailor warranty contracts to customer profiles and product usage. Within this context, the aim of this paper is to develop a methodology to define customer-specific warranty contracts applicable in an industrial context. This methodology is developed from real data collected on industrial printers. We propose here to extend the work of Zhang et al. [2] to take into account the heterogeneity of customer profiles. A segmentation process built on the company's existing database and based on data mining techniques will capture the heterogeneity of customer behavior and therefore support the establishment of specific contracts. Let us recall the existence of a large number of segmentation methods [3], each with its limitations and advantages. In our case, the proposed segmentation must ensure the best qualification of the various risks (client-supplier). A first analysis of the performance of several clustering methods will therefore be performed. Based on the results of this analysis, we will then propose the development of the methodology to define the various warranty strategies, which we will apply to real data.

References: [1] Xiaolin Wang and Wei Xie, Two-dimensional warranty: A literature review, Journal of Risk and Reliability 232, 284–307 (2018). [2] Zhaomin Zhang, Zhen He and Shuguang He, A Customized Two-dimensional Warranty Menu Design for Customers with Heterogeneous Usage Rates, IFAC-PapersOnLine 52-13, 559–564 (2019). [3] Konstantinos Tsiptsis and Antonios Chorianopoulos, Data Mining Techniques in CRM: Inside Customer Segmentation (2010).

10:25-11:25 Session MO1D: Prognostics and System Health Management
Location: Panoramique
Investigation into real-time influence of time varying operational conditions and sensor signals on reliability of engineered systems

ABSTRACT. In spite of recent developments, including several applications of machine learning approaches in the field of Prognostics and Health Management (PHM) with particular emphasis on Remaining Useful Life (RUL) prediction, it remains a challenge to investigate the real-time influence of time-varying operational conditions and sensor signals on system reliability and the corresponding RUL. Since machine learning methods have difficulties capturing this, we propose a hybrid model integrating survival analysis techniques and multivariate time series approaches. This is analysed using a subset of the C-MAPSS turbofan failure data set, with the aim of identifying the real-time influence of operational conditions and sensor signal variations on the degradation behaviour of turbofan units. More specifically, the Cox Proportional Hazards Model (PHM) is employed to generate heterogeneous reliability indices for different turbofan units in both the training and test sets. Then, a Vector Autoregressive model with Exogenous variables (VARX), using pairwise Conditional Granger Causality tests for feature selection, is employed to model and analyse the dynamic degradation behaviour of individual turbofan units in both training and test sets. Finally, the time-varying effects of operational conditions and sensor signals are investigated by means of the Impulse Response Function (IRF), which is intrinsic to the VARX model. Results show that, compared with baseline methods, the proposed approach is competent in reflecting the real-time influence of operational conditions and sensor signals upon system reliability.
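For a reduced-form VAR(1) without exogenous terms, the impulse responses the abstract refers to are simply powers of the coefficient matrix (the full VARX case adds exogenous regressors but the IRF recursion is the same in spirit). A minimal sketch, with an invented coefficient matrix:

```python
def mat_mul(A, B):
    """Plain matrix product for small dense matrices (lists of lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def irf(A, horizon):
    """Impulse responses of a VAR(1) model y_t = A y_{t-1} + e_t:
    the response h steps after a unit shock is A**h, so entry [i][j]
    of the h-th matrix is the response of variable i to a shock in j."""
    m = len(A)
    P = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    out = []
    for _ in range(horizon + 1):
        out.append(P)
        P = mat_mul(A, P)
    return out
```

For a stable coefficient matrix (spectral radius below one) the responses decay geometrically, which is what makes the IRF a useful summary of how a transient disturbance in one signal propagates through the others.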

A Fast Fault Diagnosis Method For The Unlabeled Signal Based On Improved PSO-DBSCAN Algorithm

ABSTRACT. The fault diagnosis of different components or signals with supervised learning methods usually requires a large number of training samples. In practical engineering applications, the diagnosis efficiency is low and the failure rate is high due to the small number of training samples. To solve these problems, a step-by-step fast fault diagnosis method based on an improved Particle Swarm Optimization (PSO)-Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm and the Least Squares Support Vector Machine (LSSVM) is proposed. Firstly, the original signal is preprocessed by normalization and wavelet packet de-noising. Then, the result of dimensionality reduction by Principal Component Analysis (PCA) is used as the input of the improved PSO-DBSCAN algorithm to cluster the data, and the training samples are formed from the resulting data categories. Secondly, the training samples are used as the input of the LSSVM to train the fault classifier. Finally, by using the trained classifier to classify other data, the working state of the component or system can be obtained. The feasibility and effectiveness of this method are verified by the simulation analysis of oil data from a certain type of engine. The results also prove that this method can realize fast fault classification and diagnosis for unlabeled signals.
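For reference, the clustering core of such a pipeline, plain DBSCAN without the PSO parameter search or the LSSVM stage, fits in a few lines (the points in the usage check below are invented):

```python
import math

def dbscan(points, eps, min_pts):
    """Plain DBSCAN: returns one cluster id per point, or -1 for noise.
    min_pts counts the point itself."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # noise (may become border later)
            continue
        labels[i] = cid                 # i is a core point: new cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid         # reached noise -> border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:      # j is core: expand through it
                queue.extend(jn)
        cid += 1
    return labels
```

The role of PSO in the abstract's method is precisely to tune eps and min_pts, whose values strongly affect which points end up labeled as noise.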

An Intelligent Fault Diagnosis Method of Gear Based on Parameter-Optimized DBN Using SSA
PRESENTER: Kunyu Zhong

ABSTRACT. As an important component of transmission systems, gears usually suffer from complicated fault modes. Although most current methods based on traditional machine learning show good performance in identifying different fault modes, their diagnostic accuracy and generalization ability are clearly insufficient for detecting fault severities with highly similar signal features. To address this problem, an intelligent fault diagnosis method based on a deep belief network (DBN) and the sparrow search algorithm (SSA) is proposed. Firstly, vibration signals of different gear fault modes and severities are acquired as input samples for the DBN model; then SSA, incorporating elite opposition-based learning (EOBL), is introduced to search for the optimal combination of learning rate and batch size during DBN training. Finally, a parameter-optimized DBN is established for gear fault diagnosis and severity detection. The experimental analysis demonstrates that the proposed method can avoid complicated signal processing and subjective interference. It is proved to have superior feature extraction ability, diagnosis accuracy and stability compared with methods based on shallow learning and the non-optimized DBN.

10:25-11:25 Session MO1E: Human Factors and Human Reliability
Location: Amphi Jardin
The use of a driving simulator for training learner drivers belonging to a high-risk group

ABSTRACT. Migrant drivers are considered a high-risk group in traffic; in particular, drivers from the Middle East and Africa are overrepresented in road accident statistics (Nordbakke & Assum, 2008). There are several factors why this group is at higher risk than others. First of all, this group often consists of people with a different cultural understanding of risk and road safety, and with significantly different driver training compared with the Norwegian driving culture and training (Haldorsen, 2011). In addition, the language and terminology used in driving differ from what they are familiar with (Holmquist, 2019). For this reason, the group is specified in the Norwegian National Transport Plan (NTP) as a group for which research-based measures for increasing safety are in demand. Thus, our research question was: can a driving simulator be a beneficial safety measure for the high-risk group of migrant learner drivers? Method: Five interviews with driving instructors who used driving simulators to train migrant learner drivers were conducted, in addition to observations of teaching situations. Grounded theory was used for the analysis. This research is part of a larger project on driver training for migrants, in which optimal driver training has been the main issue. The project lasted for 3 years, and driving instructor students, learner drivers and driving instructors took part. Results: The core category was "The simulator could increase safety training". This was based on the two main categories "The simulator is used like a car" and "Self-training in simulators". The findings indicate that the simulator is mostly used instead of a car, with the driving instructor present during a training session. It also supported safety aspects of the instructor being in a car, such as knowing that the learner driver understood terminology such as braking and duty to yield.
This gave the driving instructor advance knowledge, before the first ride in a real car, of what the migrant learner driver needed to work on and what challenges to look out for in real traffic. Conclusion: The use of a simulator in migrant driver training is a measure that could be considered to increase safety during driver training and to give the migrant learner driver a more thorough understanding of how to drive safely according to Norwegian standards. References: Haldorsen, I. (2011). Høyrisikogrupper i vegtrafikken, samlerapport. Vegdirektoratet rapport nr. 15. (High-risk groups in road traffic: a review. Our translation.) Nordbakke, S. & Assum, T. (2008). Innvandreres ulykkesrisiko og forhold til trafikksikkerhet. Oslo: Transportøkonomisk institutt. (Immigrants' risk and relation to road traffic safety. Our translation.)

Studying the mental stress factor in occupational safety in the context of the smart factory
PRESENTER: Azfar Khalid

ABSTRACT. The use of collaborative robots (cobots) in industrial settings has grown and continues to expand globally, especially in the context of the smart factory. Humans and cobots are increasingly expected to share their workspace, and associated workplace health and safety issues are expected to rise. This research study seeks to understand the impact on workers' mental health, as measured by mental stress, of variables such as task complexity, time constraints, production speed and robot payload whilst working alongside a cobot. Non-invasive neuroimaging data acquisition and processing is used to find the correlations of mental stress with variations in work environment conditions. Mental states often correlate with the brain's alpha rhythms and with changes in haemoglobin concentrations, and are observable only by a multimodal technique such as EEG+fNIRS. These patterns increase the information content of the measured signals and improve the accuracy of the decoding of mental states. This research defines a strategy for the experimental design and presents the initial patterns acquired for the designed process tasks.

Best Practices in digital Human-System Interfaces at Nordic Nuclear Powerplants
PRESENTER: Lars Hurlen

ABSTRACT. This paper summarises the findings from the project "Best Practices Human-System Interfaces at Nordic Plants", funded by the HAMBO Group, a consortium of Nordic nuclear power plants. It describes a 2-year project with the goal of collecting experiences and good practices from the operators/users of digital interfaces in Nordic plants' control rooms. Ten units from four different power plants in Sweden and Finland participated in the project, and data was collected through observations and interviews with main control room operators, training instructors, and staff involved in design and modernisation projects at the plants. The visits included dedicated time for crew observation in scenarios in the training simulators and visits to the main control rooms of the plants.

In this paper we highlight the main findings, present the methods for interviews and observations, define the criteria used for the selection of best practices, and describe, illustrate and provide reference designs for each of the identified best practices. We further highlight current unsolved challenges with digital interfaces and explore observed good practices in visited units. We present an overview of the current state of the art regarding digital interfaces in the Nordic plants, with a focus on the user experience and implications of the modernisations on everyday tasks. We complement the findings of the previous work on display design and visualisation techniques by placing emphasis on the user perspectives on the advantages and current challenges of digitalising the main control room.

To our knowledge this is the first effort to map the state of the art in Nordic plants regarding control room digitalisation and as such provides a unique overview of the operating experience with digital interfaces.

10:25-11:25 Session MO1F: Degradation analysis and modelling for predictive maintenance
PRESENTER: Franck Corset

ABSTRACT. We consider a gamma process for the degradation modeling of a system that is periodically inspected. At each inspection, a decision is taken with respect to the level of the degradation process. We consider the following maintenance framework:
- The system degradation indicator is related to service quality rather than to a health indicator, and system failure is not considered in this paper;
- A perfect preventive maintenance is performed if the degradation level exceeds a fixed safety threshold L: the operation leads to an "as good as new" system;
- An imperfect preventive maintenance is performed if the degradation level is between a preventive threshold M and L: the imperfect action is modelled as an arithmetic reduction of degradation of order 1. In this case, the improvement is proportional to the degradation level at the inspection time (with a reduction factor);
- No action is performed if the degradation level is lower than M;
- Exceeding the safety threshold is not self-announced, and this event is detected only during an inspection;
- Maintenance actions do not induce a delay.

Considering the costs of inspections and of perfect and imperfect preventive maintenance, we derive the closed-form expression of the long-run average cost of the maintenance policy in order to optimize the expected total cost between two CM (thanks to renewal theory). A sensitivity analysis is performed, and the robustness of the optimal parameters with respect to uncertainty is studied.
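Such a policy can also be evaluated by plain simulation, which is a useful cross-check for a closed-form cost expression. A minimal sketch (the cost figures, gamma parameters and thresholds below are invented; this is not the authors' derivation):

```python
import random

random.seed(3)

def long_run_cost_rate(M, L, rho, tau=1.0, a=1.0, b=0.5,
                       c_insp=1.0, c_imp=5.0, c_perf=25.0,
                       periods=20_000):
    """Simulated cost rate of the policy: inspect every tau, with a
    degradation increment per period ~ Gamma(a*tau, b); perfect PM
    (back to zero) above L, imperfect PM between M and L modelled as
    ARD1, i.e. removing a fraction rho of the current level."""
    x, cost = 0.0, 0.0
    for _ in range(periods):
        x += random.gammavariate(a * tau, b)
        cost += c_insp                      # every inspection is paid
        if x > L:
            x, cost = 0.0, cost + c_perf    # perfect PM: as good as new
        elif x > M:
            x, cost = (1.0 - rho) * x, cost + c_imp  # imperfect PM (ARD1)
    return cost / (periods * tau)
```

Sweeping M and rho in such a simulation reproduces the kind of sensitivity analysis the abstract describes, without requiring the closed-form expression.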

Filtering noisy Gamma degradation process: Genz transform versus Gibbs sampler
PRESENTER: Xingheng Liu

ABSTRACT. Stochastic processes are widely used to describe continuous degradation, among which monotonically increasing degradation is the most common. However, in practice, the observed degradation path is often perturbed by undesired noise due to sensor or measurement errors. When the noise is Gaussian with constant variance, different approaches such as Monte Carlo integration, Gibbs sampling, and the Genz transform can be used to compute the expected likelihood.

In this paper, we show the limitations of the Genz transform approach and the consequences of its inappropriate use. We also improve the Gibbs sampler by proposing an enhanced rejection sampling algorithm. In the presence of noise, the calculation of the likelihood function involves multivariate normal integrals. The Genz transform is a sequence of transforms that converts the original multivariate normal integration domain into a unit hypercube. Compared to Monte Carlo integration, the Genz transform is more efficient since it avoids sampling from the domain outside the integration limits. However, suppose that during a time interval the hidden degradation growth is negligible compared to the noise. In that case, we can prove that there is an accumulating error between the observed path and the sampled paths obtained using the Genz transform, and this error cannot be eliminated once it appears. This then results in an incorrect evaluation of the expected likelihood, biased estimates of the model parameters, and erroneous predictions of degradation growth. As a comparison, we provide the sampling results using the Genz transform and the improved Gibbs sampler on a time-dependent Gamma process perturbed with Gaussian noise.
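The modeling setup, a monotone Gamma path hidden behind additive Gaussian measurement noise, can be generated in a few lines (all parameter values below are illustrative). Note that the observed path is typically non-monotone even though the hidden path never decreases, which is exactly the regime where the filtering question arises:

```python
import random

random.seed(5)

def noisy_gamma_path(n_obs, dt=0.2, shape_rate=1.0, scale=0.1, sigma=0.5):
    """Monotone hidden Gamma degradation path, observed through
    additive N(0, sigma^2) measurement noise."""
    hidden, observed, x = [], [], 0.0
    for _ in range(n_obs):
        x += random.gammavariate(shape_rate * dt, scale)  # Gamma increment
        hidden.append(x)
        observed.append(x + random.gauss(0.0, sigma))     # noisy reading
    return hidden, observed
```

When the per-interval increments are small relative to sigma, as here, many consecutive observed readings decrease, and recovering the hidden monotone path requires the kind of constrained sampling (Genz transform or Gibbs) compared in this paper.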

PRESENTER: Nicola Esposito

ABSTRACT. In this paper, a hybrid condition/age-based maintenance policy is proposed for a deteriorating unit whose observed degradation path is affected by three forms of variability: time variability, unit-to-unit variability, and measurement errors. The perturbed degradation process is modelled by using the gamma-based model proposed in [1]. The unit is assumed to fail when its (hidden) degradation level exceeds a (fixed) given threshold. The failure is assumed to be not self-announcing, so a perturbed measurement does not allow one to say with certainty whether a unit has failed or not. The maintenance policy is defined by considering three different scenarios. The first one is inspired by the one considered in [2]: we assume that an optimal age-based replacement time and a single intermediate inspection time are initially planned and that, based on the outcome of the inspection, it is possible either to replace the unit immediately or to continue until the replacement time defined a priori. After each replacement the unit is considered as good as new. The main difference with respect to [2] is that, being unable to detect failures, we assume that replacements can occur only at inspection or at the a priori planned replacement time. The second scenario extends the first one by assuming that, based on the inspection outcome, a new conditional (age-based) replacement time can possibly be planned. Finally, as a third scenario, we extend the second one by assuming that a more expensive, unperturbed measurement can optionally be performed at inspection. The conditional pdfs of both the time to failure and the actual degradation level of the unit given the perturbed measure are computed by using a particle filter. Finally, the results obtained for each scenario are analyzed and compared by highlighting the benefits of each individual setup over the others in terms of long-run average cost rate.


References
1. Castanier B., Esposito N., Giorgio M., and Mele A., A perturbed gamma process with random effect and state-dependent error. In: Baraldi P., Di Maio F., and Zio E. (Eds.), e-proceedings of the 30th ESREL conference and 15th PSAM conference, 1-6 November 2020, Venice, Italy.
2. Finkelstein M., Cha J. H., and Levitin G., On a new age-replacement policy for items with observed stochastic degradation. Quality and Reliability Engineering International, 2020 (36), 1132-1143.
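The particle-filter step mentioned in the abstract can be sketched as a standard bootstrap filter; the gamma-increment dynamics and all parameter values below are illustrative assumptions, not the model of [1]:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(measurements, sigma, n_particles=5000, shape=1.0, scale=0.5):
    """Bootstrap particle filter: returns posterior particles of the hidden
    degradation level after the last perturbed measurement.
    Propagate with Gamma increments, weight by the Gaussian noise likelihood,
    then resample."""
    particles = np.zeros(n_particles)
    for y in measurements:
        particles = particles + rng.gamma(shape, scale, n_particles)  # propagate
        log_w = -0.5 * ((y - particles) / sigma) ** 2                 # weight
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)          # resample
        particles = particles[idx]
    return particles

true_path = np.cumsum(rng.gamma(1.0, 0.5, size=6))
noisy = true_path + rng.normal(0.0, 0.15, size=6)
post = particle_filter(noisy, sigma=0.15)
est = post.mean()  # posterior mean of the hidden degradation level
```

From such posterior particles one can also estimate, e.g., the probability that the hidden level exceeds the failure threshold, which is the quantity the maintenance decisions hinge on.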

10:25-11:25 Session MO1G: Nuclear Industry
Location: Atrium 3
PRESENTER: Dana Prochazkova

ABSTRACT. Power plants with small modular reactors (SMR) have been under development for several decades. Their main advantage is, in the first place, their small installed output. This advantage is conditioned by a high degree of inherent safety of the power plant. Due to the low installed power, it is possible to reduce the emergency planning zone and generally shorten the licensing time thanks to the greater simplicity of the system compared to large nuclear sources. Since power plants with SMR contain hazardous substances and complex technology, their risks need to be managed in favor of integral safety. The paper shows the sources of their risks and the way in which they are managed, as considered in the Energy Well reactor design created in the Czech Republic [1]. Current knowledge shows that uncontrolled risks of technical facilities will sooner or later cause losses, damage and harm, both to public assets and to the assets of the technical facility [2]. Because each territory has its own sources of risks, the design, construction and operation of a power plant with SMR must deal with external and internal sources of risks, and special attention must be given to the sources of organizational accidents.
Research based on critical evaluation of the data summarized in [3] shows that:
- the design of a power plant with SMR itself must apply: the All-Hazard Approach; inherent safety principles; the Defense-In-Depth principle; protective barriers and systems; technical protection measures; fail-safe design; quality backups; the integrity of risk management measures; measures providing the operator with information on the condition of the equipment; and measures enabling an effective response to critical equipment failures,
- the terms of reference need to address the critical tasks of a power plant with SMR, namely the personnel activities that contribute to: triggering an unacceptable phenomenon; detection and prevention of the phenomenon in question; management and mitigation of the phenomenon in question; and emergency response,
- from the safety point of view, the project of a power plant with SMR must monitor the requirements for: durability; controllability of equipment and processes; life cycle; human resources; costs; technical services; safety of employees, of humans and of the environment in the surroundings; and conditions for the creation of a safety culture during operation.

PRESENTER: Karel Vidlak

ABSTRACT. In the Temelin nuclear power plant (further NPP) with a WWER reactor, the steam generator is an important part of the technology that physically separates the media of the primary and secondary circuits. It is a horizontal heat exchanger with a large heat-exchange area, formed by bundles of "U" pipes. The temperature and pressure ratios in the steam generator are set in such a way that there is intensive steam generation on the surface of the pipes, as needed to drive the turbo generators [1]. The safety of the NPP depends on the correct operation of the steam generator, which is conditioned by a high-quality feedwater supply. The supply pipeline therefore belongs to the critical facilities of the Temelin NPP [2]. As part of risk-based maintenance [3], the supply line is specially monitored and, if necessary, a remedy is carried out in a timely manner. This work shows the method of the technical solution of a special defect. During one of the risk-based inspections, an unsatisfactory heterogeneous weld connection was detected on the feedwater supply. Since it is a critical device and the preparation of the reconstruction of the feedwater pipe is time-demanding, the correction was carried out urgently with the help of a compensatory device. The compensatory measure is a demountable structure consisting of four bolts by which the heterogeneous weld joint on the feedwater pipe is relieved [4]. Non-destructive tests of the welds, by the visual method and the capillary method over 100% of their extent [5], were carried out and showed the solution to be temporary, lasting approximately 6 years. A long-term solution is now being worked on: during subsequent revisions of the individual steam generators, the feedwater supply will be reconstructed, which will ensure a permanent solution to the problem.

PRESENTER: Jan Jiroušek

ABSTRACT. The accident at the Fukushima nuclear power plant demonstrated new challenges for the safety management of a nuclear power plant (further NPP). The proposed measures aim to avoid damage to the fuel cladding, which would sooner or later cause fission products to leak outside the NPP. In accordance with the Action Plan [1], the Feed & Bleed method was used for removing the residual heat at the Temelin NPP [2]. The steam generator (further SG) in the NPP is used during normal operation to produce steam, which drives the generator turbine [3]. The article describes long-term measures for an accident associated with the loss of power supply from the emergency diesel generators [1]. The critical function of the SG is conditioned by the integrity of the heat-exchange tubes. The risk to the austenitic material of the SG tubes is mainly posed by chlorides and sulphates in untreated water. Tests show that emergency supply with untreated water from the Vltava River would cause the limit concentrations to be reached after only 94 hours of operation [4]. Therefore, the possibilities of keeping the salinity of the boiler water in the SG within the specified limits even under "Station Black-Out" conditions were explored. The measures used will increase both the robustness and the resilience [5], which will extend the operating time of the SG with coolant within the limits set by the project [1] from 223 to 705, or even up to 3500 hours. In the meantime, the radioactive source term will also decrease. This will also reduce the risk associated with the leakage of fission products, in particular iodine-131, as a result of the loss of hermeticity of the primary circuit. A considerable benefit of the described solution is the possibility of predominantly manual control or regulation of the valves for the blowdown of each SG in a room sufficiently far from the containment.
In addition, the bled water from the SG, with a potential content of suspended radioactive particles, remains inside the nuclear facility - in the pools of the Essential Service Water System - and is not dispersed in the form of aerosols to the surrounding area.

References
1. CR, Post-Fukushima National Action Plan on Strengthening Nuclear Safety of Nuclear Facilities in the Czech Republic.
2. ČEZ ETE, The Temelin NPP - document 0TC033R1/DZ0. Temelin: NPP, 2018.
3. Author Team of ČEZ, The Temelin NPP - WWER 1000 primary circuit. Brno: ČEZ, 2008.
4. RFAEM, Fittings for equipment and pipelines of Nuclear Power Plants (NPP). General technical requirements OTT-87/99. Moscow: Ministry of the Russian Federation for Atomic Energy, 2004.
5. D. Prochazkova, Critical Infrastructure Safety. Praha: ČVUT, 2012.

10:25-11:25 Session MO1H: Aeronautics and Aerospace
Location: Cointreau
Predictive aircraft maintenance: modeling and analysis using stochastic Petri nets
PRESENTER: Juseong Lee

ABSTRACT. Predictive aircraft maintenance is a complex process, which requires the modeling of the stochastic degradation of aircraft systems, as well as the dynamic interactions between the stakeholders involved. In this paper, we show that stochastically and dynamically colored Petri nets (SDCPNs) are able to formalize the predictive aircraft maintenance process. We model the aircraft maintenance stakeholders and their interactions using local SDCPNs. The degradation of the aircraft systems is also modeled using local SDCPNs where tokens change their colors according to a stochastic process. These SDCPN models are integrated into a unifying SDCPN model of the entire aircraft maintenance process. We illustrate our approach for the maintenance of multi-component systems with k-out-of-n redundancy. Using SDCPNs and Monte Carlo simulation, we analyze the number of maintenance tasks and potential degradation incidents that the system is expected to undergo when using a remaining useful life (RUL)-based predictive maintenance strategy. We compare the performance of this predictive maintenance strategy against other maintenance strategies that rely on fixed-interval inspection tasks to schedule component replacements. The results show that by conducting RUL-based predictive maintenance, the number of unscheduled maintenance tasks and degradation incidents is significantly reduced.
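As a hedged toy version of the comparison described above (ignoring the SDCPN formalism and the k-out-of-n structure, and assuming idealized RUL prognostics), one can contrast unscheduled failures under RUL-based replacement versus fixed-interval inspection for a single component with Weibull lifetimes; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(strategy, horizon=1000.0, inspect_every=8.0, rul_margin=1.0,
             shape=3.0, scale=10.0, n_runs=200):
    """Mean number of unscheduled failures over a horizon under
    (a) 'fixed': run to failure, replacement at the next periodic inspection, or
    (b) 'rul': idealized prognostics replace the component `rul_margin` time
    units before its (otherwise unknown) failure time."""
    totals = []
    for _ in range(n_runs):
        t, failures = 0.0, 0
        while t < horizon:
            life = scale * rng.weibull(shape)
            if strategy == "rul":
                t += max(life - rul_margin, 0.1)     # preventive replacement
            else:
                fail_at = t + life
                next_insp = (np.floor(fail_at / inspect_every) + 1) * inspect_every
                failures += 1                        # failure found at inspection
                t = next_insp                        # replaced at next inspection
        totals.append(failures)
    return np.mean(totals)

fixed = simulate("fixed")
rul = simulate("rul")   # with perfect prognostics, no unscheduled failures
```

The gap between `fixed` and `rul` is, of course, exaggerated by the perfect-prognostics assumption; the paper's SDCPN model captures the realistic case with stochastic degradation and stakeholder interactions.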

Comparing Passenger Satisfaction, Employee's Perspective and Performance on Quality and Safety Indicators: A Field Study in a Regional Airport
PRESENTER: Luca D'Alonzo

ABSTRACT. This paper aims to analyse the impact of the attributes related to the perceived quality of a regional airport service, taking into account also different socio-economic characteristics for the passengers' overall satisfaction, and comparing to these the employees' perceptions of the service, in order to identify possible critical areas of improvement for the service operator. A field study was conducted in an Italian regional airport, and a passenger satisfaction model was developed using an Ordinal Logistic Regression (OLR) approach. Furthermore, employees' perceptions were elicited on topics similar to those used for the customers, using a modified version of the customer satisfaction survey, and the results were triangulated considering also quality and safety performance indicators as objective anchor points for the company. The findings indicate interesting areas of difference in the perceptions of the passengers and the airport employees regarding the company's offered services and their performance, both useful in highlighting necessary areas of improvement. The passengers' overall satisfaction appeared to be influenced by 'Air conditioning', 'Staff skills', 'Terminal tidiness', 'Bar restaurant services', 'Website', 'Recharge points', 'Airport punctuality' and 'Ticket counter'; among the socio-economic factors, 'Sex' and 'Age' were useful to explain differences between passengers' overall satisfaction. From the analysis of the employees' business perspective, 'Communication of objectives' and 'Roles and responsibilities' were the critical business aspects to be raised. To decide how to proceed with the improvements, company managers in the key areas of operations were asked to take part in a choice experiment to select the main area of improvement among the ones highlighted by the survey results. The main area resulting from this choice experiment was "the redistribution of the workforce for better matching between roles and responsibilities".
Quality and safety indicators were also helpful in enriching the analysis, pointing out good synergy with the suggestions collected from the passenger and employee surveys and offering yet another complementary perspective.

Research on suitable temperature and humidity technology methods for UAV storage microenvironment

ABSTRACT. In recent years, artificial intelligence, big data and machine learning have developed rapidly. As a new class of technology, UAVs (Unmanned Aerial Vehicles) are gradually being applied in the military field for their low cost, easy operation, powerful all-weather and all-airspace reconnaissance and strike capabilities, long range and small size. With the mass production of military UAVs all over the world, the long-term storage of UAVs is facing new challenges. During the storage period of a UAV, the failure modes of UAV systems vary with the storage conditions. The root cause is mostly the impact of the environment, so many environmental factors are involved. Among them, temperature and humidity are important parameters of the UAV storage environment. Therefore, based on the UAV storage profile, this article analyzes the characteristics of the UAV storage microenvironment and, through research on the importance of UAV storage environmental factors, determines the most important environmental factors that affect the UAV during storage. On this basis, a technical method to determine the suitable temperature and humidity range of the UAV storage environment is proposed. This research provides a suitable storage environment for UAV products and improves the reliability and combat readiness of UAV products during storage.

10:25-11:25 Session MO1I: Foundational Issues in Risk Assessment and Management
Location: Giffard
Actors and Risk: Trade-offs between Risk Governance and Securitization Theory

ABSTRACT. Risk governance and securitization theory are generally thought to offer competing or even incompatible perspectives on risk. Correspondingly, assumptions concerning the actors involved in risk management differ. In risk governance, assumptions about the actor are primarily related to differing conceptions of rationality. In securitization theory, the assumptions relate more closely to the way in which social position and power relations shape the formation of meaning.

Despite these and other differences, this article argues that both theories are useful to researchers, practitioners, and policy makers as they expose different dynamics in issues of risk and security and provide alternative explanations of them. By comparing assumptions about actors’ behavior in the two theories the article describes a framework of trade-offs. Treating the differing actor assumptions as trade-offs can enrich empirical study, where risk policy, discourses, governance, and security processes are intertwined in complex relationships, and both rational actions and meaning formations are indispensable to understanding and coping with compound societal challenges.

Hybridization of safety and security for the design and validation of autonomous vehicles: where are we?
PRESENTER: Jeremy Sobieraj

ABSTRACT. More and more ground transport is being used (cars, trucks, buses, taxis...), and it remains one of the most dangerous means of transport in the world. However, vehicles are increasingly connected and autonomous, with the aim of making travel safer, cleaner and more efficient. They are now able to share and communicate information between themselves and their environment in real time, helping to reduce accidents, traffic congestion and greenhouse gas emissions. These vehicles are Cyber-Physical Systems (CPS), i.e. systems made up of mechanisms that are capable of controlling physical entities. In order to guarantee the robustness of such systems, they must meet two main criteria: safety and security. However, safety and security are currently dealt with independently, for reasons that are both historical and normative. One idea is therefore to combine these two criteria in order to obtain the most robust vehicle possible. In this article, we propose to highlight recent advances in the combined study of safety and security, focused on the autonomous vehicle. To do this, we have carried out a preliminary analysis of the existing situation and a cartographic study listing the articles dealing with this combination. Various qualitative and quantitative analyses of the existing situation are present in the literature, generally focused on CPS. Then, based on this study, we grouped the articles according to two categories: those highlighting the interest and possibilities of such a combination, and those presenting hybrid methods in detail.

PRESENTER: Eleonora Pilone

ABSTRACT. The adaptation to the challenges of climate change in terms of national legislative actions, compensation mechanisms and sectorial risk guidelines is slowly proceeding among EU member states; in this context, local land-use planners lack adequate tools to properly understand and face the effects provoked by climate-related events. In particular, as far as Natech risk is concerned, the effective implementation of dedicated measures at the local level is a difficult objective to reach: some risk assessment methodologies have been established for selected natural hazards and industrial activities, but many criticalities still hinder a widely shared approach to Natech Risk Management (NRM). In addition, the methodologies elaborated until now have rarely focused on the management of Natech risk from the point of view of local urban and land-use planners and managers. The present paper proposes an easy-to-use Natech indicator, aimed at providing local administrations with a survey of the industries exposed to Natech risk on their territory, and at signalling possible critical situations to be managed. The first step in the development of the indicator consists of a questionnaire aimed at identifying potentially vulnerable items and the hazardous substances held; then, both items and substances are rated to obtain a classification of potential Natech vulnerability. The Natech indicator could be useful to increase the awareness and preparedness of public administrators and planners towards the increasing probability and impact of Natech events; it has the advantage of being easy to use even for non-expert users, and can guide decision-makers in identifying the most vulnerable Natech areas in their territory. The Natech indicator can be integrated with further in-depth studies, including, e.g., integrated quantitative risk assessment.

10:25-11:25 Session MO1J: Probabilistic tools for an optimal maintenance of railway systems
Location: Botanique
Propagating local measurements along a railway network
PRESENTER: Lina El Houari

ABSTRACT. The rail system involves a large number of interactions between the infrastructure and the rolling stock. Contact between rails and wheels is one of the most important interactions, as the forces and energy transmitted can lead to significant degradation of the infrastructure Track assets and components (rails, sleepers and ballast). The intensity of these contacts varies along a network and depends on the configuration of the Track and the status of the infrastructure assets and of the rolling stock that circulates on it. This affects the maintenance effort and budgets, as well as the life costing of those assets. In order to support maintenance management, wayside monitoring systems (WTMS) exist [1, 2], but they only provide local measurements (axle loads, wheel anomalies, etc.).

SNCF Réseau, the railway infrastructure manager in France, aims at optimizing the way infrastructure assets are designed, maintained and operated with regard to risks, costs and performance. The deep integration of Asset Management principles has led to an increased need for predictive modelling approaches and tools to support short, medium and long-term decisions. In the case of Track Asset Management, SNCF Réseau aims at continuously improving these models by integrating new data sources and testing new methods.

This article presents an innovative modelling approach, which aims at using the local measurements provided by WTMS to predict Track degradation along a network. This approach is based on two main pillars:
• crossing and merging data gathered for maintenance purposes (Track geometry levelling, maintenance reports), for traffic management purposes (logs of trains along their missions) and measurements provided by WTMS;
• using machine learning algorithms [5] and innovative propagation modelling [3, 4].
This makes it possible to propagate the characteristics of the wheel/rail contact, measured by the WTMS, along the railway network and to use them in predicting the track geometry degradation. In order to achieve this objective, the network is divided into three different types of Track sections, according to train traffic, the data available and the Track configuration. This article presents the results obtained for a real use case (several Track sections in the south of France). These results confirm the relevance of the proposed approach and motivate ongoing additional tests.

PRESENTER: Jorge Rodríguez

ABSTRACT. Increasing the capacity of railway lines while managing maintenance activities with current practices is preventing the efficient use of the network. Optimized and efficient management of railway infrastructure is essential for the development of railway services. For that purpose, new tools and strategies are needed to support decision-making in the maintenance domain. In this paper, we explore how advances in digitization and the commoditization of technologies, e.g., the Internet of Things (IoT), Artificial Intelligence (AI), and Cloud Computing, are enabling new approaches to optimize the management of railway infrastructure. We propose a new modular tool with enhanced analytics that supports decision-making and enables prescriptive maintenance. It combines physical modeling, formalization in the description of maintenance, and advanced analytics to leverage indicators. By presenting a use case that assesses the life cycle cost and remaining useful life of rails due to wear, we argue that having software tools that optimize the maintenance of railway infrastructure is key to supporting decision-making and enabling prescriptive maintenance. A digital twin description of the infrastructure and the use of advanced analytics to generate high-level strategic indicators (e.g. safety, availability, costs, and RUL - Remaining Useful Life) is a feasible approach for the demands of such a software tool.
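A minimal sketch of the kind of indicator computation mentioned above, under the strong (and purely illustrative) assumption that rail wear grows linearly with cumulative tonnage; all figures, names and costs are invented, not the paper's model:

```python
def rail_rul_years(current_wear_mm, wear_limit_mm, annual_tonnage_mt,
                   wear_rate_mm_per_mt):
    """Remaining useful life of a rail section, assuming vertical wear grows
    linearly with cumulative tonnage (Mt = million gross tonnes)."""
    remaining_mm = wear_limit_mm - current_wear_mm
    if remaining_mm <= 0:
        return 0.0
    return remaining_mm / (wear_rate_mm_per_mt * annual_tonnage_mt)

def life_cycle_cost(rul_years, renewal_cost, annual_maintenance_cost,
                    discount=0.04):
    """Toy discounted LCC: yearly maintenance until renewal at end of RUL."""
    years = int(rul_years)
    cost = sum(annual_maintenance_cost / (1 + discount) ** y
               for y in range(1, years + 1))
    return cost + renewal_cost / (1 + discount) ** rul_years

rul = rail_rul_years(8.0, 14.0, 20.0, 0.01)   # roughly 30 years
lcc = life_cycle_cost(rul, renewal_cost=250_000, annual_maintenance_cost=4_000)
```

In the tool described by the paper, such section-level figures would be aggregated by the digital twin into network-wide strategic indicators.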


ABSTRACT. Context: Behavioral modelling of signaling components is an indispensable tool for asset management. Infrastructure managers seek to optimize policies and expenditure for the maintenance of signaling installations. This article aims to formalize the link that exists between the cost of maintaining signaling components and the nature and volume of renewal of installations. Based on the failure model for components, the first step in applying this method is to construct a model of availability and maintenance costs (maintenance and renewal). It is subsequently extended to models of a complete installation by consolidating the data of the elementary components. Finally, since the maintenance cycles are characterized by periods of partial and/or full renewal, the curves of the costs associated with these cycles are compared, which allows the choices appropriate to achieving an economic optimum to be identified. This process can be extended to other types of infrastructure constituents. Content: The construction of the economic model described here is based on a failure model of the components of a subset of the infrastructure, the signaling, to which replacement unit costs are then applied in the framework of maintenance or renewal. The construction of the failure model is explained by a replacement law for a population of constituents, first without and then with identical replacement of a defaulting component. The first does not provide for its replacement, resulting in a decrease over time of the initial population. The second integrates the identical replacement of a defaulting constituent, so the population remains constant over time. The construction of the cost model is then described. Finally, the application to signaling equipment and/or signal boxes is shown.
Conclusion: Any model must consider ageing phenomena to get as close as possible to reality. On this point, the method chosen has considered the data available. Behavioral modelling of signaling components, functions and other families of functions is proving to be an essential tool for the asset management of signaling installations. This is the major interest of this approach. The methodology and tool developed in this study are today used by railway infrastructure managers.
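The two replacement laws described in the abstract, without and then with identical replacement of defaulting constituents, can be sketched as follows; Weibull lifetimes and all parameter values are illustrative assumptions, not the article's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(3)

def survivors_without_replacement(n, t, shape, scale):
    """Expected surviving constituents at time t when failures are not
    replaced: n * R(t), with a Weibull reliability function R."""
    return n * np.exp(-(t / scale) ** shape)

def renewals_with_replacement(n, horizon, shape, scale, n_runs=500):
    """Monte Carlo mean number of identical replacements over [0, horizon]
    when every defaulting constituent is replaced, so the population of n
    slots stays constant (a renewal process per slot)."""
    counts = []
    for _ in range(n_runs):
        total = 0
        for _slot in range(n):
            t = 0.0
            while True:
                t += scale * rng.weibull(shape)  # next failure of this slot
                if t > horizon:
                    break
                total += 1                       # identical replacement
        counts.append(total)
    return np.mean(counts)

survivors = survivors_without_replacement(100, t=15.0, shape=2.0, scale=10.0)
renewals = renewals_with_replacement(100, horizon=15.0, shape=2.0, scale=10.0)
```

Multiplying the expected renewals by unit replacement costs gives the kind of cost curve per maintenance cycle that the article compares across renewal strategies.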

11:30-12:30 Session MO2A: Risk Assessment
Location: Auditorium

ABSTRACT. Construction projects are characterized as fragmented, temporary and complex. Therefore, they are exposed to a multitude of varying and interdependent risks that may lead to delays, cost overruns and other failures which can undermine their successful realization. Project risk management then plays a vital role in increasing the probability and impact of positive events, and decreasing the probability and impact of negative events in the project [1]. It consists of risk identification, assessment, prioritization, treatment, monitoring and control. In this paper, a framework is proposed for construction project risk assessment and prioritization based on multiple-criteria decision-making methods to define the notion of "weighted criticality". A model of risk interactions and propagation behavior is then built using a Bayesian network, a breadth-first search algorithm and marginal tree inference. The outcomes of this novel risk analysis provide project managers with support for decision-making regarding construction project risk management and help them design more effective response actions. A case study concerning the construction of a medium-voltage power line is used to illustrate the effectiveness of the proposed approach. It effectively demonstrates the "snowball" effect of risk propagation on the reliability of the project.
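A minimal sketch of risk propagation along a directed interaction graph, visited in breadth-first (topological) order with a noisy-OR combination rule; the network, the probabilities and the combination rule are illustrative assumptions, not the authors' model:

```python
from collections import deque

def propagate(base_prob, edges):
    """Noisy-OR propagation of risk-occurrence probabilities along a DAG of
    risk interactions, visited breadth-first (Kahn's algorithm).
    edges[u] = list of (v, transmission_prob): risk u triggers risk v."""
    prob = dict(base_prob)
    indeg = {u: 0 for u in prob}
    for u in edges:
        for v, _ in edges[u]:
            indeg[v] += 1
    queue = deque(u for u in prob if indeg[u] == 0)
    contrib = {u: [] for u in prob}   # incoming trigger probabilities
    while queue:
        u = queue.popleft()
        # noisy-OR: combine own base probability with triggered contributions
        p_not = 1.0 - base_prob[u]
        for c in contrib[u]:
            p_not *= 1.0 - c
        prob[u] = 1.0 - p_not
        for v, t in edges.get(u, []):
            contrib[v].append(prob[u] * t)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return prob

# Tiny "snowball": a design-change risk amplifies delay, which amplifies overrun
base = {"design_change": 0.3, "delay": 0.1, "overrun": 0.05}
net = {"design_change": [("delay", 0.8)], "delay": [("overrun", 0.6)]}
result = propagate(base, net)
```

Even in this three-node toy, the propagated probability of a cost overrun is several times its standalone base probability, which is the snowball effect the case study illustrates at project scale.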

Evolved Methods for Risk Assessment
PRESENTER: Andrew Jackson

ABSTRACT. The foundations of risk assessment tools such as fault tree analysis and event tree analysis were established in the 1970s. Since then, research has made considerable advances in the capabilities of analytical techniques applicable to safety critical systems. Technology has also advanced and system designs, their operation conditions and maintenance strategies are now significantly different to those of the 1970s.

This paper presents an overview of a newly developed methodology which retains the traditional ways of expressing system failure causality and aims to form the next generation of risk assessment methodologies. These evolved techniques, appropriate to meet the demands of modern industrial systems, aim to overcome some of the limitations of the current approaches. The new tools and techniques seek to retain as many of the current methodology features as possible to reduce the learning curve for practitioners and increase the chances of acceptance.

The new approach aims to increase the scope of event tree/fault tree analysis through the incorporation of Petri net and binary decision diagram-based methodologies. Use of these techniques incorporates features such as: non-constant failure rates, dependencies between component failure events, and complex maintenance strategies to boost the capabilities of the methods.

In addition, it considers dedicated routines to analyse the accident risk of transport systems formulated as phased mission models. This type of modelling is demonstrated through the application to an aeronautical system, where the system is modelled as a mission consisting of a series of phases. Mission success requires the successful completion of each of the phases. This approach allows the requirements for success (and therefore failure) to differ from one phase to another. It is also possible to model scenarios whereby a system fault that occurs in one phase of a mission may not affect the system until a later phase of the mission.
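The phased-mission idea, where a fault arising in one phase may only matter in a later phase, can be sketched with a hypothetical three-phase flight and constant failure rates; the phases, components and rates below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical three-phase mission: each phase has a fixed duration and
# requires a different subset of components to be working.
PHASES = [  # (duration, components required during this phase)
    (0.5, {"nav", "engine"}),            # climb
    (3.0, {"engine"}),                   # cruise
    (0.5, {"engine", "gear"}),           # landing
]
RATES = {"nav": 0.02, "engine": 0.005, "gear": 0.01}  # constant failure rates

def mission_success_prob(n_runs=50000):
    """Monte Carlo estimate of phased-mission reliability. A component that
    fails in one phase only causes mission failure in a phase that needs it:
    e.g. a 'gear' failure during cruise only surfaces at the landing phase,
    while a 'nav' failure after climb is harmless."""
    fail_times = {c: rng.exponential(1.0 / r, n_runs) for c, r in RATES.items()}
    success = np.ones(n_runs, dtype=bool)
    t = 0.0
    for duration, required in PHASES:
        t += duration
        for comp in required:
            success &= fail_times[comp] > t   # must survive to end of phase
    return success.mean()

p = mission_success_prob()
```

Note how the success requirements differ from phase to phase, which is exactly the property that distinguishes phased-mission models from a single static fault tree.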

A new model-based approach combining safety and security risks
PRESENTER: Tamara Oueidat

ABSTRACT. For many years, critical industrial systems worldwide have been integrating digital and communicating technologies such as connected objects (the Industrial Internet of Things, IIoT), the connection of the control system to the internet, technological convergence, and the interconnection between Information Technology (IT) and Operational Technology (OT) [1]. This digitization creates new cyber-security threats leading to undesirable safety accidents. These cyber-security threats have thus become a critical subject in critical industrial systems [2] and must be analyzed during the risk analysis. Safety and security risks are treated separately in most of the proposed risk analysis approaches, despite their interdependencies and their common consequences: a security threat can lead to the same dangerous phenomenon as a safety incident. In our paper, a new approach that combines safety and security risks during industrial risk analysis is proposed. This approach is model-based: it aims to model the physical and IT architecture of the system, in order to understand its structure and generate, in the most effective way, the vulnerability, attack, and hazardous-situation scenarios that lead to physical undesirable events. It includes the evaluation of probabilities and impacts. The application of this approach will be demonstrated using the case study of a critical chemical system.

11:30-12:30 Session MO2B: Mathematical Methods in Reliability and Safety
Location: Atrium 2
Estimating parameters of the Weibull Competing Risk model with Masked Causes and Heavily Censored Data
PRESENTER: Patrick Pamphile

ABSTRACT. In a reliability or maintenance analysis of a complex system, it is important to be able to identify the main causes of failure. The Weibull competing risk model is then very often used (1). However, in this framework, estimating the model parameters is a difficult, ill-posed problem. Indeed, the cause of the system failure may not be identified and may also be censored by the duration of the study. In addition, the other causes are naturally censored by the first one. For the maximum likelihood method and its variants (EM or SEM), the estimator of the shape parameters has no closed-form expression: it is necessary to use iterative numerical approximations that are sensitive to the starting point and the censoring rate. For Bayesian methods, there is no conjugate prior distribution for the Weibull distribution when the two parameters are unknown. The estimation of the posterior distribution or its moments requires, here again, approximations by iterative numerical methods. In addition, when data is heavily censored, those classical methods become ineffective in terms of bias and variance. To address this, Bacha et al. (2) have proposed the Bayesian restoration maximization (BRM) method. It is based on the restoration of missing data using Bayesian sampling of the parameters and an importance sampling technique (3): the proposal distribution is obtained from the maximization of the likelihood completed by the missing data. In this work, we first of all propose to improve the BRM method by amending the proposal distribution to make it more effective in terms of variance. In addition, a new proposal distribution is obtained from the maximization of the mean of the posterior distribution completed by the missing data. The efficiency of the proposed methods was evaluated by a large number of simulations for different levels of censoring rate. Simulations have shown that the newly proposed methods are effective both in terms of relative bias and relative root mean square error.
Then, as a comparison, these methods were implemented on a real data set from the literature.

References 1. D.N.P. Murthy, M. Xie and R. Jiang. Weibull Models. (John Wiley & Sons, 2004). 2. M. Bacha, G. Celeux, E. Idée, A. Lannoy and D. Vasseur. Estimation de durées de vie fortement censurées. (Eyrolles, 1998). 3. C.P. Robert and G. Casella. Monte Carlo Statistical Methods. (Springer-Verlag, 2013).
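The masked and censored data structure at the heart of this estimation problem can be illustrated with a short simulation; the parameter values below are arbitrary and only serve to show the mechanism, not the BRM method itself:

```python
import math
import random

def simulate_competing_risks(n, params, study_end, seed=0):
    """Simulate a Weibull competing-risks sample.

    params: list of (shape, scale) pairs, one per failure cause.
    Each unit fails at the minimum of its latent cause times; units
    still alive at study_end are right-censored (cause unknown).
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        # Latent Weibull time for each cause via inverse-CDF sampling.
        latent = [scale * (-math.log(rng.random())) ** (1.0 / shape)
                  for shape, scale in params]
        t = min(latent)
        if t <= study_end:
            data.append((t, latent.index(t)))   # observed failure + cause
        else:
            data.append((study_end, None))      # censored, cause masked
    return data

sample = simulate_competing_risks(
    n=1000, params=[(2.0, 10.0), (1.5, 15.0)], study_end=8.0)
censored = sum(1 for _, cause in sample if cause is None)
```

In a real analysis the latent cause times are unknown and the cause label may also be missing for some of the observed failures, which is what makes the estimation problem ill-posed.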

Identifying critical failure-propagation in function models of complex systems
PRESENTER: Yann Guillouet

ABSTRACT. Methods for reliability analysis, like Failure Mode and Effects Analysis (FMEA) or Fault Tree Analysis (FTA), are widely used to anticipate and avoid failures of technical systems. These methods, however, may fall short when it comes to exhaustive and formal analyses, as they are time-consuming to conduct and often require considerable manual effort (1). Based on this motivation, the present paper focuses on algorithms that exhaustively analyze possible combinations of failures and their impact on the system. Such algorithms prove to be very valuable particularly for complex systems, where cascading effects may not be intuitive and which may contain interdependent factors. In a first step, functional networks, which are commonly used to represent interdependencies graphically, are extended by adding a binary failure characteristic to each element of the system. In addition, a system dynamic is developed that enables the description of the propagation of failures throughout the system. All information necessary for the analysis is implicitly encoded in the model and can thus be treated in an automated manner. Identifying the most critical combination of failures in such binary functional networks breaks down to a combinatorial optimization problem (COP), which consists of maximizing the impact on the system with respect to the failing elements. Two algorithms are proposed that are able to solve the presented problem: While the first, 'naive' algorithm generates each possible combination of failures and compares their outcomes, the second, more elaborate one makes use of the concept of Minimal Cut Sets (MCSs): By restricting the current search space to the neighborhood of the elements of interest, the proposed algorithm efficiently generates the MCS of each element independently of the overall system size, and avoids redundant effort. 
Successively combining those sets results in growing combinations of failures, eventually indicating the most critical ones. Both algorithms are exhaustive, taking into account every potential combination of failing elements and providing the exact solution to the COP. This makes it easy to identify the weakest links in the system and to derive related countermeasures. The obtained simulation results indicate that this advantage especially plays out for large combinations of failures.
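As a rough illustration of the 'naive' algorithm, the following sketch enumerates failure combinations on a small, hypothetical binary functional network and propagates failures along dependency edges; the element names and dependencies are invented:

```python
from itertools import combinations

# Toy functional network: each element fails if it fails directly or if
# any element it depends on has failed (binary failure propagation).
DEPENDS_ON = {
    "pump": [], "valve": [], "cooling": ["pump", "valve"],
    "control": ["cooling"], "output": ["cooling", "control"],
}

def propagate(initial_failures):
    """Return the set of all failed elements after cascading."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in DEPENDS_ON.items():
            if node not in failed and any(d in failed for d in deps):
                failed.add(node)
                changed = True
    return failed

def most_critical(k):
    """Naive algorithm: test every combination of k direct failures."""
    elements = list(DEPENDS_ON)
    return max(combinations(elements, k), key=lambda c: len(propagate(c)))

best = most_critical(1)
```

The MCS-based algorithm of the paper avoids exactly this exponential enumeration by restricting the search to local neighborhoods.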

Adaptive Learning for Reliability Analysis using Support Vector Machines
PRESENTER: Nick Pepper

ABSTRACT. In this work we present a novel algorithm for adaptive learning of a Limit-State Function (LSF) having a Support Vector Machine (SVM) structure. A key requirement for engineering designs is that they offer good performance across a range of uncertain conditions while exhibiting an admissibly low probability of failure. In reliability analysis we want to assess the probability of the system violating a set of performance requirements cast as inequality constraints. Denote the following: x is an uncertain parameter having joint density f(x) with x ∈ X ⊆ R^{n_x}; y = M(x) is the system response with y ∈ Y ⊆ R^{n_y}; and g(x) = M(x) − y_0 ≤ 0 is a set of requirements imposed upon the system. The uncertain space X is divided into a failure and a safe domain, separated by the LSF {x : g(x) = 0}. The main goal of reliability analysis is to evaluate the probability of failure, P[g(x) > 0]. In many cases M is cheap to evaluate, so the probability of failure can be readily estimated by Monte Carlo sampling. Otherwise, a metamodel must be employed to make the process computationally viable. This paper proposes an adaptive metamodeling strategy that learns the LSF from a limited number of function evaluations of M. SVMs have been used to approximate complex limit-state functions by formulating the problem as a two-class classification problem. Through the ‘kernel trick’ training data is mapped to a higher-dimensional space z where a separating hyper-plane can be identified using convex optimization, provided that positive definite kernels are used. Mapping the separating hyper-plane back to the physical space x yields the desired surrogate of the LSF.

At each iteration the algorithm selects an informative parameter point by solving an optimization program and then adds it to the training data used by the SVM metamodel. This selection is based on three criteria: the point must be on the predicted limit-state function, the point must attain a comparatively high likelihood, and the point must be sufficiently far from previously used training data. The latter criterion, which seeks points where the density of training points is comparatively low, promotes exploration by safeguarding against the possibility of recurrent convergence to the same optimum. Furthermore, we use the moment-SOS approach to tackle the case in which the LSF is polynomial and x is a sliced distribution. The algorithm is demonstrated firstly for the Four Branch function, a two-dimensional benchmark test, and secondly for a realistic, high-dimensional test case. The efficiency of the algorithm is demonstrated by comparing its performance to that of a previously proposed algorithm based on pool sampling.
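The Monte Carlo baseline on the Four Branch benchmark and the flavour of the three selection criteria can be sketched as follows; the scoring rule and its weights are ad hoc, crude stand-ins for the SVM metamodel and the optimization program of the paper:

```python
import math
import random

def four_branch(x1, x2):
    """Four Branch benchmark limit-state function; failure when g <= 0."""
    s = math.sqrt(2.0)
    return min(
        3 + 0.1 * (x1 - x2) ** 2 - (x1 + x2) / s,
        3 + 0.1 * (x1 - x2) ** 2 + (x1 + x2) / s,
        (x1 - x2) + 6 / s,
        (x2 - x1) + 6 / s,
    )

rng = random.Random(1)
mc = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(20000)]
pf = sum(1 for x in mc if four_branch(*x) <= 0) / len(mc)

# Pool-based caricature of the three selection criteria: prefer candidates
# close to the limit state, with high standard-normal log-likelihood, and
# far from already-used training points.
def select_next(pool, training):
    def score(x):
        closeness = -abs(four_branch(*x))            # near the LSF
        likelihood = -(x[0] ** 2 + x[1] ** 2) / 2    # log-density (up to const)
        spread = min((math.dist(x, t) for t in training), default=1.0)
        return closeness + likelihood + spread       # ad-hoc equal weights
    return max(pool, key=score)

training = [(0.0, 0.0)]
nxt = select_next(mc[:500], training)
```

In the paper the limit state is of course unknown and predicted by the SVM; here the true g is used only to keep the sketch self-contained.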

11:30-12:30 Session MO2C: Maintenance Modeling and Applications
Modelling the Maintenance of Membranes in Reverse-Osmosis Desalination
PRESENTER: Frits van Rooij

ABSTRACT. Biofouling of membranes, amplified by recurring algal blooms, significantly reduces the efficiency of reverse-osmosis desalination. Degradation or wear of membranes caused by biofouling manifests as a loss in pressure, and maintenance is required, otherwise membranes will fail. In this research, we model membrane wear and maintenance in a novel way, describing the hidden states through time of individual membrane elements in a reverse-osmosis pressure vessel. Our mathematical model provides the basis for a simulation platform. We estimate the parameters of the model using statistical methods, among them the particle filter. Maintenance planning is interesting because membrane elements can be replaced, swapped, cascaded or cleaned, and these differing interventions have different restorative effects. We demonstrate the potential for our model to support decision-making for maintenance planning and to reduce maintenance costs.
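A minimal bootstrap particle filter for a hidden, monotonically increasing wear state observed through noisy pressure-loss readings, in the spirit of the estimation step described; the wear and noise models below are invented stand-ins, not the authors' membrane model:

```python
import math
import random

def particle_filter(observations, n_particles=2000, wear_rate=0.5,
                    obs_sd=0.3, seed=2):
    """Bootstrap particle filter for a hidden, accumulating wear state."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for y in observations:
        # Propagate: wear only accumulates (exponential increments here).
        particles = [p + rng.expovariate(1.0 / wear_rate) for p in particles]
        # Weight by the Gaussian observation likelihood.
        weights = [math.exp(-((y - p) ** 2) / (2 * obs_sd ** 2))
                   for p in particles]
        total = sum(weights)
        estimates.append(sum(w * p for w, p in zip(weights, particles)) / total)
        # Multinomial resampling back to equal weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Synthetic run: true wear grows by about 0.5 per step, observed with noise.
rng = random.Random(0)
true_wear = [0.5 * (t + 1) for t in range(10)]
observations = [x + rng.gauss(0, 0.3) for x in true_wear]
estimates = particle_filter(observations)
```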

Accessibility evaluation method based on D-H model and comfort

ABSTRACT. In virtual maintenance, the most widely used accessibility evaluation method is to judge and evaluate using the accessibility envelope surface of a virtual human model. However, this method can only give two kinds of evaluation results: reachable and unreachable. There is not enough data and theoretical support for the construction of the envelope surface, and the precision and accuracy of the evaluation need to be improved. In this paper, a parameterized accessibility evaluation method and a construction method for the accessibility envelope surface are proposed. Firstly, a 4-joint, 4-link Denavit-Hartenberg (D-H) link model is established from the waist to the fingertip of the human body, and the ranges of the 10 degrees of freedom and the angles of the 4 joints are determined according to ergonomics. The reachable points are then generated by Monte Carlo simulation, and the accessibility envelope surface is composed of the outermost random reachable points. Comfort is then introduced to refine the accessibility evaluation level, and a multi-level accessibility evaluation system based on comfort is constructed according to the rapid upper limb assessment (RULA). Finally, a comparison experiment with the reachable envelope provided by DELMIA, in both virtual and real environments, shows that the proposed method has better evaluation accuracy and precision. Based on this method, an accessibility evaluation tool has been developed in CATIA and has been applied in some scientific research institutes.
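The reachable-point generation step can be sketched by chaining standard D-H transforms and sampling joint angles uniformly within limits. The link lengths and angle ranges below are purely illustrative, not the ergonomic values used in the paper:

```python
import math
import random

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical 4-link waist-to-fingertip chain: (d, a, alpha) per link,
# plus joint-angle limits in radians for the Monte Carlo sampling.
LINKS = [(0.0, 0.30, math.pi / 2), (0.0, 0.30, 0.0),
         (0.0, 0.25, 0.0), (0.0, 0.10, 0.0)]
LIMITS = [(-1.0, 1.0), (-1.5, 1.5), (0.0, 2.3), (-0.5, 0.5)]

def random_reachable_point(rng):
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for (d, a, alpha), (lo, hi) in zip(LINKS, LIMITS):
        T = mat_mul(T, dh_matrix(rng.uniform(lo, hi), d, a, alpha))
    return (T[0][3], T[1][3], T[2][3])  # fingertip position

rng = random.Random(3)
cloud = [random_reachable_point(rng) for _ in range(500)]
max_reach = max(math.dist(p, (0, 0, 0)) for p in cloud)
```

The envelope surface of the paper would then be extracted from the outermost points of such a cloud.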


ABSTRACT. Many critical systems with dependencies do not collapse immediately due to single-point failures but are more vulnerable to the cascading effects of these failures. Condition-based maintenance (CBM) has been found useful not only in improving the availability of technical systems but also in reducing the risks related to unexpected breakdowns, especially cascading failures. The serious disasters created by such failures and the increased requirements for CBM policies due to dependencies urge a comprehensive study of current research and future challenges. In this study, a systematic literature review on the implementation of CBM in systems with dependencies is conducted. Relevant papers are deliberately selected and analyzed in the VOSviewer program to identify co-occurrences of keywords and so distinguish CBM from other types of maintenance. Specifically, considering various types of dependencies, the challenges, research advancements and research perspectives in the three main steps of CBM (data acquisition, data processing and maintenance decision making) are then identified. Opportunities for CBM to improve availability and reduce risks of dependent systems are finally explored.

11:30-12:30 Session MO2D: Prognostics and Health Management: From Condition Monitoring to Predictive Maintenance
Location: Panoramique
Combination of long short-term memory and particle filtering for handling uncertainty in failure prognostic

ABSTRACT. Failure prognostic is generally conducted following two approaches, model-based or data-driven. On the one hand, model-based approaches offer better physical interpretability and may be easily embedded in the structure of Bayesian processors for uncertainty characterization purposes. However, it is challenging to identify degradation models in complex systems, since it is required to understand all the underlying degradation phenomena. On the other hand, data-driven approaches are more applicable for monitoring the health condition of complex systems. However, this latter approach suffers from a lack of interpretability and limited uncertainty consideration. Nevertheless, these two characteristics are crucial for critical equipment in industries such as transport and energy production. In this paper, we propose a method combining long short-term memory (LSTM) and particle filtering (PF), namely PF-LSTM, for handling uncertainty in the estimation and prediction of system states. In detail, a trained LSTM is used to propagate particles for prior system health state estimation. Then, using the Bayes formula, the weights of these particles are updated against the in-field measurements. Finally, when a fault is diagnosed, i.e., when the health indicator exceeds the fault threshold, the LSTM is used to propagate the last health-state posterior distribution to determine the system's remaining useful life.
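The PF-LSTM loop can be sketched as follows, with a simple linear-trend function standing in for the trained LSTM one-step predictor; all parameter values are invented for illustration:

```python
import math
import random

# Stand-in for the trained LSTM one-step predictor (hypothetical):
# here a simple linear degradation trend with process noise.
def surrogate_step(state, rng):
    return state + 0.2 + rng.gauss(0.0, 0.05)

def pf_predict_rul(measurements, threshold=3.0, n=1000, obs_sd=0.1, seed=4):
    rng = random.Random(seed)
    particles = [0.0] * n
    for y in measurements:
        # Prior estimate: propagate every particle with the surrogate model.
        particles = [surrogate_step(p, rng) for p in particles]
        # Bayes update of the particle weights against the measurement,
        # followed by multinomial resampling.
        w = [math.exp(-((y - p) ** 2) / (2 * obs_sd ** 2)) for p in particles]
        particles = rng.choices(particles, weights=w, k=n)
    # RUL: steps until each posterior particle crosses the fault threshold.
    ruls = []
    for p in particles:
        steps = 0
        while p < threshold and steps < 1000:
            p = surrogate_step(p, rng)
            steps += 1
        ruls.append(steps)
    return sum(ruls) / len(ruls)

obs = [0.2 * (t + 1) for t in range(5)]   # health indicator near 1.0 at t=5
mean_rul = pf_predict_rul(obs)
```

The spread of the per-particle RULs (not just their mean) is what carries the uncertainty quantification the abstract emphasises.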

Remaining useful lifetime prediction and noisy stochastic deterioration process considering sensor degradation
PRESENTER: Hassan Hachem

ABSTRACT. Condition monitoring is important to ensure system reliability and safety; it is the basis for prognostics (Remaining Useful Lifetime (RUL) prediction) and predictive maintenance. The monitoring data are usually erroneous due to sensor errors. In the literature, these errors are usually neglected or modeled by Gaussian noise during the prediction of the RUL. However, in reality, due to varying operating environments and ageing effects, the sensor itself will eventually undergo a deterioration process and its performance will degrade over time. Gaussian noise with a constant mean is unable to model this sensor degradation phenomenon and leads to inefficient forecasts of the RUL. For this reason, the paper focuses on modeling and taking into account the sensor degradation in the RUL prediction process. To this end, we first propose an integrated degradation model that considers not only the system degradation but also the sensor degradation. In the proposed model, the sensor degradation is modeled by both Gamma and Wiener processes. Finally, to take advantage of the knowledge about the degradation model and the available data, filtering methods (Particle Filter and Metropolis-Hastings) are adopted. Several numerical experiments have shown that considering the sensor degradation can help to improve the precision of the RUL prediction. In addition, the performance of PF and MH is analyzed by varying different parameters of the proposed degradation model.
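A toy simulation of the integrated model's premise: gamma-process system wear observed through a sensor whose error mean drifts over time, so that a constant-mean Gaussian noise assumption becomes increasingly wrong. All parameter values are illustrative only:

```python
import random

def simulate(n_steps, shape=0.8, scale=0.5, sensor_drift=0.05,
             sensor_sd=0.02, seed=5):
    """System wear follows a gamma process; the sensor output adds a
    Wiener-type error whose mean drifts as the sensor itself degrades."""
    rng = random.Random(seed)
    true_level, sensor_bias = 0.0, 0.0
    truth, measured = [], []
    for _ in range(n_steps):
        true_level += rng.gammavariate(shape, scale)          # system wear
        sensor_bias += sensor_drift + rng.gauss(0, sensor_sd)  # sensor wear
        truth.append(true_level)
        measured.append(true_level + sensor_bias)
    return truth, measured

truth, measured = simulate(50)
final_gap = measured[-1] - truth[-1]
```

An RUL predictor fed `measured` while assuming zero-mean noise would systematically overestimate the degradation level here, which is the bias the proposed filters are designed to remove.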

11:30-12:30 Session MO2E: Human Factors and Human Reliability
Location: Amphi Jardin
The Effect of Imperfect Maintenance on a System's Condition considering Human Factors

ABSTRACT. Human Factors (HF) have a significant impact on maintenance quality and are recognized as a specific cause of Imperfect Maintenance (IM). Technician inexperience, poor procedure quality and environmental factors are typical HF and can lead to insufficient repair or inspection. However, due to limited data, many studies describe HF only qualitatively as a possible cause of IM or highly simplify the effect of HF. This paper attempts to analyze the effect of HF on the system's restoration level in order to provide a more realistic post-maintenance operating condition of the system. Furthermore, the economic savings potential of a fully automated condition-monitoring approach and the subsequent reduction of adversarial HF within the maintenance process is analyzed. The tire-pressure maintenance task of an Airbus A320 serves as the use case. Based on fuzzy logic, an IM model is developed that considers the effect of HF individually on restoration and inspection tasks using human reliability assessments. The aircraft maintenance ecosystem is simulated with a prescriptive maintenance model to enable the evaluation of monetary and non-monetary performance indicators. A comparative study revealed that perfect-maintenance approaches tend to vastly underestimate maintenance-related costs. Moreover, the study showed that a technician's lack of experience cannot be compensated by an improved working environment. Only the use of an automated monitoring system to replace error-prone inspection tasks resulted in an overall cost reduction. The developed model allows the individual evaluation of the technician's expected performance to enable a more targeted deployment of the available workforce and to improve the effectiveness of maintenance.
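A minimal sketch of how fuzzy logic could map human-factor scores to a restoration level, here combined with a Kijima type-I virtual-age update; the membership functions, rule outputs and weights are invented for illustration and are not the paper's model:

```python
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def restoration_factor(experience, procedure_quality, environment):
    """Map human-factor scores in [0, 1] to a restoration factor in [0, 1]
    (1 = as good as new) via a tiny Mamdani-style rule base."""
    hf = (experience + procedure_quality + environment) / 3.0
    low = tri(hf, -0.5, 0.0, 0.5)
    med = tri(hf, 0.0, 0.5, 1.0)
    high = tri(hf, 0.5, 1.0, 1.5)
    # Defuzzify with representative outputs per rule (weighted average).
    total = low + med + high
    return (low * 0.4 + med * 0.7 + high * 0.95) / total if total else 0.7

def virtual_age_after_repair(age, rho):
    """Kijima type-I imperfect repair: virtual age shrinks by factor 1-rho."""
    return age * (1.0 - rho)

rho = restoration_factor(0.9, 0.8, 0.7)
v_age = virtual_age_after_repair(1000.0, rho)
```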

Is the performance of control room operators affected by time on task or time of day?

ABSTRACT. Control room work is characterised by a large variation in intensity or workload, varying from periods of uneventful monitoring to demanding emergency situations. Fatigue is assumed to be an effect of a combination of multiple interacting factors, including time awake, time of day and workload (Sadeghniiat-Haghighi & Yazdi, 2015), and has been related to decrements in cognitive performance (Sadeghniiat-Haghighi & Yazdi, 2015; Hopstaken et al., 2015). In addition to being a result of prolonged task work (time on task), there is also evidence that fatigue levels may be influenced by the time of day, with subjects being more prone to fatigue in the afternoon and less prone in the evening (Zhang, Wu, & Zhang, 2020). A drop in performance in the early afternoon (the "post-lunch dip") and early morning, also associated with a tendency for sleepiness, is a well-known and studied phenomenon (Carrier & Monk, 2000). It is important to understand to what extent and in what way performance in the control room is influenced by mental fatigue. This question was examined by analysing data from six previous simulator studies within nuclear power operation, comprising a total of 312 simulator runs. Mental fatigue was assumed to increase, and performance to decrease, throughout the day with increasing cumulated time on task. Thus, performance in the first simulator runs of the day was expected to be higher, on average, than in the last simulator runs of the day. No effect of the sequence of simulator runs was found in the overall data, but one study showed higher performance on the first runs. Another study, where data collection started in the afternoon, showed higher performance on the last runs. On further examination of the data, it was found that the majority of the simulator runs with the poorest performance, as rated by expert observers, took place in the early morning and early afternoon. 
The difference in performance between these periods and the other data collection periods was marginally significant. Since data collection for most of the simulator studies started in the morning and ended in the afternoon, the factors of time on task and time of day were confounded, and a definitive conclusion could not be drawn. But the performance drops in the early morning and early afternoon correspond to similar performance effects found in other areas, including road traffic incidents (Carrier & Monk, 2000). The results point to a need for awareness that the performance of control room operators may be considerably influenced by time of day.

References 1. J. Carrier and T.H. Monk. Chronobiol Int, 17(6), 719 (2000). 2. J.F. Hopstaken, D. van der Linden, A.B. Bakker and M. Kompier. Psychophysiology, 52, 3015 (2015). 3. K. Sadeghniiat-Haghighi and Z. Yazdi. Ind Psyc Journal, 24(1), 12-17 (2015). 4. Q. Zhang, C. Wu and H. Zhang. Journal of Advanced Transportation, Volume 2020, Article ID 9496259 (2020).

A Maintenance Performance Framework for the South African Electricity Transmission Industry

ABSTRACT. Maintenance performance measurements have always reflected the changes in industry and maintenance revolutions. Industry 4.0 has a strong focus on social dimensions and a clear strategy is needed to measure these social dimensions in a maintenance performance framework. This article summarises maintenance human factors and measurements within the South African Electricity Transmission Industry. These maintenance human factors are the cornerstone of social dimensions that affect the maintenance technician’s ability to perform work at an optimum level. High workload, time pressure, fatigue and communication were found to be the most significant maintenance human factors within the South African Electricity Transmission Industry. Furthermore, an organisational hierarchical maintenance performance framework was developed for this industry. The framework provides a methodology to calculate an overall maintenance performance score that is inclusive of these maintenance human factors. By implementing a maintenance performance framework that includes the up-and-coming social dimensions of Industry 4.0, the successful implementation of Maintenance 4.0 can be improved.

11:30-12:30 Session MO2F: Degradation analysis and modelling for predictive maintenance
Analysis of a condition-based-maintenance policy in heterogeneous systems subject to periodic inspections

ABSTRACT. A maintenance strategy for heterogeneous systems consisting of monitored and non-monitored components is analyzed. Monitored components are subject to continuous gamma degradation. We assume that a monitored component fails when its degradation level exceeds a failure threshold. Times between failures of the non-monitored components follow an exponential distribution. To prevent failures, the state of the monitored components is periodically checked through inspections. At these inspection times, if the degradation level of a monitored component exceeds the preventive threshold, the component is replaced by a new one. Failures of monitored and non-monitored components are self-announcing: when a component fails, a repair team is immediately called and performs the replacement of the failed component by a new one after a fixed delay time. These maintenance times are also seen as opportunities for preventive maintenance of the remaining monitored components: if the degradation level of a monitored component exceeds the preventive threshold at the time of another maintenance action, the component is preventively maintained.

The expected cost rate for this heterogeneous system is evaluated by assuming a sequence of costs for the different maintenance actions and using a semi-regenerative approach. Working monitored components provide a reward that decreases as their degradation level increases. Numerical examples of the optimization problem are given, with the aim of finding the optimal maintenance strategy. Monte Carlo simulation and meta-heuristic algorithms are employed to optimize the preventive thresholds and the times between inspections.
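The simulation-based evaluation can be sketched for a single monitored component; the gamma parameters, cost figures and thresholds below are arbitrary, and the full policy of the paper (non-monitored components, delay times, opportunistic maintenance) is omitted:

```python
import random

def cost_rate(preventive_threshold, inspection_interval,
              failure_threshold=10.0, shape=1.0, scale=0.8,
              c_insp=1.0, c_prev=10.0, c_corr=50.0,
              horizon=2000.0, seed=6):
    """Monte Carlo estimate of the long-run cost rate of a
    periodic-inspection CBM policy on one gamma-degrading component."""
    rng = random.Random(seed)
    t, level, cost = 0.0, 0.0, 0.0
    while t < horizon:
        t += inspection_interval
        # Gamma process: shape grows linearly with elapsed time.
        level += rng.gammavariate(shape * inspection_interval, scale)
        cost += c_insp
        if level >= failure_threshold:       # failed between inspections
            cost += c_corr
            level = 0.0                      # corrective replacement
        elif level >= preventive_threshold:
            cost += c_prev
            level = 0.0                      # preventive replacement
    return cost / t

rates = {m: cost_rate(m, 2.0) for m in (4.0, 7.0, 9.5)}
best_threshold = min(rates, key=rates.get)
```

A meta-heuristic, as in the paper, would search over both the threshold and the inspection interval instead of this small grid.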


ABSTRACT. The degradation of a mechanical system as a function of time or of the number of operations performed is often characterized by a timing value, that is, the change in the time required to perform a certain operation. This timing value is in many practical cases determined as the difference between a start and an end time stamp, each of which is subject to measurement error. In order to determine the future evolution of the system with the aim of predicting the end of life, given by the first passage time of a critical value, good estimates of the parameters of the underlying stochastic evolution and of the measurement error are required.

In this work an analysis is done using as the basic model a Wiener process with measurement error. This is motivated by a practical case, where the timing of the opening or closing operation of a mechanical mechanism is used. The two time stamps are measured using two mechanical switches. Such electrical contacts are known to be subject to bouncing, which makes an accurate determination of the instant at which they close difficult. A debouncing algorithm is used in this application: the time stamps are only recorded if no change of signal is recorded over a certain time interval. As typical settling times are comparable to the changes in the timings themselves, the error coming from this effect needs to be modeled appropriately.

In a first approach the measurement effect is modeled as interval censoring. A more detailed model takes the random character of the bouncing into account as well. The underlying evolution of the timings is modeled as a Wiener process having both a drift and a variance. The model is similar to the one in Whitmore (1995), but without assuming a normally distributed measurement error.

Bayesian inference using MCMC is used to obtain the distributions of the different parameter estimates. Due to its practical simplicity, a comparison is made with the Whitmore model, using the normally distributed case, showing when this method can be used instead.

With simulated data the performance of the different approaches is confirmed, and the algorithm is applied to some real data with an evaluation of the validity of the model assumptions with respect to the prediction of the failure time distribution. The analysis is a specific example of how more complex measurement error models can be used together with a stochastic model of the degradation, beyond the usual normal distribution assumption.
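Without measurement error, the first passage time of a Wiener process with positive drift is inverse Gaussian with mean threshold/drift, which gives a convenient sanity check for any estimation pipeline built on this model; the values below are illustrative:

```python
import math
import random

def simulate_fpt(mu, sigma, threshold, dt=0.01, rng=None):
    """First time a Wiener process with drift mu and volatility sigma
    crosses `threshold`, by Euler simulation on a grid of step dt."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while x < threshold:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return t

mu, sigma, threshold = 0.5, 0.2, 5.0
rng = random.Random(7)
times = [simulate_fpt(mu, sigma, threshold, rng=rng) for _ in range(300)]
empirical_mean = sum(times) / len(times)
theoretical_mean = threshold / mu   # inverse Gaussian mean = 10.0 here
```

Adding a (possibly non-normal) measurement error on top of the simulated path, as the paper does, leaves this latent first-passage distribution unchanged while distorting the observed timings.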

PRESENTER: Hsueh-Fang Ai

ABSTRACT. Degradation analysis has become an important and efficient technique for developing statistical models of highly reliable products. If there are quality characteristics whose degradation over time (referred to as degradation paths) is related to product reliability, an alternative option is the use of sufficient degradation data to accurately estimate the product's lifetime distribution. When there are measurement errors in monotonic degradation paths, the assumption of a non-monotonic model can lead to contradictions between physical/chemical mechanisms and statistical/engineering explanations. To settle this contradiction, this study presents an independent-increment degradation-based process that simultaneously considers the intra-unit variability, inter-unit variability and measurement error in the degradation data. In order to efficiently estimate the model parameters, we use a quasi-Monte Carlo approach, a separation-of-variables transformation and parallel computing to overcome the high-dimensional integrals of the likelihood function. A likelihood ratio test is performed to assess whether the measurement error term is necessary for a specific dataset. In addition, the study uses a bias-corrected bootstrap method to obtain confidence intervals for reliability assessment and provides a model-checking procedure to assess the validity of the model assumptions. Some case studies are performed to demonstrate the flexibility and applicability of the proposed models. The results of the analyses are in agreement with material theory and empirical experiments in the literature.
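The quasi-Monte Carlo idea can be illustrated on the simplest possible case, marginalizing a normal likelihood over one normal unit-level random effect with a low-discrepancy point set (a van der Corput sequence here); the paper's setting is far higher-dimensional, and all values below are made up:

```python
import math

def van_der_corput(n, base=2):
    """Radical-inverse (van der Corput) low-discrepancy sequence in (0, 1)."""
    seq = []
    for i in range(1, n + 1):
        x, f, k = 0.0, 1.0 / base, i
        while k:
            x += f * (k % base)
            k //= base
            f /= base
        seq.append(x)
    return seq

def norm_ppf(u):
    """Inverse standard normal CDF by bisection (slow but dependency-free)."""
    lo, hi = -8.0, 8.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def marginal_likelihood(y, sd_unit=0.5, sd_err=0.3, n_qmc=512):
    """Integrate the normal likelihood of one observation over a normal
    unit-specific random effect using a quasi-Monte Carlo point set."""
    total = 0.0
    for u in van_der_corput(n_qmc):
        b = sd_unit * norm_ppf(min(max(u, 1e-12), 1 - 1e-12))
        z = (y - b) / sd_err
        total += math.exp(-0.5 * z * z) / (sd_err * math.sqrt(2 * math.pi))
    return total / n_qmc

# Marginally, y ~ N(0, sd_unit^2 + sd_err^2); check the density at y = 0.
approx = marginal_likelihood(0.0)
sd_marg = math.sqrt(0.5 ** 2 + 0.3 ** 2)
exact = 1.0 / (sd_marg * math.sqrt(2 * math.pi))
```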

11:30-12:30 Session MO2G: Nuclear Industry
Location: Atrium 3
Interpretability Improvement of Convolutional Neural Network for Reliable Nuclear Power Plant State Diagnosis
PRESENTER: Ji Hyeon Shin

ABSTRACT. When an abnormal event occurs in a system of a nuclear power plant (NPP), it can cause severe safety problems if it is not mitigated. Therefore, an operator diagnoses the abnormality from alarms and monitored parameters and takes appropriate action. Among these tasks, diagnosis can increase the workload of the operator because it should be performed accurately and as soon as possible to minimize the consequences of the event. Recently, operator support systems using artificial neural networks (ANNs) have been developed to support the diagnosis task. However, an ANN, being a black-box model, cannot logically explain its predictions. For this reason, an operator can neither catch a misdiagnosis of the model nor trust its diagnosis. To address this issue, we intend to provide evidence alongside the diagnoses of the NPP abnormality classification model. To find more appropriate evidence for the NPP state diagnosis, this study verifies the improvement in interpretability when Guided Backpropagation is used as the explanation method. A convolutional neural network that can classify each NPP abnormal state with high accuracy is used as the diagnosis model, and the classification contribution of each plant parameter in the input data is calculated using explanation methods. The interpretability of each method is compared by reclassifying the NPP states using datasets composed of the most relevant parameters from the calculation results. By making the model more transparent, operators can trust the model's diagnoses.
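The difference between plain backpropagation and Guided Backpropagation at a ReLU unit reduces to the gradient gating rule, which can be shown without any deep-learning framework; the activation and gradient values below are made up:

```python
def backprop_relu(grad_out, pre_activation):
    """Plain backprop through ReLU: gate by the forward activation only."""
    return [g if a > 0 else 0.0 for g, a in zip(grad_out, pre_activation)]

def guided_backprop_relu(grad_out, pre_activation):
    """Guided Backpropagation: additionally zero out negative gradients,
    keeping only features that positively support the predicted class."""
    return [g if (a > 0 and g > 0) else 0.0
            for g, a in zip(grad_out, pre_activation)]

pre_act = [1.2, -0.3, 0.8, 2.0]   # pre-ReLU activations (hypothetical)
grad = [0.5, 0.7, -0.2, 0.9]      # gradient from the class score

plain = backprop_relu(grad, pre_act)           # [0.5, 0.0, -0.2, 0.9]
guided = guided_backprop_relu(grad, pre_act)   # [0.5, 0.0, 0.0, 0.9]

# Rank plant parameters by their guided relevance, as in a parameter-
# selection step for reclassification.
ranking = sorted(range(len(guided)), key=lambda i: -guided[i])
```

In the study this gating is applied at every ReLU of the CNN, and the resulting input-level relevances are what select the "most relevant parameters" for the reclassification comparison.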

An Operator Support System Framework and Prototype for Initial Emergency Response in Nuclear Power Plants
PRESENTER: Jung Sung Kang

ABSTRACT. Nuclear power plant operation can be categorized into three modes: normal, abnormal and emergency operation. Especially in emergency operation, operators are exposed to highly stressful conditions and conduct mentally taxing activities, since immediate and appropriate mitigations are required. To reduce human errors in such a harsh situation, emergency operating procedures (EOPs) are used, which provide appropriate tasks to mitigate situations and support diagnosing symptoms of nuclear power plants. However, human error is still a major contributor to nuclear power plant accidents. In order to reduce human error, many operating-automation methodologies are currently being researched. However, in the nuclear field, which requires a high degree of safety, it is difficult to quickly apply automation technology, so it is necessary to apply and verify low-level intelligent operator support systems. In this paper, we propose the concept of an intelligent operator support system replacing the EOPs that cover initial responses and diagnosis tasks. The proposed operator support system has a parallel structure in which monitoring tasks are conducted in parallel, in contrast with existing EOPs that are performed sequentially. From this monitoring information, it provides intuitive and accurate information to the operator through the state of the critical safety functions and the master logic diagram. In addition, information on latent risks due to auxiliary system failures is provided to the operator using the multilevel flow modeling technique. The operator support system automatically performs a diagnosis when all emergency initial tasks have been performed and recommends appropriate procedures for follow-up actions. There are three main advantages of this operator support system. First, response time is saved compared to the existing procedures, since the monitoring tasks are performed in parallel. 
Second, human error can be reduced because the system performs the information-gathering and response-planning tasks that the operator must perform manually in the existing procedure system. Third, unlike full operating automation, the operator can take over at any time when a problem occurs in the system, because execution tasks are conducted by an operator. Finally, this system is expected to be used as a transitional technology towards nuclear power plant automation.


ABSTRACT. The ongoing COVID-19 crisis has renewed scholarly interest in organizational resilience. To ensure resilience, organizations must develop the ability to proactively prepare for ambiguous and unexpected situations (Morel et al., 2008). From this perspective, resilience may be considered a mindful process leading to reliability (Linnenluecke, 2017), where mindfulness allows to collectively manage the stability/vividness tension and extend individual limits of attention (Weick & Sutcliffe, 2006, 2007). A high level of environmental uncertainty increases the risk and may lead to violations of organizational limits (Farjoun & Starbuck, 2007). In addition to exogenous environmental limits, organizations are affected by endogenous limits of cognition and managerial control, and also by non-cognitive factors such as habitus. However, many questions remain. Following a recent call for further research on organizing for resilience (Linnenluecke, 2017, p. 26), the aim of our paper is to explore how organizational limits restrain the development of mindfulness (foresight and cognition) and how organizations deal with those limits to develop resilience. We conducted a qualitative case study within a major European nuclear power plant. We wanted to better understand how, in a highly controlled and regulated industry, managers increase resilience by pushing organizational limits. Our analysis shows that the implemented practices constrained endogenous organizational limits instead of helping to extend them. Our paper highlights the role of mindfulness and attention in building resilience and the tensions between managed and regulated safety. The obligation of result (e.g., reliable practice) is in tension with the obligation of means (e.g., procedure to follow). Moreover, our case study illustrates the negative effect of organizational context on the extension of organizational limits. 
In addition, we enrich the notion of endogenous limits by adding the non-cognitive dimension of the habitus of the nuclear energy industry. We believe that a better understanding of the organizational limits to developing resilience may offer managers the opportunity to better consider the role of organizational context and to adapt training programs.

11:30-12:30 Session MO2H: Aeronautics and Aerospace
Location: Cointreau

ABSTRACT. The main goal of this article is the comparison of the readiness [1] of the aircraft used for cadet training to perform their tasks. During their education at the Polish Air Force University, cadets use the following aircraft: Cessna CA-150, Diamond DA-40, Orlik and Iskra. Each of them is dedicated to a different level of pilot training: the first two are used for elementary training and the Iskra for specialist training on a jet aircraft. The guarantee of valid training and training safety is closely related to the readiness of the aircraft to perform the task and, consequently, to its reliability. As part of the research, data from the operation process of the aircraft used at the Military University of Aviation were investigated. As the method of analysis, a semi-Markov model was chosen, which is one of the analytical methods based on the analysis of stochastic processes. It is based on the assumption that a technical object being in different operational states is a random variable. Unlike the Markov model, the process does not require an exponential form of the probability distribution; however, such an approach requires more complex mathematical methods [2]. The semi-Markov model allows the determination of the probability of the aircraft being in one of the operational states: waiting, before-flight service, flight, after-flight service, hangar service. On this basis, it can be concluded that over time the dominant element turns out to be the after-flight service. Different results were obtained in the initial phase depending on the assumed initial state. Constant levels (called limit probabilities) were reached after about 30 days. It has been assumed that hangar service operations are performed regularly in accordance with the guidelines described by the manufacturer in the operating instructions.


1. B. S. Dhillon, Reliability, Quality and Safety for Engineers, CRC Press (2005).
2. J. Janssen, R. Manca, Applied Semi-Markov Processes, Springer US (2005).
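As a rough illustration of the semi-Markov computation described in the abstract, the limit probabilities can be obtained from the stationary distribution of the embedded Markov chain, weighted by the mean sojourn times. The five-state structure follows the abstract; the transition probabilities and sojourn times below are invented placeholder values, not the paper's data.

```python
import numpy as np

# States: 0 waiting, 1 before-flight service, 2 flight,
#         3 after-flight service, 4 hangar service
# Embedded-chain transition probabilities (illustrative values only)
P = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.9, 0.0, 0.0, 0.0, 0.1],
    [1.0, 0.0, 0.0, 0.0, 0.0],
])
m = np.array([10.0, 1.0, 2.0, 1.5, 24.0])  # mean sojourn times in hours (assumed)

# Stationary distribution nu of the embedded chain: nu @ P = nu, sum(nu) = 1
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
nu, *_ = np.linalg.lstsq(A, b, rcond=None)

# Semi-Markov limit probabilities: long-run fraction of time in each state
pi = nu * m / (nu @ m)
```

With these made-up numbers the "waiting" state dominates; the paper's data evidently make after-flight service dominant instead, which only the observed transition frequencies and sojourn times can decide.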

Power flow based fault state propagation model and its application to aircraft actuation system
PRESENTER: Yajing Qiao

ABSTRACT. In order to optimize the layout of sensor measuring points for accurate fault location, this paper establishes a power transfer model of a complex electromechanical system and studies the power-transfer-based fault state propagation process. Taking an aircraft actuation system as an illustration, this paper analyzes the power flow transfer process from the permanent magnet DC motor to the piston pump and hydraulic cylinder. Based on the power transfer model, the fault state propagation process is then established and analysed. The results indicate that fault detection sensitivity attenuates as the distance between the sensor measuring point and the fault location increases; therefore, faults should be detected as close as possible to their location. The proposed method can describe the fault state propagation process of the system and verify the validity and accuracy of the theoretical analysis, which provides a basis for fault location and diagnosis based on power flow.

DROSERA: a Drone Simulation Environment for Risk Assessment

ABSTRACT. Use of Unmanned Aerial Vehicles (UAVs), or drones, flying over large infrastructure networks appears to be a very efficient way of performing aerial inspection and data collection. This solution is now commonly investigated, e.g. for railway [1] or power line [2] maintenance. However, as such facilities are closely interspersed with inhabited areas, their aerial monitoring must take into account the risk induced on the ground for the population, transportation networks, critical infrastructures, etc. Thus, drone operators have to ensure that the flight trajectories respect a given level of safety with respect to third parties before applying for flight authorization. This paper presents a tool (DROSERA) for Probabilistic Risk Assessment of fixed-wing UAV missions. The probability of casualties among people or accidents on transportation networks is computed based on the evaluation of conditional probabilities related to four events: loss of control of the UAV, non-controlled ground impact, collision with a third party, and casualty or accident. This tool integrates several models for such probability computations, presented in the literature [3] or developed by the authors in previous works [1,4]. New models are presented in this paper to account for specific flight termination strategies for risk mitigation (terminal spiral), sensitivity to the hour of the day (people and traffic), and the effect of wind conditions. A new feature to account for specific airspace crossing is also investigated as a complement to ground risk assessment. A synthesis of these models and their integration into the DROSERA tool are described in this paper, along with the required inputs and computed outputs (e.g. probabilities, risk maps). An illustration is proposed by considering the risk assessment of a UAV mission in a semi-urban scenario, combining results obtained by the implemented models.
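The four-event chain in the abstract lends itself to a simple back-of-the-envelope computation: the per-flight-hour casualty probability is the product of the conditional probabilities, which can then be aggregated along trajectory segments with differing exposure. The numbers below are illustrative assumptions, not values from DROSERA.

```python
# Hypothetical conditional probabilities (illustrative only)
P_LOC = 1e-3     # loss of control of the UAV, per flight hour
P_IMPACT = 0.5   # non-controlled ground impact, given loss of control
P_HIT = 0.01     # collision with a third party, given ground impact
P_FATAL = 0.3    # casualty, given collision

def casualty_rate_per_hour(p_loc=P_LOC, p_impact=P_IMPACT,
                           p_hit=P_HIT, p_fatal=P_FATAL):
    """Chain the four conditional events into a casualty rate."""
    return p_loc * p_impact * p_hit * p_fatal

def mission_risk(segment_hours, hit_probs):
    """Aggregate risk over trajectory segments whose population exposure
    (and hence collision probability) differs, e.g. urban vs rural."""
    return sum(t * casualty_rate_per_hour(p_hit=ph)
               for t, ph in zip(segment_hours, hit_probs))
```

Mitigation strategies such as the terminal spiral mentioned above act on individual factors in this chain (e.g. reducing the impact footprint and hence the collision probability).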

11:30-12:30 Session MO2I: Balanced System Reliability
Location: Giffard
Component assignment of circular k-out-of-n: G balanced system with 2 sectors considering component degradation
PRESENTER: Chenyang Ma

ABSTRACT. Balanced systems, featuring symmetric structures with spatially distributed components, are widely used in aerospace and military services and have increasingly become an important issue in reliability theory. This paper studies a circular k-out-of-n: G balanced system with 2 sectors, which is operating if at least k components in each sector are working and the system is balanced. First, an extended system balance condition is proposed and the system reliability model is presented based on minimal path sets. Second, considering the gamma degradation process of components, an optimization model is established to find the optimal component assignment maximizing the lower bound of the system reliability during the required mission time. Third, ∆-importance (DI) based heuristics are developed to solve the model. The analytical results and the numerical experiment illustrate the efficiency of the models and the solution method, shedding light on the design and maintenance strategy of balanced systems.
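For readers unfamiliar with the k-out-of-n: G building block used here, the reliability of a single sector with independent, identically distributed components is a binomial tail sum; squaring it gives a simple upper bound for the two-sector system that ignores the balance condition (which the paper handles via minimal path sets). This is generic textbook material, not the paper's model.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """P(at least k of n i.i.d. components work), each working w.p. p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def two_sector_bound(k, n, p):
    """Upper bound for the 2-sector system: both sectors need >= k working
    components; the (stricter) balance condition is ignored here, so the
    true system reliability is at most this value."""
    return k_out_of_n_reliability(k, n, p) ** 2
```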

Reliability Analysis of Load-Sharing Consecutive-k-out-of-n:F Balanced Systems by Considering Mission Abort policies

ABSTRACT. This paper considers a consecutive-k-out-of-n:F balanced system composed of m sectors, each of which consists of n identical components with exponentially distributed lifetimes. The lifetime distribution depends on a total load that is equally shared by all working components in the same sector. Components fail due to internal failure or external shocks. If one component fails, one component in each remaining sector should be forced down, or one forced-down component in the same sector should be resumed, to maintain balance. In this paper, balance is achieved when the number of working components in each sector is the same, and the system fails if there are at least k failed and forced-down components in any sector. To enhance the balanced system's survivability, mission abort policies are applied if the failure risk becomes too high. The mission reliability and system survivability are then derived. Numerical studies are presented to confirm the obtained results.


ABSTRACT. In this paper the first passage time of two symmetric rail-wheel pairs is studied. We consider two rail-wheel pairs that are subject to degradation due to unbalanced vibration and wear during long-term running, constrained by minimum allowable values of flange thickness and tread diameter. Once the rail-wheel pair on one side starts to degrade, the pair on the other side is affected by the unbalanced pressure and eventually starts to degrade as well. This paper does not consider the impact of the other wheel pairs of the same carriage. The flange thickness and the tread diameter are made dependent on each other by maintenance actions such as profiling: in order to keep the flange thickness above a given value for the sake of safety, when the flange thickness decreases, the tread diameter is reduced by reprofiling (rotary cut). In addition, for a pair of wheels the wheel diameter difference should be less than a given threshold. We assume that the natural degradation of the flange thickness and the diameter can be modeled by a Wiener process, while the reduction of the diameter by profiling can be modeled by a general jump process. The dependence between the two wheels is affected by the wheel diameter difference. The diameter of each rail-wheel is inspected periodically; when the difference in diameter between the two rail-wheel pairs is greater than the tolerance, the system is regarded as failed. The first passage time occurs when either rail-wheel pair's diameter reduction exceeds the prescribed threshold, at which point a perfect preventive maintenance is planned. A maintenance policy is proposed to optimize the amount of diameter reduction at each reprofiling based on the thresholds for flange thickness and the diameters of the paired wheels. Analytical results are verified by numerical simulations under different assumptions.
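The first passage time of a Wiener degradation process, as used above, is straightforward to approximate by simulation; for drift mu > 0 and threshold L its mean is L/mu (inverse Gaussian distribution). The parameters below are arbitrary illustrations, not the paper's wheel-wear data.

```python
import random

def first_passage_time(threshold, mu, sigma, dt=0.01, t_max=1e4, seed=1):
    """Euler simulation of X(t) = mu*t + sigma*B(t); returns the first
    time X crosses `threshold`, or None if not reached before t_max."""
    rng = random.Random(seed)
    x = t = 0.0
    while t < t_max:
        x += mu * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
        if x >= threshold:
            return t
    return None
```

With mu = 1 and threshold 10, hitting times concentrate near threshold/mu = 10, matching the inverse Gaussian mean.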

11:30-12:30 Session MO2J: Risk-Informed Digital Twins // Healthcare and Medical Industry
Location: Botanique
A cloud-based computational platform to manage risk and resilience of buildings and infrastructure systems

ABSTRACT. The primary responsibility of asset managers is to ensure that their assets, such as buildings and infrastructure systems, provide the service needed. They have the continuous task of executing interventions to help prevent the loss of service and to restore service after it is lost, which can happen, for example, due to natural hazards such as floods, landslides, and earthquakes. In other words, they have the continuous task of making their assets resilient. To identify optimal mitigation measures, the risk and resilience of buildings and infrastructure systems have to be assessed. This requires executing computational models from different disciplines and bringing their results together in order to make sound quantitative statements. Nonetheless, conducting such assessments can be particularly challenging due to the numerous scenarios and chains of interrelated events that require consideration, the modelling of these events and of the relationships among them, and the limited availability of support tools to run the models in an integrated way.

Cloud-based simulations offer a solution to this problem by providing almost unlimited storage and computational resources; furthermore, the cloud enables and facilitates collaborative approaches and provides a Digital Twin of the assets for prediction and disaster management. This paper introduces a computational platform that enables cloud-based simulations to estimate the risk and resilience of buildings and infrastructure systems. The setup of the computational platform follows the principles and ideas of systems engineering and makes it possible to incorporate and link different events. The platform is centred on the integration of the spatial and temporal attributes of the events that need to be modelled to estimate risk and resilience. Furthermore, the platform supports the inclusion of the uncertainty of these events and the propagation of these uncertainties throughout the risk and resilience modelling. Through the modular implementation of the simulation platform, the updating and swapping of computational models from different disciplines - according to the needs of engineers and decision-makers - is supported. The platform enables high-performance computing for simulation-based risk and resilience assessments, considering the occurrence of time-varying multi-hazard events affecting buildings and infrastructure systems.

Beyond the modelling of complex scenarios, the proposed computational platform provides technologies and tools to help decision-makers determine the best mitigation policies. This is achieved through collaborative technologies such as data sharing, real-time collaboration, a continual process of creating, editing, and commenting, as well as a cheap and easy way of creating visuals and reports.

Digital Twins of Infrastructure
PRESENTER: Armin Tabandeh

ABSTRACT. This paper proposes a systematic approach to create digital twins of infrastructure for regional risk and resilience analysis. A digital twin consists of a virtual representation of infrastructure intended for specific analyses (e.g., in reliability and resilience analyses considering relevant hazards.) We formulate creating digital twins as a model selection problem whose objective is to ensure predicting the response quantities of interest (e.g., infrastructure performance or resilience measures) with the desired accuracy level under computational resources constraints. The virtual representation requires collecting and integrating data about infrastructure physical and operational characteristics from multiple sources. The required data depend on the considered analyses to predict the response quantities of interest (e.g., considering only reliability analysis or also including functionality analysis.) The collected data are typically unstructured and incomplete; so, they need to be processed and synthetically augmented to, for example, capture infrastructure’s future developments. Creating digital twins also entails deciding the scales, boundaries, and resolution of the virtual representation and selecting models for intended analyses from multiple candidates, each of different computational fidelities and evaluation costs. A digital twin is hardly a perfect representation of infrastructure reality; there are missing or limited data about infrastructure, and several sources of uncertainty affect predicting infrastructure’s states for decades ahead. Uncertainty propagation is an integral part of creating digital twins to understand how missing data and different sources of uncertainty affect predicting the response quantities of interest. Uncertainty propagation requires evaluating the created digital twins at multiple realizations of the sources of uncertainty. 
However, using a detailed digital twin with high-resolution, high-fidelity models can lead to a high computational cost. The proposed approach guides the creation of statistically equivalent digital twins following two general principles: 1) the selected scale, resolution, and computational fidelities collectively ensure the desired accuracy in predicting the response quantities of interest, and 2) computational resources are allocated based on each model's contribution to the uncertainty of the response quantities of interest.

Predicting clinical outcomes of ovarian cancer patients: deep survival models and transfer learning

ABSTRACT. With the advent of high-throughput sequencing technologies, genomic platforms generate a vast amount of high-dimensional genomic profiles. One of the fundamental challenges of genomic medicine is the accurate prediction of clinical outcomes from these data. Gene expression profiles are established to be associated with overall survival in cancer patients, and from this perspective univariate Cox regression analysis has been widely used as the primary approach to develop outcome predictors from high-dimensional transcriptomic data for ovarian cancer patient stratification. Recently, the classical Cox proportional hazards model was adapted to an artificial neural network implementation and tested with The Cancer Genome Atlas (TCGA) ovarian cancer transcriptomic data, but did not yield satisfactory improvement, possibly due to the lack of datasets of sufficient size. Nevertheless, this methodology still outperforms more traditional approaches, such as the regularized Cox model; moreover, deep survival models can successfully transfer information across diseases to improve prognostic accuracy. We aim to extend the transfer learning framework to “pan-gyn” cancers, as these gynecologic and breast cancers share a variety of characteristics as female hormone-driven cancers and could therefore share common mechanisms of progression. Our first results using transfer learning show that deep survival models can benefit from training with multi-cancer datasets of high-dimensional transcriptomic profiles.
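The objective that both the classical Cox model and its neural-network variants mentioned above optimize is the negative partial log-likelihood; a minimal NumPy version (with Breslow-style handling of ties) is sketched below. This is the generic textbook formulation, not the authors' implementation.

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """risk_scores: model output (log-hazard), e.g. X @ beta or a network's
    final layer; times: survival/censoring times; events: 1 = death observed.
    Sorting by descending time makes the cumulative log-sum-exp at position i
    run exactly over the risk set {j : t_j >= t_i}."""
    order = np.argsort(-np.asarray(times))
    s = np.asarray(risk_scores, dtype=float)[order]
    ev = np.asarray(events, dtype=bool)[order]
    log_risk_set = np.logaddexp.accumulate(s)
    return -np.sum(s[ev] - log_risk_set[ev])
```

In a transfer-learning setup, the same loss is simply minimized on the multi-cancer source data before fine-tuning on the ovarian cohort.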

12:30-14:00Lunch Break
14:00-15:00 Session Panel I: RAM and PHM Synergy
Location: Auditorium
RAM and PHM Synergy

ABSTRACT. RAM (Reliability-Availability-Maintainability) Engineering is by now a mature discipline, tracing its roots back to the immediate post-World War II period. It deals with the statistical properties of a population of assets, characterizes their failure modes and aims at optimizing the design of future equipment based on past experience, analysis and tests. It takes into account the maintenance and operations context and also provides inputs to the elaboration of maintenance plans and logistic support.

RAM analysis at the design stage is typically model-based (while exploiting past data), and RAM monitoring is data-driven but may be supported by models.

PHM (Prognostics & Health Management) emerged at the turn of the 21st century. It typically deals with individual assets which are equipped with sensors, and aims at monitoring their health and its progressive degradation, so as to diagnose impending failures and to avoid them by taking preventive actions when possible. The three main pillars of PHM are detection, diagnostics and prognostics.

PHM relies on model-based, data-driven or hybrid approaches.

Typically the two communities are separate for historical reasons.

The question we would like to raise here is: can the two communities both benefit from stronger links?

For example, the first stage in developing a PHM system is a failure mode, mechanisms and effects analysis (FMMEA), which is typically a RAM task. And one key expected outcome from a PHM strategy is increased availability.

While RAM focuses on estimating the probability distribution of the lifetime of a population, in a given environment with an average mission profile, PHM focuses on predicting the rate of function loss for an individual asset, with a customized profile and context. RAM deals with discrete events (failures) and metrics such as failure rate, MTTF or MRL (mean residual life), while PHM deals with continuous degradation data and metrics such as health indicators (or indices) and RUL (remaining useful life). While RAM leads to maintenance decisions (such as spares management and determination of maintenance intervals) for a whole fleet, PHM supports maintenance decisions for one individual asset. The ‘M’ (maintainability) in RAM includes testability, which addresses the ability to detect and diagnose faults — an essential concern for PHM.
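The contrast drawn above between population metrics (MTTF) and individual-asset metrics (RUL) can be made concrete with a toy example: MTTF averages observed fleet lifetimes, while a PHM-style RUL extrapolates one asset's health indicator to its failure threshold. All numbers are invented for illustration.

```python
import statistics

# RAM view: mean time to failure over a (hypothetical) fleet, in hours
failure_times = [1200.0, 950.0, 1100.0, 1300.0, 1050.0]
mttf = statistics.mean(failure_times)  # 1120.0

# PHM view: remaining useful life of ONE asset from its health indicator
def rul_linear(health_history, threshold, dt=1.0):
    """Linearly extrapolate a degrading health indicator (sampled every dt
    hours) down to `threshold`; returns None if no degrading trend."""
    n = len(health_history)
    slope = (health_history[-1] - health_history[0]) / ((n - 1) * dt)
    if slope >= 0:
        return None  # asset not degrading; no finite RUL estimate
    return (threshold - health_history[-1]) / slope
```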

The IoT paves the way for individualized monitoring. Both RAM and PHM can be supported by machine learning techniques along with classical statistics. Therefore the borders between the two disciplines may be getting blurred.

Can cross-fertilization occur in both directions? For instance, can PHM draw on the considerable body of knowledge accumulated by RAM specialists over decades (including the venerable theories of the ‘founding fathers’, such as Barlow & Proschan, or Gnedenko, with which PHM engineers are often not familiar)? And at the same time, can RAM Engineering be rejuvenated by machine learning algorithms and perspectives (which RAM engineers often do not have in their toolbox)?

Should corporate RAMS departments give way to RAM/PHM departments?

And what are the implications for higher education and research?


Dersin, Pierre, ALSTOM, France, and Luleå University of Technology (LTU), Sweden

Kumar, Uday (Prof.), Luleå University of Technology (LTU), Sweden

Fink, Olga (Prof.), ETH Zürich, Switzerland

Stoelinga, Marielle (Prof.), University of Twente, the Netherlands

Remy, Emmanuel, Electricité de France (EDF) R&D, France

Dupouy, Francis, SERMA, France

Jedruszek, Marion, SAFRAN AIRCRAFT ENGINES, France

14:00-15:00 Session Panel III: Autonomous system safety, risk, and security
Autonomous system safety, risk, and security

ABSTRACT. This special session will be organized in the format of a panel for discussing autonomous systems safety, risk, and security (SRS). The session will discuss the results of the First International Workshop on Autonomous Systems Safety (IWASS), as well as early findings of the 2nd IWASS. Key experts will be invited to discuss autonomous systems SRS from an interdisciplinary and cross-industrial perspective. The panel is expected to present the results from the IWASS discussions and make them more accessible to a wider audience. Participants may present additional thoughts on the discussions and workshop outcomes.


The First IWASS was organized by the Department of Marine Technology at the Norwegian University of Science and Technology (NTNU) and the B. John Garrick Institute for the Risk Sciences at the University of California, Los Angeles (UCLA). IWASS took place in Trondheim, Norway, from March 11th to 13th, 2019. Participation was by individual invitation only and included 47 subject-matter experts from Europe, Asia, Australia, and the U.S.A., working in both academia and industry. The 2nd IWASS is planned as a virtual event to be held in April 2021. Autonomous systems on land, in the air and on the sea are being widely applied. The safety issues concerning these systems are the focus of many research projects and publications, yet each industry and academic field attempts to solve the arising safety issues on its own. Given the identifiable similarities, could common solutions be envisioned and developed? Answering these and related questions motivates this panel as an opportunity for an interdisciplinary discussion on risks, challenges, and foremost potential solutions concerning safe autonomous systems and operations.


This session aims at using the results of the First IWASS, early findings of the 2nd IWASS, and the participants' expertise to discuss autonomous systems SRS, identifying key challenges and possible solutions.


Thieme, Christoph A., Norwegian University of Science and Technology

Ramos, Marilia, University of California Los Angeles

Utne, Ingrid B., Norwegian University of Science and Technology

Mosleh, Ali, University of California Los Angeles

14:00-15:00 Session Panel IV: Digital twins to improve decision making in the built environment
Location: Panoramique
Digital twins to improve decision making in the built environment

ABSTRACT. Digital twins still mean different things to different people, but all descriptions seem to share a common characteristic: a connection between the physical and the digital worlds. This panel aims to explore the opportunities provided by digital twins in the built environment. Discussions will address examples, challenges and guiding principles of digital twins, such as federation, trust and purpose.


Peter El Hajj - Head of UK National Digital Twin Programme Delivery

Panel list

Sebastien Coulon - Cofounder/COO - SpinalCom

Emmanuel Di Giacomo - EMEA BIM & AEC Ecosystem Business Development

Mary Juteau - Responsable du Service Information Géographique - Angers Loire Métropole

More TBC

14:00-15:00 Session Panel IX:
Location: Amphi Jardin

ABSTRACT. Artificial intelligence requires a great deal of research and innovation to reach its full potential. In particular, the integration and safe use of AI technologies are essential to support the engineering, development and diffusion of innovative products and services.

The program, an important initiative coordinated by IRT SystemX and supported by major industrial and academic partners, aims to provide an environment for the design, validation and testing of AI systems to strengthen trust, explainability and dependability, and to move towards the certification of these systems.

The session will highlight the motivations for the program and its main components, and look at the scientific challenges associated with it, especially in the domains of safety and reliability.

Panel list

  • Bertrand Braunschweig, scientific coordinator of
  • Loïc Cantat, technology coordinator, IRT SystemX
  • Morayo Adedjouma, CEA LIST, Research Scientist.
  • Christophe Alix, Thales, Senior Expert in Autonomous Systems
15:00-15:20Coffee Break
15:20-16:00 Session Plenary II: Plenary session
Location: Auditorium
From Parameter Design to Data Science

ABSTRACT. In this talk I will first trace the history of my role in the coining of the term “data science”. For many years since the early 1980s, I had grown dissatisfied with using the term “statistics” to describe my profession because it is usually connected with descriptive statistics, while what statisticians do can be summarized as a trilogy of data collection, data modeling, and problem solving. Thus I proposed the terms data science and data scientist in a public lecture at the U. of Michigan in 1997. With the explosion of huge data collected through the internet, data science has grown to become a very popular, fashionable and impactful profession. I will describe how it is so different from the traditional meaning and work of statistics. A major new component is the role played by computer scientists and the new emphases on algorithms, coding and huge data they bring in. I will end with some examples about the applications to uncertainty quantification and variation reduction.

16:10-17:30 Session MO3A: Risk Assessment
Location: Auditorium
PRESENTER: Johan Cobbaert

ABSTRACT. In the context of the energy transition, the development of wind energy projects situated in an industrial environment or close to cities is a preferred option in regions with high population densities, since it offers major advantages related to landscape and noise pollution, NIMBY (Not In My Backyard) concerns and the availability of an electrical connection to the grid. On the other hand, it also presents a drawback in terms of safety during winter conditions due to the presence of people in the vicinity of the wind turbine, where ice accretion on the wind turbine blades represents a major risk, as ice fall may cause incidents, even lethal accidents. The current common methodology to identify the potentially risky areas below and around wind turbines uses the Seifert formula, which is based on a deterministic approach. The safety factors associated with this method lead to excessively large zones around the turbines, without granularity or circumstantial sub-zones. The approach presented in this paper is a probabilistic, risk-based Monte Carlo methodology associated with an acceptance framework. Developed by Tractebel, this methodology allows a much more detailed mapping of the risk zones and also enables modeling the impact of mitigating measures. This represents a real risk-based decision tool for wind farm developers and operators. The approach is fully compliant with the IEA Wind ‘International recommendations for ice fall and ice throw risk assessments’ and recent international safety standards. The tool has been translated into a cloud-based application called TRiceR (TRactebel Ice Fall Risk Assessment Digital Application).
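A minimal Monte Carlo sketch in the spirit of the probabilistic approach described above (drag and wind neglected, uniform release conditions; all parameters are assumed illustrations, not Tractebel's TRiceR model):

```python
import math
import random

def throw_distance(v0, angle_rad, h0=100.0, g=9.81):
    """Horizontal range of a drag-free ice fragment released at height h0
    with speed v0 and elevation angle angle_rad."""
    vx = v0 * math.cos(angle_rad)
    vy = v0 * math.sin(angle_rad)
    # positive root of h0 + vy*t - g*t^2/2 = 0 (time to reach the ground)
    t = (vy + math.sqrt(vy * vy + 2.0 * g * h0)) / g
    return vx * t

def p_land_beyond(radius, n=20_000, v_tip=80.0, h0=100.0, seed=7):
    """Monte Carlo estimate of P(fragment lands farther than `radius`)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        v0 = rng.uniform(0.0, v_tip)                  # release point on blade
        ang = rng.uniform(-math.pi / 2, math.pi / 2)  # release elevation
        if throw_distance(v0, ang, h0) > radius:
            hits += 1
    return hits / n
```

Multiplying such landing probabilities by icing-event frequency and local exposure (people, traffic) yields the zone-by-zone risk map that a single deterministic Seifert radius cannot provide.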

Comparison of risk analysis approaches for analyzing emergent misbehavior in autonomous systems
PRESENTER: Nektaria Kaloudi

ABSTRACT. The evolution of autonomous systems depends on their constituent parts’ ability to act, seemingly independently, so that their collective behavior, termed emergent behavior, results in novel properties that appear at a higher level. Although these emergent behaviors can be beneficial, systems can also exhibit unintentionally and intentionally malicious emergent misbehaviors. As systems are becoming more complex and sophisticated, their emergence characteristics may result in a new type of risk, called emergent risk, which would affect both the systems and society. Although there have been several studies on achieving positive desirable emergent behavior, little attention has been given to the risk of undesirable emergence from either the safety or the security perspective. The main objective of this paper is to provide a structured approach to understanding emergent risks in the context of autonomous systems. This approach has been analyzed based on an emergent risk application example – a swarm of drones. We explore different security and safety risk co-analysis methods with a causal interpretation, and provide a comparative analysis based on theoretical factors that are important for assessing the emergence of various threats. The study results reveal each method’s strengths and weaknesses for addressing emergent risks, by providing insights into the need for the development of an emergent risk analysis framework.


ABSTRACT. With the publication of the Brazilian General Data Protection Law (LGPD), many companies headquartered in Brazil need to work on adapting their processes. Most companies are seeking compliance; however, many still do not know how to make this adjustment, and the risk of non-compliance is huge. This study reviews the current percentage of companies in Rio de Janeiro that have a chatbot service and that comply with the LGPD. It also reviews the steps taken to adjust the chatbots used by companies in the European Union to comply with the General Data Protection Regulation (GDPR). As a methodological approach, a search of the state-of-the-art literature was conducted to identify the most current publications on the subject, and a survey was conducted among several companies in Rio de Janeiro that have a chatbot service. As a result, a flowchart showing the steps for adapting a chatbot to the LGPD is presented, as well as the risks of non-compliance. This study fills a gap in the literature, since no specific previous work covering this topic has been found. Many companies may benefit from this study by learning the steps to adapt their chatbot to the LGPD and so avoid the risks of non-compliance.


ABSTRACT. Nowadays, the number of infrastructures and facilities where natural gas (NG) is handled in liquefied form (LNG) is constantly increasing. However, for most end users (e.g., power stations), LNG has to be vaporized back to NG, usually at high-pressure conditions. Therefore, risk analysis activities for plants involving natural gas have to account for both liquid and gaseous accidental Loss of Containment (LOC) events. As a well-consolidated risk analysis practice, Quantitative Risk Assessment (QRA) is widely used to support the design phase of complex technical systems with potential for Major Accidents (MA), such as process facilities related to natural gas. QRA can be defined as a formal and systematic risk analysis approach to quantifying the risks associated with the operation of an engineering process, aiming to quantitatively demonstrate the risk levels to which personnel, nearby populations or assets are exposed in case of occurrence of an MA. Combining natural gas handling facilities and QRA safety assessment, this work analyses a case study involving the final steps of the supply chain of a natural gas facility in the design phase. In detail, the assessment covers the operations to supply natural gas to an onshore power station, from the ship unloading of LNG at a nearby seaport to the transportation of natural gas to the power station site in gaseous (NG) or liquid (LNG) form. In particular, the present work compares the risk to individuals and the population of supplying natural gas through a pipeline with an alternative consisting of supplying LNG in ISO containers by truck. Since the QRA methodology is based on well-known industrial standards, the results achieved can serve as a reference case for safety analysts when alternative solutions for natural gas supply are to be considered, providing useful indications on the different risk situations.

16:10-17:30 Session MO3B: Risk Management
Location: Atrium 2
Justifying the basis of Risk Decisions in a Pandemic – framing the issues

ABSTRACT. The COVID-19 pandemic has posed difficult and contentious issues for society to deal with. At this scale, the kind of utilitarian calculus that has traditionally underpinned contentious developments in more normal situations (1) seems no longer to command the unquestioning acceptance of impacted populations. The paper discusses the applicability of classic Cost Benefit Analysis approaches to a range of issues that have arisen and that have parallels and implications beyond the pandemic context: investment in preplanning and preparedness; equivalence of the values of lives; one shot or two (the justification for giving single shots: efficacy versus speed of roll-out); and how far one can afford to really follow the science. Decisions on preplanning and preparedness involved maintaining stocks of PPE and the availability of intensive care beds and personnel. The question of the equivalence of the values of lives arises not only in these decisions on preparedness but also in decisions regarding postponing planned and even life-saving treatments for patients other than those with COVID-19. Locking down society to protect the vulnerable affects the life expectancy of the younger and also their quality of life. Postponing second shots reduces the protection level but increases the number of people protected. All these decisions can be brought under the heading of trying to balance the economy against compassion. The ethics of responses to the pandemic are going to be debated for some time. A ‘rational’ utilitarian calculus that balances QALYs (Quality Adjusted Life Years) of immediate lives saved against lives lost, mental health impacts, etc. due to lockdowns, calculated on an individual basis and summed, may suggest lockdowns should not have been the answer. Some argue that the future costs of lockdowns could be orders of magnitude greater than the immediate benefits.
However, there are equally compelling arguments that in the long run the benefits of lockdowns will exceed the costs (1,2). Moreover, this calculus, whichever way it goes, seems to leave out a number of reasonable ethical dimensions. For instance, the need not to ‘leave anyone behind’, i.e. not to directly sacrifice people even if that could result in greater indirect overall group harm down the track. Of course, what counts as ‘direct sacrifice’ is open to debate, and a government that does not seek to mitigate the known impacts of lockdown could also reasonably be seen to be ‘directly sacrificing’ people. Health authorities and health economists have used QALYs for decades to judge allocations in health care. It is mandated in how the UK buys medicines. The public service has well-developed methodologies for using QALYs in cost-benefit analyses throughout government. However, there are at least three problems in using QALYs, and the value thereof, in assessing measures such as lockdowns and which patients are prioritized. The first problem is that the QALY approach is not used as a single absolute determinant for decisions on life-saving actions, even though it is sometimes presented that way (3). The second problem is that the value of a QALY is a political choice. There is no scientific basis for any value, nor is there a law of nature on which such a value can be based. This means that, at least in principle, the use of a value has to be justified for each instance in which it is used, and that using a value ‘because it is used in another context’ is insufficient justification. The third problem is that the QALY approach discriminates between people solely on the basis of age. The elderly have fewer expected QALYs left than the young. There are counter-arguments such as the “fair innings” approach, but these are qualitative and much less absolute than the strictly financial-economic considerations in which the value of a QALY is one of the parameters.
The ethical issue touches on the fundamental approach in a market-driven society, where numbers trump people. At the hands of people trained toward profit, represented only by numbers and currencies rather than human beings, people become expendable commodities represented by numbers. A reverence for human life for its own sake is probably the most fundamental of all human social values. What matters is that, in one form or another, such values form part of almost everyone’s intuitive values.
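The summed utilitarian calculus the abstract criticizes can be made concrete with a deliberately simplified sketch. Every number below is hypothetical, chosen only to show the structure of the computation and how easily its conclusion flips with the value choices:

```python
# Hypothetical QALY balance sheet for a lockdown decision.
# All inputs are invented for illustration, not real epidemiological data.
lives_saved = 20_000                  # deaths averted by the lockdown
qalys_per_life_saved = 6.0            # remaining quality-adjusted years (older cohort)
people_under_lockdown = 10_000_000
qaly_loss_per_person = 0.02           # delayed care, mental health, lost quality of life

benefit = lives_saved * qalys_per_life_saved         # QALYs gained
cost = people_under_lockdown * qaly_loss_per_person  # QALYs lost, summed individually
net = benefit - cost
```

With these invented inputs the calculus comes out against lockdown (net is negative); halving `qaly_loss_per_person` reverses the sign. That sensitivity to an essentially political parameter is exactly the second problem the abstract raises.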

PRESENTER: Sharmin Sultana

ABSTRACT. According to experts, inherent safety is the best approach to risk reduction (Khan and Amyotte, 2002; Kletz, 1985). Researchers in academia and industry have studied this topic for a long time. However, many misconceptions and a lack of clarity still exist in the industry, and there have been many variations in defining the concepts and principles of inherent safety. The present work builds on the findings of past work. It focuses on the in-depth and systematic identification of hazards for better understanding. Inherent safety measures are proposed based on the factors contributing to creating the hazard. Identifying inherent hazard and risk factors makes it easier for the user to find an effective inherent safety solution. This approach helps draw a clear distinction between the three risk reduction measures: inherent, passive, and active. Inherent safety measures try to reduce the hazard at its origin or to attenuate inherent hazard and risk factors, while passive and active measures only focus on reducing the consequences of accidents or hazardous events. This paper presents a new definition of inherent safety from a new perspective and identifies the principles to be used to achieve the inherent safety of a system. The application of inherent safety measures and their integration at the early design stage is vital for any chemical or other process industry. Systematic, classified identification of inherent hazard and risk factors can make it easier to find appropriate inherent safety measures considering their constraints. Future work can develop a model to quantify the interaction of various inherent risk factors and the relationship between risk factors and the system's risk level.

How to include malicious intent in risk analysis
PRESENTER: Franck Belpomo

ABSTRACT. The global economy is constantly changing, becoming more complex and less manageable. The risks that can affect an organization therefore increase steadily, and their consequences can potentially reach critical thresholds and sometimes fatal outcomes. A top manager must assess risks in order to make choices, but while the occurrence of accidents can be measured, threats still elude any method of classification and quantification. Generally, this danger is ignored because it cannot be evaluated. To show that a new, profitable, and efficient way is possible, we propose to systematize the analysis of malevolence and demonstrate that this approach is part of a process of continuous improvement. This tool for resilience will be a guarantee of efficiency in crisis management and will minimize its negative consequences. After a bibliographical analysis and the observation of a lack of suitable tools, we present the EBIOS RM method, which has advantages in meeting the identified need and can provide a basis for a future solution. We conclude our reflection by demonstrating the opportunities offered by our hypothesis through the analysis of a concrete case of a recent industrial accident. In conclusion, we expect a new vision of security as an investment and an asset for the value chain of economic or public organizations.


ABSTRACT. Smart City Lighthouse projects are a specific European innovation instrument for large-scale deployment and replication of Smart City and energy solutions. This cross-country and multi-disciplinary setup fosters innovation, but it also leads to complexity. Risk management represents a key approach for handling this complexity and meeting the various types of risks that can occur in Smart City Lighthouse projects. A review of current risk management practices, covering all seventeen existing Lighthouse projects, has been conducted. The review showed that risk management in most Lighthouse projects is in line with common standards, as described in ISO 31000 and the Open PM2 Project Management Framework, highlighting the identification, analysis, evaluation, and treatment of project risks. However, the occurrence of several high-profile cybersecurity and privacy related vulnerabilities has uncovered the need to expand risk management beyond these standards. The present paper investigates how risk management can be improved through collaborative governance, highlighting stakeholder participation and involvement, and by adopting an integrated risk-resilience based approach. A specific Smart City Lighthouse project is used to illustrate the discussion.

16:10-17:30 Session MO3C: Decision-making
PRESENTER: Loick Simon

ABSTRACT. The implementation of multisource sensors combined with data analysis systems (e.g. machine learning) might provide new solutions for predictive maintenance to improve socio-technical system reliability. The Seanatic project aims to develop a decision support tool to improve maintenance processes in the maritime field, considering the limits and benefits of human-factor expertise. From this perspective, this paper describes the Cognitive Work Analysis (CWA) approach for investigating the new key functions that emerge in the future maintenance socio-technical system. After phase one of the CWA was completed (WDA - Work Domain Analysis), the functions identified were used in the subsequent phases (ConTA - Control Task Analysis and SOCA - Social Organisation and Cooperation Analysis) to highlight different implications for humans' cognitive activities. Machine breakdowns of a vessel could be significantly reduced by assisting the chief engineer with real-time supervision and planning activities [4]. Based on a CWA approach, ecological interface design could support those activities.

An overview of Machine Health Management in Industry 4.0

ABSTRACT. Nowadays, the fourth industrial revolution is happening, bringing new paradigms and technologies. One of the pillars of this revolution is the Industrial Internet of Things (IIoT), which integrates sensors into the manufacturing system and helps to connect machines, products, and methods as an interconnected system. The amount of available data (5V: Volume, Velocity, Variety, Value, Variability) then continues to increase through various components (sensors, PLCs, etc.). These data are usually used for the purpose of improving the performance of the production system. An important objective is to keep production systems under continuous monitoring of their functions and corresponding health states. The system health state can be inferred by observing system behaviour data collected from installed sensors and then applying diagnostics and prognostics techniques to the observations: Prognostics and Health Management (PHM). The various precision sensors, high-speed data acquisition devices, computers, and servers in the IIoT constitute a new development space for PHM. In this study, we provide an overview of PHM methods. The review focuses on data-driven approaches that rely on available observed data and statistical models. There are two main types of observed data: direct and indirect. Direct data are directly related to the system health status, while this relation is indirect or partial for the latter. Thus, our study focuses on two types of models: direct-observed-state models and partial-observed-state models. Firstly, we review recent advancements in direct-observed-state models, which can be divided into continuous and discrete processes. Secondly, we focus in more detail on partial-observed-state models. Thirdly, we illustrate the implementation of PHM methodologies with a real-world application where data collected from sensors are exploited to predict system health states.
Finally, we identify the gap in this field and highlight future research challenges.

A robust optimization model for maintenance planning of complex systems

ABSTRACT. Nowadays, industrial systems are becoming more and more complex [1]. They are usually composed of many dependent components. Component dependencies can be classified into different groups, such as economic, stochastic, and structural dependence [2]. Among these, economic dependence has been extensively studied because it can help to significantly reduce the maintenance cost when maintenance activities are grouped. In the literature, a number of grouping optimization models have been developed and successfully applied to different industrial sectors [3]; however, the robustness of the optimal grouping solution has not been considered yet. In fact, the grouping solution is usually determined based on the expected values of random variables (failure times, maintenance costs, etc.). This way of calculating does not guarantee the precision of the grouping solution in the short term. The performance of the existing models is then very sensitive to several dynamic contexts that may occur over the system's life in real applications [4]. To overcome this limitation, we present in this paper a new grouping maintenance model that takes the robustness of the solution into account. For this purpose, a robust optimization technique [5], which allows the uncertainties of the optimization model to be taken into account, was applied to find the robust grouping solution. The effectiveness of the proposed model and the robustness of the grouping solution are then analyzed through different numerical examples.

Maintenance selection and technician routing on a geographically dispersed set of machines

ABSTRACT. Several articles deal with the problem of grouping maintenance operations in order both to better manage maintenance resources and to achieve cost savings related to various intervention opportunities. Grouping maintenance is even more important when systems are geographically distributed. In such a context, it is important to be able to guarantee acceptable levels of operation for each machine and to optimize the various associated maintenance routings. The literature dealing with these issues can be classified into two categories: parametric approaches, which seek to optimize an a priori decision structure, and non-parametric approaches, for which the groupings are provided by an optimization model, generally dynamic, whose conditions for the existence of optimal decision structures can be sought a posteriori. None of them deals with maintenance selection, i.e. choosing the level of efficiency of the operation to be performed at each maintenance. It is clear that when the problem increases in complexity, such an analysis is no longer possible. This is the problem addressed in this work.

We consider a set of geographically dispersed machines and a set of multi-level skilled technicians based at a depot. The traveling time between each machine and the depot is known. The time horizon is discretized into a set of periods. The state of a machine is represented by a given discrete-time Markovian model over the time horizon. There are several possible types of maintenance, each aiming to bring the machine from its current state of degradation to a better one, with a different cost and a different operating time depending on the current state of the machine. The objective is, for each period, to select a set of maintenance operations to be carried out and to construct a route for each technician in order to visit all the machines to be maintained, while minimizing the total costs. These costs include travel costs, failure costs, and maintenance costs.

A heuristic method is proposed to produce solutions, as the problem is complex and not amenable to exact solving. The proposed method iteratively inserts a new maintenance operation at a given period based on a utility cost that reflects the benefit of carrying out this operation in that period. The solution produced is afterwards improved with a local-search procedure. The effectiveness of the method, both in terms of solution quality and scaling capacity, is studied through several experiments.

16:10-17:30 Session MO3D: Uncertainty Analysis
Location: Panoramique
Bayesian Identification of Time-varying Parameters using Sequential Ensemble Monte Carlo Sampler with Variational Bayes
PRESENTER: Adolphus Lye

ABSTRACT. This work presents an extended Sequential Monte Carlo sampling algorithm embedded with a Variational Bayes step to approximate the Prediction PDF and, thereby, the Prior PDF for the next iteration of the Bayesian Filtering procedure. Known as the Variational Bayes - Sequential Ensemble Monte Carlo (VB-SEMC) sampler, this algorithm seeks to address the case where the State-evolution model does not have an inverse function. When this happens, the analytical form of the Prediction PDF cannot be determined, as it is the composite function of the current Posterior PDF and the inverse State-evolution function. To approximate the Prediction PDF from the Prediction samples, a Gaussian Mixture Model is adopted whose covariance matrix is determined via Principal Component Analysis (PCA).

As a form of verification, a numerical example involving the identification of inter-storey stiffness within a 2DOF Shear Building model is presented, whereby the stiffness parameters degrade according to a simple State-evolution model whose inverse function can be derived. The VB-SEMC sampler is implemented alongside the SEMC sampler, and the results are compared on the basis of the accuracy of the estimates, the Coefficient of Variation (COV), and the computational time. Following this, a second example is presented based on a non-linear time-series model whose State-evolution model does not yield an inverse function. The VB-SEMC sampler is implemented and the resulting estimates are compared against the true evolution model.
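The core approximation step named in the abstract — fitting a Gaussian Mixture Model to the prediction samples — can be sketched in one dimension with a minimal EM implementation. This is an illustration only: the paper's multivariate version, with covariance matrices obtained via PCA, is not reproduced here, and the synthetic data below are invented.

```python
import math
import random
import statistics

def gmm_em_1d(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM (illustrative sketch)."""
    # Deterministic initialisation: one component at each extreme of the sample.
    mu = [min(data), max(data)]
    var = [statistics.pvariance(data)] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in data:
            dens = [w[j] / math.sqrt(2 * math.pi * var[j])
                    * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                    for j in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances.
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return w, mu, var

# Synthetic "prediction samples" from a bimodal density.
rng = random.Random(42)
data = ([rng.gauss(-3.0, 1.0) for _ in range(500)]
        + [rng.gauss(3.0, 1.0) for _ in range(500)])
w, mu, var = gmm_em_1d(data)
```

Once fitted, the mixture gives a closed-form density that can serve as the Prior PDF for the next filtering iteration, which is the role the Prediction PDF approximation plays in the VB-SEMC scheme.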

PRESENTER: Bertrand Iooss

ABSTRACT. In the uncertainty quantification of a numerical simulation model output, classical approaches to quantile estimation require the availability of the full sample of the studied variable. This approach is not suitable at exascale, as large ensembles of simulation runs would need to gather a prohibitively large amount of data. This problem can be solved thanks to an on-the-fly (iterative) approach based on the Robbins-Monro algorithm. We numerically study this algorithm for estimating a discretized quantile function from samples of limited size (a few hundred observations). As, in practice, the distribution of the underlying variable is unknown, the goal is to define “robust” values of the algorithm parameters, meaning that quantile estimates have to be reasonably good in most situations. This paper presents new empirically validated iterative quantile estimators for two different practical situations: when the final number of model runs N is fixed a priori, and when N is unknown in advance (it can then be minimized during the study in order to save CPU time). This method is applied to the estimation of indicators in the field of engineering asset management for offshore wind generation. For large windfarms, asset-management Operations and Maintenance (O&M) models are assessed through Monte Carlo simulation, and risk-informed indicators such as quantiles are usually estimated a posteriori based on the results of all replications. Saving these data leads to issues regarding both computing time and memory that diminish the efficiency of the tools to support decision making. For this reason, Robbins-Monro based estimators are perfect candidates to fix the computing issues of wind-turbine O&M models. One of the main challenges is the capability of the Robbins-Monro algorithm to deal with complex stochastic output variables, as those encountered in this field may be multimodal, mixed discrete-continuous, or supported on bounded or semi-infinite intervals.
This paper shows how the proposed algorithm improves the efficiency of the tools supporting risk-informed decision making in the field of offshore wind generation.
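The on-the-fly idea can be illustrated with the textbook Robbins-Monro recursion for a single quantile. This is a minimal sketch, not the paper's estimator; the gain constant `c` below is exactly the kind of tuning parameter whose "robust" values the paper investigates:

```python
import random

def robbins_monro_quantile(stream, p, c=5.0, q0=0.0):
    """One-pass estimate of the p-quantile of a data stream.

    Each observation updates the running estimate and is then discarded,
    so nothing needs to be stored or sorted.
    """
    q = q0
    for n, x in enumerate(stream, start=1):
        # Move up when the observation exceeds the estimate, down otherwise,
        # with a step size c/n that shrinks over the iterations.
        q += (c / n) * (p - (1.0 if x <= q else 0.0))
    return q

rng = random.Random(0)
stream = (rng.gauss(0.0, 1.0) for _ in range(200_000))
est = robbins_monro_quantile(stream, p=0.9)
# the true 0.9-quantile of N(0, 1) is about 1.2816
```

Because the sample is consumed as a generator and never materialized, the memory footprint is constant regardless of the ensemble size — the property that makes this family of estimators attractive at exascale.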

PRESENTER: Alexander Kremer

ABSTRACT. Every reliability prediction is based on a suitable life model that mathematically describes the relationship between stress and life. In practice, this relation often cannot be described analytically, and Design of Experiments (DoE) is then the alternative for solving the problem. However, the DoE approach requires normally distributed residuals and is therefore not valid for life testing. For this reason, the classical DoE approach has been further developed into Lifetime-DoE (L-DoE). Life data are required to estimate the unknown model parameters. Usually, these data are obtained under extreme conditions in the test in order to draw conclusions about the failure behavior of the tested products under real operating conditions. The data are obtained using sensors that characterize constantly changing quantities such as the position of moving objects and the ambient temperature. For example, a torque measuring shaft can measure both the speed and the torque of a toothed belt drive by measuring the angle and direction of rotation of a rotating shaft, and a pyrometer can measure the ambient temperature. An important issue here is data variability, which is usually due to experimental and measurement errors. Running processes (machines) always exhibit variability due to fluctuations in the influencing factors, such as the ambient temperature. Any uncertainty in the sensors inevitably leads to uncertainties in the recorded data. Since these data are directly used for the development of a test plan on which the failure predictions are based, corresponding uncertainties are to be expected there as well. To improve reliability predictions, this paper proposes a simulative approach that incorporates the uncertainties of the test bench into life modelling based on L-DoE. Using the Monte Carlo method, the effect of the uncertainties of the installed sensors on the experimental design is projected.
This allows the uncertainty in the development of life models to be derived and its effect on failure predictions to be simulated. The result of the simulation is a life model that takes into account the uncertainties of the sensors used in the test. In addition, a confidence interval is specified to allow the sample results to be applied to the population. The importance of the method is demonstrated in a simulation study. Using the Proportional Hazard model and a full factorial experimental design, it is shown that, without consideration of test-bench uncertainties, uncertain reliability predictions can be expected.
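The Monte Carlo projection of sensor uncertainty can be sketched as follows. The life model, its parameters, and the sensor standard deviations here are all invented for illustration; the paper itself uses a Proportional Hazard model and a full factorial design:

```python
import math
import random
import statistics

# Hypothetical inverse-power-law / Arrhenius-style life model.
# Parameters a, m, b are illustrative, not fitted to any real data.
def predicted_life(torque_nm, temp_c, a=5.0e6, m=1.5, b=0.01):
    """Nominal life (hours) as a function of the measured stresses."""
    return a / torque_nm ** m * math.exp(-b * temp_c)

rng = random.Random(7)
true_torque, true_temp = 40.0, 60.0   # actual operating point
sigma_torque, sigma_temp = 0.8, 1.5   # assumed sensor uncertainties (1 sd)

# Monte Carlo: perturb the measured stresses with sensor noise and
# propagate each perturbed measurement through the life model.
lives = [predicted_life(rng.gauss(true_torque, sigma_torque),
                        rng.gauss(true_temp, sigma_temp))
         for _ in range(10_000)]

nominal = predicted_life(true_torque, true_temp)
mean_life = statistics.mean(lives)
spread = statistics.pstdev(lives)     # scatter in the prediction caused by the sensors
```

The spread of `lives` quantifies how much of the apparent variability in a test campaign is attributable to the test bench itself rather than to the product, which is what the proposed approach feeds back into the L-DoE test plan.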

Deriving Prior Knowledge from Lifetime Simulations for Reliability Demonstration while Considering the Uncertainty of the Lifetime Model

ABSTRACT. Lifetime simulations are often used to predict the failure of a product or component in field operation at early stages of product development. The results are mainly used to assess design variants and to compare requirements with the product's simulated performance. These results very often lack sufficient statistical content for further use, for example as prior knowledge in reliability demonstration testing. Many prior knowledge models described in [1-3] can be used to reduce the experimental effort or to increase the confidence level. Prior knowledge can be obtained qualitatively from an FMEA and expert knowledge, or quantitatively from field data, experimental data, previous products, or even from simulations and calculations [4]. However, existing models do not consider simulation data with confidence levels, although lifetime simulations are used in almost all development processes nowadays. Prior knowledge obtained from simulations could potentially be of great benefit for reliability demonstration, and the expenditure on simulation could thus be used more efficiently. With the presented approach, it is possible to assign a reliability statement with lifetime and confidence level to the simulation results. The tests performed to parameterize the lifetime models of the simulation are used in this work to obtain a confidence distribution of reliability at the corresponding lifetime via a non-parametric bootstrap approach. The approach in [5] also uses a bootstrap procedure for obtaining confidence levels, but neglects the uncertainty from the extrapolation to the field load level. Using an S-N curve as an exemplary lifetime model, it is shown how the necessary reliability statement can be obtained, including a confidence level. The presented approach makes it possible to express the results of lifetime simulations as a reliability statement with a confidence level, which can be used for reliability demonstration and the planning of reliability tests.
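The non-parametric bootstrap step — turning a limited test sample into a reliability statement with a confidence level — can be sketched as below. This is illustrative only: the failure times are made up, and the paper's approach additionally propagates the uncertainty of the lifetime model (e.g. the S-N curve extrapolation), which this sketch omits:

```python
import random

def bootstrap_reliability_bound(failure_times, t, n_boot=2000, alpha=0.05, seed=1):
    """Lower (1 - alpha) confidence bound on reliability R(t) = P(T > t),
    obtained by resampling the observed failure times with replacement."""
    rng = random.Random(seed)
    n = len(failure_times)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(failure_times) for _ in range(n)]
        estimates.append(sum(x > t for x in resample) / n)
    estimates.sort()
    # Percentile bootstrap: take the alpha-quantile of the resampled estimates.
    return estimates[int(alpha * n_boot)]

# 20 made-up failure times in hours.
failures = [100.0 * i for i in range(1, 21)]
point = sum(x > 500.0 for x in failures) / len(failures)  # point estimate of R(500 h)
lower = bootstrap_reliability_bound(failures, t=500.0)
```

The gap between `point` and `lower` is the price of the small sample: it is this confidence-qualified statement, rather than the bare point estimate, that can legitimately serve as prior knowledge in a reliability demonstration test.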

References
[1] Grundler, A.; Bartholdt, M.; Bertsche, B.: Statistical test planning using prior knowledge—advancing the approach of Beyer and Lauster. In: Safety and Reliability – Safe Societies in a Changing World (2018), pp. 809–814.
[2] Kleyner, A.; Elmore, D.; Boukai, B.: A Bayesian Approach to Determine Test Sample Size Requirements for Reliability Demonstration Retesting after Product Design Change. In: Quality Engineering 27 (2015), No. 3, pp. 289–295.
[3] Bartholdt, M.; Grundler, A.; Bollmann, M.; Bertsche, B.: Assurance of the system reliability of a gearbox considering prior knowledge. In: Proceedings of the International Design Conference, DESIGN 3 (2018), pp. 965–974.
[4] Krolo, A.: Planung von Zuverlässigkeitstests mit weitreichender Berücksichtigung von Vorkenntnissen. Dissertation, Universität Stuttgart, 2004.
[5] Grundler, A.; Bollmann, M.; Obermayr, M.; Bertsche, B.: Berücksichtigung von Lebensdauerberechnungen als Vorkenntnis im Zuverlässigkeitsnachweis. VDI-Fachtagung Technische Zuverlässigkeit, 2019.

16:10-17:30 Session MO3E: Organizational Factors and Safety Culture
Location: Amphi Jardin

ABSTRACT. During the terrorist attacks in Norway on the 22nd of July 2011, 77 persons were killed and many more injured. The attacks led to massive, multifaceted efforts by civil society, especially concerning the attacks at Utøya; for example, nearby civilians took part in dangerous rescue missions in private boats, and more than 250 youths were taken care of in an ad hoc rescue center at a nearby campsite. Most of the post-catastrophe research and investigations on the terror attacks have focused on the efforts of the official first responders and their respective authorities, and to a lesser degree on the role of the response from the community and bystanders. As part of the ENGAGE project, we conduct a document study to shed light on civil society contributions to societal resilience during the terror attacks at Utøya. Based on academic literature, investigation reports, newspaper articles, and autobiographical books, we reconstruct the Utøya terror attacks from what is known regarding the helpers. We emphasize four domains of analysis, where we identify and discuss i) characteristics of the academic literature on the Utøya attacks, ii) a typology of actors, iii) volunteer coping actions, and iv) contextual factors. The findings show the dynamic and autonomous nature of spontaneous volunteering, influenced by contextual factors such as the degree of trust in formal response organizations, spatial proximity, and professional and local knowledge.


ABSTRACT. Purpose: Even though there have been numerous attempts at clarifying the crucial factors for successfully implementing technological changes in organizations, research shows that such processes are very often considered unsuccessful (e.g. Dwivedi et al., 2015). Recruiting and using so-called super users when introducing new technology in organizations has become a common trend (Sitthidah & St-Mauritz, 2016). However, little is known about the criteria that should optimally ground the choice of a super user. From a safety perspective, having a well-founded process for how a super user should teach staff to use new technology is of the utmost importance, as a wrong understanding of how it works could potentially have a fatal outcome. Research question: Which criteria should be emphasized when selecting super users? Method: 10 semi-structured interviews were conducted and analyzed using thematic analysis. Results: The criteria for super users should be (1) availability and local knowledge, (2) technological skills, (3) pedagogical skills, and (4) proactiveness. Conclusion: From a safety perspective, recruiting a super user internally would help provide the important understanding of local knowledge. Further, recruiting internal staff would provide learning at an organizational level. This is demonstrated by a model called the organization's eco-system of learning (Ecso-Learn).


ABSTRACT. Professional socialization is a major challenge for safety professionals throughout their careers (Wybo & Van Wassenhove, 2016). In order to capture the complexity of the process of professional socialization of safety practitioners, a useful framework has been established around the concept of forms of identity, which encompasses both professional recognition of skills and a satisfactory career path. Forms of identity are the result of a double transaction (biographical and relational) which structures the professional socialization of individuals (Dubar, 1992). On the one hand, the evolution of professional identity is linked to a process of recognition by peers and the institution, which can be seen as a relational transaction between the safety professional and the organization. On the other hand, the construction of social identity can be seen as an internal biographical transaction polarized between continuity and rupture. Each safety professional has to deal with their own way of socializing. This paper presents the educational innovation implemented within the PSL-Mines ParisTech post-graduate master “Mastering Industrial Risks” between 2010 and 2020 as part of a design that promotes an effective learning environment (Foussard & Van Wassenhove, 2019). To support future safety professionals in this sensitive process of professional socialization, several initiatives based on the framework of forms of identity have been set up. A discussion of the relevance of this learning device is given on the basis of qualitative feedback from the alumni.

Safety and Security: A cross-professional comparison
PRESENTER: Riana Steen

ABSTRACT. From a theoretical perspective, safety and security have traditionally represented different contexts, which makes it challenging to exchange ideas, methods, and results between the two scientific fields. A distinction is therefore drawn between these two contexts based on the intentionality behind unwanted events, the way risk is understood, and the methods used to assess and manage risk. From a practical point of view, the distinction between the roles and responsibilities of these two professional communities is unclear. This study explores the extent of the commonalities and differences in the current practice of safety and security professionals. We conduct a qualitative analysis based on the results of 28 semi-structured interviews with professionals from the safety and security domains. Our study focuses on the conceptual narratives, responsibilities, and risk assessment approaches from a practical perspective. Our findings indicate that while the professionals in these two fields strongly distinguish between the contexts of their activities, they share many commonalities in their day-to-day tasks. A fundamental common problem in managing risk is that it is difficult to express uncertainty and determine how likely it is that an incident or event will happen; we are unable to give strong arguments for specific likelihood assignments of threat occurrence. Yet a likelihood can always be assigned based on available knowledge. A holistic risk management approach, integrating risk- and resilience-based thinking, acknowledges this and considers a set of qualitative and quantitative methods to reflect this (lack of) knowledge.

16:10-17:30 Session MO3F: Security
Neither hard nor soft prevention – On the need to reconfigure contemporary counterterrorism towards smart powers
PRESENTER: Martin Sjøen

ABSTRACT. Confronted with something of a modern terrorist crisis, European policymakers have reformulated counterterrorism as a task in which the state and the public are combined in preventing individuals from being radicalized towards terrorism. This division of counterterrorist responsibilities between the state and the public combines “hard” state measures with “softer” public measures. Although hard measures are usually favored by governments because they give immediate results, terrorist threats are no longer exclusively hard [1], which arguably makes it naïve to rely on hard counterterrorism alone.

Alongside the failure of the war on terror to eradicate global terrorism, a general recognition has emerged that counterterrorism is more complex than merely a question of hard military and policing strategies. At the same time, terrorism cannot be prevented solely through soft approaches either, as the rise of foreign fighters has created an increasing need for punitive measures [2]. Thus, the multidisciplinary prevention approach applied in most European countries tends to include soft prevention strategies aimed at whole populations (universal), alongside harder interventions aimed at those deemed to be at risk (selective) or specifically targeting those already radicalized (indicated). Additionally, indicated interventions can be both supporting and controlling in nature. Yet, soft and hard prevention strategies have not been adequately integrated into a comprehensive framework. In particular, the role of dialogue with [potential] extremists has not been fully developed within this preventive framework.

This study theorizes the mobilization of shared state and public counterterrorism. A particular focus is placed on the role of dialogue with potentially extremist individuals as a means of conflict transformation. Empirical examples from European countries are offered to show how dialogical strategies can help to integrate both hard state and soft public prevention into a single framework. By combining multidisciplinary prevention strategies, a comprehensive approach may allow for so-called smart-powered counterterrorism3. Dialogue in this context, we argue, is an untapped resource that deserves to be explored in theory, policy, and practice; this study makes a small contribution towards that end.

Physical security risk analysis for mobile access systems including uncertainty impact
PRESENTER: Thomas Termin

ABSTRACT. Protection against threats is a basic human need, which brings the use of security technologies into industry and everyday life. Assets should be protected from unauthorized access using suitable measures. The evaluation of security and the justified use of measures to reduce inherent vulnerability are perceived as a special challenge for the customer and the company, as usually only limited resources are available. A lack of adequate reference works and specifications in the form of concrete recommendations for action, guidelines, or standards often makes the security assessment proprietary, mostly amounting to a compliance check that is insufficiently application-specific and target-oriented in terms of cost-benefit. In the physical security assessment of critical infrastructures, a paradigm shift towards performance-based methods was initiated (Harnser 2010). The Performance Risk-based Integrated Security Methodology (PRISM) allows a performance-based assessment of physical security infrastructures. However, PRISM has so far only been conducted in the context of critical infrastructures (CRITIS) via semi-quantitative approaches and does not allow the consideration or assessment of uncertainty impact, which requires genuinely quantitative metrics. Moreover, the approach has not yet been applied to mobile access systems (MAS). This paper aims to apply the PRISM concept to the use case of MAS by extending and optimizing it to allow a holistic risk assessment with respect to uncertainty.
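The uncertainty impact argued for above can be made concrete with a small Monte Carlo propagation through the classic risk product R = P(attack) × vulnerability × consequence. This is a generic sketch, not part of PRISM itself; the parameter values and the uniform intervals are illustrative assumptions.

```python
import random

def mc_risk(p_attack, v_lo, v_hi, c_lo, c_hi, n=100_000, seed=1):
    """Propagate interval uncertainty in vulnerability V and consequence C
    through R = P(attack) * V * C by Monte Carlo sampling; return the mean
    and a 5th/95th percentile band as a quantitative uncertainty metric."""
    rng = random.Random(seed)
    samples = sorted(
        p_attack * rng.uniform(v_lo, v_hi) * rng.uniform(c_lo, c_hi)
        for _ in range(n)
    )
    return {"mean": sum(samples) / n,
            "p05": samples[int(0.05 * n)],
            "p95": samples[int(0.95 * n)]}

# Illustrative: attack probability 0.1, vulnerability in [0.2, 0.6],
# consequence in [1, 5] million.
r = mc_risk(0.1, 0.2, 0.6, 1e6, 5e6)
```

The width of the resulting percentile band is exactly the kind of uncertainty information a purely semi-quantitative scoring cannot express.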


1. Harnser, A Reference Security Management Plan for Energy Infrastructure Prepared for the European Commission, (2010).

Cyber and Electromagnetic Activities and Their Relevance in Modern Military Operations
PRESENTER: Radovan Vasicek

ABSTRACT. The paper examines issues related to the coexistence and integration of military activities conducted within cyberspace and the electromagnetic environment, as inseparable parts of our security environment. Contemporary and emerging security threats, as well as lessons learned from recent military operations, have already proved that in order to achieve operational objectives in the traditional physical domains (land, air, maritime, space), it is crucial to ensure dominance in the non-physical domains, i.e. cyberspace, the electromagnetic environment, and the information environment. As these domains overlap one another while being exploited by multiple military and non-military stakeholders and actors, it is necessary to identify the overlaps. At the same time, to deliver a synergic effect, an operational battle staff needs to deconflict, coordinate, synchronize, and integrate cyber and electromagnetic activities (CEMA) with other supporting activities (e.g. intelligence, information operations, etc.). The authors describe the fundamentals of the CEMA concept, supported by a case study of its practical employment in military operations. They also compare various approaches to implementing this concept in selected armed forces and security organizations. Based on the findings of this comparison, common and specific features of the different approaches are identified. The research identifies potential threats to friendly armed forces stemming from a failure to reflect and acknowledge the ever-growing complexity of operations in the non-physical operating domains. The results and findings presented in the article can be used during the implementation of the CEMA concept in the doctrinal documents of national armed forces.

PRESENTER: Moritz Schneider

ABSTRACT. Threats posed by civilian drones are becoming an increasing security risk for critical infrastructures as well as for events and companies. In order to protect an asset against a drone intrusion, a security system is necessary, which in general is described by its capabilities of protection, detection, and intervention (Garcia, 2017). The variety of threat scenarios posed by drones raises the need for a detailed analysis of scenario-specific requirements on detection systems. Thus, in this paper a scenario analysis is conducted to identify consistent threat scenarios, including factors that are critical for drone detection. The study is based on a method for conducting scenario analyses called Morphological Analysis (Johansen, 2018). In this process, factors influencing the subject of investigation are collected by means of literature reviews and expert interviews. In particular, factors that influence detectability are specified, such as the classification of the drone as described in Hassanalian and Abdelkefi (2017). In order to identify key factors, an influence analysis is performed, which evaluates the relevance of the factors with regard to the detection capability. Subsequently, feasible characteristics of these key factors are identified based on literature reviews and expert interviews. To assess the internal consistency of a scenario, a cross-impact analysis is conducted. This methodology includes a pairwise assessment of the joint occurrence of characteristics to eliminate inconsistent combinations (Weimer-Jehle, 2006). The remaining consistent scenarios can be applied to derive requirements for a detection system or to validate existing drone detection systems regarding their suitability for feasible threat scenarios.
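The cross-impact consistency step can be sketched as a filter over the Cartesian product of factor characteristics: any combination containing a pair rated inconsistent is discarded. The factors, characteristics, and the single inconsistency rating below are invented placeholders, not the study's actual morphological field:

```python
from itertools import product

# Hypothetical morphological field: factors and feasible characteristics.
factors = {
    "drone_class": ["fixed-wing", "multirotor"],
    "altitude":    ["low", "high"],
    "speed":       ["slow", "fast"],
}

# Pairwise cross-impact ratings: pairs judged inconsistent are listed here.
# Illustrative rule: a fixed-wing drone cannot fly slowly.
inconsistent = {frozenset({("drone_class", "fixed-wing"), ("speed", "slow")})}

def consistent_scenarios():
    """Yield every characteristic combination with no inconsistent pair."""
    names = list(factors)
    for combo in product(*factors.values()):
        pairs = list(zip(names, combo))
        if all(frozenset({a, b}) not in inconsistent
               for i, a in enumerate(pairs) for b in pairs[i + 1:]):
            yield dict(pairs)

scenarios = list(consistent_scenarios())
```

Of the 2 × 2 × 2 = 8 raw combinations, the single inconsistency rating above removes two, leaving six internally consistent scenarios.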

16:10-17:30 Session MO3G: Oil and Gas Industry
Location: Atrium 3
Novel Application of Technology in Subsea Safety Instrumented System: Battery-based Shutdown System

ABSTRACT. The all-electric paradigm shift in the subsea oil & gas industry brings with it several new technologies, including novel uses of existing technologies. Key components in this paradigm are battery systems and battery management systems (BMS). This paper investigates features of commercially available BMSs and how they can be used in subsea valve actuation, and more specifically subsea Christmas Tree (XT) barrier valve actuation. In this way, the safety challenges of implementing a battery-based shutdown system are investigated, in which motor control systems, batteries, and BMSs are vital elements. These technologies are all well known, though not commonly used in subsea safety valve actuation applications, which challenges the perception of a subsea safety system as well as the current requirements and regulations. In a case study, the all-electric control system architectures proposed by Okoh et al. (2019) are used to discuss impact on, and compliance with, functional safety requirements and oil & gas industry-specific requirements. The safety challenges for the architectures can be overcome; however, this paper reveals a demand for updated oil & gas-specific requirements to accommodate novel applications of technology. It is evident that, to reach an optimal safety solution for the different architectures, the requirements should be written such that the safety-related features can be optimized with regard to the true objective of the safety system, enabling complexity minimization through best-practice safety engineering.

Rescue of Personnel after Emergency Evacuation of Offshore Petroleum Installations
PRESENTER: Jan-Erik Vinnem

ABSTRACT. The petroleum industry in Norway has developed a set of guidelines (Ref. 1) on how cooperation on area-based emergency response should be practiced. The standard covers emergency evacuation but has limited focus on the rescue of personnel after evacuation. Nevertheless, the Norwegian Petroleum Regulations (Ref. 2) require that operators on the Norwegian Continental Shelf have provisions for the rescue of personnel after evacuation. Rescue in this context implies picking up, or escorting, evacuated personnel in the water, life-rafts, or lifeboats, and bringing them to a safe location. Rescue of personnel involves the use of shared area-based resources, such as fast rescue craft (FRC) from emergency response and rescue vessels (ERRVs) and search and rescue (SAR) helicopters. Rescued and injured personnel are given lifesaving first aid and medical treatment and transported to an onshore base. This paper reviews lessons learned from emergency evacuations to sea on the Norwegian Continental Shelf and in worldwide offshore oil and gas operations. The research is based on a study performed for the Petroleum Safety Authority [Norway] in 2020 (Ref. 3), in which rescue of personnel from the sea after emergency evacuation was one of the main topics addressed. The lessons learned cover topics such as availability of resources; cooperation between resources; responsibilities when utilizing shared resources; and performance-influencing factors for successful rescue operations.


ABSTRACT. The purpose of the work presented in this paper, which has been conducted for the Petroleum Safety Authority Norway (PSA), is to give the oil and gas industry a better understanding of the role and vulnerability of communication networks, especially in emergency situations when a defined situation of hazard and accident (DSHA) has occurred. The paper focuses on external communication between offshore and onshore in emergency situations, i.e., emergency communication to land. This is part of a larger project where the main goal has been to gain knowledge about risks, threats, vulnerabilities, and the importance of ICT security for industrial systems (SINTEF, 2020; PSA 2020). The work is mainly based on document reviews, interviews, and work meetings. Interviews have been conducted with selected oil companies, rig companies and telecom operators. The work has been carried out in an interdisciplinary project team. The content includes: i) the role of external communication networks during DSHAs, ii) risks and vulnerabilities in the communication networks, iii) consequences of the loss of connectivity, and iv) challenges and suggestions for improvements of regulations and standards. Fifteen recommendations are proposed regarding measures for the industry, four of which are aimed at changes in standards (NORSOK T-101:2019; NORSOK T-003:2019), and eight recommendations are given regarding measures for PSA, one of which is aimed at supervision and the other at changes in regulations.


ABSTRACT. The Norwegian oil and gas industry is being digitalized in search of more efficient operations, increased extraction of resources, and improved HES. Although this offers apparent opportunities, considerable challenges also arise when a traditional and safety-oriented industry adopts modern information technologies, including cloud-based services. We have interviewed several actors to understand the drivers of ongoing digitalization processes and to uncover challenges related to the increased coupling between IT (information technology) and OT (operational technology), including safety-instrumented systems. The main findings from the interviews and analysis highlight growing coupling from OT systems to IT systems and further to cloud-based solutions, with increasing amounts of data flowing upwards. We found a sound awareness of not imposing control from IT onto OT; there are, however, reasons for concern, which are discussed in the paper. We discuss the main findings and their potential implications and conclude with a series of recommendations to the industry and supervisory authorities.

16:10-17:30 Session MO3H: Railway Industry
Location: Cointreau
Statistical Assessment of Safety Levels of Railway Operators
PRESENTER: Jens Braband

ABSTRACT. Recently, the European Union Agency for Railways (ERA) received a mandate for “the development of common safety methods for assessing the safety level and the safety performance of railway operators at national and Union level” [2]. Currently, several methods are under development. It is of interest how a possible candidate would behave and what the advantages and disadvantages of a particular method would be. In this paper, we study a version of the procedure. On the one hand, we analyse it based on the theory of mathematical statistics. As a result, we present a statistically efficient method, the rate-ratio test, based on a quantity that has smaller variance than the quantity handled by the ERA. We then support the theoretical results with a simple simulation study in order to estimate the error probabilities of the first and second kind. In particular, we construct alternative distributions that the decision procedure cannot distinguish. We show that the use of procedures that are optimal in the sense of mathematical statistics, combined with the use of a characteristic that has small spread – here the number of accidents – is advantageous.
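A rate-ratio test of the kind referred to here can be sketched with the standard exact conditional construction (not necessarily the authors' exact procedure): given two Poisson accident counts over known exposures, conditioning on the total count makes the first count binomial under the null hypothesis of equal rates.

```python
from math import comb

def rate_ratio_test(x1, t1, x2, t2):
    """Exact conditional test of H0: lambda1 == lambda2 for two Poisson
    accident counts x1, x2 observed over exposures t1, t2. Conditional on
    n = x1 + x2, x1 ~ Binomial(n, t1 / (t1 + t2)) under H0; the two-sided
    p-value sums all outcomes no more probable than the observed one."""
    n, p0 = x1 + x2, t1 / (t1 + t2)
    pmf = lambda k: comb(n, k) * p0**k * (1 - p0)**(n - k)
    p_obs = pmf(x1)
    return min(1.0, sum(pmf(k) for k in range(n + 1) if pmf(k) <= p_obs + 1e-12))
```

With equal exposures, 5 vs. 5 accidents gives a p-value of 1 (no evidence of a difference), while 15 vs. 2 is rejected at conventional levels.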

Application of the Cox Regression Model for analysis of Railway Safety Performance
PRESENTER: Hendrik Schäbe

ABSTRACT. The assessment of in-service safety performance is an important task, and not only in railways. For example, it is important to identify deviations early, in particular possible deterioration of safety performance, so that corrective actions can be applied in time. On the other hand, the assessment should be fair and objective and rely on sound and proven statistical methods. A popular means for this task is trend analysis. This paper defines a model for trend analysis and compares different approaches, e.g. classical and Bayesian approaches, on real data. The examples show that, in particular for small sample sizes, e.g. when individual railway operators are assessed, the Bayesian prior may influence the results significantly.
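The prior sensitivity for small samples can be illustrated with the conjugate gamma-Poisson model often used for accident rates; the counts, exposures, and prior parameters below are illustrative assumptions, not the paper's data:

```python
def gamma_poisson_posterior(alpha0, beta0, accidents, exposure):
    """Conjugate Bayesian update for a Poisson accident rate:
    prior Gamma(alpha0, beta0) -> posterior Gamma(alpha0 + x, beta0 + t);
    returns the posterior mean rate (alpha0 + x) / (beta0 + t)."""
    return (alpha0 + accidents) / (beta0 + exposure)

# Small operator: 2 accidents over 4 units of exposure (illustrative).
weak   = gamma_poisson_posterior(0.5, 1.0, 2, 4)    # vague prior
strong = gamma_poisson_posterior(10.0, 10.0, 2, 4)  # confident prior near rate 1

# With 100x the data, the prior's influence largely disappears.
weak_big   = gamma_poisson_posterior(0.5, 1.0, 200, 400)
strong_big = gamma_poisson_posterior(10.0, 10.0, 200, 400)
```

For the small operator the two priors yield posterior mean rates of 0.5 and roughly 0.86, a substantial difference driven purely by the prior; with the larger sample the two posteriors nearly coincide.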

Research on Fault Propagation Characteristics of Fully Automated Operation System Based on Complex Network

ABSTRACT. This paper selects the VOBC subsystem of the fully automatic operation system of the Beijing Metro Yanfang line as the research object. By analyzing the fault data of the VOBC subsystem, a fault propagation model is established based on complex network theory, with failure events as nodes and causal relationships between failure events as edges. The statistical characteristics of the fault propagation network are analyzed from different angles. Through this analysis, the key failure events in the network are identified, and its scale-free and clustering characteristics, as well as the role and influence of the nodes on the network, are demonstrated. In order to quantitatively analyze the fault propagation path and further uncover the fault propagation law, the propagation of hazard degree in the network is analyzed and the most probable fault propagation path is obtained based on the load-capacity model from cascading failure theory. Furthermore, by varying the model parameters, the fault propagation paths under different initial hazard degrees are compared and analyzed. It is found that, while controlling cost, reducing the initial hazard degree of a failure event through suitable measures can shorten the length of the most probable fault propagation path and improve the reliability of system operation.
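The load-capacity propagation idea can be sketched on a toy failure-event network: a node fails when the incoming hazard exceeds its capacity and forwards an attenuated hazard to its successors. The topology, capacities, and decay parameter below are invented for illustration and bear no relation to the actual VOBC data:

```python
# Hypothetical failure-event graph and per-node capacities.
edges = {"E1": ["E2", "E3"], "E2": ["E4"], "E3": ["E4"], "E4": []}
capacity = {"E1": 1.0, "E2": 0.5, "E3": 2.0, "E4": 0.8}

def cascade(start, hazard, decay=0.7):
    """Propagate an initial hazard degree from `start`; a node fails and
    forwards decay * hazard to its successors when the incoming hazard
    meets or exceeds its capacity. Returns the failed nodes in order."""
    failed, frontier = [], [(start, hazard)]
    while frontier:
        node, h = frontier.pop(0)
        if node in failed or h < capacity[node]:
            continue
        failed.append(node)
        frontier.extend((succ, decay * h) for succ in edges[node])
    return failed
```

Running `cascade("E1", 2.5)` fails three nodes while `cascade("E1", 1.5)` fails only two, mirroring the abstract's finding that lowering the initial hazard degree shortens the most probable propagation path.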

A Case Study on Managing the Complexity of Service Failure Modes in IoT Systems
PRESENTER: Marc Zeller

ABSTRACT. With the release of IoT devices (sensors and electronic equipment with internet access), an increasing amount of Operational Technology (OT) functionality depends on Information Technology (IT) services. When the OT is involved in the monitoring or control of critical infrastructure such as railways, energy transmission, or healthcare, these edge-based or cloud-based IT services become mission-critical. Therefore, they need to be included in the reliability analysis of the system. Functions of a system that are partly implemented by OT and IT services are subject to a significant number of failure modes. These failure modes and their effects need to be assessed during the reliability analysis of such complex and heterogeneous systems. The analysis of combinations of component failure modes becomes a challenge, because the system's failure mode analysis needs to address the combinatorial correlation of component failure modes.

To manage the combinatorial complexity of component failure modes in heterogeneous IoT systems realizing mission-critical functionalities, we propose the application of the Component Fault Tree (CFT) methodology \cite{Kaiser2018}. The CFT approach is a model- and component-based methodology for assessing the failure behaviour of mission-critical systems. This failure behaviour is used to document that a system is reliable and can also be used to identify drawbacks in the design of a system. It has the same expressive power as classic fault trees. Moreover, the CFT methodology enables the automated creation of the resulting fault trees of a complex system based on the descriptions of individual components and has already been applied successfully in industrial practice \cite{KaiHofig2018}. For the reliability analysis of heterogeneous IoT systems, the components' failure modes, including those of the OT devices and the IT services, are each modelled by Component Fault Tree elements. The automatically generated system fault trees show benefits in comparison to traditional FMECA or fault tree analysis approaches for complex systems, since the experts can focus on modelling the failure behaviour of individual OT/IT components while the combinatorial correlation of the component failure modes is generated by the CFT methodology.

We illustrate the benefits of the CFT methodology for the reliability analysis of complex and heterogeneous IoT systems with a case study from the domain of railway control systems: traffic control systems provide crucial functions, which are enhanced by traffic management systems. Traditionally, traffic control systems provide their functionality through dedicated OT hardware and software. Nowadays, an increasing number of functionalities relies on additional IT services (cloud-based services), which bring their own diverse failure modes.
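The composition idea behind component-based fault trees can be sketched with plain AND/OR gate arithmetic: each component contributes its own small failure model, and the system tree is assembled from them. Independence of failure modes is assumed, the component structure and probabilities are invented, and a real CFT additionally models typed input/output ports and reusable subtrees:

```python
def q_or(*qs):
    """OR gate: the output fails if any input fails (independence assumed)."""
    p_ok = 1.0
    for q in qs:
        p_ok *= (1.0 - q)
    return 1.0 - p_ok

def q_and(*qs):
    """AND gate: the output fails only if all inputs fail (redundancy)."""
    p = 1.0
    for q in qs:
        p *= q
    return p

# Component "fault trees" (illustrative probabilities per demand):
q_ot    = q_or(1e-4, 2e-4)      # OT device: hardware fault OR software fault
q_cloud = q_and(5e-3, 5e-3)     # redundant cloud service instances
q_system = q_or(q_ot, q_cloud)  # function fails if the OT or cloud path fails
```

Assembling the system value from per-component models is exactly the division of labour the CFT methodology automates: experts model `q_ot` and `q_cloud` locally, and the combinatorial system tree falls out of the composition.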

16:10-17:30 Session MO3I: AI for safe, secure and dependable operation of complex Systems
Location: Giffard
Semi-Supervised Learning with Temporal Variational Auto-Encoders for Reliability

ABSTRACT. Within the field of fault diagnostics and prognostics of industrial machinery and systems, deep learning models have risen in popularity in recent years, mostly due to their ability to automatically extract features from multi-sensor data. Nevertheless, while novel sensing technology has made it possible for mainstream industry to equip physical assets with plenty of sensors and therefore acquire massive quantities of operational data, the labeling process (i.e., identification of health states) of such data remains an open problem that must be overcome in order to use the data effectively alongside AI techniques. A novel solution to this problem is to develop algorithms with semi-supervised capabilities that can use both the scarce and expensive-to-produce labeled portion of the data and the abundant unlabeled data samples. In this paper, we present a coupled training algorithm that conjunctively trains a fully unsupervised variational auto-encoder alongside a fully supervised recurrent neural network to perform fault diagnosis and prognosis as well as remaining useful life prediction using time series as input data. The coupled training of the model encodes information from both the supervised and unsupervised portions of the data into the gradients, effectively performing semi-supervised learning. We demonstrate the proposed approach with a prognostics case study involving turbofan data from the well-known C-MAPSS benchmark dataset.

Bearing Fault Diagnosis Method Based on Multi-Class Support Vector Machine and Grey Relational Degree
PRESENTER: Boyang Zhao

ABSTRACT. In modern machinery manufacturing and applications, components are becoming more and more interdependent. When one part fails, it may affect the normal operation of the entire equipment. This is why fault diagnosis has become crucial in industrial applications: finding the problematic parts as early as possible can avoid costly accidents. This paper proposes a rolling bearing fault diagnosis technique based on the grey relational degree (GRD) and a multi-class support vector machine. First, the public dataset of the Bearing Data Center of Case Western Reserve University (CWRU) is used to form a sample set and build a multi-domain feature set from the perspectives of the time domain and information entropy. Next, we use the ReliefF algorithm to reduce the dimensionality of the feature set and establish a new feature subset. Then, a grey relational degree classifier and a multi-class support vector machine classifier are used for fault diagnosis, respectively. Lastly, the fault diagnosis performance of the two classifiers is compared. The results show that both classifiers can efficiently identify rolling bearing faults, and the multi-class support vector machine classifier performs better.
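The grey relational degree classifier can be sketched in a few lines: grey relational coefficients are computed between the test feature vector and each class's reference vector, with the min/max absolute differences taken globally over all classes, and the class with the highest mean coefficient wins. The feature values below are illustrative placeholders, not CWRU data:

```python
def grd_classify(x0, references, rho=0.5):
    """Classify feature vector x0 by grey relational degree against class
    reference vectors. The grey relational coefficient for feature k is
    (dmin + rho*dmax) / (|x0[k]-ref[k]| + rho*dmax), with dmin/dmax the
    global min/max difference over all classes; rho is the usual
    distinguishing coefficient (0.5 by convention)."""
    diffs = {c: [abs(a - b) for a, b in zip(x0, ref)]
             for c, ref in references.items()}
    flat = [d for ds in diffs.values() for d in ds]
    dmin, dmax = min(flat), max(flat)
    grd = {c: sum((dmin + rho * dmax) / (d + rho * dmax) for d in ds) / len(ds)
           for c, ds in diffs.items()}
    return max(grd, key=grd.get), grd

# Hypothetical normalised reference features per health state.
refs = {"normal": [0.1, 0.2, 0.1], "inner_race_fault": [0.8, 0.9, 0.7]}
label, scores = grd_classify([0.75, 0.85, 0.72], refs)
```

A test sample close to the fault reference pattern receives a much higher relational degree for that class and is labelled accordingly.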

A Temporal Pyramid Pooling-Based Convolutional Neural Network for Remaining Useful Life Prediction

ABSTRACT. Remaining Useful Life (RUL) prediction is crucial in Prognostics and Health Management (PHM), as it provides the basis for predictive maintenance planning. Deep neural networks such as the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network have been widely applied to RUL prediction due to their powerful feature learning capabilities on high-dimensional sensor data. The sliding time window method with a predefined window size is typically employed to generate data samples for training such deep neural networks. However, the disadvantage of a fixed-size time window is that the resulting predictive model cannot be applied to new sensor data shorter than the predetermined window size. Besides, as the length of sensor data varies from one unit to another, a fixed and subjectively set window size may be inappropriate and impair the prediction model's performance. Therefore, in this paper we propose a Temporal Pyramid Pooling-based Convolutional Neural Network (TPP-CNN) to increase model practicability and prediction accuracy. With the temporal pyramid pooling module, we can generate data samples of arbitrary time window sizes and use them as CNN inputs. In the training phase, the CNN learns to capture temporal dependencies of different lengths, since we feed in samples with different time window sizes. In this way, the learned model can be applied to test data of arbitrary length, and its predictive ability is also improved. The proposed TPP-CNN model is validated on the C-MAPSS turbofan engine dataset, and experiments demonstrate its effectiveness.
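The pooling module's key property, a fixed-length output for any input length, can be sketched for a single 1-D sensor channel. This is a simplified stand-in for the paper's module, with assumed pyramid levels 1-2-4 and max-pooling:

```python
def temporal_pyramid_pool(seq, levels=(1, 2, 4)):
    """Max-pool a variable-length 1-D sequence into a fixed-size vector:
    at pyramid level L the sequence is split into L contiguous segments,
    each reduced to its maximum. Output length = sum(levels) = 7 here,
    independent of len(seq)."""
    out, n = [], len(seq)
    for L in levels:
        for i in range(L):
            lo, hi = (i * n) // L, ((i + 1) * n) // L
            # For very short sequences a segment can be empty; fall back
            # to the global maximum so the output size stays fixed.
            out.append(max(seq[lo:hi]) if hi > lo else max(seq))
    return out
```

Any downstream dense layer can therefore accept windows of arbitrary length, which is exactly what frees the model from a single predetermined window size.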

16:10-17:30 Session MO3J: Decision Science for resilience
Location: Botanique

ABSTRACT. This study proposes a novel method to assess damage to the built environment and its corresponding economic impact after a natural disaster, using a deep learning workflow to quantify it. Using an automated crawler, aerial images from before and after natural disasters at 50 epicenters worldwide were obtained from Google Earth, generating a database of 10,000 aerial images with a spatial resolution of 2 m per pixel. The study starts by using the U-Net algorithm [1] to perform semantic segmentation of the built environment from the satellite images in both instances (prior to and after the natural disaster). U-Net is one of the most popular and general CNN architectures for image segmentation, and the version used here reached an accuracy of 95.5% in the segmentation. After the segmentation, we compared the disparity between the two cases, represented as a percentage of change. To characterize the events more precisely as a numerical vector, the geographical characteristics of the location (climate, hydrography, and population) and feature descriptors of the satellite images were considered. Moreover, we added specific details about each disaster from the EM-DAT database [2], such as type of event, magnitude, number of people affected, material losses, and investment in the housing sector by humanitarian organizations. These numerical features were fed into a clustering algorithm, Self-Organizing Maps (SOM) [3], to cluster similar disasters according to their previously assigned characteristics. In this way, a map of changes is created in which the 50 natural disasters are organized according to the change that occurred and the respective response and consequence. This map of changes serves as a predictor for future cases, allowing the prediction of potential changes based on a satellite image and the sector's geographical conditions.
With this information, an urban planning process can begin immediately to mitigate the impact of the disaster.
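The SOM clustering step can be sketched with a minimal 1-D map in pure Python; the training schedule and the toy 2-D feature vectors are assumptions for illustration, not the study's disaster features:

```python
import math, random

def train_som(data, n_units=4, epochs=50, lr0=0.5, seed=0):
    """Minimal 1-D self-organizing map: each unit holds a weight vector;
    for each sample, the best-matching unit (BMU) and its neighbours
    move toward the sample, with learning rate and neighbourhood width
    decaying over the epochs."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = max(1.0, n_units / 2 * (1 - e / epochs))
        for x in data:
            bmu = bmu_of(x, units)
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2 * sigma ** 2))
                units[u] = [w + lr * h * (v - w) for w, v in zip(units[u], x)]
    return units

def bmu_of(x, units):
    """Index of the unit whose weights are closest to x."""
    return min(range(len(units)),
               key=lambda u: sum((w - v) ** 2 for w, v in zip(units[u], x)))

# Two well-separated toy "disaster feature" clusters.
data = [[0.0, 0.05], [0.05, 0.0], [0.1, 0.1], [0.05, 0.1],
        [0.9, 0.95], [0.95, 1.0], [1.0, 0.9], [0.9, 0.9]]
units = train_som(data)
```

After training, dissimilar events map to different units of the grid, which is the "map of changes" idea in miniature.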

Impact of distributed decision-making on energy and social systems' resilience: a case study of solar photovoltaic in Switzerland

ABSTRACT. Solar photovoltaic (“PV”) adoption has largely occurred as the result of a distributed decision-making process, whereby individual people and businesses install the technology on their own property. The decentralization of energy systems decision-making is challenging for energy systems planners, who no longer wholly control systems development. In addition, the factors motivating individual solar adopters may differ from those traditionally governing energy systems infrastructures. Electricity systems and their interdependent critical infrastructures may therefore be exposed to new risk, reliability, and resilience challenges due to solar PV deployment. By contrast, solar PV deployment could also address some of these challenges by providing diversity and flexibility in energy generation, as well as by lowering the cost of energy for self-consumers. Given solar PV's complex impacts, energy system planners ought to be provided with decision-making support for understanding how certain policies will affect overall systems resilience. To that aim, we explore the impacts of distributed decision-making on energy and social resilience using solar PV uptake in Switzerland as a case study. Considering the capability of solar PV to provide energy autonomy during normal or emergency operation and financial relief from energy bills, we define a metric for combined energy and social resilience using total energy produced and relative income. We then apply an optimization model to compare historical solar PV deployment to that which might be considered optimal for a combined energy and social resilience objective. We also compare our results to those obtained when considering either energy or social equity only. Our methods are readily applicable to decision-makers in other jurisdictions and can be tailored to account for different policy goals, thereby contributing to resilience-based decision-making in practice.

PRESENTER: Alice Alipour

ABSTRACT. A transportation network is a critical infrastructure system that serves everyday life, provides the backbone of the economy, and is critical to national safety. Careful design of such a system is required to ensure smooth daily operation under stable conditions. However, with ever-changing climates, transportation systems are exposed to significant weather-related hazards, with flooding events shown to be the dominant hazard in the U.S. due to their frequency and intensity. While flood events affect state agencies by requiring direct tax-dollar investments to repair damage, they also adversely influence communities by producing substantial indirect losses, which can motivate state agencies, asset owners, and planners to develop cost-effective mitigation strategies. However, the uncertainty of flooding and the inter-system interdependency between infrastructures (such as roads and bridges) and traffic users on the one hand, and limited budgetary resources on the other, combine to challenge the design of a cost-effective risk mitigation strategy. This is exacerbated by the fact that the indirect losses associated with closures resulting from damaged assets are difficult to estimate. To address these gaps, this paper develops an integrated risk assessment method that synthesizes various inputs, including hazards (here inland flooding), geographic features, the spatial distribution of assets, and traffic, to simulate a real-life transportation system. This framework is capable of estimating physical infrastructure damage as well as quantitatively evaluating the indirect losses to traffic users, such as traffic delays and opportunity costs, closely associated with flood risk.
Based on this risk assessment, a curve of the annual probability of exceeding a given level of monetary flood risk can be fitted from simulations of various flood scenarios, and decision-makers can use this flood risk curve, along with community-based expectations of acceptable risk, to implement proper mitigation strategies.


ABSTRACT. Complex engineering systems are of paramount importance for the correct operation of the installations that allow modern society and its economy to function. These systems constantly face uncertain and potentially damaging conditions that may alter their operational performance. New system designs should consider safety aspects that maintain safe operating conditions while coping with disruptive events. In response to this need, the relatively new discipline of resilience engineering has been formulated to improve the safety of such complex systems. Resilience assessments must be carried out to study system recovery after a disruptive event has occurred. Probabilistic models such as fault tree or event tree analyses have been widely applied in safety-critical sectors such as the process and nuclear industries due to their flexibility in modelling complex engineering systems and quantifying uncertainty. However, such techniques limit the modelling scope when representing the interdependencies of the components in the system and variations in time over a disruptive event. Moreover, additional complications in the resilience assessment process arise when considering the epistemic uncertainty due to lack of knowledge about the events and the operating conditions. Dynamic credal networks are proposed in this work to model complex systems whose performance evolves in time. The methodology aims to quantify resilience in terms of the availability of the components. The novelty of this work resides in the development of a resilience assessment framework that takes into account the epistemic uncertainty arising from sparse or defective data. The resilience assessment of the key safety systems of an Advanced Thermal Reactor is carried out to evaluate system recovery after a mishap by adopting the dynamic credal network approach.
The application of the proposed approach to producing a resilience analysis is described, and results are presented to demonstrate the applicability of the method.
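The interval flavour of a credal analysis can be illustrated with simple availability bounds: when each component's availability is known only as an interval (epistemic uncertainty from sparse data), series and parallel compositions are monotone in the component availabilities, so system bounds follow directly from the interval endpoints. The structure and intervals below are illustrative, not the reactor case study, and independence is assumed:

```python
def interval_series(avails):
    """Availability bounds for components in series, each given as an
    (lo, hi) interval: the product is monotone increasing in each
    component, so the bounds are products of the endpoints."""
    lo = hi = 1.0
    for a_lo, a_hi in avails:
        lo *= a_lo
        hi *= a_hi
    return lo, hi

def interval_parallel(avails):
    """Bounds for a redundant (parallel) group: A = 1 - prod(1 - a_i).
    The lowest unavailability uses the highest component availabilities."""
    u_lo = u_hi = 1.0
    for a_lo, a_hi in avails:
        u_lo *= (1 - a_hi)
        u_hi *= (1 - a_lo)
    return 1 - u_hi, 1 - u_lo

# Hypothetical safety train: two redundant pumps feeding one heat exchanger.
pumps = interval_parallel([(0.90, 0.95), (0.90, 0.95)])
system = interval_series([pumps, (0.98, 0.99)])
```

A full dynamic credal network propagates such bounds through conditional probability intervals over time, but the output has the same form: guaranteed lower and upper bounds on availability rather than a single point estimate.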