08:30-10:10 Session WE1A: System Reliability
Location: Auditorium
Utilization of multilevel flow modeling to support passive safety system reliability assessment
PRESENTER: Zhiao Huang

ABSTRACT. It has become a hallmark of many newly designed advanced nuclear reactors to incorporate several passive safety systems, which do not need external input (especially energy) to operate, not only to enhance the operational safety of the reactors but also to eliminate the possibility of serious accidents. Accordingly, assessing the reliability of passive safety systems is a crucial issue to be resolved before their extensive use in future nuclear power plants. However, this assessment is a challenging assignment because physical laws and thermodynamic principles are substantially involved in the function of such systems. Hence, consistent efforts are required to qualify the reliability of passive safety systems. Over the past decades, a few methodologies, such as reliability evaluation of passive safety systems (REPAS), reliability methods for passive safety functions (RMPS), and analysis of passive systems reliability (APSRA), have been developed to assess the reliability of various passive safety systems. The failure of a passive safety system to fulfil its intended function is believed to comprise not only traditional mechanical failures of structural components but also phenomenological failures, such as low natural circulation flow. While phenomenological failures cannot be evaluated directly by deterministic approaches, probabilistic methods deploying Monte Carlo (MC) sampling and thermal-hydraulic (T-H) codes are often used. However, the computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. Under these circumstances, a surrogate model should be developed as a valid replacement of the long-running T-H codes to make the computation of the reliability of a nuclear passive system tractable.
Multilevel Flow Modeling (MFM) is a method for modeling complex processes on multiple levels of means-end and part-whole abstraction. The method has been applied to a wide range of processes, including power plants, chemical engineering plants and power systems. Exploiting these advantages, an MFM model of the reference passive safety system is built by representing the component functions and phenomenological functions through mass flow and energy flow functions. The failure or success of the reference passive safety system can then be described by a combination of the states of the various functions in the MFM model. In doing so, the MFM model can be used as a surrogate model to infer the status of the system under various conditions through its own fast-running reasoning tools. Finally, a reliability assessment process for passive safety systems is realized by coupling the MFM model with MC sampling and a limited number of T-H code simulations. This method can greatly reduce the computational effort and makes the industrial application of passive safety system reliability assessment more feasible.
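The coupling described above can be sketched as follows: a fast surrogate screens the Monte Carlo samples, and only ambiguous cases are sent to the long-running T-H code. The input distribution, the toy "T-H code" and the surrogate rule below are all invented for illustration and merely stand in for the MFM reasoning and thermal-hydraulic simulation.

```python
import random

random.seed(0)

# Toy stand-ins: "expensive_th_code" plays the role of a long-running
# T-H simulation, "fast_surrogate" the role of the MFM reasoning tools.
def expensive_th_code(flow):
    return flow > 0.3          # success if natural circulation is adequate

def fast_surrogate(flow):
    if flow > 0.35:
        return "success"
    if flow < 0.25:
        return "failure"
    return "ambiguous"         # only these samples go to the T-H code

n, failures, th_runs = 20_000, 0, 0
for _ in range(n):
    flow = random.gauss(0.5, 0.15)     # assumed uncertain input
    verdict = fast_surrogate(flow)
    if verdict == "ambiguous":
        th_runs += 1
        verdict = "success" if expensive_th_code(flow) else "failure"
    failures += verdict == "failure"

print(f"failure probability ~ {failures / n:.4f}, "
      f"T-H runs: {th_runs} of {n} samples")
```

In this toy setting the surrogate resolves most samples itself, so the number of expensive T-H runs is a small fraction of the total sample size.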

Detection and Localization of Time Shift Failures in Timed Event Graphs: Application to a Remanufacturing Line
PRESENTER: Eric Gascard

ABSTRACT. Over the last few decades, industrial companies have developed remanufacturing processes to respond to the challenge of a sustainable future by limiting the over-exploitation of natural resources. In addition to environmental benefits, remanufacturing also provides opportunities for the creation of skilled jobs and for economic profits.

Remanufacturing is defined as the transformation of an end-of-life product into a product with at least its original performance and an equivalent warranty. It consists of the following five steps: (1) product disassembly; (2) cleaning of all components; (3) component inspection and sorting; (4) component reconditioning; (5) product reassembly and test.

The correct temporal execution of this remanufacturing flow must be ensured to satisfy the specified timing constraints of the process. However, a remanufacturing line may be subject to time shift failures when a piece of remanufacturing equipment is late in performing its function. Time shift failures must therefore be detected and localized as soon as possible to avoid a degraded remanufacturing flow.

This paper addresses the problem of detection and localization of time shift failures in a remanufacturing line in the following way: we model a remanufacturing system as a Timed Event Graph (TEG) which is a subclass of timed Petri net.

The first contribution of the paper is a TEG simulation performed by an event-driven simulator that takes time shift failures into account. The second contribution is an algorithm that takes as input timed observations of the functioning of a remanufacturing system and detects, and helps to localize, time shift failures if any are present. Academic examples illustrate the feasibility of our approach.

Dynamic Reliability Approach For a Complex Offshore System

ABSTRACT. Dynamic reliability is the ability of a system or process to perform a given task on a given mission without failure, and to maintain this ability over time. It is attractive for the assessment of offshore operations, such as floating production, storage and offloading (FPSO) platforms. In 2010, new discoveries emerged from the offshore Rovuma Basin (RB) in Mozambique, where about 150 TCF of natural gas was found. Ullgren et al. (2016) indicate that the RB registers currents at speeds of over 20 m/s, with peaks of 170 m/s. Extreme events have been frequent in recent years, as in the cases of cyclones Idai and Kenneth in 2019 and, more recently, Chalane (2020-2021), which affected the center and north of the country. Production operations at the onshore LNG and floating LNG plants could be plagued by leaks stimulated by failures induced by environmental loads and shuttle tankers. Our study proposes to assess these risks using the dynamic reliability approach based on differential equations. According to Devooght (1996), this approach is the most suitable for evaluating a system's capacity to perform a given task in a mission over time. In this study, we first provide an assessment of offshore operations such as FPSOs. Secondly, we develop a model for assessing the likelihood of leakage in an FPSO induced by environmental loads. Finally, building on existing methods, we propose to use the dynamic quantitative risk assessment (DQRA) method of Ahmadi et al. (2020). The study focuses on the mechanisms of leakage failures in an FPSO induced by environmental loads, considering the FPSO as three subsystems: riser, hull and station-keeping. A balance point between the three subsystems is desired for greater system availability, resulting from better knowledge of failure mechanisms and risk forecasting.

Sub-Safety Recognition and Reliability Evaluation for Motor Drive System in High Speed Trains
PRESENTER: Linghui Meng

ABSTRACT. The concept of sub-safety for high-speed trains is put forward for the first time. As trains run over long periods, train systems constantly age and degrade and enter a state of sub-safety. It is important to recognize the sub-safety state in a timely manner in order to implement safety control measures that prevent the system from entering a faulty state. Firstly, this paper analyzes the main faults of the devices in a high-speed train's motor drive system and their failure mechanisms. Secondly, a sub-safety state is proposed for the main devices of the system. Then a degradation and state transition model for the devices is established based on Markov theory. Finally, using the reliability model of the system, the reliability is evaluated and the sub-safety state is determined.

Efficient System-Reliability Demonstration Tests using the Probability of Test Success

ABSTRACT. Today, products are very often equipped with a variety of functions. This variety of functions is in turn reflected in a large number of components, parts and subsystems, and therefore in many different failure modes. This creates new challenges in demonstrating the reliability of such a product, since there is an equally large number of possibilities for designing the necessary tests. In this paper, the effect of testing at the component, subsystem or system level on the demonstration of system reliability is studied. The different tests are assessed by calculating the Probability of Test Success (Pts) [1-4]. Via this holistic approach, an optimal test design can be identified for complex systems with multiple failure modes and system levels. A case study for different systems shows the advantage of using the Pts in planning reliability demonstration tests for complex systems, since some test types and configurations should be avoided due to a very low achievable Pts. The calculation of the Pts is based on the approach of [3, 4]. It requires prior information about all failure modes of the system to be considered. According to this prior knowledge of the failure behavior, virtual tests are carried out on the respective system level in a Monte Carlo simulation (MCS). Using a bootstrap procedure and a beta distribution as the confidence distribution, the confidence interval of the system is calculated in each MCS iteration; the evaluation criterion Pts can thus be determined, via the law of large numbers, by dividing the number of successful virtual tests by the total number of MCS iterations [1-4]. The presented approach results in a very streamlined process of reliability demonstration test planning, since no allocation of the required reliability has to be done for the individual component- or subsystem-level tests.
In addition, the apportionment of specimens among the tests on different system levels can be derived according to maximum efficiency in system reliability demonstration. Studies with different parameters and numbers of failure modes show that this assessment can be used to answer the question of the system level at which a test should be performed with regard to minimal expenditure and maximal Pts. Sample-size-dependent advantages of system versus component tests indicate that the Probability of Test Success is a very suitable test assessment criterion for identifying, among the many possible variations and configurations of tests and their levels in complex systems, the test that offers the greatest chance of demonstrating system reliability in the most efficient way.
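The core of the Pts idea, determining the probability that a planned test succeeds by counting successful virtual tests in a Monte Carlo simulation, can be illustrated with a simplified success-run test (without the bootstrap and beta confidence distribution of the cited approach). The Weibull parameters, specimen count and test duration below are assumptions for illustration only.

```python
import math
import random

random.seed(1)

# Hypothetical true failure behaviour (two-parameter Weibull) and a
# success-run demonstration test: the test succeeds if all n specimens
# survive the demonstration time t_demo without failure.
beta_shape, eta_scale = 2.0, 1000.0    # assumed Weibull parameters
n_specimens, t_demo = 10, 300.0        # assumed test configuration

def weibull_sample():
    return eta_scale * (-math.log(1.0 - random.random())) ** (1.0 / beta_shape)

n_mcs = 50_000
successes = sum(
    all(weibull_sample() > t_demo for _ in range(n_specimens))
    for _ in range(n_mcs)
)
pts = successes / n_mcs                # fraction of successful virtual tests

# For this simple test plan the analytic value is p^n with p the
# single-specimen survival probability, which checks the simulation.
p_survive = math.exp(-((t_demo / eta_scale) ** beta_shape))
print(f"Pts (simulated) = {pts:.3f}, analytic p^n = {p_survive ** n_specimens:.3f}")
```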


1. M. Dazer, D. Bräutigam, T. Leopold and B. Bertsche, "Optimal Planning of Reliability Life Tests Considering Prior Knowledge," Proc. Annu. Reliab. Maintainab. Symp., 2018.
2. M. Dazer, M. Stohrer, S. Kemmler and B. Bertsche, "Planning of reliability life tests within the accuracy, time and cost triangle," IEEE ASTR Conference, Florida, USA, 2016.
3. A. Grundler, M. Dazer and B. Bertsche, "Reliability test planning considering multiple failure mechanisms and system levels," Proc. Annu. Reliab. Maintainab. Symp., 2020.
4. A. Grundler, M. Dazer, T. Herzig and B. Bertsche, "Considering Multiple Failure Mechanisms in Optimal Test Design," Proceedings IRF2020: 7th International Conference Integrity-Reliability-Failure, 2020, pp. 673-682.

08:30-10:10 Session WE1B: Mathematical Methods in Reliability and Safety
Location: Atrium 2

ABSTRACT. Knowledge of the quality of a machine-manufactured product is crucial to its reliability throughout the product's use phase, and indispensable if a customer is to be assured of a certain quality standard. However, the quality of some products cannot be determined non-destructively. In such cases, machine learning methods are increasingly used to predict product quality from production parameters and non-destructively measurable attributes. Thanks to the progress made in recent years, products can be reliably classified if the amount of training data is large enough. However, generating such training data is often associated with high effort and high costs. For this reason, the amount of data to be generated should be kept as small as possible while maintaining reliable classification by the machine learning algorithm. We therefore applied a modification of the Yarowsky algorithm, a method from the field of semi-supervised learning, in combination with DNNs. The algorithm involves a stepwise expansion of the learning dataset: to expand it, we used samples that were assigned to a class with high confidence by the neural network. We conducted our experiments on a dataset containing the production parameters of 3600 knives. The dataset features attributes of the surface topography determined by computer vision, together with gloss values. The gloss values serve as target variables and were divided into 3 classes. For the experiments, we used a neural network architecture that had previously been determined to perform very well on the problem. We then conducted a series of runs of our method to determine, based on the recorded metrics, whether the method could be suitable for real-world applications.
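The stepwise dataset expansion can be sketched in miniature, with a 1-D nearest-centroid classifier standing in for the DNN and simulated "gloss" classes; the confidence threshold, class means and sample sizes below are invented for illustration.

```python
import random

random.seed(2)

# Toy self-training loop in the spirit of the Yarowsky algorithm.
def make_sample(cls):                 # classes 0, 1, 2 with overlapping noise
    return random.gauss(cls * 2.0, 0.5), cls

labeled = [make_sample(c) for c in (0, 1, 2) for _ in range(5)]
unlabeled = [make_sample(random.randrange(3))[0] for _ in range(300)]

THRESH = 0.8                          # confidence needed to accept a pseudo-label
for _ in range(5):                    # stepwise expansion of the training set
    cent = {c: sum(x for x, l in labeled if l == c) /
               max(1, sum(l == c for _, l in labeled)) for c in (0, 1, 2)}
    still = []
    for x in unlabeled:
        d = sorted((abs(x - cent[c]), c) for c in (0, 1, 2))
        conf = 1.0 - d[0][0] / (d[0][0] + d[1][0])   # margin-based confidence
        if conf > THRESH:
            labeled.append((x, d[0][1]))             # confident: add to train set
        else:
            still.append(x)                          # keep for a later round
    unlabeled = still

print(f"training set grew to {len(labeled)}, {len(unlabeled)} left unlabeled")
```

Only samples classified with high confidence migrate into the training set; ambiguous samples remain unlabeled, mirroring the selective expansion described above.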

PRESENTER: Mariusz Zieja

ABSTRACT. A major challenge for both aircraft designers and operators is to guarantee the safety of air operations. The idea of safety is closely associated with the principle of reliability. To determine reliability, an aircraft can be considered both as a whole and as a complex structure. The present study concentrates on the reliability of the control and navigation instruments of the M-28 "Bryza" aircraft over the period 2011-2019.

In order to determine the reliability, a measurable parameter should be selected whose changes allow us to assess whether the defectiveness of the system increases or decreases. In many analyses this parameter is the mean time between failures (MTBF), on the basis of which it is possible to determine the failure intensity and, consequently, the changes in reliability.

The chosen methodology is based on determining the MTBF using log-normal, Weibull or normal functions. As a result, it will be possible to select the function that best describes the data from the maintenance and operating process. The selected function will be the input for the calculation of the reliability. Log-normal and Weibull distributions have been fitted to the distribution of the times between failures.

The analysis indicates that the failure distribution is far from uniform. The MTBF value alone is not enough to provide information about the reliability of products over time.

To conclude, the widespread use of MTBF-based reliability methods, particularly in aviation, is associated with the following main reasons: the availability of reliability manuals that use MTBF as a reference for demonstrating that reliability requirements are fulfilled; the shortcut of the unverified hypothesis of a constant failure rate describing the system under analysis; and the ease of applying such a label despite its misleading name.
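The distribution-selection step can be sketched as follows; the simulated times between failures, the candidate distributions (normal and log-normal here) and the likelihood-based comparison are illustrative assumptions, not the study's data.

```python
import math
import random
import statistics

random.seed(3)

# Simulated times between failures (Weibull-distributed, parameters invented).
tbf = [800.0 * (-math.log(1.0 - random.random())) ** (1 / 1.5) for _ in range(200)]

def ll_normal(x):
    # maximized log-likelihood under a normal fit (MLE = sample mean/std)
    mu, sd = statistics.fmean(x), statistics.pstdev(x)
    return sum(-0.5 * math.log(2 * math.pi * sd ** 2)
               - (v - mu) ** 2 / (2 * sd ** 2) for v in x)

def ll_lognormal(x):
    # maximized log-likelihood under a log-normal fit (MLE on log data)
    logs = [math.log(v) for v in x]
    mu, sd = statistics.fmean(logs), statistics.pstdev(logs)
    return sum(-math.log(v) - 0.5 * math.log(2 * math.pi * sd ** 2)
               - (math.log(v) - mu) ** 2 / (2 * sd ** 2) for v in x)

mtbf = statistics.fmean(tbf)
best = max([("normal", ll_normal(tbf)), ("log-normal", ll_lognormal(tbf))],
           key=lambda t: t[1])
print(f"MTBF = {mtbf:.0f} h, better fit by log-likelihood: {best[0]}")
```

The same comparison extends directly to a Weibull candidate; the point is that the MTBF is a single summary, while the selected distribution carries the time-dependent reliability information.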

Optimization-based reliability assessment of multi-energy systems
PRESENTER: Paolo Gabrielli

ABSTRACT. For a successful transition to a more efficient energy system with reduced environmental impact, multi-energy systems (MES) are a promising concept. MES consist of several energy conversion and storage technologies combining different energy carriers to supply a variety of energy demands [1]. The redundancy and flexibility of energy supply in MES can also avoid supply interruptions, whether due to the unavailability of a component within the MES or to disturbances in the external energy infrastructure, such as the electricity and gas grids [2]. To strengthen the case for MES, we propose an optimization-based method to assess MES reliability. The ability of MES to supply energy despite component failures is quantified using a 3-step procedure: (I) a baseline operation schedule without failures is determined by a mixed-integer linear programming optimization, minimizing operating cost; (II) the impact of component failures is quantified by making single components unavailable and optimizing the reaction of the MES to the failure, minimizing energy not supplied; this step is repeated for the failure of each MES component and external energy grid, and for each potential starting time of a failure; (III) the expected energy not supplied and the distribution of uninterrupted supply times are calculated via a stationary alternating renewal process. The method is applied to a Swiss MES that supplies thermal and electrical energy demands. A comparison of designs with electricity- and gas-based conversion technologies, energy storage and distributed photovoltaic generation shows that both electricity- and gas-based MES are highly reliable due to the high reliability of the Swiss electricity grid. Introducing centralized battery storage and distributed photovoltaic panels further increases the probability of uninterrupted supply and decreases the expected energy not supplied.
However, reliability is not always increased by introducing conversion technologies that couple energy carriers. In fact, heat pumps introduce a dependency of the heat supply on electricity, while combined heat and power plants can be less reliable than other technologies, thus increasing the expected energy not supplied.


1. P. Mancarella, Energy, 65 (2014).
2. G. Koeppel and G. Andersson, Energy, 34 (2009).

Graph representation of logic differential calculus for reliability modeling of coherent binary state systems
PRESENTER: Nicolae Brinzei

ABSTRACT. In reliability engineering, a large class of models (fault trees, reliability block diagrams, event trees, …) are founded on the structure function model, often written as a Boolean polynomial. Based on the structure function, two approaches were recently proposed to analyze and assess system reliability: logic differential calculus and graph models based on the Hasse diagram. Logic differential calculus is a powerful mathematical methodology that allows the analysis of dynamic properties of Boolean functions by means of Direct Partial Boolean Derivatives (DPBD). The Hasse diagram is a graph representation of the partial order on the values of the state vector. The system states diagram, which is an extension of the Hasse diagram, allows the determination of a minimal disjoint Boolean polynomial as well as a direct computation of the system reliability. In this paper, we propose a new graph to represent Direct Partial Boolean Derivatives that allows us to compare the logic differential calculus and graph model approaches in order to find correspondences between them. These approaches are then applied to the computation of Birnbaum's importance measure to determine the impact of critical components on system reliability.
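A minimal sketch of a Direct Partial Boolean Derivative and the resulting Birnbaum importance, for a 2-out-of-3 structure function with invented component reliabilities:

```python
from itertools import product

# 2-out-of-3 structure function (an assumed example system).
def phi(x):
    return int(sum(x) >= 2)

def dpbd_states(i):
    """State vectors where the DPBD d(phi: 1->0)/d(x_i: 1->0) equals 1,
    i.e. where component i failing alone brings the system down."""
    crit = []
    for x in product((0, 1), repeat=3):
        if x[i] == 1:
            y = list(x)
            y[i] = 0
            if phi(x) == 1 and phi(tuple(y)) == 0:
                crit.append(x)
    return crit

p = [0.9, 0.9, 0.9]           # assumed component reliabilities

def prob(x):
    r = 1.0
    for xi, pi in zip(x, p):
        r *= pi if xi else (1.0 - pi)
    return r

# Birnbaum importance of component i: probability that the other
# components are in a configuration where component i is critical.
birnbaum = [sum(prob(x) for x in dpbd_states(i)) / p[i] for i in range(3)]
print(birnbaum)
```

For p = 0.9 on every component, each Birnbaum measure equals the probability that exactly one of the other two components works, i.e. 2 × 0.9 × 0.1 = 0.18.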


ABSTRACT. The degradation process of some units is often bounded due to their geometry, their physical dimensions, and/or the very nature of the degradation mechanism, whose (even partial) understanding may make it apparent that the degradation level cannot grow indefinitely. The probabilistic models customarily used to describe the evolution of the degradation level over time do not allow one to explicitly account for the presence of an upper bound. Nonetheless, the existence of such a physical constraint can be easily and conveniently modelled by employing an appropriate transformation of an unbounded degradation process. The aim of this paper is to investigate the potential of the transformed gamma process in tackling this specific experimental situation. The proposed approach, which leads to the definition of a bounded, state-dependent transformed gamma process, is illustrated starting with a motivating example developed on the basis of a real set of wear data from the cylinder liners of diesel engines for marine propulsion. Model parameters are estimated using the maximum likelihood method. The fitting ability of the proposed process is compared with those of potentially suitable unbounded processes, namely the transformed gamma and gamma processes. The potential of the proposed approach and possible pitfalls associated with the use of a bounded degradation process are critically discussed in the concluding section of the paper.
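The bounding idea can be sketched by mapping an unbounded gamma process through a saturating transform. This is a generic illustration, not the paper's state-dependent model; the bound and the gamma parameters are invented.

```python
import math
import random

random.seed(4)

B = 4.0                        # assumed physical bound on degradation (e.g. mm of wear)
shape_rate, scale = 0.5, 0.2   # assumed gamma-increment parameters per unit time

def bounded_path(t_max, dt=1.0):
    """Accumulate an unbounded gamma process g and map it through a
    saturating transform so the returned level can never exceed B."""
    g, path, t = 0.0, [], 0.0
    while t < t_max:
        g += random.gammavariate(shape_rate * dt, scale)  # unbounded increment
        path.append(B * (1.0 - math.exp(-g)))             # bounded level < B
        t += dt
    return path

wear = bounded_path(200.0)
print(f"final level {wear[-1]:.3f} (bound {B}), "
      f"monotone: {wear == sorted(wear)}")
```

Because the transform is increasing and the gamma increments are positive, the simulated path stays monotone and strictly below the bound, which is exactly the physical constraint an untransformed gamma process cannot honor.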



08:30-10:10 Session WE1C: Digital twin approach in maintenance and safety engineering
Digital Twin-based Prognostics and Health Management for Subsea Systems: Concepts, Opportunities and Challenges

ABSTRACT. The Digital Twin (DT) constitutes an important pillar of the industrial transformation to digitalization. Both academia and industry have recently started exploring methodologies and techniques related to DT. A systematic overview of the relationships and differences between DT and traditional approaches, such as simulation, is thus needed. This paper aims to contribute to a better understanding of DT by reviewing different DT types in an effort to group and classify them. The study focuses on subsea production, where conventional corrective/age-based maintenance is shifting towards condition-based maintenance (CBM) and prognostics and health management (PHM). DT is believed to be a meaningful way to improve the efficiency and reduce the costs of such activities, but technical difficulties still impede real-world applications of DT-based PHM. We outline some of these opportunities and identify challenges of DT-based PHM with the aim of highlighting future research perspectives.


ABSTRACT. Companies in the oil and gas industry have, since the fall in the oil price in 2014, been under pressure to cut costs and improve the effectiveness of their operations. Digitalization is generally considered an important contributor to achieving this. One barrier to benefiting from digitalization that is increasingly being recognized by the industry is data silos. The digital twin is a concept that has been proposed to alleviate this problem, but there is a lack of common understanding of what the concept entails and of its potential benefits. To gain a better understanding of how digital twins are used for maintenance and safety in the offshore oil and gas industry, we conducted a survey in the form of a web-based questionnaire among practitioners from this industry. 15 responses to the questionnaire were included in the final sample. Nine of these were from respondents who reported having implemented digital twins in their own organization or in their products or services. Because of the low number of responses, the results cannot be used to draw conclusions on the current state of digital twins for maintenance and safety in the offshore oil and gas industry in general, but they offer some insights that can be useful for further research.

Reliability Digital Twin Approach Based on Bayesian Method for Brake Pad Wear Monitoring

ABSTRACT. The Reliability Digital Twin (RDT) method is regarded as a novel reliability assessment technology that integrates the characteristics of digital twin implementation and mapping. Based on the real-time data transmitted between physical space and digital space, RDT technology can collect and transmit the parameter data that reflect the braking process of brake pads. Furthermore, the RDT models in the digital space are updated in real time to present the health status of the equipment in the physical space, which is used to guide maintenance-related decision-making. This paper proposes a reliability digital twin approach based on Bayesian theory to realize dynamic reliability evaluation and life prediction for brake pads. Firstly, wear performance degradation data of brake pads are obtained by accelerated degradation experiments, and the amount of wear is selected as the performance parameter characterizing their health state. Secondly, the brake pad wear reliability function and a performance degradation model based on the Wiener process are established, and Bayesian theory is used to update the model parameters in real time based on the transmission of dynamic sensor data. Finally, the reliability index of the brake pad is calculated in real time, and remaining wear life prediction of the brake pad is realized under different degrees of degradation. Numerical examples verify the effectiveness and accuracy of the proposed method. The proposed RDT approach can provide a more efficient and economical way to realize brake pad health assessment and maintenance activities.
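A minimal sketch of the Bayesian updating step under a conjugate normal prior on the Wiener drift; the wear limit, prior, diffusion coefficient and sensor increments are invented, and the paper's actual model may differ.

```python
# Wiener-process wear: increments dx over dt are N(mu*dt, sigma^2*dt),
# with a conjugate normal prior on the drift mu (the wear rate).
D, sigma = 30.0, 0.3            # assumed wear limit (mm) and diffusion (mm/sqrt(h))
m0, v0 = 0.08, 0.03 ** 2        # assumed prior: mu ~ N(m0, v0)

# hypothetical sensor record of (elapsed hours, measured wear increment in mm)
increments = [(50.0, 5.2), (50.0, 4.9), (50.0, 5.3)]

T = sum(dt for dt, _ in increments)           # total observed time
X = sum(dx for _, dx in increments)           # total accumulated wear

prec = 1.0 / v0 + T / sigma ** 2              # posterior precision on mu
mu_post = (m0 / v0 + X / sigma ** 2) / prec   # posterior mean drift

# Mean first-passage time of the remaining distance for positive drift
# (the inverse-Gaussian mean), as a point prediction of remaining life.
rul = (D - X) / mu_post
print(f"posterior wear rate {mu_post:.4f} mm/h, expected remaining life {rul:.0f} h")
```

Each new sensor increment simply enlarges T and X, so the posterior (and the remaining-life estimate) can be refreshed in real time as the abstract describes.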

Application of digital twins in condition-based maintenance

ABSTRACT. Condition-based maintenance (CBM) is a combination of fault diagnosis, fault prognosis, and maintenance decisions. The diagnostic results can be used to predict the remaining life, and maintenance plans are made based on the current and future states. Digital Twins (DTs) allow CBM to be carried out in a more efficient way. A DT simulates the operation of the system and realizes real-time interaction with it; the CBM model can thus obtain more data from the DT and display its results through the DT. Compared with traditional CBM, DT-based CBM is more intelligent, and it has therefore been widely studied in recent years. This paper presents the changes that DTs bring to CBM, focusing on fault diagnosis, fault prognosis, and maintenance decisions. The work divides the changes into three aspects: DTs provide a new CBM framework, DTs provide data for CBM modeling, and DTs provide good visualization tools. The future direction of DTs for CBM is also discussed.

PRESENTER: Nathalie Julien

ABSTRACT. The Digital Twin is a new concept with various definitions, shapes and applications. In recent years, publications on this subject have increased sharply without any real coherence, representing very different realities and including a wide variety of models. In order to clarify such a complex representation, we propose not only a global definition but also a complete, usage-driven classification methodology. Our generic overview relies on three axes, as the Digital Twin is not just a set of data or models but an organization of data into information and meta-information that allows models to be combined in order to provide the usages required by its application. As the required services evolve throughout the object lifecycle, the Digital Twin properties must also evolve, making it a 'living model'. Seven different usages can be combined to respond to a wide range of industrial applications. We give different examples of this classification to support the deployment of the Digital Twin in predictive maintenance, line reconfiguration, operator training… Moreover, such a usage-driven approach is also user-centric, which fosters acceptance and facilitates risk management.

08:30-10:10 Session WE1D: Flexible Tolerancing Analysis of Complex Structures and Assemblies
Location: Panoramique
Fastening process simulation of structural parts with shape defects
PRESENTER: Ramzi Askri

ABSTRACT. The design of fastened joints consists of choosing adequate geometric, material and joining parameters, such as the ratio of fastener diameter to adherent thickness or the clamping force, in order to ensure the target mechanical performance of the assembly. Deviations from the nominal values of design parameters, such as geometric errors, in most cases involve a decrease in the mechanical performance and reliability of the assembly. This paper proposes a method to analyse the fastening process of parts that include geometric form defects. The structural behaviour of the joint is simulated using a finite element model combining connectors and rigid surfaces to represent the fasteners. With this approach, the calculation time can be drastically reduced while frictional contact is maintained between fasteners and parts. Shape defects are generated by translating the nodes of the part meshes. The method is applied to a case study and its efficiency is evaluated by analysing the evolution of axial bolt preloads and transverse bolt forces during the assembly process. The results demonstrate the ability of the method to simulate different clamping sequences and to capture the interaction between shape defects, bolt-hole clearance and target axial preload.

Statistical tolerance analysis of over-constrained mechanical systems using Tolsis software
PRESENTER: Antoine Dumas

ABSTRACT. Every manufactured product has geometrical variations which may impact its functional behavior. Tolerance analysis aims at analyzing the influence of these variations on the product behavior; the goal is to evaluate the quality level of the product during its design stage. Analysis methods must check whether the specified tolerances satisfy the assembly and functional requirements. The technique consists of computing the non-conformity rate of the mechanism for both requirements, expressed in parts per million (ppm), by combining the resolution of an optimization problem with reliability analysis techniques such as Monte Carlo simulation or the FORM system approximation.

Analysis methods must consider geometrical deviations as realizations of random variables (resulting from the manufacturing process) together with the worst admissible configurations of gaps. As the geometrical behavior is formalized by an implicit assembly response function, these configurations change depending on the deviations. Hence the contacts between the different parts of the mechanism may change according to the considered configuration. These configurations must be found using an optimization scheme. For a simple mechanism, it is conceivable to build the mathematical behavior model manually, but for a complex system it must be built automatically using a dedicated tool.

The presentation illustrates how the Tolsis software, integrated in Catia or in SpaceClaim, can deal with such tolerance analysis problems. First, a brief reminder of the behavior model is given. Then the characteristics of the software are described using a mechanical system as an example, from the initial definition of the surfaces and linkages to the evaluation of the non-conformity rates.
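The non-conformity-rate computation can be sketched with a minimal Monte Carlo stack-up; the nominal dimensions, tolerances and gap requirement below are invented for illustration, whereas a tool like Tolsis builds the behavior model automatically from the CAD definition.

```python
import random

random.seed(5)

# Three dimensions vary around nominal; the functional requirement is a
# positive assembly gap. Nominals and standard deviations are invented.
def gap():
    housing = random.gauss(30.00, 0.01)
    part_a = random.gauss(14.98, 0.01)
    part_b = random.gauss(14.96, 0.01)
    return housing - part_a - part_b    # must stay positive to assemble

n = 200_000
fails = sum(gap() <= 0.0 for _ in range(n))   # count non-conforming assemblies
print(f"non-conformity rate ~ {1e6 * fails / n:.0f} ppm")
```

For an over-constrained mechanism the response is not an explicit formula as here; each sample would instead require solving the gap-configuration optimization described in the abstract.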

Flexible Tolerancing analysis of complex assemblies with surrogate chaos and Kriging meta-models

ABSTRACT. In industry, the modelling of product and process assemblies is based on the theory of Geometrical Product Specification and tolerance analysis. This industrial approach follows several international standards to specify the parts and build stack-up models of the tolerances of an assembly. The main hypothesis of these standards is the rigid workpiece principle. However, for thin parts and assemblies of large dimensions, for example, under the effects of gravity and of the forces and/or displacements imposed by active tools, this rigid-body assumption is not acceptable, and "classic rigid stack-ups" can lead to non-representative results for the functional requirements. This paper therefore proposes an approach that takes the flexibility of the parts and assemblies into account in 3D tolerancing stack-ups. Coupling tolerancing theory, structural reliability approaches and FEM simulation, an original approach based on stochastic polynomial chaos expansions, Kriging methods, Sobol' indices and the FEM is developed to build 3D flexible stack-ups and to estimate the main tolerance results. The method is then applied to an aeronautical assembly example.

Toward a normalized method to evaluate the quality and the relevance of a linear approximation for Tolerance Analysis and Synthesis.

ABSTRACT. In the field of tolerance analysis and synthesis, given the small variations expected on the parameters to which specified tolerances apply, it is usual to use a linear approximation of the transfer function between these parameters and the resulting criteria under study. With pure 1D stack-ups there is generally no doubt about this linearity, but in many cases with 2D and 3D geometrical, kinematical and mechanical effects, the relevance of a linear approximation is not obvious. It is surprising that existing tolerance analysis software on the market never provides a metric to evaluate the relevance of its linear approximations. Today, technical enhancements allow us to hope for the integration of part flexibility in our stack-ups, and the question of the validity of the linearization reappears as legitimate. The question is not to determine whether the cause-effect relationship is linear or not, but to validate whether the use of a linear approximation is reasonable for tolerance allocation and reliability assessment. Appropriate mathematical methods are known, usually named "regression analysis methods", and the question is not about the theory but about its implementation in tolerance analysis software. The objective of this paper is to motivate the tolerancing scientific community to build a normative frame or standard and to bring tolerance analysis software vendors to integrate the relevant routines for this required evaluation of the quality of their linear approximations. It is, finally, a question of the confidence we can have in the results provided.
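Such a metric need not be elaborate: the coefficient of determination (R²) of a first-order fit, sampled over the toleranced interval, already quantifies how much of the response variation a linear approximation captures. A hypothetical minimal sketch (illustrative, not a proposed standard):

```python
import random
import statistics

def linearity_r2(f, x0, tol_halfwidth, n=2000, seed=1):
    """R^2 of a first-order least-squares fit to f over the tolerance
    interval [x0 - tol_halfwidth, x0 + tol_halfwidth]. Values close to 1
    support the use of a linear approximation for the transfer function."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-tol_halfwidth, tol_halfwidth) for _ in range(n)]
    ys = [f(x) for x in xs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    resid = sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))
    total = sum((y - my) ** 2 for y in ys)
    return 1.0 - resid / total
```

An affine transfer function scores essentially 1, while a response that is quadratic around the nominal point is explained almost not at all by its best linear fit.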

Tolerance analysis of a wiper blade using the probabilistic approach

ABSTRACT. Engineers are aware that uncertainties in the dimensions of manufactured products cannot be avoided, i.e. mechanical components manufactured using the same tools and the same raw materials have slightly different shapes, and their dimensions also differ from the designer's request. Tolerance analysis offers a rational framework to study such uncertainties and allows guaranteeing that the quality associated with the production remains acceptable. This quality is quantified by estimating the defect probability, which is often expressed in parts per million. In this contribution, the probabilistic approach is used, and the dimensions of the components are modeled using random variables. A reliability analysis is then performed to estimate the probability of manufacturing a component which does not meet its functional requirement. The procedure described above is applied to an industrial problem; the method is developed in collaboration with Valeo Wiper Systems. Components of the wiping systems have been periodically collected from the production lines and their dimensions have been measured. These results are then used to identify the distributions of the random variables associated with the geometry of the components. It is assumed that the model of uncertainty can be fully represented using the marginal distributions and the linear coefficient of correlation. For each random variable, several distributions are considered, and the most suitable one is selected using the Akaike information criterion [1]. The performance of the wiping systems can be estimated using Finite Element (FE) simulations. The FE model is parameterized, which allows investigating the consequences of the shape imperfections. A meta-model is subsequently calibrated in order to reduce the numerical effort.
The formulation of the probabilistic model is similar to the one introduced in [2]; the quantifiers "for all" and "there exists" are introduced and the problem is expressed using system reliability. References:

[1] H. Akaike, A new look at the statistical model identification, IEEE Transactions on Automatic Control, 19(6): 716–723, 1974.
[2] J.-Y. Dantan, A.-J. Qureshi, Worst-case and statistical tolerance analysis based on quantified constraint satisfaction problems and Monte Carlo simulation, Computer-Aided Design, 41(1): 1–12, 2009.
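The selection step via the Akaike information criterion [1] can be sketched as follows, assuming (for illustration only) three candidate families with closed-form maximum-likelihood fits; AIC = 2k − 2 ln L̂ is computed per family and the smallest value wins.

```python
import math
import random

def aic_normal(data):
    # Normal MLE: sample mean and (biased) variance; k = 2 parameters
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return 2 * 2 - 2 * loglik

def aic_uniform(data):
    # Uniform MLE: support is [min, max]; k = 2 parameters
    a, b = min(data), max(data)
    loglik = -len(data) * math.log(b - a)
    return 2 * 2 - 2 * loglik

def aic_exponential(data):
    # Exponential MLE: rate = 1 / sample mean; k = 1 parameter
    lam = len(data) / sum(data)
    loglik = len(data) * math.log(lam) - lam * sum(data)
    return 2 * 1 - 2 * loglik

def select_distribution(data):
    candidates = {"normal": aic_normal, "uniform": aic_uniform,
                  "exponential": aic_exponential}
    return min(candidates, key=lambda name: candidates[name](data))
```

Fed with measured dimensions, the function returns the family with the lowest AIC, trading goodness of fit against the number of fitted parameters.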

08:30-10:10 Session WE1E: Organizational Factors and Safety Culture
Location: Amphi Jardin

ABSTRACT. Learning from incidents is widely accepted as a core ingredient of safety management. This is also true for fires; however, few fires in Norway are investigated. Fires are particularly interesting incidents because a) they can have devastating outcomes for property and human lives, b) they happen across sectors and industries, businesses and homes, c) they can be an incident in themselves or a consequence of various other initiating events such as a car collision, and d) understanding, preventing and mitigating fires requires highly skilled professionals. In Norway, several different actors play a role in investigating and learning from fires. The present study seeks to understand the preconditions for learning from fires in Norway, with emphasis on existing practices for learning and on potential inhibiting and promoting aspects. Methodologically, we first conducted a brief international survey on how authorities learn from fire incidents. Then, qualitative interviews were conducted with relevant Norwegian actors from the police, fire services, relevant authorities, insurance, the Norwegian fire protection association and the Norwegian Safety Investigation Authority. The results were analyzed using thematic analysis and the Pentagon model framework. We found that structural, cultural, technological, and relational aspects constitute preconditions for learning from fires in Norway. The findings are discussed in relation to theories from organizational learning and learning from incidents.

Innovative Road Safety Education Program
PRESENTER: Dagfinn Moe

ABSTRACT. The Norwegian Council for Road Safety, Trygg Trafikk, together with SINTEF and Nord University, developed a new road safety education program based on the latest findings in neuro-education. Whereas traffic rules are essential and part of the school road safety education program, learning how to properly use attention in complex traffic situations was never taught before. This paper presents the method developed for stimulating pupils' reflection on traffic safety issues and three concepts: risk, orientation, and attention. SINTEF compared the new education program with the one currently in place in Norway at its virtual reality laboratory. The program was evaluated with two groups of 5th-grade pupils. A bicycle simulator and a head-mounted display (HMD) with an integrated Tobii eye-tracking system were connected to the virtual reality environment. The virtual environment was identical to the traffic centre facility (a miniature traffic system with intersections, traffic lights and signs) used for the school road safety education program. The results showed that the experiment group, who participated in the new education program, orientated themselves and used their attention better than pupils in the control group who followed the traditional program.

Security of electricity supply in the transition toward smarter grids
PRESENTER: Tor Olav Grøtan

ABSTRACT. The paper presents the results of an exploratory study of the way digital transformation processes can involve challenges for the security of electricity supply. The case for the study was the development of digital substations in a European electricity grid operator. By means of a sociotechnical approach, we studied digitalization as a process, focusing on transition risks related to the process itself. Six categories of challenges are described: 1) The role of risk assessment and risk management in procurement processes, 2) issues related to language, culture and competence, 3) the management of emergencies and crisis situations, 4) technological aspects of redundancy and oversight, 5) formal organization and responsibility, and 6) changes in regulation, regulatory roles and threat landscape. Common themes cutting across the six categories are discussed and the paper is concluded with a delineation of the strategies that can serve to mitigate the risks involved.


Prevention and management of industrial risk through effective citizen-facing communication from authorities: the experience of Regione Lombardia in Italy
PRESENTER: Fabio Borghetti

ABSTRACT. Lombardia is a region in north-western Italy composed of 12 provinces. In Lombardia, according to the report of ISPRA - Istituto Superiore per la Protezione e la Ricerca Ambientale, there are 287 industrial sites at major accident hazard out of 1142 in Italy as a whole, ISPRA (2013). These companies must comply with the requirements of Directive 2012/18/EU (Seveso III); in general, the directive identifies and provides measures aimed at preventing major accidents related to certain dangerous substances and limiting their consequences for human health and the environment. The Directive also provides for the preparation of emergency plans with the aim of communicating the necessary information to the public and interested authorities in the area, European Commission (2012). From a demographic point of view, Regione Lombardia is by far the Italian region with the largest resident population, about 10 million inhabitants, equal to about 16% of the national population, and the second most densely populated, 419 inhabitants per sq km compared to the Italian average of 201 inhabitants per sq km. It follows that the population of Regione Lombardia is among those potentially most exposed to accidents deriving from industrial activities. Given the regulatory framework and the competence of Regione Lombardia in the field of major accident hazard, in recent years Regione Lombardia has needed to update the state of the art of risk consultation and communication to the public. The main goal is to identify and implement innovative strategies of effective communication, also considering the communication techniques associated with the latest information technologies. In this view, Regione Lombardia has also set itself the aim of providing support tools to municipalities affected by the presence of plants at major accident hazard, for the activity of informing and involving the public potentially affected by an accident.
The paper then presents the experience of Regione Lombardia in promoting industrial risk communication to the public at the regional level. After an overview and a territorial analysis of the region carried out with GIS tools, the adopted approach is described: i) the organization of technical meetings with the municipalities of Regione Lombardia to collect best practices of risk communication; ii) the preparation of a survey to establish the state of implementation of communication tools in Regione Lombardia and to highlight the most suitable tool for Regione Lombardia to support; iii) the development of an industrial risk communication tool, a brochure that can be filled in and customized by the different municipalities.

References
European Commission (2012). Directive 2012/18/EU of the European Parliament and of the Council on the control of major accident hazards involving dangerous substances, amending and subsequently repealing Council Directive 96/82/EC.
ISPRA (2013). Mappatura dei pericoli di incidente rilevante in Italia. Rapporti 181/2013, edizione 2013. Istituto Superiore per la Protezione e la Ricerca Ambientale. Ministero dell'Ambiente e della Tutela del Territorio e del Mare. ISBN 978-88-448-0613-2.

Getting realism into a participative framework for operational risk analysis
PRESENTER: Florent Brissaud

ABSTRACT. Risk analysis is the systematic use of available information to identify hazards (potential sources of harm) and to estimate the risk (probability of occurrence and severity of the harms). Based on the risk analysis, risk evaluation is used to determine whether the tolerable risk is achieved. The resulting risk assessment is a fundamental step in managing risk with the appropriate protective measures in order to obtain safety (freedom from unacceptable risk). It is advisable to perform a risk analysis in an early phase of a project to anticipate the management and reduce both risks and costs. However, the risk analysis should also be updated once operational feedback is available. The reality of the operations may differ significantly from the procedures envisaged during the design phase. It is therefore also very important to perform a risk analysis during the operational phase. The main challenge is then to capture the reality of the operations.

To get realism into the operational risk analysis, a framework has been developed to stimulate the participation of the operators. The guideline consists of creating a discussion forum suitable for: providing detailed explanations of the actual sequences of operations; discussing the different practices; identifying the "weaknesses" and "strengths"; and proposing "good practices" to integrate into the procedures and training. The proposed framework includes several workshops under specific stimulating conditions, allowing dedicated risk analysis sheets to be drafted while avoiding tedious exercises and cognitive biases.

The participative framework has been applied to the drilling in-charge operations on the pipelines of the main high-pressure natural gas network in Europe. About twenty proposals have emerged from this operational risk analysis, in terms of: competences and professionalism (minimal practice), tools and procurement (standards, verification), risk and guidelines (realism of the rules, risk reduction measures), operation management (validation procedures), and feedback (traceability, training). Additional concepts have also been proposed, such as safety criteria (acceptable risk level in operation), yellow lines (rules that may be derogated from under conditions) and red lines (inescapable, i.e. compulsory, rules). Several updates of the guidelines have since been performed thanks to this approach, reducing the actual operational risks under realistic conditions.

08:30-10:10 Session WE1F: Civil Engineering
Probabilistic determination of the seepage line in river levees under steady-state conditions and its effect on the stability

ABSTRACT. In the stability analysis of river levees, seepage is a sub-process of high relevance. Ultimately, the seepage line separates the cross-sectional area into the water-saturated (under uplift) and the unsaturated parts. In the stationary case, the position of the seepage line in homogeneous levees is imprinted into the system by the outer cubature. In the transient case and in the analysis of structured designs, its position depends on the saturated permeability of the levee construction materials. However, the saturated permeability is a spatially scattering and therefore uncertain quantity which should be treated as such. This paper presents a methodology for the probabilistic location of the seepage line in river levee cross sections and applies it to standard designs as examples (homogeneous levee, two- and three-zone levee). Using input distributions of saturated permeability, the position of the seepage line is analytically determined as an uncertain quantity. By means of a Monte-Carlo simulation it is then evaluated in the form of distributions. Finally, the results of the analysis allow statements to be made regarding the exceedance probabilities of discrete seepage lines. Subsequently, the effect of uncertain seepage lines on the stability of levees is illustrated by the results of a reliability analysis. The results show that the probabilistic analysis of seepage influences both the position of the seepage line and the reliability of river levees. This influence can further be quantified by the change in the failure probability. In the end, not only the input distributions of the saturated permeability are decisive for the quality of the results, but also the distributions of further soil mechanical parameters (e.g. soil weight, shear strength). It is therefore recommended to collect experience with soil mechanical parameters in an (inter)national database to feed probabilistic parameter studies.
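The Monte-Carlo evaluation described above can be sketched generically. Everything below is an illustrative placeholder, not the paper's model: saturated permeabilities are sampled log-normally (a common assumption for this quantity) and the exceedance probability of a hypothetical seepage-line height at a control point is estimated by counting.

```python
import math
import random

def exceedance_probability(response, sample_inputs, threshold, n=20000, seed=42):
    """Generic crude-MC estimator of P(response(X) > threshold)."""
    rng = random.Random(seed)
    hits = sum(response(*sample_inputs(rng)) > threshold for _ in range(n))
    return hits / n

def seepage_height(k_core, k_shoulder, h_water=4.0):
    """Hypothetical seepage-line height at a control point of a two-zone
    levee, driven by the permeability ratio (illustrative only)."""
    return h_water * k_core / (k_core + k_shoulder)

def sample_permeabilities(rng):
    # Log-normal scatter of the saturated permeabilities [m/s]
    return (math.exp(rng.gauss(math.log(1e-6), 0.5)),
            math.exp(rng.gauss(math.log(1e-5), 0.5)))

p = exceedance_probability(seepage_height, sample_permeabilities, threshold=1.0)
```

Replacing `seepage_height` with the analytical seepage-line solution of a given cross section yields exactly the kind of exceedance-probability statements the abstract describes.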


ABSTRACT. Fire safety in tunnels is a key issue for railway safety. Various approaches are usable to assess the concerned risks and how to manage them. Large-scale experiments are not economically affordable for recreating a significant variety of configurations, which is why Computational Fluid Dynamics (CFD) is nowadays the prevalent method to address such topics. This paper discusses the simulation of a very typical fire scenario for lines with relevant freight traffic, carried out with the Fire Dynamics Simulator (FDS). The scenario includes a 150 MW fire caused by an accidental spillage of liquid octane from a tanker inside an artificial railway tunnel. The main goal is to study how the geometric peculiarities of the tunnel can significantly influence the temperature field within the tunnel, to an extent that is not easy to predict through simplified conventional methods. The applied method is based on a sensitivity analysis of the obtained results with respect to the variation of specific parameters. The structural checks performed in this scenario show that prolonged exposure to fire plays a key role in reducing the load-bearing capacity of the system. This result is most relevant for the structural elements inherently sensitive to spalling phenomena triggered by high temperatures (e.g. beams or columns characterized by low concrete cover). These elements need appropriate mitigation measures to prevent the onset of spalling phenomena. In particular, the case study shows how the installation of insulating plaster is able to protect the pre-stressed concrete roof beams and to safeguard the required safety level for the structure.


ABSTRACT. This study aims to show the importance of developing and coordinating complementary designs using BIM (Building Information Modeling) and to analyze the risks to costs and schedule during the execution of a real estate development. The study also shows that an advanced response to risks improves performance, increases assertiveness in the work, and minimizes waste of material, rework and unnecessary wear and tear. Construction work is very complex, teams from various disciplines are involved, and compatibility between designs is necessary to minimize execution errors and risks during construction. The performance of a real estate project is affected by the lack of compatibility of the complementary designs developed with the BIM methodology, generating conflicts between the disciplines, rework, and cost and time overruns. As a methodological approach, an in-depth literature review and a case study were conducted. The first step in the case study was to map out the whole construction process and then analyze the risks present in each step. The probabilities of the risks existing at each stage were combined using Bayesian Belief Networks (BBN) and the impacts were analyzed using the Analytic Hierarchy Process (AHP). This study is a source of information for professionals and companies planning to invest in BIM as support for decision making, and shows its benefits in practice.


ABSTRACT. Many variance reduction techniques have been proposed over the years to improve the performance of conventional Monte Carlo (MC) simulations. With respect to wind engineering applications, a conditional stochastic simulation (CSS) scheme has recently been introduced that is based on partitioning the probability space of a subset of appropriate wind intensity measures into a collection of mutually exclusive and collectively exhaustive subevents, in which MC simulations are carried out for propagating general model uncertainty [1]. This permits straightforward simulation of rare events that are of critical interest to the performance evaluation of systems subject to extreme winds. A modification of the CSS scheme, the optimal conditional stochastic simulation (OCSS) scheme, is introduced in this paper and the properties of the estimator are critically discussed. The proposed OCSS scheme stems from the identification of the optimal distribution of the MC samples to use in each mutually exclusive and collectively exhaustive subevent in order to minimize the overall variance of the estimator. An unconstrained minimization problem is introduced to this end, and an expression for the minimum variance is derived. The expression is compared with the variance of a simple non-optimal form of the CSS scheme and of conventional MC, and the superiority of the proposed OCSS scheme is demonstrated. A practical illustration of the OCSS scheme is presented through the estimation of the collapse probability of a 45-story steel building under extreme wind loads. In this example, the maximum wind speed occurring at the building top is taken as the wind intensity measure. For the case study, variance reductions in the order of 50% and 90% are achieved as compared to the non-optimal CSS and conventional MC schemes, respectively. Lastly, the challenges in implementing the proposed scheme are also discussed.
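The allocation idea behind such a scheme parallels classical Neyman allocation in stratified sampling: the sample budget is split across the mutually exclusive subevents in proportion to p_i·σ_i, where σ_i is the conditional standard deviation of the failure indicator. A simplified sketch (not the authors' estimator), using a pilot run to estimate the σ_i:

```python
import math
import random

def stratified_estimate(cond_fail, strata, n_total=20000, pilot=500, seed=7):
    """Stratified MC with approximate 'Neyman' allocation: n_i is made
    proportional to p_i * sigma_i, with sigma_i estimated from a pilot run.
    strata: list of (event probability p_i, conditional sampler)."""
    rng = random.Random(seed)
    # Pilot run: estimate the conditional std-dev of the failure indicator
    sigmas = []
    for p_i, sampler in strata:
        hits = sum(cond_fail(sampler(rng)) for _ in range(pilot))
        q = hits / pilot
        sigmas.append(math.sqrt(q * (1 - q)))
    weights = [p * s for (p, _), s in zip(strata, sigmas)]
    total_w = sum(weights) or 1.0
    # Final run with the optimized per-stratum sample sizes
    estimate = 0.0
    for (p_i, sampler), w in zip(strata, weights):
        n_i = max(1, round(n_total * w / total_w))
        hits = sum(cond_fail(sampler(rng)) for _ in range(n_i))
        estimate += p_i * hits / n_i
    return estimate
```

In a toy setting with a rare high-intensity stratum, almost the entire budget is routed to that stratum, which is exactly where the variance lives.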

PRESENTER: Matheus Almeida

ABSTRACT. The Brazilian Federal Government created the PBQP-H standard, aiming at organizing the construction sector around two main elements: habitat quality improvement and productive modernization. The standard is aligned with the concepts of ISO 9001:2015. This study discusses the importance of meeting the standard's requirements in construction work, the challenges, and the associated critical risk factors in maintaining certification to this standard. As a methodological approach, an in-depth literature review and a case study were conducted to identify the critical risk factors. The case study was conducted in a specific company that adopted the standard. The performance of the construction company against the requirements of PBQP-H was evaluated by a third-party quality audit; based on its results and the information obtained from the literature review, the critical factors were identified and the respective actions developed. AHP (Analytic Hierarchy Process) was used to prioritize the risk factors. As a result, the critical factors and the respective actions were listed by priority. Maintaining certification and compliance with a quality program is a challenge for construction companies all over the world. However, when the risk factors are known and properly responded to, and the risk responses are correctly implemented, waste and cost are avoided. This study is important for engineers, professionals and companies working in the construction field.
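The AHP prioritization used here (and in the BIM study above) reduces, in its simplest form, to extracting a priority vector from a pairwise comparison matrix. A minimal sketch using the geometric-mean approximation (the classical method uses the principal eigenvector; for a perfectly consistent matrix both give the same weights):

```python
import math

def ahp_weights(M):
    """Priority vector from a pairwise comparison matrix M, where
    M[i][j] expresses how much more important factor i is than factor j.
    Uses the geometric-mean (logarithmic least squares) approximation."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    s = sum(gm)
    return [g / s for g in gm]
```

For a consistent matrix built from true weights (0.6, 0.3, 0.1), i.e. M[i][j] = w_i / w_j, the function recovers those weights exactly; for audit-derived judgments it yields the priority ranking of the critical risk factors.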

08:30-10:10 Session WE1G: Asset management
Location: Atrium 3

ABSTRACT. Safe operations and adequate maintenance are the two main means to achieve reliable production and reduce downtime of a plant. Ideally, operations is the customer of maintenance, and maintenance is the service provider. In reality, however, the tasks of operations and maintenance are carried out by two different groups, so the close relationship between the two tasks is split. In this paper, this challenge is handled by a proposed integrated functional modelling framework. In this framework, the Multilevel Flow Modelling (MFM) method with its cause-consequence reasoning rules is used. The qualitative relationships between operations and maintenance can be established by using a distributed qualitative evaluation method based on the function states of the system. In addition, these relationships are visible to both groups, and the information detected in the early stage of the development of undesirable scenarios can be used to improve situation awareness and prevent undesired emergency shutdowns, from the perspectives of both operations and maintenance. Consequently, it can reduce production loss. A case study of the operations and maintenance of a seawater injection system is carried out and shows the industrial applicability of the proposed framework. The case study clearly reveals that there is a very close relation between operations and maintenance in ensuring that the system works properly. It demonstrates that the proposed integrated framework is able to support not only operational tasks but also maintenance tasks, by including relevant maintenance information about the system. The results show that it can potentially help decrease the downtime of the system.

A look at the influences of Hydraulic Power Generator operation on the hydraulic passages
PRESENTER: Cecilia Lazar

ABSTRACT. In order to meet demand, some Hydraulic Power Generators (HPGs) which were previously designed for operation in base load nowadays need to operate more and more at partial load or even in peaking mode [1]. The HPGs are therefore subjected to significantly increased stresses compared to their design specifications due to the change in operating conditions [2]. The consequences for the life expectancy of HPGs are as yet unknown [3]. The present study focuses on the hydraulic passages, which might become the weakest component in terms of aging since they cannot be refurbished as easily as other components of the HPG. In these new circumstances, it is important to know the impact of the new mode of operation on the HPG business plan and how to adapt maintenance plans accordingly, particularly for the hydraulic passages, in order to increase their reliability and lifetime [4, 5]. In this study, a mathematical model based on techno-economic analysis was used to assess the impact of the increased maintenance costs caused by peaking mode. The annual value of different peaking loading scenarios is evaluated. Some hypotheses and simplifications are made in order to easily compare so-called "normal" operation (operation in base load) with "peaking/cycling" operation. A panel of surveyed experts (Delphi method) was used to estimate the increase in maintenance expenses for an HPG resulting from the change to more flexible operation. The results show that the maintenance expenses have a significant influence on the cash flow. For each studied scenario, it appears that there is a critical point in terms of added MW beyond which the operation of the HPG becomes unprofitable. This implies that maintenance budgets should be distributed with diligence and that particular attention should be paid to the hydraulic passages.
Unlike generator or turbine failures, which result only in unprofitable projects, hydraulic passages are difficult to refurbish and have failure modes that can lead to major accidents with dramatic consequences.
References
1. Hydropower stations performance; LMH, Lausanne, Hyperbole FP7, (2013).
2. March P., Flexible Operation of Hydropower Plants; EPRI Journal, Palo Alto, USA (2017).
3. Adamkowski A., Evaluation of the Remaining Lifetime of Steel Penstocks; ISHP, Vienna, (2010).
4. Gagnon M., On the expected monetary value of hydroelectric turbine start-up, WCEAM, (2018).

Comprehensive Method for Improving Asset Integrity Management

ABSTRACT. Asset Integrity Management (AIM) plays a significant role in keeping complex ageing assets, such as oil and gas plants, power stations and manufacturing plants, operating safely and productively. Many assets are now in a critical stage of life, and new approaches to monitor and improve asset performance are required. We present a systematic monitoring method for improved efficiency and effectiveness of a plant's assets, through a comprehensive data analysis based on 12 factors related to 5 underlying pillars, as illustrated in Figure 1. These pillars and factors have been identified from an extensive review of the academic literature and of organizations' publications that focus on AIM programs. We integrate the pillars within one monitoring model so that asset owners can measure AIM performance through key performance indicators (KPIs) and identify pitfalls and gaps in each pillar for improvement and enhancement opportunities. The core idea is to integrate all of AIM's pillars in one method and to measure performance both as one overall indicator and per individual pillar. To measure a pillar's performance, the performance of each of its elements has to be computed first. We propose using a Multi-Attribute Value Analysis (MAVA) approach to score and weight the individual pillars and the overall performance. We propose that the method be applied regularly to ageing assets to identify weaknesses early. We expect that asset owners or operators will adopt the proposed approach in order to gain a bigger-picture view of asset performance, identify poor performers, and develop remedies to close gaps before they grow, drawing more attention to improving assets from several aspects.
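The two-level aggregation implied by the MAVA approach can be written down directly: weighted factor scores roll up into pillar scores, and weighted pillar scores into one overall indicator. The pillar and factor names below are hypothetical placeholders, not the paper's actual 5 pillars and 12 factors:

```python
def mava_score(pillars):
    """Weighted additive MAVA aggregation.
    pillars: name -> (pillar weight, {factor: (factor weight, score in [0, 1])})
    Weights are assumed to sum to 1 at each level.
    Returns the overall score and the per-pillar scores."""
    pillar_scores = {}
    for name, (_, factors) in pillars.items():
        pillar_scores[name] = sum(fw * s for fw, s in factors.values())
    overall = sum(pillars[name][0] * ps for name, ps in pillar_scores.items())
    return overall, pillar_scores

# Hypothetical two-pillar example (placeholder names and values)
pillars = {
    "people": (0.4, {"training": (0.5, 0.8), "competence": (0.5, 0.6)}),
    "process": (0.6, {"inspection": (1.0, 0.9)}),
}
overall, per_pillar = mava_score(pillars)
```

This yields exactly the structure the abstract asks for: one overall AIM KPI plus an individual score per pillar to locate the weak spots.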

Techniques for Assets’ Criticality Judgement
PRESENTER: Tomáš Kertis

ABSTRACT. Critical infrastructure (CI) is composed of systems of various natures (technical, organizational, cybernetic, social, etc.) and is important for human security and for the economy and functionality of cities and states, especially under emergency and critical conditions. The integral safety of CI is determined not only by the criticality of its assets but also by the criticality of their interdependences. Therefore, its management needs to work with integral risk, considering the interdependences among the critical assets. The subject of this research is the safety of the Praha metro. Because safety and criticality are complementary quantities, decreasing criticality leads to improved integral safety. In this article we determine the criticalities of the metro's critical assets based on sensitivity theory and graph theory, and propose transformation rules for the efficient interpretation of the assets and their analysis, including the interdependences. The main results of the work are: a methodology, i.e., the derivation of transformation rules from the sensitivity matrix (representing the vulnerabilities of the assets) to graphs for the analysis of scenarios in case of critical events (disasters); the determination of critical interdependences in the metro (the paper provides an example); and a proposal of measures for metro safety improvement (the paper provides an example). The results were transferred to the Praha Transport Company, whose experts recommended them for implementation.


ABSTRACT. Maximizing the realization of value from physical assets through asset management is a contemporary approach to support the achievement of organizational goals. Nevertheless, in the face of increasing complexity and pressure for better performance of their engineering systems, organizations are more dependent on selecting the appropriate failure management policy for failure prevention. As these decisions take into account different aspects and performance of the physical assets, decision makers should not rely on simple heuristics. Instead, these decisions should be supported by systematic approaches that incorporate data analysis. It is therefore worthwhile for organizations to investigate how modern techniques such as machine learning can be incorporated in solving maintenance management challenges. In this context, this paper proposes a method to support failure management policy selection in asset management based on exploratory cluster analysis. The proposed method comprises three stages: acquisition of physical asset performance data, cluster analysis, and definition of failure management policies. The case study consists of the application of the method to support the maintenance strategies of a Brazilian hydroelectric power plant. This plant has been undergoing several studies for asset management improvements. The results show a method by which organizations can define appropriate failure management policies for determined groups of physical assets. This is an important result for maintenance management optimization, as different maintenance tasks can be proposed for different groups of engineering systems. Accordingly, this article is expected to contribute to asset management research and to maintenance practitioners facing the challenge of defining the appropriate failure management policy to prevent failures in a portfolio of physical assets.
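The exploratory cluster analysis at the core of the method can be illustrated with a plain k-means pass over asset performance records. The feature choice (failure rate, mean downtime) and the algorithm itself are only one plausible reading of the abstract, not the authors' stated setup:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(cluster):
    """Component-wise mean of a non-empty list of tuples."""
    n = len(cluster)
    return tuple(sum(p[d] for p in cluster) / n for d in range(len(cluster[0])))

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's k-means: assign each asset record to its nearest center,
    recompute centers, repeat until stable."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[i].append(p)
        new_centers = [centroid(cl) if cl else centers[i]
                       for i, cl in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters
```

Each resulting group of assets can then be assigned its own failure management policy, e.g. condition-based maintenance for the high-downtime cluster and run-to-failure for the benign one.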

08:30-10:10 Session WE1H: Case Studies on Predictive Reliability: an Industrial Perspective
Location: Cointreau
Fail Aware for Autonomous Driving Cars – Case Study

ABSTRACT. Health status, remaining useful life (RUL) and prognostics and health management (PHM) in general have become topics of increasing interest for the automotive industry as well. These topics are essential especially with the development of autonomous driving cars. In an autonomous car there is no driver who can interact with the system and control faulty behavior. Thus, systems have to be made fail-safe, fail-operational and fail-aware. This involves the whole system structure and value chain, from the components to the car itself.

In this article, we give a practical example of how PHM is applied to a semiconductor device and how this interacts with modules and subsystems. In fact, the requirements for PHM differ at the various levels of the system. We show how to identify electrical parameters that indicate weakness and degradation of the semiconductor device. This is a challenging task that involves advanced mathematics and statistical model development. Measurement data have to be corrected for bias and measurement uncertainties; in that way, critical parameters are identified. Once these electrical parameters are selected, strategies need to be developed for making this information available to higher-level systems. Finally, this information is used at system level and enhances the functionalities of autonomous driving cars.

We give an example of an angle sensor, where we identify lifetime drift by large-scale screening of electrical parameters in accelerated stress tests. We show how this information about the drift behavior can be made available at module level. Finally, there are concepts for including this information in the on-board diagnostics at car level.
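One simple way to picture this kind of parameter screening (not the authors' actual procedure, which involves bias correction and statistical modelling) is to fit a linear trend to each electrical parameter over the stress readouts and flag parameters whose slope exceeds a limit. All parameter names, data and the limit below are invented:

```python
# Hypothetical illustration of screening electrical parameters for lifetime
# drift: least-squares slope over accelerated-stress readouts, flagging
# parameters that drift. Data and the 0.1 limit are invented.

def drift_slope(readouts):
    """Least-squares slope of readouts vs. their stress-time index 0..n-1."""
    n = len(readouts)
    xm, ym = (n - 1) / 2, sum(readouts) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(readouts))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

params = {"offset_mV": [1.0, 1.1, 1.3, 1.6, 2.0],   # drifts upward
          "gain_pct":  [0.50, 0.51, 0.50, 0.52, 0.51]}  # stable
flagged = [name for name, ys in params.items() if abs(drift_slope(ys)) > 0.1]
print(flagged)  # only the drifting offset parameter is flagged
```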


This project has received funding from the ECSEL Joint Undertaking under grant agreement No 737469. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and Germany, Austria, Spain, Italy, Latvia, Belgium, Netherlands, Sweden, Finland, Lithuania, Czech Republic, Romania, Norway.

Probabilistic Fatigue and Reliability Simulation
PRESENTER: Amaury Chabod

ABSTRACT. The fatigue design of mechanical systems has historically followed a 'deterministic' process: for a given set of inputs, it returns a consistent set of fatigue life results with no scatter. In practice, the designer applies a safety factor to each input parameter to account for the uncertainty.

In comparison, a ‘Probabilistic Fatigue Simulation’ method is ‘stochastic’ in nature. That means inputs can be expressed using an expected value along with a probability distribution. This design process helps to avoid poor in-service reliability whilst reducing over-design.

This paper addresses in detail the three stages of Probabilistic Fatigue and Reliability Simulation:

1. Uncertainty Quantification (UQ) of input parameters
2. Stochastic fatigue simulation of individual components
3. Reliability simulation of the entire system

In order to take advantage of Probabilistic Fatigue Simulation, uncertainties in the input and in the analysis model must be properly calculated. Two types of input uncertainties are considered:
1. Reducible (epistemic) uncertainties
2. Irreducible (aleatoric) uncertainties

Stochastic simulation is performed using 'Monte Carlo' simulation. Statistical sampling techniques known as 'Design of Experiments (DOE)' are also discussed for optimizing the size of the design space matrix. Two broad areas are covered:
1. Exploring the extremities of the design space
2. Exploring the statistical variability of the design space

Reliability analysis of the simulated failures is performed using a Weibull analysis. A case study is presented to demonstrate how reliability analysis is used to:

1. Optimize the design to achieve the target reliability
2. Identify potential cost savings by identifying the most influential uncertainties
3. Provide an optimized maintenance schedule

The objective of the industrial case study is to assess the existence of a location parameter acting as a "minimum resistance threshold". Such a threshold would represent the minimum in-service duration (kilometers or years) before any fatigue failure can appear. It would be very promising for warranty extension (5-8 years, 150 000 km), as below the threshold limit, zero wear-out fatigue durability failures would be expected within the warranty period.

What's more, predictions from this model would enable control of the manufacturing process, avoiding shifts in the product's "minimum resistance threshold" by accounting for any change in the uncertainty parameters (thickness, material, etc.).
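The contrast between the deterministic and stochastic approaches can be sketched in a few lines. This is a hedged illustration only: it assumes a Basquin-type S-N relation with invented parameter values, not the paper's model or data, and propagates input scatter by plain Monte Carlo sampling instead of fixed safety factors.

```python
# Minimal sketch of stochastic fatigue simulation (assumed Basquin S-N model,
# invented scatter values): sample the inputs, propagate each sample through
# the life model, and estimate the probability of failing a design life.
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

def fatigue_life(stress_amp, fatigue_strength_coeff, fatigue_exponent=-0.1):
    # Basquin relation: sigma_a = sigma_f' * (2N)^b, solved for N
    return 0.5 * (stress_amp / fatigue_strength_coeff) ** (1.0 / fatigue_exponent)

lives = []
for _ in range(10_000):
    stress = random.gauss(200.0, 15.0)   # MPa, aleatoric load scatter
    coeff = random.gauss(900.0, 50.0)    # MPa, material scatter
    lives.append(fatigue_life(stress, coeff))

target = 1e6  # design life in cycles
p_fail = sum(l < target for l in lives) / len(lives)
print(f"median life = {statistics.median(lives):.3g} cycles, "
      f"P(N < 1e6) = {p_fail:.3f}")
```

A full analysis would fit the simulated failures with a Weibull distribution (including the location parameter discussed above) rather than reading off an empirical fraction.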

Outlier detection at the lower tail of a small statistical sample originating from strength test results
PRESENTER: Marco Bonato

ABSTRACT. In the automotive sector, the stress-strength interference approach is commonly used to ensure the reliability of components during design validation. In this framework, during the development process, only a limited number of prototypes are tested to estimate their minimum resistance threshold. In order to meet ever-increasing technical and economic requirements (cost and timing), destructive tests are carried out on the smallest possible number of prototype units, which are assumed to be representative of the overall serial population. The minimum resistance threshold is estimated as a random variable resulting from experimental sampling. Its mean value and coefficient of variation are affected both by the small number of devices tested (3 < N < 10) and by the possible appearance of values considered abnormal (outliers). Indeed, premature failures may occur as the result of a not-yet-optimized manufacturing process. Under these conditions, a classical statistical treatment of the results encounters serious limitations, because any outlier within the small sample must first be detected, and the further reduction of the samples available for the analysis must then be considered. We have therefore developed a specific methodology, based on order statistics, for the treatment and elimination of these outliers. Because of the statistically small sample size, its interpolation according to a particular distribution would be affected by a significant level of uncertainty. For this reason, a bounded maximum-entropy distribution has been chosen as a prior assumption that can reasonably justify the dataset distribution. As for the statistical criteria usually applied for outlier detection (statistical tests at a chosen confidence level), they are not appropriate, because the datasets are fitted according to the normal distribution, an assumption which is not suitable in this particular context.
We have overcome this difficulty by replacing the classical approach with one based on the sequential combination of two deterministic criteria, which leads to a simpler decision-making process and reduces the zone of ambiguity. Several examples of the method are presented, both in the technical domain (i.e., rupture tests of mechanical devices) and in the biological domain (i.e., survival data of living beings).
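The paper's own two deterministic criteria are not reproduced here. As a generic illustration of an order-statistics screen on the lower tail of a small strength sample, one can compare the gap below the smallest value with the overall sample range (a Dixon-style ratio); the 0.5 threshold below is invented:

```python
# Generic illustration only, not the authors' method: flag the smallest
# strength value when its gap to the next value dominates the sample range.

def lower_tail_outlier(sample, ratio_limit=0.5):
    """Return the smallest value if it looks like a lower-tail outlier."""
    s = sorted(sample)
    if len(s) < 3 or s[-1] == s[0]:
        return None
    r = (s[1] - s[0]) / (s[-1] - s[0])   # Dixon-like gap ratio
    return s[0] if r > ratio_limit else None

print(lower_tail_outlier([310, 955, 980, 1010, 1040]))  # premature failure -> 310
print(lower_tail_outlier([955, 980, 1010, 1040, 1070])) # healthy sample -> None
```

Note this single ratio is deterministic, as the abstract advocates, but the paper combines two such criteria sequentially to shrink the zone of ambiguity.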

Strategy and analyses: how to anticipate the increase in the duration of warranty periods?

ABSTRACT. Guaranteeing the durability of a system's components is a strategic activity for companies in key industrial sectors, such as transport. The stakes are twofold: economic impact and brand image. Indeed, a lack of control over a product's reliability can lead to significant costs.

This is all the more true as, in recent years, the trend has been toward longer warranty periods. While it is not the only factor, the reliable or durable design of components contributes to these costs, and its improvement therefore contributes to their reduction. But beyond the direct economic impact, the product's brand image is also at stake, and the latter is difficult to quantify objectively.

08:30-10:10 Session WE1I: Electromagnetic Risk Management
Location: Giffard
PRESENTER: Jaber Al Rashid

ABSTRACT. The electromagnetic compatibility (EMC) of integrated circuits (ICs) should remain at the desired level to maintain the functional safety and reliability of electronic systems in complex automotive and aeronautical applications. Throughout the operational lifetime of ICs, harsh environmental conditions, including extremely high or low temperatures, humidity, shock and stress, tend to cause intrinsic physical degradation, which results in significant variations in the long-term EMC performance of the IC. Consequently, ensuring and maintaining electromagnetic robustness (EMR) and IC reliability throughout the whole lifetime is a key challenge that needs to be addressed. The purpose of this paper is to conduct a comprehensive state-of-the-art study on developing accurate immunity and emission models of ICs, focusing on the quantitative evaluation of experimental characterization based on various IC EMC measurement methods under various accelerated-life ageing tests.

Producing accurate transient EMC models helps not only to estimate the EMC immunity and emission levels of ICs but also to determine the different failure types and mechanisms due to radio-frequency disturbances applied to IC model structures. This paper presents recent research on conducted pulse immunity as well as conducted emission models for ICs based on the IEC standard immunity model, demonstrating good agreement between electrical fast transient (EFT) simulations and measurements applied to different IC pins while considering the impact of ageing.

Previous studies demonstrated the importance of ageing for the EMC performance of ICs, depending on the ageing stress parameters. A future perspective of the current study would be to propose and implement a predictive reliability model for the IC over its entire lifetime under accelerated life tests.

PRESENTER: Richard Perdriau

ABSTRACT. Sophisticated electronic technologies are increasingly used in mission- and safety-critical systems where electromagnetic interference (EMI) can result in substantial risks to people and the environment. Currently, EMI engineering follows a rule-based approach, which is unable to cope with complex modern situations. With this rule-based approach, guidelines are used during the design stage, which result in the application of a set of mitigation techniques that are verified in the finished product against standards. This rule-based approach is costly, yet offers no guarantee of the required performance. This is particularly so for sensitive medical applications or the fully autonomous systems that are becoming ever more common in our society. What we need is a risk-based approach, which is what PETER, the Pan-European Training, Research & Education Network on Electromagnetic Risk Management, will provide. PETER is training 15 young engineers in topics related to the development of high-tech systems that maintain reliability and safety over their full life-cycle, despite complex EMI, such as in hospitals or transport systems. This is achieved using best practices and state-of-the-art EM engineering, reliability engineering, functional safety, risk management and systems engineering to create the risk-based EMC approach.

Assuring Shielded Cables as EMI Mitigation in Automotive ADAS
PRESENTER: Oskari Leppäaho

ABSTRACT. Shielded cables are an important mitigation for electromagnetic interference (EMI) in high-speed data systems. In the automotive domain, one use for them is to transmit image data from a front camera to an advanced driver assistance system (ADAS) controller. Some ADAS functions have implications for human safety and thus place extra requirements on the design of the transmission path, including its resilience to EMI. This paper presents a case study of an automated lane centering (ALC) system with the above-mentioned shielded cable use case. The study starts from a National Highway Traffic Safety Administration (NHTSA) concept-level assessment. Subsystem components are then separated and a physical realization derived. Goal Structuring Notation (GSN) is used to present EMI assurance scenarios over the safety requirements. First, the ability of a shielded cable reliability argument to cover the derived safety requirements during different operating scenarios is studied. It is found that, relying on reliability alone, it is challenging to fulfil all the safety requirements. To overcome this challenge, an alternative system-safety-based method is studied.

PRESENTER: Lokesh Devaraj

ABSTRACT. Road vehicles and similar complex systems are constructed by integrating many subsystems and components that are sourced from a large number of suppliers. This process may lead to the emergence of possible system-level safety issues, some of which could be caused by external or internal electromagnetic interference (EMI) [1]. Assurance of safety by demonstrating that EMI risks in such systems are at acceptable levels is becoming increasingly challenging as system complexity rises. This is due to the costs and practical limitations of both system-level EMC testing and whole-vehicle EM simulations. Hence, there is a need for additional methods to help estimate the likelihood of EMI risks associated with such systems. This paper proposes a knowledge-based approach to assist risk management in system-level electromagnetic engineering. The purpose of using a knowledge-based approach is to be able to include uncertainties (e.g., internal and external EMI levels) and lack of information (e.g., physical location of the component) during the safety risk analyses. Probabilistic graphical models, such as Bayesian and Markov networks, are able to provide a better visualization of various features and their relationships in a single graphical structure. Moreover, using template models [2], a general-purpose representation (see Fig. 1) for various integrated components of a system can be developed for collective inference. Relevant technical and non-technical features as mentioned in [3] can also be included in the graphical models.

Evaluation of EMI Risks

ABSTRACT. Methods for risk evaluation are usually based on a comparison of identified risks, e.g. by determining risk levels. A common method of risk assessment is the weighting of risks with the product of probability of occurrence and costs of the expected damage. Depending on the scope and objective, the costs may include, in addition to the damage in the narrower sense (e.g. loss, repair costs, replacement costs), costs for its handling (recovery) or consequential costs (e.g. penalties for environmental damage).

Due to the lack of historical experience with deliberately induced electromagnetic influences, in EMI risk analysis the probability of occurrence of the considered hazards (EMI environments) can be quantified as a definite percentage only to a very limited extent, or not at all. In most cases it is therefore more practicable to express the probability of occurrence by means of probability categories. These result from subjective evaluation by experts and are less suitable for forming a pure product (of probability and damage). Another aspect that speaks against applying the usual assessment is the fact that in many cases the damage is difficult to express as a monetary value; the monetary value does not always reflect the severity of the damage.

This contribution presents three methods to evaluate risks: (1) the risk matrix or risk cube, (2) the risk priority index and (3) the risk graph. The paper starts by describing the basic principle of each method. In the following part, the modifications required by the application to EMI risks are discussed. The application of the introduced methods to selected examples demonstrates their practical applicability as well as their strengths and limitations.
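Method (1) can be sketched with qualitative categories instead of numeric probabilities, which is exactly the adaptation the abstract argues EMI risks require. The category names and level boundaries below are invented:

```python
# Minimal sketch of a risk matrix using expert-assigned qualitative categories
# (no numeric probabilities, no monetary damage). Categories and the mapping
# to risk levels are invented for illustration.

LEVELS = {("rare",   "minor"):    "low",
          ("rare",   "critical"): "medium",
          ("likely", "minor"):    "medium",
          ("likely", "critical"): "high"}

def risk_level(occurrence, severity):
    """Look up the risk level for an (occurrence, severity) category pair."""
    return LEVELS[(occurrence, severity)]

print(risk_level("rare", "critical"))    # -> medium
print(risk_level("likely", "critical"))  # -> high
```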

08:30-10:10 Session WE1J: Advancements in Resilience Engineering of Critical Infrastructures
Location: Botanique
PRESENTER: Andrejs Utans

ABSTRACT. The power system of the Baltic states is synchronously interconnected with the Unified Power System of Russia (UPS). This interconnection has always provided the power system of the Baltic states with sufficient frequency and inertia reserves, so that frequency stability issues were irrelevant. A political decision of the Baltic states has determined the de-synchronization of the Baltic power grid from the UPS and its synchronization with the European Network of Transmission System Operators (ENTSO-E) power system in 2025. This synchronous connection is to be set up and maintained via a single double-circuit synchronous interconnection on the Lithuania-Poland border. During planned or unplanned outages of this interconnection, the power system of the Baltic states is to operate in island mode. The possibility of island-mode operation will bring major challenges to frequency stability and the sufficiency of system inertia, thus raising the problem of the resilience of the entire energy system. The impact of a major generation source failure on the Baltic power system in island mode and the resulting frequency response will be used as a case study to evaluate the resilience of the Baltic energy system. A practical way of enhancing the resilience of the electrical power grid of the Baltic states will be considered and addressed in this study.
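The inertia concern can be illustrated with a back-of-the-envelope swing-equation estimate of the initial rate of change of frequency (RoCoF) after losing a large unit in an islanded grid. All system constants below are invented and do not describe the Baltic grid:

```python
# Hedged illustration: initial RoCoF after a generation loss in an island,
# from df/dt = dP * f0 / (2 * H * S). Constants are invented placeholders.

f0 = 50.0        # Hz, nominal frequency
H = 4.0          # s, aggregate inertia constant of remaining units (assumed)
S = 5000.0       # MVA, remaining synchronous capacity (assumed)
dP = 700.0       # MW, lost generation (assumed)

rocof = dP * f0 / (2 * H * S)   # Hz/s, initial rate of change of frequency
print(f"initial RoCoF = {rocof:.2f} Hz/s")
```

The lower the remaining inertia (H * S), the steeper the initial frequency decline, which is why island-mode operation stresses frequency stability.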


ABSTRACT. The uncertainties associated with renewable energies can have a significant impact on the security of power systems. To address this problem, this article studies the effect of renewable sources on the reliability and vulnerability of systems with a high share of renewable generation and compares the results obtained with those measured in electrical grids mainly composed of thermal power plants. This comparison aims to quantify the influence of renewable generation on the performance and operational behavior of infrastructure under severe contingencies. Both reliability and vulnerability are assessed in parallel in two case studies: one based on the IEEE RTS-96 test system with thermal generation and the other on the IEEE RTS-GMLC test system with a high share of renewable generation. Different reliability indices are calculated using the sequential Monte Carlo method, and a vulnerability index is measured using a cascading failure approach. The simulations show that the integrated system with renewables is less reliable and more vulnerable than its purely thermal counterpart. These conclusions highlight the importance of analyzing the operational security of infrastructure from both perspectives.
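The sequential Monte Carlo ingredient can be sketched with a toy system. This is an illustration only: the two-state unit data, load and horizon below are invented and are not the RTS-96 or RTS-GMLC systems used in the paper.

```python
# Hedged sketch of sequential Monte Carlo reliability assessment: two-state
# (up/down) generating units are simulated hour by hour and the loss-of-load
# hours are counted to estimate a LOLE-style index. All data are invented.
import random

random.seed(42)  # reproducible sketch
units = [{"cap": 100, "p_fail": 0.01, "p_repair": 0.1} for _ in range(5)]
load, hours, years = 350, 1000, 50   # MW demand, hours per sample, samples

lole_hours = 0
for _ in range(years):
    up = [True] * len(units)
    for _ in range(hours):
        for i, u in enumerate(units):
            if up[i] and random.random() < u["p_fail"]:
                up[i] = False            # unit trips this hour
            elif not up[i] and random.random() < u["p_repair"]:
                up[i] = True             # unit returns from repair
        if sum(u["cap"] for u, s in zip(units, up) if s) < load:
            lole_hours += 1              # available capacity below demand
print(f"estimated LOLE = {lole_hours / years:.1f} h per {hours}-hour period")
```

Renewable units would replace the fixed `cap` with a time-varying resource profile, which is the source of the extra unreliability the paper quantifies.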

Modeling environment dependency in partially observable Markov decision processes for maintenance optimization

ABSTRACT. Partially Observable Markov Decision Processes (POMDPs) are studied in the maintenance literature because they can take uncertainty of information into account. This uncertainty may, for instance, arise from imperfect information from a sensor placed on the equipment to be maintained. Examples of such system-sensor pairs are an engine with a temperature sensor, ball bearings with a vibration sensor or a heating, ventilation, and air conditioning (HVAC) system with a temperature sensor. Our research into environment dependent POMDPs is motivated by HVAC systems used in trains. Their functioning is crucial during hot summer months, as carriages with failed HVAC systems cannot be used during this period. Hence, from a resilience standpoint it is important that HVACs are maintained effectively to ensure mobility around the country. Failures of an HVAC system are obvious in the summer and winter when its functionality is needed to keep the temperature stable. However, failures also occur in the fall and spring, but these failures are not as obvious from the temperature read-outs as the failures in summer and winter. This setting can well be modeled as a POMDP since the temperature read-out does not give complete information on the current state of the system. We model the following three actions: an inspection with incomplete information, a perfect inspection, and a maintenance intervention. To this model, we add a Markovian environment, giving rise to a model in which environment dependent partial observations, degradation and costs are included. For this model we show that an environment dependent 4-region policy is optimal. In other words, adding the environment preserves most of the properties of the original model. This contributes to the literature, as the preservation of properties will also hold when adding an environment to other POMDP models. We further perform numerical experiments that lead to interesting insights.
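The partial-observability ingredient can be shown with a small Bayes belief update over two states, where the informativeness of the temperature read-out depends on the season, as in the HVAC example (obvious in summer, ambiguous in spring). All numbers are invented:

```python
# Sketch (invented probabilities) of an environment-dependent POMDP belief
# update over {ok, degraded}: the observation model changes with the season.

def update_belief(belief_degraded, obs, p_obs):
    """P(degraded | obs) via Bayes' rule; p_obs[state] = P(obs='warning'|state)."""
    like = p_obs if obs == "warning" else {s: 1 - p for s, p in p_obs.items()}
    num = like["degraded"] * belief_degraded
    den = num + like["ok"] * (1 - belief_degraded)
    return num / den

summer = {"ok": 0.05, "degraded": 0.90}   # failures obvious in summer
spring = {"ok": 0.40, "degraded": 0.50}   # read-out barely informative
b0 = 0.2                                   # prior belief of degradation
b_summer = update_belief(b0, "warning", summer)
b_spring = update_belief(b0, "warning", spring)
print(f"after summer warning: {b_summer:.3f}")
print(f"after spring warning: {b_spring:.3f}")
```

The same warning moves the belief much further in summer than in spring, which is why the optimal inspect/maintain policy regions depend on the environment.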


ABSTRACT. Water supply systems are considered critical infrastructure, and the effectiveness and efficiency of their post-disaster restoration vastly contribute to the resilience of society. Simulation-based optimization using the genetic algorithm (GA) and machine learning (ML) has been widely applied to optimize the restoration planning of such critical infrastructure. However, these methods face challenges of interpretation and explainability. A heuristic-based algorithm reflecting the empirical rules of experts in restoration planning is a promising alternative because the result can be explained and justified in terms of these rules. In addition, the computational cost is lower than that of the GA and ML, both of which require iterative simulations. However, it is not easy to precisely and comprehensively elicit such empirical rules. In addition, the result obtained by the algorithm does not necessarily ensure optimality and validity. To solve these problems, we designed a workshop to elicit knowledge from experts by comparing restoration plans created by experts, plans created by a heuristic-based algorithm prototype, and optimal plans obtained by a GA. In the workshop, the participants were first asked to create a restoration plan for a disaster scenario that contained the locations and specifications of damaged water pipes, as well as the geographic and demographic data of the target city. After the session, we revealed the performance evaluation of these restoration plans performed by the restoration process simulation developed in our previous study and asked the experts to discuss the results, particularly the differences in their plans. We recorded the observations expressed in the workshop and analyzed them to extract the empirical rules. This workshop was conducted thrice, and it was confirmed that the interactive approach was effective for knowledge elicitation. We also confirmed that the heuristics are dynamic and context-dependent. 
We observed that different rules were applied depending on the severity of the disaster scenario and the phase of recovery. Further, we confirmed that in many cases the GA optimization was superior to the others within the tested scenarios, but the difference was not significant. This suggests that quasi-optimality is assured by the heuristic-based algorithm.
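An elicited empirical rule typically takes the form of a greedy priority ordering, which is what makes heuristic plans explainable. The sketch below illustrates one such (invented) rule, ordering pipe repairs by population restored per repair day; the pipe data are fabricated and this is not a rule from the workshop:

```python
# Illustrative greedy restoration heuristic of the kind compared with GA
# optima: repair pipes in order of population served per unit repair time.
# All pipe data are invented.

pipes = [{"id": "P1", "pop": 5000, "days": 2},
         {"id": "P2", "pop": 1200, "days": 1},
         {"id": "P3", "pop": 9000, "days": 6},
         {"id": "P4", "pop": 300,  "days": 1}]

plan = sorted(pipes, key=lambda p: p["pop"] / p["days"], reverse=True)
print([p["id"] for p in plan])
```

Because the ordering criterion is explicit, the resulting plan can be justified rule by rule, unlike a GA or ML result, at the cost of possible sub-optimality.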

Strengthening Resilience in Critical Infrastructure Systems: A Deep Learning Approach for Smart Early Warning of Critical States
PRESENTER: Stella Möhrle

ABSTRACT. Systems of critical infrastructures are characterized by strong interdependencies, and the development of urban areas towards Smart Cities further increases the underlying complexity due to growing automation and interconnectedness. A system of highly cross-linked components is especially prone to systemic risks, making concepts of resilience accordingly important. One way to withstand times of stress, maintain security of supply, and promote adaptive and anticipative capabilities is to establish early warning capabilities. As cities are complex and rather chaotic socio-technical systems governed by randomness, the resulting parametric uncertainties challenge modeling approaches that are intended to support robust decision making. Sophisticated methods based on artificial intelligence can play an essential role here, as they perform well in highly complex environments and on large data sets. To study resilience, the urban area is split into zones, where the city's state is determined by the states of these zones and the state of a zone is characterized by the criticalities of the infrastructures accommodated there. Considering criticality as an atomic building block for urban performance assessment, this paper proposes a zone-based state forecast methodology, applying deep convolutional neural networks to learn state evolution influenced by non-linear demand dynamics. Furthermore, a case study is presented that applies agent-based simulations and underlines the relevance of deep learning approaches for Smart City early warning systems.

08:30-10:10 Session WE1K: Occupational Safety
Location: Atrium 1
A scientific approach to practical robot safety

ABSTRACT. This paper addresses a structured development method for the digitalization of safety management systems for robot/cobot safety in an Industry 4.0 setting. The method is referred to as GRIP: Guarding Robot Interaction Performance. It progresses beyond our earlier work [1] by adding digital tools for safety management for robots and cobots, with a strong emphasis on the human factors (HF) and occupational safety and health (OSH) side of the interaction. Systematically considering the human is an important part of successful Industry 4.0 applications. The development process combines three mainstream approaches in robot safety to develop an interactive tool for designing a safety management system. The first is the input-mediation-output (IMOI) model [2]. Originally developed to model teamwork, the input (I) concerns characteristics of the team members, the task and the environment that affect the cooperation. The mediators (M) are the conditions or states that emerge from the interaction as a result of the input factors and that will affect the output. The output (O) concerns the desired results of the interaction, which can directly affect the input for the next interaction. The second is the Storybuilder model, which deals with the structured recording and analysis of occupational accidents [3]. The Storybuilder model is based on the BowTie and encapsulates the causal factors of accidents as well as the factors that aggravate the consequences. It is used today for the analysis of occupational accidents by the Dutch labor inspectorate. The third is not so much a model as the legislation on machine safety: the Machine Directive (2006/42/EC). The Machine Directive collates all safety requirements for machines and for working safely with machines. These three parts form the foundations of an online safety assessment method for safe work with robots.
The paper discusses how the theories link and how the various elements fit together for practical deployment in an OSH environment.

[1] W. Steijn, J. van Oosterhout, J. Willemsen, and A. Jansen, “Modelling Human-Robot Interaction to Optimize Their Safety, Sustainability and Efficiency: Identifying Relevant Characteristics”, in Proceedings of ESREL2020-PSAM15, September 2020.
[2] M.A. Rosen, A.S. Dietz, T. Yang, C.E. Priebe, and P.J. Pronovost, “An integrative framework for sensor-based measurement of teamwork in health care”, J. Am. Med. Inform. Assoc., vol. 22, pp. 11-18, January 2015.
[3] B.J.M. Ale, L.J. Bellamy, H. Baksteen, M. Damen, L.H.J. Goossens, A.R. Hale, M. Mud, J. Oh, I.A. Papazoglou, and J.Y. Whistoni, “Accidents in the construction industry in the Netherlands: An analysis of accident reports using Storybuilder”, Reliability Engineering & System Safety, vol. 93, iss. 10, pp. 1523-1533, October 2008.


ABSTRACT. Bisphenol A (BPA) is one of the most widely used chemical compounds in the world, and products containing BPA are part of our everyday life. People can be exposed to BPA through breathing, ingestion or dermal contact. It has reproductive toxicity and is structurally similar to natural and synthetic estrogens, and it may damage fertility; it is classified as a class 1B substance by the European Commission. This paper introduces the main regulations dealing with the risk management of BPA in the EU and China, and compares the protection requirements for workers and users who may be exposed. For the protection of workers, both the EU and China have put forward requirements for risk prevention, evaluation, control, notification and health surveillance, and have stipulated an Occupational Exposure Limit for BPA. For the protection of product users, controlling the content of BPA in materials in contact with food and in thermal paper is the main requirement of EU regulations. China has also restricted the use of BPA in materials in contact with food, but the BPA exposure of people handling thermal paper has not yet received enough attention in China. At present, the regulation of BPA risk control in the EU is more advanced than in China, and the EU has more extensive restrictions and stricter control requirements. In comparison, China's control of BPA risks needs further development. Thanks to the activities of the International Joint Laboratory for Risk Management and Sustainability created by INERIS and BJAST (Beijing Academy for Science and Technology), BMILP (belonging to BJAST) has paid more attention to the risk of BPA. Learning from INERIS's experience and practices, Chinese researchers have carried out a more extensive risk assessment of BPA in China in order to promote good practices and reduce people's exposure to this reprotoxic substance.

1. The European Framework Directive on Safety and Health at Work (Directive 89/391/EEC)
2. The Law of the People's Republic of China on Prevention and Control of Occupational Diseases
3. European Chemicals Agency (ECHA) (June 2019), Use of bisphenol A and its alternatives in thermal paper in the EU – 2018 update

The emergence of netcentric principles in Dutch safety-expert networks during the Covid crisis.

ABSTRACT. This work investigates the changing role of professional safety networks in OSH during the Covid crisis. In the Netherlands, as elsewhere, occupational health and safety experts have been working around the clock to address the government's mandatory Covid measures. The professional networks that they traditionally used for intervision and exchange have performed adequately under the stress of the Covid crisis. These are traditionally distributed networks used to keep in touch with colleagues, discuss specialist topics in dedicated meetings and take courses. But during the Covid crisis, the need for a new form of collaboration emerged. It was speculated that this novel need mimics netcentric networks; this work tests that hypothesis with an initial survey. A netcentric approach differs from a distributed network in that it emphasizes a mediated form of interaction rather than merely working together [1]. This form of collaboration is well developed in military environments, where technological agents collaborate under the supervision of a set of goals. Fighting the Corona crisis follows a similar pattern, and individual safety experts are developing ad-hoc networks to that effect. This paper investigates to what extent the principles of netcentric working fit the needs of safety experts in the time of Covid. Semi-structured interviews were performed to identify how safety experts changed the way they acquire relevant information. The methods by which they acquire, process and share their knowledge show that there is a self-organized move toward a netcentric learning approach.

[1] R.S. Abrams, ‘Uncovering the network-centric organization’, thesis for University of California, Irvine, 2009.

Smart system for worker safety: scenarios and risk

ABSTRACT. Extended abstract

The introduction of IoT in the work environment has the potential to revolutionise the industrial scenario. Among others, IoT technologies have the capability to innovate the customer experience, to improve the effectiveness and accuracy of processes, to identify possible problems and/or defects in their early stages, and to enhance the efficiency and sustainability of activities. Moreover, IoT can dramatically improve the safety of workers, making it possible to assess the psycho-physical state of the worker (Bernal, 2017), the effectiveness and correct use of safety devices (Kritzler, 2015) (Lee, 2017) and the status of the environment, with the capability to provide online, in-the-field assessment in order to improve situational awareness (Gnoni, 2020), (Podgorski, 2017). In this paper, we will illustrate the main uses of such technologies in the OSH framework, providing a taxonomy in terms of purpose, typology and position of sensors (either worn or environmental), type of measurements collected and information processing. Specific attention will be paid to privacy, since the risk of potential remote control of the worker is an aspect that can negatively impact the adoption of these solutions (Faramondi, 2020). Moreover, we will carry out an analysis of the problems related to the use of smart systems at large in the safety framework. Specifically, the paper will analyse the negative effects that can be induced as a consequence of employee de-responsibilization and by the systemic fragility introduced by cyber-security aspects (Langone, 2017), (Liu, 2016). Finally, using a specific case study, we will provide some recommendations to design an effective and “safe” smart safety environment.


(Bernal, 2017) Bernal, G., Colombo, S., Al Ai Baky, M., & Casalegno, F. (2017, June). Safety++: designing IoT and wearable systems for industrial safety through a user centered design approach. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments (pp. 163-170).
(Gnoni, 2020) Gnoni, M. G., Bragatto, P. A., Milazzo, M. F., & Setola, R. (2020). Integrating IoT technologies for an “intelligent” safety management in the process industry. Procedia Manufacturing, 42, 511-515.
(Faramondi, 2020) Faramondi, L., Bragatto, P., Fioravanti, C., Gnoni, M. G., Guarino, S., & Setola, R. (2020). A Privacy-Oriented Solution for the Improvement of Workers Safety. In 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO) (pp. 1789-1794). IEEE.
(Kritzler, 2015) Kritzler, M., Bäckman, M., Tenfält, A., & Michahelles, F. (2015, November). Wearable technology as a solution for workplace safety. In Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia (pp. 213-217).
(Langone, 2017) Langone, M., Setola, R., & Lopez, J. (2017, July). Cybersecurity of wearable devices: an experimental analysis and a vulnerability assessment method. In 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC) (Vol. 2, pp. 304-309). IEEE.
(Lee, 2017) Lee, W., Lin, K. Y., Seto, E., & Migliaccio, G. C. (2017). Wearable sensors for monitoring on-duty and off-duty worker physiological status and activities in construction. Automation in Construction, 83, 341-353.
(Liu, 2016) Liu, J., & Sun, W. (2016). Smart attacks against intelligent wearables in people-centric internet of things. IEEE Communications Magazine, 54(12), 44-49.
(Podgorski, 2017) Podgorski, D., Majchrzycka, K., Dąbrowska, A., Gralewicz, G., & Okrasa, M. (2017). Towards a conceptual framework of OSH risk management in smart working environments based on smart PPE, ambient intelligence and the Internet of Things technologies. International Journal of Occupational Safety and Ergonomics, 23(1), 1-20.

PRESENTER: Marco Pirozzi

ABSTRACT. Proportional flow control valves control flow, and they are sometimes pressure compensated and temperature compensated. Proportional valves are particularly suitable for applications where it is necessary to vary the output flow, both during the same process and when moving from one process to another. The control of these valves can be obtained with multiple coils (solenoids), which allows a more accurate regulation of flow. Another advantage is that they allow various speeds to be achieved by changing the electrical signal, without any additional hydraulic components. Proportional controls, used with their respective electronic controls, add a variety of machine cycles, operated at higher speeds, in conjunction with controlled starts and stops. The regulation of acceleration and deceleration improves cycle times and the production speeds of the machine, and a stable flow is also obtained. These features make proportional valves particularly suitable also for the design of Industry 4.0 systems. Therefore, Inail and the University of Perugia have begun a research activity aimed at studying the main safety and energy efficiency aspects that must be considered in the utilization of proportional valves within hydraulic systems. In this paper, relevant applications of these components, first for lifting and operating machines and then for the machine tool sector, are investigated. The main failure modes and safety issues are presented. The complete risk assessment necessary for possible application in the field of machinery will be carried out in future research activities.

10:10-10:25 Coffee Break
10:25-11:45 Session WE2A: Risk Management
Location: Auditorium

ABSTRACT. The expansion, maintenance, and optimization of pipeline systems that distribute natural gas are all essential to meet commercialization demands for various uses of natural gas. These are therefore some of the key factors for this sector that must be attended to in order to stimulate growth in investments and to enhance current activities. However, understanding how risk is managed in the decision context must be a focus of particular attention. Thus, recent studies involve multidimensional risk analysis, an approach that deals with physical and operational aspects of natural gas pipeline systems that can lead to accidents with potential for human, financial and environmental losses. In this context, a consistent decision-making process is needed to manage risks in this complex system, and the decision maker's perception of risk, captured by applying decision models, may contribute to prioritizing maintenance actions and improving resource allocation. Therefore, this paper seeks to contribute to decision-making processes by analyzing risk in natural gas pipelines, for which multidimensional risk visualization tools are explored. To this end, this paper uses a multicriteria model and applies it to a case study from the literature to assess risk and evaluate uncertainty aspects. In addition, potential contributions of this approach using a graphical visualization analysis are suggested. Finally, it is shown that categorizing risk in natural gas pipeline sections in different hazard scenarios by using graphics and visual information is the main innovative feature this paper introduces, aiding the decision-maker in reaching a more assertive recommendation.


ABSTRACT. Risk monitoring is a fundamental part of risk management that allows detecting changes that might affect the risk consequences and their likelihood. To be effective, the process requires storing the risk information in a so-called risk register. This paper presents an implementation realized in the form of a relational database. We present the complete table schema of the database and discuss what motivated the different features. We believe that certain aspects of our schema improve its application in comparison to related works. A noteworthy characteristic is the separation of risk scenarios and risk analyses into different tables. This feature relates to the fact that the same scenario can be assessed within multiple analyses, in which individual circumstances may result in different risk levels. Moreover, different analyses can apply distinct risk criteria. Thus, the analysis-specific criterion must be stored in the database as a risk matrix. A risk scenario includes an entity or item that is the subject being considered in that scenario. This subject can be a system or organization, or a subpart of a system or organization. For this reason, the schema allows a hierarchical categorization of entities. The developed schema also employs ideas from the object-oriented programming approach, which allows entities to inherit already defined risk scenarios. This paper further presents a browser-based user interface to access the database.
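The scenario/analysis separation described above can be sketched with a small relational schema. This is a minimal illustration only; all table and column names are assumptions of this sketch, not the schema presented in the paper:

```python
import sqlite3

# Sketch of the idea: scenarios and analyses live in separate tables,
# so one scenario can be assessed in several analyses, each with its
# own risk matrix (criteria). Names here are illustrative assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE entity (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    parent_id INTEGER REFERENCES entity(id)   -- hierarchical categorization
);
CREATE TABLE risk_scenario (
    id INTEGER PRIMARY KEY,
    entity_id INTEGER NOT NULL REFERENCES entity(id),
    description TEXT NOT NULL
);
CREATE TABLE risk_analysis (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    risk_matrix TEXT NOT NULL                 -- analysis-specific criteria
);
CREATE TABLE risk_assessment (
    scenario_id INTEGER NOT NULL REFERENCES risk_scenario(id),
    analysis_id INTEGER NOT NULL REFERENCES risk_analysis(id),
    likelihood INTEGER NOT NULL,
    severity INTEGER NOT NULL,
    PRIMARY KEY (scenario_id, analysis_id)
);
""")
# The same scenario assessed in two analyses with different results:
con.execute("INSERT INTO entity VALUES (1, 'Plant', NULL)")
con.execute("INSERT INTO risk_scenario VALUES (1, 1, 'Pump failure')")
con.execute("INSERT INTO risk_analysis VALUES (1, '2020 review', '3x3')")
con.execute("INSERT INTO risk_analysis VALUES (2, '2021 review', '5x5')")
con.execute("INSERT INTO risk_assessment VALUES (1, 1, 2, 3)")
con.execute("INSERT INTO risk_assessment VALUES (1, 2, 1, 4)")
rows = con.execute(
    "SELECT a.name, r.likelihood, r.severity "
    "FROM risk_assessment r JOIN risk_analysis a ON a.id = r.analysis_id "
    "WHERE r.scenario_id = 1 ORDER BY a.id").fetchall()
```

The composite primary key on `risk_assessment` is what lets one scenario carry several analysis-specific risk levels without duplicating the scenario row.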


ABSTRACT. Nowadays, medical devices are characterized by significant use of software technologies and artificial intelligence, in order to control processes in real time, to optimize treatment plans and to make accurate diagnoses by merging large quantities of data. Medical devices also provide increasingly sophisticated user interfaces, through which medical doctors can supervise the patient and operate remotely. This technological progress has started a new era of modern medical science and, at the same time, has demanded a review of the requirements that make it possible to operate and use a medical device in a safe and effective way. The results of this review are reflected in the Medical Device Regulation MDR 2017/745 as well as in the adaptation of several applicable standards. It has also affected the state of the art for the risk management of medical devices, in line with ISO 14971. The increased complexity of medical devices, especially because of software, inevitably turns into a higher number of failure scenarios. Moreover, the possibility to operate remotely has introduced a new contribution to the overall risk, associated with cyber-security threats. In the near future, cyber-security threats will represent one of the main risk contributions, possibly higher than the contribution of human errors. The analysis and mitigation of such risks entail specific competences, acquired from other disciplines. It also requires taking cyber-security into account within the scope of the “risk informed design” of a medical device. The latter, which is built upon the model “avoid, prevent, protect”, shall be complemented with an equivalent approach to the modeling of security threats and the security-related risk control measures.

This paper aims at addressing the challenge of cyber-security for a medical device from the point of view of risk management and the impact on the risk informed design principles. The concepts will be exemplified by referring to the MedAustron particle therapy facility.

PRESENTER: Scarlett Tannous

ABSTRACT. In the aftermath of the 26 September 2019 accident at the Lubrizol chemical plant in Rouen (France), the authorities have been investigating and reconsidering the efficiency of some policy assumptions and safety management measures. Among their findings, they (i) highlighted the need to reinforce risk prevention measures by improving the control of Seveso sites and the understanding of substances stocked at these sites; and (ii) fostered improvements by establishing new risk-related regulations to provide adequate protection of public health and safety at these sites. Accidents have always been opportunities to learn and improve current practices and regulations in risk and safety. For example, after the AZF accident on 21 September 2001, the ensuing law of 30 July 2003 elaborated a set of tools to prevent risks, manage crises, communicate to the public, involve subcontractors and employees, and plan land-use aspects. Nevertheless, the occurrence of these accidents raises the question of how effective these risk-related policies and their evolutions are. Hence, this has attracted further research to support organizations, authorities, and communities in developing efficient policies that can mitigate the challenges induced by major risks. The goal of this paper is to conduct a policy gap analysis taking the Lubrizol accident as a case study, since consecutive accidents occurred there in 2013 and 2019. By analyzing the policy process, the study addresses the involved stakeholders, the gaps encountered between the current situation and the desired outputs, the policies’ pitfalls, the means used to solve the problem, and the impacts of the taken measures and decisions on different stakeholders. It aims to scrutinize the policies’ durability by assessing their effectiveness, examining their unintended effects, and revealing their impacts on various stakeholders.
Consequently, the policies are explained from a practical point of view and their evolution is described using a diachronic analytical perspective. The evaluative outcomes will serve policymakers, researchers, and risk managers in the decision-making process and the reduction of the theory-practice gap. Therefore, this diachronic analysis can enrich the risk prevention policies and the methods used for establishing risk-related regulations for high-risk installations in the industrial and petrochemical field.

10:25-11:45 Session WE2B: Mathematical Methods in Reliability and Safety
Location: Atrium 2
PRESENTER: Pavel Krcal

ABSTRACT. Boolean combinations of basic failures leading to an undesired consequence, as specified by fault trees, allow for a very efficient analysis even for large industrial safety studies, such as nuclear Probabilistic Safety Assessment (PSA). This efficiency comes at the price of approximating possibly complex dependencies, failure behaviors and accident mitigation strategies by Boolean structures with basic events. Often, these approximations are acceptable and give valuable insights about system risks, making fault trees (and event trees) an industrial standard. Two aspects of typical safety systems might lead to imprecise results in fault tree models, especially when considering prolonged accident durations: cold spares and repairs. Cold spares are replaced by hot spares, as if all safety systems were started at the instant of an accident initiating event. Repairs can be modeled by new basic events representing failures to repair a component. Including cold spares and the possibility of repairing failed components in exponentially distributed repair times implies treating such models as Continuous Time Markov Chains (CTMCs). General analysis methods for CTMCs hit the computational limits of current computers even for medium-sized models with ca. 300 basic events. If we want to consider cold spares and repairs, then we need to look for approximate analysis algorithms that work well for typical safety models. One such algorithm restricts the number of repairs of each component to a fixed number. As a result, one can work with acyclic CTMCs, which are easier to analyze. In this paper, we adopt this restriction on the number of consecutive repairs within one accident. We study the possibilities it opens for a more detailed modeling of repairs.
We discuss repair aspects such as partial repairs (into a less reliable state), requirements on operators or other equipment to successfully bring repaired components into operation, and non-exponential distributions of repair times. Modeling limited repair capacities with a given repair strategy poses another challenge. We investigate to what extent it is possible to include repair strategies in this scalable analysis.
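The effect of bounding the number of repairs can be illustrated on a minimal example. This is a sketch, not the paper's algorithm: one component with an assumed failure rate and repair rate, allowed at most one repair during the accident, so the chain U0 → D0 → U1 → D1 is acyclic. The Kolmogorov forward equations are integrated with a simple Euler scheme; all rates, the single-repair bound and the step size are illustrative assumptions:

```python
import math

lam, mu = 1e-3, 0.1            # assumed failure rate, repair rate (per hour)
T, dt = 24.0, 0.01             # accident duration (h), Euler time step
p = [1.0, 0.0, 0.0, 0.0]       # P[U0], P[D0], P[U1], P[D1 (absorbing)]
for _ in range(int(T / dt)):
    # Acyclic transitions: U0 -(lam)-> D0 -(mu)-> U1 -(lam)-> D1
    dU0 = -lam * p[0]
    dD0 = lam * p[0] - mu * p[1]
    dU1 = mu * p[1] - lam * p[2]
    dD1 = lam * p[2]
    p = [p[0] + dt * dU0, p[1] + dt * dD0,
         p[2] + dt * dU1, p[3] + dt * dD1]

unavailability = p[1] + p[3]                 # probability of being down at T
no_repair = 1 - math.exp(-lam * T)           # same component with no repair
```

Even this tiny chain shows the point: allowing one repair lowers the unavailability at time T below the no-repair value, while the absorbing state D1 keeps the model acyclic and cheap to solve.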


ABSTRACT. Quantum physics applications in secure communications and high-performance computing are currently receiving a lot of attention worldwide. Huge investments are being made in the field of quantum computing, with major players such as Google, IBM, Intel, Microsoft, etc. competing for commercial solutions. Quantum supremacy [Arute2019] or advantage [Centrone2021] have made the headlines, and are hotly debated.

While the operational number of qubits is not yet sufficient to implement a general-purpose quantum computer, it has been suggested by Preskill [Preskill2018] that systems with a few tens of qubits, called Noisy Intermediate Scale Quantum (NISQ) computers, could be very useful for specific problems and quantum algorithms. They are termed noisy because the qubits are very sensitive to their environment, so that the error rate of qubit operations is still significant. A relatively simple meshed architecture is also required in order to be able to address all qubits properly.

In this context, Tannu and Qureshi [Tannu2019] first assessed the reliability of NISQ computers, defining two figures of merit. Because of the significant variations in the error rates of qubits and links, they proposed a variation-aware qubit-movement policy in order to improve the overall system reliability. Their approach is based on a shortest-path routing algorithm to transfer information from one qubit to another.

We propose here to apply the formalism of two-terminal, and more generally k-terminal, network reliability to the calculation of the probability of qubit association, so that the result is no longer path-dependent. This calculation can rely on analytical results for already solved generic network configurations, to which the IBM Q architectures have the good taste to belong. We shall apply our methodology to a few such cases. The variability of the error rates can also be included very simply in our approach, in which node and link availabilities may be defined individually. Finally, we shall provide directions for the inclusion of correlations in the error rates.
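As a rough illustration of the two-terminal formalism (not the IBM Q topology or the authors' analytical results), the exact source-to-terminal reliability of a small bridge network can be computed by enumerating link states. The graph and per-link availabilities below are assumptions of this sketch:

```python
from itertools import product

# A 4-node "bridge" network: s and t are the two terminals.
# Per-link availabilities are illustrative assumptions.
edges = {("s", "a"): 0.9, ("s", "b"): 0.9, ("a", "b"): 0.8,
         ("a", "t"): 0.9, ("b", "t"): 0.9}

def connected(up_edges, src, dst):
    # Graph search over the surviving links only.
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        for (u, v) in up_edges:
            for nxt in ((v,) if node == u else (u,) if node == v else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return dst in seen

# Sum the probability of every link-state in which s and t are connected.
rel = 0.0
items = list(edges.items())
for states in product([True, False], repeat=len(items)):
    prob, up = 1.0, []
    for ((edge, p), alive) in zip(items, states):
        prob *= p if alive else 1 - p
        if alive:
            up.append(edge)
    if connected(up, "s", "t"):
        rel += prob
```

Enumeration is exponential in the number of links, which is exactly why the closed-form results for solved generic configurations mentioned in the abstract matter; for this bridge, conditioning on the middle link reproduces the same value analytically.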

City bus reliability assessment based on state space models
PRESENTER: David Valis

ABSTRACT. The reliability and safety of city buses are crucial aspects of their frequent operation. Reliable and safe operation has, among other benefits, one important aspect: it does not increase ownership and entrepreneurial costs.

We were part of an assessment team tasked with observing bus operation in a medium-sized town. We collected records of bus operation, maintenance, failure occurrences, etc. The bus failures were identified down to the subsystem level, so we know which subsystem in a bus had failed. The project lasted for more than six years. Thanks to this we possess a significant statistical data set which is very rich in information. As we have records of each event occurrence in terms of date, mileage, fuel consumption, affected subsystem, etc., we have a vector of more than 20 variables, some dependent and some independent.

In this paper we present the data elaboration. Our effort is aimed at initial data mining, plotting the courses of basic characteristics and finding correlations amongst the respective variables. The data form interesting courses (see Fig. 1) which we would like to develop further. Time-series state-space models based on a backpropagation Kalman recursion are a suitable modeling tool and are applied to the data. We present essential reliability measures, their course and development, with the potential for prediction.
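The filtering idea behind such state-space models can be sketched in a few lines: a local-level (random-walk) model run through the standard Kalman recursions. The observations below are synthetic stand-ins for the study's operational variables (mileage, consumption, failure counts), and the noise variances q and r are assumptions of this sketch:

```python
# Local-level state-space model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t.
q, r = 0.1, 1.0                      # assumed process / observation variances
obs = [1.2, 0.9, 1.4, 1.8, 2.1, 1.9, 2.5, 2.4]   # synthetic series
x, P = obs[0], 1.0                   # initial state estimate and variance
filtered = []
for y in obs:
    P = P + q                        # predict: random-walk state
    K = P / (P + r)                  # Kalman gain
    x = x + K * (y - x)              # update with the new observation
    P = (1 - K) * P
    filtered.append(x)
```

Each filtered value is a convex combination of the previous estimate and the new observation, so the recursion smooths the raw series while tracking its trend, which is the behavior one wants when extrapolating reliability measures forward.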

10:25-11:45 Session WE2C: Bayesian network for reliability modeling and maintenance optimisation
Development of a Bayesian Updating Model for O&M Planning of Offshore Wind Structures

ABSTRACT. Offshore wind structures are generally more complex than their onshore counterparts due to requirements to overcome challenges associated with severe marine environments and impact under substantial wave loading. There are difficulties associated with the accurate analysis of fatigue damage in offshore wind structures as a result of uncertainty within the crack growth models. To overcome such difficulties, this study proposes a Bayesian updating model for predicting fatigue crack growth based on hourly observations from remote condition monitoring sensors. The Bayesian updating is applied to the Paris law, which is commonly used to represent the growth of fatigue-induced cracks in structures. The research method employed in this study also involves the use of influence diagrams to show the effects of condition monitoring decisions on fatigue crack growth. The model is applied to predict fatigue crack growth in an offshore wind turbine monopile over a 24-hour period. The results demonstrate a theoretical proof-of-concept of how ‘real-time’ condition monitoring technologies can be utilised to predict different damage modes in offshore wind turbine structures.
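The updating mechanics can be sketched as follows: the Paris-law coefficient C is learned from hourly crack-size observations via a discrete-grid Bayes update, with the exponent m held fixed and Gaussian measurement noise assumed. Every number below is an illustrative assumption of this sketch, not monopile data or the authors' model:

```python
import math, random

random.seed(1)
m, dsig, a0 = 3.0, 100.0, 0.005      # assumed exponent, stress range, crack size
C_true, noise, cycles = 5e-12, 1e-5, 10_000

def grow(a, C):
    # One hourly block of load cycles under Paris law da/dN = C * dK^m,
    # with dK = dsig * sqrt(pi * a) (unit geometry factor assumed).
    dK = dsig * math.sqrt(math.pi * a)
    return a + C * dK ** m * cycles

# Simulate 24 hourly sensor readings of the crack size.
a, obs = a0, []
for _ in range(24):
    a = grow(a, C_true)
    obs.append(a + random.gauss(0.0, noise))

# Discrete prior over candidate C values; multiply in each likelihood.
grid = [2.5e-12, 5e-12, 1e-11, 2e-11]
post = [1.0 / len(grid)] * len(grid)     # uniform prior over C
for i, C in enumerate(grid):
    a_pred, like = a0, 1.0
    for y in obs:
        a_pred = grow(a_pred, C)
        like *= math.exp(-0.5 * ((y - a_pred) / noise) ** 2)
    post[i] *= like
total = sum(post)
post = [p / total for p in post]
best_C = grid[max(range(len(grid)), key=lambda i: post[i])]
```

Because the prediction error of a wrong C accumulates hour after hour, the posterior concentrates quickly on the value that generated the data, which is the essence of using hourly monitoring to shrink crack-growth-model uncertainty.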

Unsupervised condition monitoring with Bayesian networks: an application on high-speed machining

ABSTRACT. The objective of Smart Manufacturing is to improve productivity and competitiveness in industry, based on in-process data. Indeed, failures can stop production for a couple of days and generate costs of non-quality. Failures in industry can damage either the machine or the product being produced. In both cases, the earlier the failure is detected, the lower the impact on production. Thus, monitoring both the process and the machine condition is of interest, due to their potential interactions. Besides, the diagnosis of the nature of the incident is also important, in order to react adequately as fast as possible. It requires reliable, explainable and understandable models such as Bayesian networks for performing tasks like condition prediction. Bayesian networks can be learned with incomplete data and in a supervised or unsupervised way, which is very useful because the collection of labelled data is costly and sometimes impossible, especially in industry where problems are, moreover, very rare. In this paper, we propose a generic architecture based on two Bayesian networks and a collaborative learning strategy that improves the condition monitoring of rotating machines in an unsupervised context by using information gathered from process monitoring.

Quantitative System Risk Assessment from Incomplete Data
PRESENTER: Simon Wilson

ABSTRACT. In this work we focus on the use of belief networks as a generalization of a fault tree analysis, where the main interest is in learning about the probability of the top event, and where a fault tree has been constructed that relates it to the occurrence of one or more primary or intermediate causal events. Furthermore, we focus on a common situation where: there is substantial expert opinion, but constraints on the availability of the experts, or their experience of an elicitation process, mean that the elicitation process must be kept simple; there are limited data from past instances of the risky event, meaning that this information must be used to the full, but also that there is uncertainty in the risk assessment that must be properly quantified. In other words, typically some of the events in the fault tree are not observed, and which events are observed may change from one observation to the next.

Additionally, the method that we propose can be extended to situations where different instances of the event occurred under different circumstances that may affect the risk probability, but where it may be difficult to quantify these circumstances, and in any case the data are insufficient to fit a statistical model such as the proportional hazards model.

The motivating example for this work comes from an application in the space industry. To reduce the creation of space debris, operators are increasingly resorting to a controlled re-entry of satellites and spacecraft once they are no longer needed, with the objective that they will largely burn up in the atmosphere. The re-entry trajectory is designed so that any components that do reach the surface will land in areas of remote ocean, such as the South Pacific. A recent example, and the motivation for this work, is the European Space Agency's Automated Transfer Vehicle (ATV), built to supply the International Space Station. Other examples of situations where the approach of this paper may be relevant are nuclear power, maritime safety or counter-terrorism.

A panel of experts approach is used to elicit prior distributions on primary event probabilities, which are then updated from data with the usual belief network methodology. We illustrate the approach with the space debris example, and discuss the use of the method in deciding whether it is worth collecting more detailed data from the fault tree.
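The elicit-then-update step for a single primary event can be sketched with a conjugate Beta-Binomial model. The prior parameters and the data below are illustrative assumptions of this sketch, not ATV figures:

```python
# Expert-elicited Beta prior on a primary event probability, updated
# with observed outcomes via conjugacy. All numbers are assumptions.
alpha, beta = 2.0, 38.0        # prior mean 0.05, deliberately weak
failures, trials = 1, 30       # past instances with the event recorded

alpha_post = alpha + failures
beta_post = beta + (trials - failures)
prior_mean = alpha / (alpha + beta)
post_mean = alpha_post / (alpha_post + beta_post)
```

Keeping the prior weak (small alpha + beta) is one way to honor the constraint that the elicitation must stay simple: the limited data then move the posterior appreciably, while the remaining spread of the Beta posterior quantifies the uncertainty that the abstract insists must be carried through the belief network.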

10:25-11:45 Session WE2D: Prognostics and System Health Management
Location: Panoramique
New mixture distribution model for the description of different failure mechanisms caused by different stresses
PRESENTER: Stefan Bracke

ABSTRACT. Failure symptoms of technically complex products are often caused by mixed failure root causes. In reliability engineering, the use of a single Weibull distribution (parameters threshold t0, shape b, location T) is the state of the art for describing a single failure mode. In the case of a mixed failure root cause, a mixture of different Weibull distributions has to be considered. The following failure models are state of the art, cf. (Meyer 2003) and (VDA 2016): the compete failure model, the mixed population failure model, the partial population failure model, and the general failure model. Only the first two models are fundamentally different: the third model is a special case of the second, and the fourth is a mixture of the others. The application of the “compete failure model” is useful for a system of components with different failure behaviors, resulting in a mixture of distributions. Consider, e.g., a simple system of two components with two different shape parameter values (b1 < 1; b2 > 1): the representation of the resulting distribution in a Weibull probability net will then principally be a curve which begins with a lower gradient and continues in a slight curvature towards the higher gradient at the end. This failure model has also often been used in cases of single components with different damage appearances. But in many cases, resulting curves of the mixture distributions have been observed which differ from the one just described by a clearly sharper bend between two straight lines (VDA 2016). Thus the “compete failure model” does not explain this phenomenon. In some of these cases an explanation by the “mixed population model” could be possible, if there are reasons to assume such an inhomogeneous population, which could have been caused, e.g., by a mixture of charges with different quality levels from different suppliers or manufacturing lines.
In contrast to the “compete failure model”, this model in particular is suitable to describe a behavior represented by a curve bending down. This paper shows the development of a new failure model with respect to a mixed failure root cause. The model especially considers a distribution of different stress levels affecting the components in the field. It is based on the “inverse power law” (Nelson 1990) and the “damage accumulation hypothesis” (VDA 2016). The fundamentals of the model, the estimation of the model parameters and a first approach for the prognosis application based on laboratory tests are shown.
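The difference between the first two models can be made concrete with two two-parameter Weibull modes (b1 < 1, b2 > 1): in the compete model the modes act like a series system, so the survival functions multiply, while in the mixed population model the survival functions are weighted by the subpopulation shares. All parameter values below are illustrative assumptions of this sketch:

```python
import math

def weib_surv(t, T, b):
    # Two-parameter Weibull survival: characteristic life T, shape b.
    return math.exp(-((t / T) ** b))

T1, b1 = 500.0, 0.7      # early-failure mode (b1 < 1), assumed values
T2, b2 = 300.0, 2.5      # wear-out mode (b2 > 1), assumed values
w = 0.3                  # assumed share of the weak subpopulation

def compete_surv(t):
    # Compete model: both modes act on every unit, survivals multiply.
    return weib_surv(t, T1, b1) * weib_surv(t, T2, b2)

def mixed_surv(t):
    # Mixed population: each unit belongs to exactly one subpopulation.
    return w * weib_surv(t, T1, b1) + (1 - w) * weib_surv(t, T2, b2)

t = 200.0
F_compete = 1 - compete_surv(t)
F_mixed = 1 - mixed_surv(t)
```

Since a product of survival probabilities is never larger than their weighted average, the compete model always fails earlier at a given time; plotting both over a time range in a Weibull probability net reproduces the gentle-curvature versus sharp-bend shapes the abstract describes.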

Prediction of remaining useful life via self-attention mechanism-based convolutional long short-term memory network
PRESENTER: Jiusi Zhang

ABSTRACT. With the increasing complexity of various large-scale equipment, prognostics and health management (PHM) technology has attracted more and more attention. Prediction of remaining useful life (RUL) is of vital significance in PHM. Deep learning approaches can achieve great performance in predicting RUL. However, conventional deep neural networks, such as the convolutional neural network (CNN), the recurrent neural network (RNN), and the long short-term memory network (LSTM), do not consider the impact of the various features on RUL at different times. To address this limitation, we propose a novel self-attention mechanism-based convolutional long short-term memory network (AM-CNN-LSTM) for RUL prediction, whose framework is shown in Fig. 1. The main contributions of the proposed approach lie in extracting the spatial and temporal feature information of historical data with the aid of the CNN and the LSTM; meanwhile, the self-attention mechanism is able to learn the significance of the different features and times, and assigns larger weights to more significant ones. Feature selection is employed to eliminate features that are not useful for RUL prediction. In addition, the data are smoothed and combined into a time window to construct the relationship between input and output. The AM-CNN-LSTM is trained on the time-window data. Finally, the trained neural network is used for online RUL prediction. The aircraft turbofan engine dataset provided by NASA is applied to verify the effectiveness of the proposed RUL prediction approach.
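The weighting idea alone can be sketched as scaled dot-product self-attention over a window of feature vectors, so that more informative time steps receive larger weights. This is a toy stand-in for the AM-CNN-LSTM's attention layer, not its implementation: there are no learned projection matrices, and the window values and dimensionality are assumptions:

```python
import math

window = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]   # 3 time steps, 2 features
d = len(window[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Inputs serve directly as queries, keys and values (no learned W's).
scores = [[dot(q, k) / math.sqrt(d) for k in window] for q in window]

# Row-wise softmax turns scores into attention weights that sum to 1.
weights = []
for row in scores:
    mx = max(row)
    e = [math.exp(s - mx) for s in row]
    z = sum(e)
    weights.append([v / z for v in e])

# Context vectors: weighted sums of the value vectors per time step.
context = [[sum(wt * window[j][f] for j, wt in enumerate(row))
            for f in range(d)]
           for row in weights]
```

Here the high-magnitude second time step attends mostly to itself, illustrating how the mechanism lets salient times dominate the representation fed to the downstream RUL regressor.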

PRESENTER: Bahareh Tajiani

ABSTRACT. Remaining useful life (RUL) is an important requirement for condition-based maintenance, especially for critical components whose failures can cause a long unplanned shutdown. Roller bearings are considered critical components in rotating machinery, given that almost 30% of the abnormalities of rotating machinery are induced by bearing failures (Ge et al. 2020). Since the vibration signals in bearings have both nonlinear and nonstationary characteristics, neither time-domain nor frequency-domain approaches can provide reliable and accurate RUL prediction results. Thus, most researchers focus on the wavelet transform (WT), the short-time Fourier transform (STFT) and the Hilbert-Huang transform (HHT) as time-frequency techniques for bearing fault detection. However, the combination of these methods as input for health indicator (HI) construction, together with RUL estimation approaches for failure prediction, has not been studied thoroughly before (Caesarendra and Tjahjowidodo 2017). Furthermore, most research is based on online datasets such as Pronostia, in which the bearings are only degraded by loading.

To fill this gap, this paper presents a framework using the empirical wavelet transform (EWT) for fault detection and HI construction, combined with a Bayesian inference approach considering a random failure threshold for failure prediction of bearings. The datasets in this paper were collected by performing several accelerated life tests at the RAMS laboratory at NTNU. EWT, an adaptive approach proposed by Gilles (2013) to enhance the performance of the conventional WT, is employed to decompose the signals into sub-bands, and various features are extracted from them. The selection criteria for the sensitive sub-band are based on the correlation coefficient and the consistency of the features' degradation paths in the bearing datasets. A Wiener process with a Bayesian approach is applied to the degradation stage of the bearings' lifetime to predict RUL efficiently, and the framework is validated using the testing dataset. The paper's outcome can be used as input for constructing an efficient maintenance optimization model to facilitate decision-making in the operation of rotating machinery.
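The prediction step can be sketched by Monte Carlo: a Wiener degradation process X(t) = x + eta·t + sigma·B(t) is simulated until it crosses a Gaussian-distributed random failure threshold, and the crossing times estimate the RUL distribution. The drift, diffusion and threshold parameters below are illustrative assumptions, not the lab-test estimates:

```python
import random, statistics

random.seed(42)
eta, sigma = 0.05, 0.02        # assumed drift and diffusion (per hour)
x_now = 2.0                    # current health-indicator level
D_mean, D_sd = 5.0, 0.3        # random failure threshold ~ N(D_mean, D_sd)
dt, horizon = 1.0, 10_000      # simulation step and safety cut-off

def one_rul():
    # Draw a threshold, then walk the Wiener process until first passage.
    D = random.gauss(D_mean, D_sd)
    x, t = x_now, 0.0
    while x < D and t < horizon:
        x += eta * dt + sigma * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t

ruls = [one_rul() for _ in range(2000)]
rul_mean = statistics.mean(ruls)
```

With a fixed threshold the first-passage time of a Wiener process is inverse Gaussian with mean (D - x)/eta; randomizing the threshold, as in the framework, widens that distribution, and the Monte Carlo sample captures the extra spread without extra analysis.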

A comprehensive parameter study for the neural-network-based monitoring of grinded surfaces
PRESENTER: Marcin Hinz

ABSTRACT. The optical perception of high-precision, fine grinded surfaces is an important quality feature of these products. The manufacturing process is rather complex and depends on a variety of process parameters (e.g. feed rate, cutting speed) which have a direct impact on the surface topography. Therefore, the durable quality of a product can be improved by an optimized configuration of the process parameters. By varying some process parameters of the high-precision fine grinding process, a variety of cutlery samples with different surface topographies are manufactured. The surface topographies and colorings of the grinded surfaces are measured using classical methods (roughness measuring device, gloss measuring device, spectrophotometer).

To improve on the conventional methods of condition monitoring, a new image processing approach is needed to obtain a faster and more cost-effective analysis of the produced surfaces. For this reason, different optical techniques based on image analysis have been developed over the past years. Fine grinded surface images were generated under constant boundary conditions in a test rig built in a lab. The gathered image material, in combination with the classically measured surface topography values, is used as the training data for the machine learning analyses.

Within this study, the image of each grinded surface is analyzed with respect to its measured arithmetic average roughness value (Ra) using feedforward neural networks (NNs). NNs are a type of machine learning algorithm that can be applied to any kind of analysis based on extracted features. For the determination of an appropriate model, a comprehensive parameter study is performed. This study presents an approach for optimizing the algorithm's results and identifying a reliable and reproducible NN model that performs well independently of the choice of the randomly sampled training data.
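A minimal sketch of such a parameter study, assuming a one-hidden-layer network trained by plain gradient descent and model selection by validation error averaged over several random initialisations (the architectures, hyperparameters and selection criteria actually used in the study may differ):

```python
import numpy as np

def train_mlp(X, y, hidden, lr=0.05, epochs=400, seed=0):
    """Train a one-hidden-layer feedforward net (tanh / linear output)
    with full-batch gradient descent; return a predict function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        err = (H @ W2 + b2) - y[:, None]          # prediction error
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)            # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

def parameter_study(X_tr, y_tr, X_va, y_va, hidden_sizes, seeds=(0, 1, 2)):
    """Grid over hidden-layer sizes; average validation MSE over several
    random initialisations to favour reproducible configurations."""
    scores = {h: np.mean([np.mean((train_mlp(X_tr, y_tr, h, seed=s)(X_va) - y_va)**2)
                          for s in seeds])
              for h in hidden_sizes}
    return min(scores, key=scores.get), scores
```

Averaging over seeds is one simple way to operationalise "independent of the choice of the random sampled training data": a configuration only wins if it is good across initialisations, not by luck.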

10:25-11:45 Session WE2E: Autonomous system safety, risk, and security
Location: Amphi Jardin
Operational Design Domain for Autonomous Cars versus Operational Envelope for Ships: Handling human capabilities and fallback

ABSTRACT. Autonomous cars have been researched since the 1980s and have created significant interest in both the research and commercial communities. Terminology is in the process of being standardized, and the concept of the operational design domain has been proposed to define the capabilities of the car's driving automation system. Autonomous and unmanned ships have similarly been a research topic, also since the 1980s, but with a much lower public profile. A main difference between the two types of vehicles is that autonomous ships will in most cases have human supervision and backup control responsibilities. This has led us to suggest the term operational envelope for the ship, instead of operational design domain, and to include the human capabilities in the operational envelope. This paper compares these concepts and the benefits of the operational envelope when dealing with ships.

PRESENTER: Joffrey Girard

ABSTRACT. The evaluation of the visibility of road markings has been defined according to human needs but must now be extended to the needs of Advanced Driver Assistance Systems (ADAS). Several publications propose minimum levels of road reflectivity and of contrast between the night/day visibility of the marking to ensure optimum detectability by vision-based ADAS. However, the calculation methodology is rarely indicated, and in visibility studies the road and marking photometry are considered homogeneous, which is not the case in reality. The French project SAM aims to develop the knowledge needed to build a regulatory and technical framework facilitating the circulation of Autonomous Vehicles (AV) on the French road network. One of its tasks focuses on assessing the detectability of road markings by AV cameras. A first experiment was conducted on a 20 km road section in Rouen. The characterization of the marking was carried out with several reference mobile systems and a vehicle equipped with a vision-based ADAS. The different vehicles circulated one after the other in order to record all the road markings (both centre and edge lines) in the same conditions. Several statistical analyses are performed on the collected data at different study scales (punctual or global). They show that despite very low marking retroreflection values and visibility contrast ratios, the road marking lines are almost always very well detected by the ADAS camera's algorithm. This demonstrates that the current indicators characterizing marking visibility according to standards are not enough to fully understand the behaviour of Autonomous Vehicle cameras. It is necessary to better understand the needs of ADAS vision systems in terms of road markings and to propose levels and criteria with a calculation methodology. This would then make it possible to update the preventive maintenance models.

PRESENTER: Luis Pedro Cobos

ABSTRACT. Increases in vehicle connectivity and automated driving functions, with the goal of fully automated driving, are expected to bring many benefits to individuals and wider society. However, these technologies may also create new cybersecurity threats to vehicle user privacy, the finances of vehicle users and mobility service operators, and even the physical safety of vehicle occupants and other road users. Assuring the cybersecurity of future vehicles will therefore be key to achieving the acceptability of these new automotive technologies to society. These concerns have resulted in the United Nations Economic Commission for Europe (UNECE) formally adopting two new vehicle type approval regulations for new vehicles, relating to automotive cybersecurity and software updates, including over-the-air (OTA) updating processes. However, traditional prescriptive assurance methods will not work for vehicle cybersecurity, due to the evolving threats and the deployment of artificial intelligence (AI) and OTA updates. Cybersecurity regulations that are goal-oriented and risk-based, like those increasingly used in safety engineering for complex systems, are therefore recommended to accommodate the many uncertainties surrounding automotive cybersecurity performance. It is proposed that an assurance case approach be adopted, adapting existing approaches from safety engineering and merging in the specific analysis techniques used in cybersecurity engineering to develop a “cybersecurity case”. Methods will also be needed to allow the development of assurance arguments that can accommodate the non-deterministic AI and machine learning technologies, often used to support the higher levels of driving automation, for both cybersecurity and safety assurance. Constructing the assurance arguments is expected to help identify the types of evidence needed to support the assurance claims.
Ongoing assurance activities will also be needed to complement product launch assurance, in order to ensure that cybersecurity and safety assurance are maintained over the operational lifetime of the vehicle, as outlined in the UNECE regulations and emerging standards. For cybersecurity this will require the development of dedicated vehicle security operations centers to help ensure the through-life cybersecurity performance of vehicles, as well as methods that facilitate the construction and maintenance of dynamic cybersecurity (and safety) cases that can be readily modified as new threats are identified and the on-board vehicle software evolves.

Resilience in autonomous shipping
PRESENTER: Kay Fjørtoft

ABSTRACT. In this article, we look at some of the potential of autonomous shipping and discuss how we can ensure resilience. The term resilience is widely used; Woods [ ] discusses four common usages and argues that the last two are the most applicable for producing fundamental findings, foundational theories, and engineering techniques: (1) resilience as rebound from trauma and return to equilibrium; (2) resilience as a synonym for robustness; (3) resilience as the opposite of brittleness, i.e., as graceful extensibility when surprise challenges boundaries; (4) resilience as network architectures that can sustain the ability to adapt to future surprises as conditions evolve (sustained adaptability). Many factors affect the resilience approaches of the autonomous system, including communication and collaboration between technology and humans, as well as a clear definition of responsibility. This paper provides an understanding of technological limitations, as well as of the operational knowledge applied within shipping today that might be transferred to the autonomous sector. Will it be more challenging to operate an autonomous than a manned vessel? What knowledge is needed? What will be characterized as high concerns? What do we know can be addressed when designing new autonomous technology? Will the knowledge at a control centre be enough to recover from an unwanted situation? Will the technology be capable of completing the process without human interaction? A bow-tie methodology will be used to identify and describe proactive and reactive barriers that can be used to understand the resilience mechanisms. This paper will point to known operational challenges, focus on the hand-over process between technology and humans, and elaborate on issues that will be important drivers for increased resilience and a successful implementation of autonomous maritime transportation systems.
This article will also present a method to assess different aspects of the risk scenarios in the light of the specific capabilities and constraints of autonomous ships.
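The bow-tie logic described above can be sketched as a toy computation: a top event is reached only if every proactive barrier on a threat path fails, and a consequence materialises only if every reactive barrier then fails as well. The threat, barriers and probabilities below are purely illustrative, not the paper's figures, and barrier independence is assumed for simplicity:

```python
def path_probability(initiating_freq, barrier_failure_probs):
    """Multiply the initiating frequency by the failure probability of
    each barrier on the path (independent barriers assumed)."""
    p = initiating_freq
    for pf in barrier_failure_probs:
        p *= pf
    return p

# Hypothetical bow-tie for an autonomous-ship scenario.
bowtie = {
    "threat": ("lost_link_to_control_centre", 1e-2),  # per-voyage frequency
    "proactive": [0.1, 0.2],   # e.g. redundant comms, watchdog (failure probs)
    "reactive":  [0.05],       # e.g. autonomous safe-state manoeuvre
}
top_event = path_probability(bowtie["threat"][1], bowtie["proactive"])
consequence = path_probability(top_event, bowtie["reactive"])
```

Even this toy version makes the resilience argument quantitative: strengthening either side of the bow-tie (proactive or reactive barriers) shrinks the consequence probability multiplicatively.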

10:25-11:45 Session WE2F: Risk and Resilience Analysis of Interdependent Infrastructures
Energy and Telecommunications networks interdependency: Resilience challenges

ABSTRACT. This paper focuses on the interdependency between telecommunication and energy distribution networks, both of which belong to the critical infrastructures described in [1]. Energy distribution lies behind every single telecommunication network and its equipment. To anticipate power cuts or frequency issues, different solutions have been deployed depending on the sites' size and importance in terms of hosted network functions: from batteries and inverters to generators. From the Distribution System Operator (DSO) perspective, energy distribution networks depend more and more on telecommunication services provided by Telecommunication Operators (Telcos) to supervise, monitor and automate processes. With the digital transformation, the economy across all industry sectors will continue to develop a critical dependency upon telecommunication networks. DSOs have new requirements linked to their deployed networks, but also requirements for new services relying on telecommunication network capabilities [2]: massive deployment of electric mobility, smart grids, augmented technicians, and more, with a will to manage these services and the associated networks. Telecommunication networks are also evolving: new enhanced functions (like Mobile Edge Computing, MEC) are to be deployed closer to customers, and smaller sites will require better resilience against power cuts than today. In addition, the evolution to 5G will bring new challenges as well, because of the inherent complexity of the 5G architecture and softwarization paradigms [3]. This paper is the result of a collaboration between energy and telecommunication companies and aims to provide some answers to these new challenges. The proposed solutions cover both technical and organizational approaches. The technical solutions rely on new technological paradigms (5G, Artificial Intelligence [4]) to ensure and improve the resilience of each type of network and depend on some proposed information exchange modes.
To put these technical solutions in place, a task force with delegates from the DSO and Telco sides should be set up to select the best way to proceed depending on local constraints.

Dynamic Orchestration of Communication Resources Deployment for Resilient Coordination in Critical Infrastructures Network
PRESENTER: Khaled Sayad

ABSTRACT. Smart Grids (SG) and Information & Communications Technology (ICT) are part of the modern Critical Infrastructure (CI) network on which modern societies depend for security, economic and societal well-being services. The fifth generation of mobile communication (5G) is a paradigm on which modern CIs rely to incorporate new technologies, deliver sophisticated new services and adopt new business models. The high dependency on ICT services will pave the way to a new dynamic paradigm in which communication resources are deployed within the CIs' operational scheme to reach performance and quality-of-service (QoS) objectives. Network Function Virtualization (NFV), Network Slicing (NS) and Software Defined Networking (SDN) are examples of 5G-enabling technologies used to reach these objectives. However, due to the complex nature of CIs and the interdependencies between their components, the shift toward a dynamic operational scheme will increase vulnerability and exposure to risks and impact network resiliency. This requires the design of new reliability methods that consider the heterogeneity, privacy and self-interested nature of the CI network and guarantee resiliency and QoS objectives in such a constrained and dynamic environment. To tackle this, we propose a framework to dynamically coordinate and manage the deployment of communication resources based on NFV, ensuring the availability of services and meeting performance objectives during disruptive events while respecting interdependency and heterogeneity constraints. To illustrate our approach, we study the case of maintenance operations as a disruptive event in ICT and its impact on the SG infrastructure; the developed method will be applied and evaluated through this use case.

Towards a realistic topological and functional modeling framework for vulnerability analysis of interdependent railway and power networks
PRESENTER: Andrea Bellè

ABSTRACT. Critical infrastructures are large-scale systems which provide services and commodities to people, and their functionality is essential for the well-being of a society. Railway systems and power grids are recognized as two of the most important infrastructures: the first represents a sustainable and efficient commuting means, and the second provides electricity to multiple users. The majority of European railway networks are electrified, and power transmission networks usually represent the main power supplier. Railway and power networks thus share a unidirectional interdependency, as the functionality of railway networks depends on power networks. Due to this interdependency, disturbances and failures in power networks have the potential to cause vast disruption in the dependent railway networks. The necessity of improving the blackout-related risk awareness of railway operators has recently been raised. Despite this, the issue of modeling interdependent railway and power networks has not been addressed sufficiently carefully in the available literature. Firstly, a comprehensive and realistic modeling framework for the interconnections between railway and power networks seems to be missing. Secondly, the treatment of cascading failures in power networks and their consequences for railway networks is limited and approximate. In this work, we propose a modeling framework which rests on more realistic assumptions about the interconnection topology and the cascading failure dynamics. Firstly, we model the interconnections between the railway and the external power network by introducing the traction power network, which acts as a bridge between the external power grid and the railway network. Secondly, we model cascading failures in the external and traction power networks with an approach based on DC power flow equations.
Thirdly, we suggest a simple approach to estimate the negative consequences on the railway network due to load shedding in the traction power network. The analysis is performed in the context of vulnerability analysis, estimating the negative consequences in the railway network due to different failure scenarios in the external power network. A sensitivity analysis on the initial assumptions is also performed.
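A minimal sketch of the DC power flow step that underlies such cascading-failure models, on a hypothetical three-bus network (bus numbering, susceptances and line limits below are illustrative, not from the paper):

```python
import numpy as np

def dc_power_flow(n_buses, lines, injections, slack=0):
    """Solve the DC power flow B' * theta = P with the slack bus angle
    fixed to zero, then return line flows f_ij = b_ij * (theta_i - theta_j).
    `lines` is a list of (i, j, susceptance); the slack injection is implied."""
    B = np.zeros((n_buses, n_buses))
    for i, j, b in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n_buses) if k != slack]
    theta = np.zeros(n_buses)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.asarray(injections, float)[keep])
    return {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}

def overloaded(flows, limits):
    """Lines whose absolute flow exceeds capacity; tripping them and
    re-solving is one iteration of a cascading-failure simulation."""
    return [line for line, f in flows.items() if abs(f) > limits[line]]
```

In a cascade simulation, overloaded lines are removed, the flow is re-solved, and load is shed when demand can no longer be served; the shed load at traction substations then translates into railway-side consequences.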

PRESENTER: Natalie Miller

ABSTRACT. Railway transportation dynamically performs under complex coherent systems and fail-safe interlocking conditions. Security and safety are the first priorities of this massive intermodal transportation mode. However, railway systems, like any other critical infrastructure, face many threats, which may be not only physical but also cyber or combined cyber-physical, since the automated and digital technologies in rail operation may be vulnerable. Therefore, SAFETY4RAILS, an H2020 EU project, was initiated to strengthen EU rail operations by increasing the resilience and improving the safety and security of railway networks against these threat types through the development of a variety of state-of-the-art tools. Within SAFETY4RAILS, a predictive risk and resilience assessment tool [1] will be used to assess the damage and predict the severity of the impact of various attacks on railway systems, as well as the effectiveness of mitigation measures. The cascading effect will be analyzed to elaborate the interconnectivity impacts across the whole system, which supports protective measures and prevents a black swan effect of future risks. This tool will highlight the most vulnerable and critical components of the system, allowing rail operators to implement measures which support the components' resilience. As this project includes a variety of other innovative tools, the inputs needed for the predictive risk assessment come from within the project. To achieve effective tool development and integration, the project requires extensive collaboration among different experts and information sources. One of the main sources of input is the expert workshops held at the beginning of the project to obtain end-users' requirements, as well as data regarding their networks. These requirements will help define the direction of the analysis of the risk assessment tool and will highlight which scenarios should be tested.
This paper will include a brief literature review of risk and resilience assessment specific to rail systems before focusing on the risk assessment tool described earlier, how it will be implemented, and what methodologies will be used, along with data input aggregation. The paper will also introduce a few other tools within the project and discuss the expected interactions of the predictive risk assessment tool. The outcome of the analysis will be a best practice for developing the resilience of railway infrastructure.

10:25-11:45 Session WE2G: Mechanical and Structural Reliability
Location: Atrium 3
Reliability of Spur Gears - Determination of Stress-Dependent Weibull Shape Parameters for Tooth Root Fracture
PRESENTER: Axel Baumann

ABSTRACT. For fatigue and wear failures, a stress-based determination of the Weibull shape parameter is important, and thus ultimately a stress-based reliability prognosis. The failure behavior of a spur gear at different load levels with torsional vibration excitation is examined and analyzed with the three-parameter Weibull distribution. The distribution parameters are shown for all tested load levels, and a 90% confidence interval is given. The failure-free time is another parameter of the Weibull distribution; it is the most important parameter in the context of fatigue failures, and the level of the shape parameters depends on it. The shape parameters at different load levels are stress-dependent: the shape parameter decreases with increasing load level in the high cycle fatigue (HCF) regime without vibration excitation, and remains constant or increases with increasing load level in the low cycle fatigue (LCF) regime with vibration excitation. For tooth root fracture, a typical shape parameter range of 1.2 to 2.2 is well known in the literature. This mean value range for the stress-dependent shape parameter largely coincides with the test results.
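As a hedged illustration of how a Weibull shape parameter can be estimated from life-test data after accounting for a failure-free time (the paper's own estimation and confidence-interval procedure may differ), a median-rank regression sketch:

```python
import numpy as np

def weibull_shape_mrr(failure_times, t0=0.0):
    """Estimate the Weibull shape parameter by median-rank regression
    after subtracting an assumed failure-free time t0 (three-parameter
    model).  Slope of ln(-ln(1-F)) vs ln(t - t0) equals the shape."""
    t = np.sort(np.asarray(failure_times, float)) - t0
    n = len(t)
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's approximation
    x = np.log(t)
    y = np.log(-np.log(1.0 - ranks))
    beta, _ = np.polyfit(x, y, 1)                     # slope = shape parameter
    return beta
```

Repeating this fit per load level yields the stress-dependent shape parameters the abstract discusses; in practice maximum likelihood with confidence bounds would usually be preferred over this simple regression.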


ABSTRACT. This paper discusses the operation of the fatigue testing machine for cables and moorings, named MEA1000, which performs fatigue tests by bending high-gauge steel cables under moderate tension. This machine is an exclusive and pioneering project in Latin America, as it is the only one capable of conducting tests on cables up to 76 mm in thickness. The machine is installed at the Petropolis Catholic University, in the Steel Cable Fatigue Testing Laboratory (LEMOC), and is the result of an agreement between Petrobras (National Petroleum Agency), the Dom Cintra Foundation, and Petropolis Catholic University. The study analyzes the results obtained in the commissioning test of the MEA1000 machine to obtain information about its functioning, and presents observations on the behavior of the steel cable together with a critical analysis of the results and data on the cable's performance and reliability. Through detailed observation of the graphs generated from the commissioning test data, the behavior of the cable on the machine could be evaluated regarding its resistance to fatigue due to the bending force and the contact with tie components such as pulleys and the lever hoist. Data on the behavior of the cable during the cycles, how long each wire took to break, and how long the final break of the cable took could be gathered, and this is an important source of information for the stakeholders involved in the mentioned agreement.


ABSTRACT. Providing safe and on-time flight training is one of the most important activities of a university educating future pilots. It follows that the unreliability of the aircraft participating in the training program results in delays and may also trigger a catastrophe. The main aim of the research was to identify the main causes of failures. It was assumed that reliability is understood as the probability of the aircraft fulfilling its intended functions [1], and that the object is repairable. As part of the research, data from the operation process of the aircraft used at the Military University of Aviation were investigated. The reliability of two types of helicopters, the Guimbal Cabri G2 and the Robinson R44, was studied. In this publication, the failure intensity, unreliability and reliability of the helicopters in question have been calculated as functions of time [2]. On the basis of the operating and maintenance data, it was concluded that the majority of the incidents were caused by technical failures in the cooling system and fuel system, inability to start the engine, and damage to the clutch. The airframe itself was also characterized by a high failure rate, i.e. problems with the landing gear and damage to the outside of the helicopter fuselage. In summary, the failure rate is higher for the Guimbal Cabri G2 helicopter than for the Robinson R44. Additionally, the human influence on reliability is reduced by the fact that training flights are carried out under the supervision of experienced instructors, who are able to prevent numerous mistakes.
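The reliability quantities mentioned above can be sketched from operating data as follows; the figures in the test are invented, not the study's data:

```python
import numpy as np

def empirical_reliability(times_to_failure, t_grid):
    """R(t): fraction of units still unfailed at each grid time
    (complement of the empirical unreliability F(t))."""
    t = np.asarray(times_to_failure, float)
    return np.array([(t > tg).mean() for tg in t_grid])

def failure_intensity(failure_counts, operating_hours):
    """Rate of occurrence of failures for a repairable fleet:
    failures per operating hour, evaluated per time interval."""
    return np.asarray(failure_counts, float) / np.asarray(operating_hours, float)
```

Comparing these curves between the two helicopter types, interval by interval, is the kind of analysis the abstract describes.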

Probabilistic mixed mode fatigue crack growth analysis considering spatially varying uncertainties

ABSTRACT. The fatigue crack growth process is characterized by uncertainties inherent in the variations of geometrical properties, material properties and loading conditions. These variations are often represented in mechanical computations through simple probabilistic models, namely random variables, which are unable to model spatially varying uncertainties such as those related to material properties. Indeed, in some structural materials, like timber, composites and graded materials, this behavior is very pronounced, and it becomes essential to consider it in order to design safe structures. This requires handling more intricate probabilistic models, namely random fields. In practice, a random field F(x,θ) with mean μ and standard deviation σ is represented through an Mth-order truncated expansion based on a set of standard normal variables and the eigenvalues and eigenfunctions of the covariance function.

The higher the truncation order M, the better the expansion reproduces the true variability of the random field. Unfortunately, this significantly increases the probabilistic dimension of the problem, and consequently classical uncertainty propagation methods become inefficient, since all of them suffer from the curse of dimensionality. This problem is amplified when the mechanical model itself is time-consuming, as in the case of fatigue crack growth. To overcome this problem, an efficient probabilistic method is developed, having as its backbone the well-known dimension decomposition technique. The s-th variate approximation of the mechanical responses, constructed in the standard random space, is obtained through a projection on Hermite polynomials.

The unknown coefficients, defined as multidimensional integrals, are computed by convenient monomial cubature rules in order to further reduce the number of mechanical model evaluations. A mixed-mode fatigue crack growth problem, where some material properties are spatially varying uncertain parameters, is addressed using the proposed approach. The spatial variability of the mechanical responses, defined by fracture parameters such as the stress intensity factors and the bifurcation angle, is captured with a high accuracy level and a reduced computational cost compared to most existing approaches.
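The projection on Hermite polynomials can be illustrated in one dimension: the coefficients c_k = E[g(X) He_k(X)] / k! of a response g of a standard normal input X are computed by Gauss-Hermite quadrature. The paper uses multidimensional monomial cubature rules instead; this univariate sketch only shows the principle:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_k

def hermite_coefficients(g, order, n_nodes=20):
    """Project g(X), X ~ N(0,1), on the Hermite basis:
    c_k = E[g(X) He_k(X)] / k!, by Gauss-Hermite quadrature."""
    x, w = He.hermegauss(n_nodes)        # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # normalise to the N(0,1) density
    gx = g(x)
    return np.array([
        np.sum(w * gx * He.hermeval(x, [0.0] * k + [1.0])) / math.factorial(k)
        for k in range(order + 1)
    ])
```

For example g(x) = x^2 = He_0(x) + He_2(x) has coefficients (1, 0, 1, 0, ...), which the quadrature recovers exactly since the integrand is polynomial.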

10:25-11:45 Session WE2H: Railway Industry
Location: Cointreau
Prognostic expert system for railway fleet maintenance
PRESENTER: Fabien Turgis

ABSTRACT. Context: To maintain a large rolling stock fleet under the operational constraints of mass transit, a mixed maintenance solution based on real-time data analysis and condition-based maintenance was integrated into the SNCF maintenance process in 2017 [1] [2]. Based on expert systems, a specific data workflow is used to build health indicators from raw data collected on trains, and a specific signaling system has been designed to optimize dependencies between systematic scheduled, corrective and condition-based maintenance. To enhance this maintenance solution, a new expert system has been developed to support prognostics by analyzing the dispersion of health indicators across the whole fleet.

Challenges: The first prognostic expert system built at SNCF was based on constant signaling thresholds used to assess the health state of a system. These thresholds are defined using technical knowledge and physical models, but this induces several limitations. Indeed, each system differs from one train to another and evolves independently in time due to several parameters such as manufacturing quality, delivery date, aging or maintenance operation quality. Therefore, those constant thresholds will not always fit maintenance needs during the whole train life. Moreover, this kind of expert system does not always match the maintenance load and infrastructure availability in the workshop. All these limitations have motivated an upgrade of the existing constant-threshold prognostic expert system to a dynamic-threshold expert system based on the distribution of data across the whole fleet.

Contribution: The article describes the dynamic-threshold expert system: how dynamic thresholds are defined from the analysis of a health indicator's distribution over the whole fleet, and how they are combined with failure thresholds to manage maintenance load balance and aging effects.
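A minimal sketch of a fleet-wide dynamic threshold: signalling levels are taken as quantiles of the health indicator's distribution across the fleet, so they follow fleet-wide aging instead of staying fixed. The quantile levels and labels are illustrative, not SNCF's actual rules:

```python
import numpy as np

def dynamic_thresholds(fleet_hi, warn_q=0.90, alarm_q=0.98):
    """Derive signalling thresholds from the fleet-wide distribution of a
    health indicator instead of a fixed engineering limit."""
    hi = np.asarray(fleet_hi, float)
    return np.quantile(hi, warn_q), np.quantile(hi, alarm_q)

def flag_units(fleet_hi, unit_ids, warn, alarm):
    """Classify each unit against the dynamic thresholds."""
    out = {}
    for uid, v in zip(unit_ids, fleet_hi):
        out[uid] = "alarm" if v >= alarm else ("warn" if v >= warn else "ok")
    return out
```

In a real system these quantile-based levels would be combined with absolute failure thresholds, as the abstract notes, so that a uniformly degraded fleet cannot silently redefine "healthy".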

A Complete Streaming Pipeline for Real-time Monitoring and Predictive Maintenance

ABSTRACT. Railway maintenance is changing with the arrival of Maintenance 4.0 (also called Predictive Maintenance, or PdM), which features smart, automated systems that are capable of diagnosing faults, predicting failures, and recommending optimal actions. Such systems are brought to fruition with the help of domain expertise and equipment data. However, data-driven railway maintenance faces several challenges: a large volume of data from various systems, noisy sensor signals, and the impact of operational contexts that leads to heterogeneous degradation. To overcome these difficulties, practitioners are shifting their attention to machine learning (ML) [1]. ML encompasses algorithms that discover insightful findings from data and/or make predictions on new data. Furthermore, the proliferation of IoT devices multiplies the amount of data that must be processed efficiently to enable uninterrupted monitoring of machinery health. Stream learning (SL), a learning paradigm extended from ML to handle fast, unbounded, dynamic data streams, appears suitable for big data mining in smart maintenance.

Aiming to enhance PdM with SL, we propose a complete streaming pipeline for real-time monitoring and anomaly prediction for railway applications. Part of this pipeline has been implemented and has resulted in an interactive graphical application with which domain experts can interact to aid the analysis. Preliminary experimental results on two sensor datasets from the SNCF company are discussed in this paper.
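One building block of such a pipeline, sketched under simplifying assumptions (a single univariate sensor and a plain z-score rule, not the project's actual detectors), is a single-pass monitor that updates its statistics incrementally, as stream learning requires:

```python
class StreamingAnomalyDetector:
    """Single-pass monitor: maintain a running mean/variance with
    Welford's algorithm and flag readings more than `k` standard
    deviations from the running mean, after a warm-up period."""

    def __init__(self, k=3.0, warmup=30):
        self.k, self.warmup = k, warmup
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean

    def update(self, x):
        """Ingest one reading; return True if it looks anomalous."""
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        if self.n <= self.warmup:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) > self.k * std
```

The memory footprint is constant regardless of stream length, which is what makes this style of detector viable on unbounded IoT sensor streams.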

This work is a continuation of an early study on the potential of SL for PdM [2] and is meant to be complementary to previous work on smart railway maintenance [3]. This article is related to another submission "IKIM: A new architecture for predictive maintenance in railways application" which will describe in detail how the streaming pipeline is integrated into the IKIM architecture.

[1] T. P. Carvalho, F. A. A. M. N. Soares, R. Vita, R. P. Francisco, J. P. Basto, S. G. S. Alcalá. "A systematic literature review of machine learning methods applied to predictive maintenance". Computers & Industrial Engineering, Vol. 137, p. 106024, 2019.

[2] M. H. Le-Nguyen, F. Turgis, P. E. Fayemi, A. Bifet. "Challenges of Stream Learning for Predictive Maintenance in the Railway Sector". IoT Streams for Data-Driven Predictive Maintenance and IoT, Edge, and Mobile for Embedded Machine Learning, pp. 14-29, 2020.

[3] F. Turgis, P. Audier, Q. Coutadeur, C. Verdun. "Industrialization of Condition Based Maintenance for Complex Systems in a Complex Maintenance Environment, Example of NAT". 12th World Congress on Railway Research (WCRR), Tokyo, Japan, 2019.

Implications of Cyber Security to Safety Approval in Railway
PRESENTER: Eivind H. Okstad

ABSTRACT. Due to the railway domain's preoccupation with safety, there has been insufficient focus on Cyber Security, which has resulted in Cyber Security flaws in current railway systems (Gabriel, et al., 2018). Threat landscapes against railway SUCs (Systems Under Consideration) are presented by other authors such as Rekik, et al. (2018). However, in recent years the railway industry has realized the importance of Cyber Security aspects, in addition to Safety, as part of the approval of railway systems. This can be seen from the fact that railway standards, like CENELEC EN 50129:2018 and EN 50159:2010, to a larger degree mention how to deal with Cyber Security, or rather IT security.

The authors claim that it is important to reflect on Cyber Security in the context of safety approval of railway systems, since Cyber Security threats may have implications for functional safety. Motivated by this, the current paper addresses the following topics: 1) a brief literature study on the vulnerability of railway systems to Cyber Security threats, 2) a discussion on how Cyber Security is covered by the current railway legislation, and finally 3) we elaborate on challenges related to the handling of Cyber Security as part of the railway approval processes. As part of the latter, an overview of the ongoing work in Europe regarding Cyber Security within the railway domain is provided. A recent ENISA study (Liveri, et al., 2020) focused on the level of maturity of the European railway sector regarding the implementation of security measures enforced by the NIS Directive. On the standardization front, CENELEC plans to issue TS 50701 in 2021, which aims to introduce requirements as well as provide recommendations for addressing cybersecurity within the railway sector.

The main contribution of this paper is to highlight the Cyber Security challenges that need to be addressed in railway legislation. A second goal is to elaborate on how the most relevant challenges could be handled efficiently as part of the railway approval. Here, it must be taken into account that Cyber Security threats change at a faster pace than the pure Safety threats.

PRESENTER: Abhimanyu Tonk

ABSTRACT. The potential benefits of autonomous and semi-autonomous systems have been driving intensive development of such systems, and of supporting tools and methodologies. In railway, train remote driving is considered one of the technological building blocks necessary for the deployment of autonomous trains. Train remote driving allows a remote driver to operate and control a rolling stock, to interact with the other agents and entities (technical or human), and to drive the train remotely and safely [1]. The railway domain has an immutable principle: it is forbidden to degrade its safety level. Thus, the introduction of remote driving technologies into the railway system needs to be supported by a body of evidence demonstrating that the overall level of safety (for users, operating staff and third parties) is globally at least equivalent to the safety level of existing systems providing similar services (i.e., conventional trains). In the context of autonomous systems, the first step toward establishing an (operational) safety demonstration is the definition and specification of the Operational Design Domain (ODD). Broadly speaking, an ODD is defined as the specific conditions and operating environment under which a given automated system is designed to operate [2,3]. In fact, the ODD is mainly used to establish the operational validity, the scope of the safety case and its verification. In this paper, we aim to extend the concept of the ODD to the context of railway remote driving in order to guide the process in the safety plan that leads to establishing its safety case. Mainly, we propose an iterative process to define and specify an ODD for railway remote driving based on the operational risk assessment process.
The main idea consists in starting from an initial high-level ODD, covering the overall operational environment of conventional in-cab driving, and iteratively restricting the ODD by exploiting the risk assessment steps (particularly, risk evaluation and risk reduction) through feedback loops to the ODD. The developed process is illustrated in this paper using the remote driving of a freight train.

10:25-11:45 Session WE2I: Artificial intelligence for reliability assessment and maintenance decision-making
Location: Giffard
A Network Connectivity Reliability Estimation Model Based on Light Gradient Boosting Machine

ABSTRACT. IoE (Internet of Everything) has become an inexorable trend of modern society's development, which makes network systems more and more complex. This also places higher requirements on the security and reliability of complex network systems. Network connectivity reliability is a key index for evaluating network reliability. However, the computational complexity of the traditional exact algorithms increases exponentially with the size of the network structure. Therefore, a network connectivity reliability estimation model based on LightGBM (Light Gradient Boosting Machine) is developed in this paper. The model takes the network structure sequence, link reliability, source node and target node as input and the network connectivity reliability as output, enabling fast estimation of network connectivity reliability. A verification experiment is carried out on a data set of more than 80,000 samples, obtained by the node traversal method and the inclusion-exclusion principle. The experimental results verify the effectiveness of the proposed model.
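The exact algorithms mentioned in the abstract, which supply the training labels, are what the surrogate is meant to replace. As an illustration only (not the authors' code), the following minimal sketch computes exact two-terminal connectivity reliability by full state enumeration over link states; all names and the toy network are hypothetical, and the exponential cost of this computation is precisely the motivation for a fast learned estimator:

```python
from itertools import product
from collections import deque

def two_terminal_reliability(n, edges, s, t):
    """Exact s-t connectivity reliability by enumerating all 2^m link states.
    edges: list of (u, v, p) with p the reliability of the link.
    Exponential in the number of links, hence only feasible for small nets."""
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        prob = 1.0
        adj = {i: [] for i in range(n)}
        for up, (u, v, p) in zip(states, edges):
            prob *= p if up else (1.0 - p)
            if up:
                adj[u].append(v)
                adj[v].append(u)
        # BFS from s over the surviving links
        seen, queue = {s}, deque([s])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        if t in seen:
            total += prob
    return total
```

For example, a triangle with a direct 0-2 link of reliability 0.8 and a two-hop path of reliability 0.81 yields 1 - 0.2 * 0.19 = 0.962, which the enumeration reproduces.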


ABSTRACT. More and more complex models are being developed for Engineering Asset Management, and the current challenge is to develop optimization methods that help engineers define the maintenance planning that minimizes the Life-Cycle Cost (LCC). The difficulty has several origins: 1. Time-consuming computations: the evaluation of realistic asset management models is usually based on Monte Carlo simulation, which may be long to run. 2. Stochastic processes: the underlying objects in an asset management model are failure dates described as stochastic variables; the goal function is therefore also probabilistic. 3. Dimensionality: maintenance optimization may be done on large systems, and the controls (dates) are then defined on a large multidimensional space. The optimization problem is therefore likely to meet the curse of dimensionality. Several methods have been proposed in the past to tackle such maintenance optimization problems. Some are based on decomposition algorithms [1] that aim at transforming large problems into several small problems that can be solved easily and exactly. Although very efficient, this type of algorithm relies on strong assumptions regarding the dynamics of the studied systems, which can be too restrictive. Other methods rely on heuristics and surrogate models. In this paper we describe such a method, which uses an Artificial Neural Network (ANN) as a surrogate model of the stochastic code [2] used at EDF to support decision making regarding maintenance planning. The goal is to tackle the first two difficulties identified, that is to say, reducing the number of code calculations while capturing the stochasticity of the output. The network is then linked to a Differential Evolution (DE) algorithm [3], a powerful meta-heuristic for continuous optimization in large dimension, the training data of the NN being expanded throughout the exploration performed by the DE algorithm.
The paper will describe a test-case based on the refurbishment of a valley of hydropower stations.
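The Differential Evolution component described above can be sketched in a few lines. The following is a minimal stdlib-only illustration of the DE core (mutation, crossover, greedy selection) on a toy deterministic cost; it is not the authors' implementation, which couples DE with an ANN surrogate of a long-running stochastic code, and all parameter names and values here are illustrative assumptions:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimise cost(x) over box bounds with classic DE/rand/1/bin.
    In the paper's setting, cost would be the surrogate of the
    maintenance-planning LCC, and x a vector of maintenance dates."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the feasible box
                else:
                    v = pop[i][j]
                trial.append(v)
            f = cost(trial)
            if f <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

In the surrogate-assisted setting, each `cost` evaluation would also be added to the ANN's training set, which is how the exploration performed by DE expands the training data.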

Reinforcement learning for maintenance decision-making of multi-state component systems with imperfect maintenance
PRESENTER: Van-Thai Nguyen

ABSTRACT. In this paper we propose an artificial intelligence (AI) based framework for maintenance decision-making and optimization of multi-state component systems with imperfect maintenance. Our proposed framework consists of two main phases. The first aims at constructing artificial neural network (ANN) based predictors to forecast the system's reliability and maintenance cost. The second refers to the use of deep reinforcement learning (DRL) algorithms, which can deal with large-scale applications, to optimize the maintenance policy. Numerical results show that ANNs are suitable for reliability and maintenance cost forecasting, and that DRL is a potentially powerful tool for maintenance decision-making and optimization.
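The reinforcement-learning view of maintenance decision-making can be illustrated on a toy problem. The sketch below uses plain tabular Q-learning (not the deep RL of the paper) on an invented 4-state degradation model with imperfect repair; every state, action, cost and probability here is a hypothetical assumption for illustration only:

```python
import random

def train_policy(steps=30000, alpha=0.1, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a toy multi-state component.
    States: 0 (as new) .. 3 (failed). Actions: 0 = do nothing,
    1 = imperfect repair (one state better with prob 0.7, cost 2),
    2 = replacement (back to state 0, cost 6). Residing in the
    failed state incurs a downtime cost of 10 per step."""
    rng = random.Random(seed)
    Q = [[0.0] * 3 for _ in range(4)]
    s = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(3)
        else:
            a = max(range(3), key=Q[s].__getitem__)
        cost = 10.0 if s == 3 else 0.0
        if a == 0:                                   # do nothing: may degrade
            s2 = min(3, s + 1) if rng.random() < 0.3 else s
        elif a == 1:                                 # imperfect repair
            cost += 2.0
            s2 = max(0, s - 1) if rng.random() < 0.7 else s
        else:                                        # replacement
            cost += 6.0
            s2 = 0
        # Q-learning update (reward = -cost)
        target = -cost + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
    return Q
```

After training, the learned Q-table should rate "do nothing" in the failed state as clearly worse than repairing or replacing, which is the kind of policy structure the DRL agent discovers at scale.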

Predictive maintenance of natural gas regulators by forecasting output pressure with artificial intelligence algorithms
PRESENTER: Amel Belounnas

ABSTRACT. With the emergence of Industry 4.0, smart systems, and machine learning (ML) within artificial intelligence (AI), predictive maintenance approaches have been extensively applied in industry for tracking the health status of industrial assets. Thanks to the digital transformation towards Industry 4.0, computerized control, and communication networks, it is possible to collect massive amounts of operational and process-condition data from several pieces of equipment and harvest them for automated fault detection and diagnosis, with the aim of minimizing downtime, increasing the utilization rate of components, and extending their remaining useful lives. Machine learning techniques have emerged as a promising tool in predictive maintenance applications for smart manufacturing. In the proposed work, we test recent advances in ML techniques widely applied to predictive maintenance to forecast natural gas regulator pressure deviations and to predict failures. The data represent about a hundred gas regulators with monitored output pressure, temperature and flow, plus the observed failure mode "pressure regulation out of specification" (too high or too low) and the dates and durations of preventive maintenance over the last three years. ML and neural network models are tested to forecast the output pressure of the regulators and to predict crossings of the failure thresholds. Defining the parameters that optimize failure prediction while limiting spurious detections is a challenge investigated in the proposed paper. The deployment of such methods is expected to reduce preventive and field maintenance costs by providing early warning notification and diagnosis of gas regulator issues.

10:25-11:45 Session WE2J: Advancements in Resilience Engineering of Critical Infrastructures
Location: Botanique
PRESENTER: Edoardo Patelli

ABSTRACT. Critical infrastructure systems pose serious safety and reliability challenges for researchers and practitioners when subjected to disruptive events. The complexity and uncertainty of threats prompt the consideration of resilience in system design and performance evaluation. Consequently, researchers have proposed several definitions and evaluation methodologies to assess the resilience of potentially complex infrastructure systems (Wood, 2006; Henry et al., 2012). However, methods to quantify the net resilience of a specific infrastructure from multiple recovery paths, and their applicability to large-scale systems, are limited. This paper presents a novel resilience evaluation framework to model critical infrastructure systems and to quantify net resilience by focusing on key characteristics of resilience. The proposed approach extends the framework of Santhosh and Patelli (2020), which explicitly models the temporal aspects of the system response to a disruptive event towards effective recovery through various recovery measures. The extended framework also proposes a weighted resilience model to quantify net resilience from multiple restoration paths, based on the weighted geometric mean of the areas under the multiple restoration curves. The proposed framework is illustrated with a case study on the regulating function of a nuclear reactor employing timed Petri nets. A Petri net model of the reactor system comprising the regulating function, the setback function, and human and organizational aspects is developed and simulated over a mean time of 13 h, from an initial steady state until the complete recovery state after a disruptive event on the regulating rods, with one million simulations. The net resilience of the regulation system is computed from the state probabilities of the various recovery sequences with their associated weights.
This novel approach demonstrates its applicability to various critical infrastructure systems by considering the temporal aspects of the system, together with human and organizational factors, for quantitative resilience evaluation.
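The weighted-geometric-mean aggregation described above can be written down compactly. The sketch below is an illustration of that formula only, with invented restoration-curve data and weights (the paper's Petri-net simulation, not shown here, is what would actually produce the curves and weights):

```python
import math

def curve_area(times, perf):
    """Trapezoidal area under one restoration curve (performance vs. time),
    normalised by the observation window so a fully undisturbed system
    scores 1.0."""
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (perf[i] + perf[i - 1]) * (times[i] - times[i - 1])
    return area / (times[-1] - times[0])

def net_resilience(curves, weights):
    """Weighted geometric mean of the normalised areas under multiple
    restoration curves; weights (e.g. recovery-path probabilities)
    must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return math.exp(sum(w * math.log(curve_area(t, p))
                        for w, (t, p) in zip(weights, curves)))
```

With two identical restoration paths weighted equally, the net resilience reduces to the area under a single curve, as expected of a geometric mean.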

Cascading failure analysis for power system vulnerability assessment
PRESENTER: Blazhe Gjorgiev

ABSTRACT. Power systems, as a critical infrastructure, are an integral part of human society and are therefore of paramount importance to modern life. Vulnerabilities in the system, revealed either by accidental or deliberate events, can cause large losses of power supply with severe social and economic consequences. A tool that identifies the vulnerabilities in a power system can give operators the means to support more reliable power system operation. This paper presents a methodology for power system vulnerability assessment that couples an AC-based cascading failure simulation model with a meta-heuristic optimization procedure. The objectives of the assessment are to (1) rank the most important branches in the transmission grid, and (2) identify sets of branches that, if simultaneously tripped, will cause the cascade with the highest intensity. The first objective is achieved by ranking the criticality of the branches using two criteria: (i) the impact that each branch failure has on the demand not supplied (DNS), and (ii) the frequency of line overload. The second objective is achieved by hard-linking the AC-based cascading failure simulation model with the meta-heuristic optimization procedure. The algorithm developed for the purpose of this study is applied to the IEEE 118-bus test system. The results demonstrate the capability of the proposed methodology to identify vulnerabilities in a power system.

Conceptual Approach Towards a Combined Risk and Resilience Framework for Interdependent Infrastructures
PRESENTER: Stefan Schauer

ABSTRACT. Over the last decade, critical infrastructures (CIs) have become more and more dependent on each other and have evolved into complex interdependency networks. This becomes particularly evident when looking at metropolitan areas, where multiple CIs from different sectors are located within a geographically narrow space and incidents within one infrastructure can have widespread effects on the entire network (e.g., the shutdown of multiple CIs in Caracas in 2019 [1]). In this paper, we describe a combined risk and resilience framework, which has been developed in the ongoing Austrian research project ODYSSEUS. This framework is based on the standard risk management process (as proposed, for example, in ISO 31000 [2]) and integrates core aspects of resilience management. Furthermore, it focuses on interdependent CIs in a city and aims at capturing potential cascading effects among CIs caused by an (intentional or random) incident. To achieve that, we apply a stochastic simulation model [3], which not only describes the cascading effects of an incident but also assesses the overall risk of that incident for the entire CI network. This simulation model is extended by a resilience model to assess how the entire CI network evolves over time and how this influences its resilience. Additionally, the proposed framework implements a game-theoretic optimization approach [4] to identify effective mitigation strategies, which requires evaluating risk and resilience under the implementation of the different strategies. In this way, the developed framework supports municipalities' decision makers and individual CI operators in assessing their preparedness against specific incidents as well as evaluating novel strategies to improve city resilience.

10:25-11:45 Session WE2K: Autonomous Driving Safety
Location: Atrium 1
Clarification of Discrepancies in the Classification of 1oo2 and 2oo2 Architectures Used for Safety Integrity in Land Transport

ABSTRACT. Automated car driving or advanced railway signaling is based on safe position determination of the vehicle. The required safety integrity of the positioning function cannot generally be achieved using a single element, and therefore a combination of information from several diverse sources (sensors) shall be used. The dual-channel 1oo2 (one out of two) architecture according to the generic functional safety standard IEC 61508 [1] and the 2oo2 (two out of two) architecture according to the CENELEC railway safety standard EN 50129 [2] represent the basic solutions enabling the required integrity to be met. However, the problem is that these architectures may have different characteristics depending on the application area, such as railway signaling, flight control or automated car driving. The standard IEC 61508 says that the 1oo2 architecture is intended for safety integrity and 2oo2 for availability. On the other hand, the railway standard EN 50129 prescribes the 2oo2 architecture for integrity – it is just the opposite. So where is the truth? It can be very confusing, especially when IEC 61508 is considered the "mother" of standard EN 50129. The purpose of the paper is to clarify the above discrepancies. The paper starts with a basic classification of safety systems into "safety-critical" and "safety-related" ones [3] and an examination of the impact of system classification on system properties depending on the area of application. Then, the basic safety parameters of dual-channel architectures for safety integrity are presented on two examples (see Fig. 1) and analyzed using Markov modelling. Finally, the equations concerning the safety parameters contained in the automotive functional safety standard ISO 26262-10 (§8.4) [4] for dual-channel architectures are critically evaluated based on rail safety experience. Recommendations related to the safety architecture design for self-driving cars are given, based on the numerical results obtained in the examples.
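The Markov modelling mentioned in the abstract can be illustrated with a minimal discrete-time chain for a dual-channel arrangement. This is not the paper's model: the per-step failure probability and the absence of detection, repair and common-cause transitions are simplifying assumptions made purely for illustration. Read for a 1oo2 safety function, state 2 (both channels failed) is the hazardous state; read for a 2oo2 function, state 1 already means loss of the function:

```python
def step_distribution(dist, P):
    """One discrete-time Markov step: new_dist = dist @ P (pure Python)."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# States: 0 = both channels healthy, 1 = one channel failed,
# 2 = both channels failed (absorbing; no repair modelled).
lam = 1e-4  # illustrative per-step failure probability of one channel
P = [
    [(1 - lam) ** 2, 2 * lam * (1 - lam), lam ** 2],
    [0.0,            1 - lam,             lam],
    [0.0,            0.0,                 1.0],
]

dist = [1.0, 0.0, 0.0]
for _ in range(1000):
    dist = step_distribution(dist, P)
# dist[2] is the probability that both channels have failed
# within 1000 steps; dist[1] the probability exactly one has.
```

Because the two channels fail independently in this toy model, the chain's result coincides with the binomial answer q^2 with q = 1 - (1 - lam)^1000, a useful sanity check on any such model.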

PRESENTER: Thor Myklebust

ABSTRACT. Road traffic will change dramatically, triggered by the development of new technologies and a focus on accident-free driving. It is a race among the car manufacturers to be among the first to develop fully autonomous cars, and authorities are supporting them by adapting the regulations. In addition, several standardization bodies have developed relevant safety standards. A safety case is required by several international standards for road vehicles. A safety case is developed to convince a third party that the product or system is safe. Our suggestion is that a "safety case for the public" should also be issued to ensure (1) that the public are aware that safety evidence exists, (2) that they are aware of relevant safety aspects when they are passengers, (3) that the public in general are aware of the autonomous vehicles, and (4) that limitations are transparent and described. Trust in automation is a key determinant for the adoption of automated systems and their appropriate use. While regulations and safety standards will provide requirements and guidelines for manufacturers, third parties and technology developers to create their safety cases to gain trust, it is also important to inform the public. A safety case as it is today is too technical for the public, is often lengthy, and includes confidential information; as a result, the safety case cannot be presented to the public. In contrast to the domain of aviation, the operators and users of automated vehicles will not be experts but lay persons. The safety case should give the operators and users (the public) a realistic mental model of the automation's functions, capabilities and limitations. The claimed benefits of automation, i.e. improved safety, efficiency, mobility, convenience and reduced energy use and greenhouse emissions, may only occur if automated vehicles are successfully implemented into road traffic and adopted by the public.
Trust in this technology is a vital precondition for this adoption, and this is where a new type of safety case can play an important role. In an earlier paper we addressed the safety part of the safety case for the public [1]. Those aspects were based on safety standards and a survey including only safety experts. As this was a limited basis, we have conducted a new survey of 311 passengers and interviewed 18 autonomous bus passengers. Based on this information we have proposed a template for the "safety case for the public". Using such a safety case will help manufacturers and operators of autonomous vehicles gain public trust.

Traffic psychology in digital drive: Deceptive safety by corrosion of agency

ABSTRACT. Traffic psychology [see e.g., 1] as an applied discipline is, and must be, context-specific. The context, however, is changing. This change is digital. The world is in a state of being digitally transformed. There is a forceful momentum at play here, fueled by both opportunity and enthusiasm. It is – a 'digital drive.' In the traffic context, this drive concerns computer automation spanning from digital support systems to the future ambition of the fully automated machine. The challenge: as vehicles change, so do driving and the traffic context. Adding to this is the eerie question of how digital transformation affects and changes us as human beings. For traffic psychology, this relates to the fact that the concepts used to understand behavior were developed within the old, pre-digital contexts. This calls for 'self-reflection,' addressing the ongoing divergence (between old and new contexts), to ensure that it does not develop into a schism, a deep split that irreversibly separates the old from the new. The simple, yet fundamental premise here is that 'the digital' calls for theoretical re-consideration and development. Although the degree and extent of computerized automation varies, I will argue that the tendency points towards the corrosion of agency (i.e., involvement, action). When risk perception is considered, this problem is acutely accentuated, the premise being that a driver's risk awareness is developed by exposure, practice, involvement and action. Paper aim: to theoretically examine what I term the Corrosion of Agency Thesis. This involves exploring digitally 'produced' gaps and challenges for risk perception and risk-taking behavior. To connect traffic psychology to 'the digital,' the paper is inspired by Carr's [2, 3] reflections on digital transformation and its unnerving repercussions for us as human beings. Elements from Trimpop's [4] Risk Motivation Theory are used as the theoretical framework of risk perception and risk-taking behavior.
If driver agency in ‘the digital’ deteriorates, this ricochets straight back into a fundamental principle in traffic psychology: The concept that experience (knowledge and knowhow) is built by agency. How is experience built without agency? And, fundamentally: What about safety?


1. B. E. Porter (editor), Handbook of Traffic Psychology, (Elsevier, London, 2011). 2. N. Carr, The glass cage: How our computers are changing us, (W. W. Norton & Company, New York, 2014). 3. N. Carr, Utopia is creepy: And other provocations, (W. W. Norton & Company, New York, 2016). 4. R. M. Trimpop, The psychology of risk taking behavior, (North-Holland – Elsevier Science, B. V., Amsterdam, New York, Tokyo, 1994).

PRESENTER: Tor Stålhane

ABSTRACT. This paper stems from the TrustMe project on trust in self-driving busses, sponsored by the Norwegian Research Council. Trust is both a technical and a psychological problem, and the two areas use different terms and in some cases use the same word with different meanings. Thus, the paper starts with a discussion of terms. Since the acceptance of self-driving vehicles is partly a psychological problem and partly an engineering problem, we need to look at it from both angles. Next, we discuss some of the relevant models: Ajzen and Fishbein's model of planned behavior – a psychological model – and the TAM model – a technical model. The TAM model will include the extensions added by Venkatesh et al. The first part ends with a short discussion of risk acceptance. The second part of the paper introduces the results from two focus groups plus an SMS-based survey that have been run by the bus operator AtB AS together with TrustMe project personnel, and discusses how the problems raised in these focus groups can be handled in a self-driving vehicle. Since risk and risk acceptance are an important part of the acceptance of self-driving vehicles, we end the paper with a short discussion on how to talk about risk with the general public, and some preliminary conclusions. Important conclusions are that the extended TAM model of Venkatesh and Davis is well suited for the TrustMe project. Using this model will help us categorize and analyze our observations. In addition, it will help us find new questions and problems to consider. We need to consider both technical and societal risks when constructing safe and trustworthy/reliable self-driving busses for public transport.

11:50-12:30 Session Plenary IV: Plenary Session
Location: Auditorium
Cognitive, practical, organisational and regulatory safety challenges of a new era

ABSTRACT. Safety-critical systems such as offshore platforms, hospitals, aircraft, nuclear power plants, refineries, bridges, dams, mines … rely on a myriad of artefacts, actors and institutions to operate safely. It is an admirable political, technological, social and economic endeavour re-enacted every day all over the world. But sometimes, when a bridge collapses, a building burns, an offshore platform explodes, a nuclear reactor melts down, a train derails, a ship sinks or a plane crashes, we are reminded how precarious such successes are, and we are also reminded of the diversity of practices across countries, sectors and companies. The Boeing 737 Max crashes in 2018 and 2019, the Grenfell tower fire in London in 2017 and the collapse of Vale's dam in Brazil in 2019 are recent reminders of such events. The analysis of these events reveals a number of features which characterise the current operating landscape of safety-critical systems, including globalisation, digitalisation, externalisation and financialisation. What many safety-critical systems share these days are their properties of 'networks of networks', which requires safety research to explore their cognitive, practical, organisational and regulatory features together (to which one needs to add their ecological side). This is a complex and ambitious endeavour. The talk will provide insights from a collective book 'Safety Science Research: Evolution, Challenges and New Directions' (CRC Press, 2019) which addresses this problem.

12:30-14:00 Lunch Break
14:00-15:00 Session WE3A: Risk Assessment
Location: Auditorium
COVID-19 pandemic: Analysis of different pandemic control strategies using saturation models
PRESENTER: Stefan Bracke

ABSTRACT. Since December 2019, the world has been confronted with the outbreak of the respiratory disease COVID-19. At the beginning of 2020, the COVID-19 epidemic evolved into a pandemic, which continues to this day. The incredible speed of the spread and the consequences of the infection had a worldwide impact on societies, health systems and social life. Within many countries, several control strategies, or combinations of them, were established with the goal of controlling the pandemic: restrictions (e.g. lockdown measures), medical care (e.g. development of vaccines or medicaments) and medical prevention (e.g. hygiene concepts). Depending on the chosen control strategy, the COVID-19 spreading behavior slowed down or approximately stopped for a defined time range. This phenomenon is called the saturation effect and can be described by saturation models; fundamental approaches are those of Verhulst (1838) and Gompertz (1825). The model parameters allow a sound interpretation of the spreading speed (growth) and the saturation effect. A limitation of these models is the time period in which growth can be well represented. The COVID-19 pandemic runs over a long period of time (December 2019 until today, February 2021) and the spreading behavior changes frequently, e.g. due to the many different activities within the different control strategies or to seasonal effects. This paper shows the results of a research study of COVID-19 spreading behavior depending on different pandemic control strategies in different countries and time phases. The study analyzes saturation effects related to short time periods, possibly caused by lockdown strategies, geographical influences and medical prevention activities. The research study focuses on reference countries like Germany, New Zealand, Sweden, Poland and Ireland. The data are taken from the database of Johns Hopkins University (2020).
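The two saturation models named in the abstract have simple closed forms. The sketch below uses one common parameterization of each (a modelling assumption, since the abstract does not fix one) and exposes the property the study exploits: both curves flatten toward a saturation level K, with the logistic inflecting at K/2 and the Gompertz curve at K/e:

```python
import math

def verhulst(t, K, r, t0):
    """Logistic (Verhulst, 1838) growth: saturation level K,
    growth rate r, inflection time t0 where the curve reaches K/2."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def gompertz(t, K, b, c):
    """Gompertz (1825) growth: saturation level K, displacement b,
    growth rate c; inflection occurs at t = ln(b)/c, at height K/e."""
    return K * math.exp(-b * math.exp(-c * t))
```

Fitting K, r (or c) over a short time window is what lets the study read off the spreading speed and the strength of the saturation effect for each country and control phase.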

A systemic approach for preliminary risk analysis of cybersecurity of Industrial Control Systems

ABSTRACT. The cybersecurity of industrial facilities is a matter of concern these days [1]. Recent attacks (Oldsmar, 2021; WannaCry, 2017) [3] and older ones like Stuxnet (2010) have shown the potential vulnerability of such systems. A number of approaches to control this risk have been proposed in the main guides and standards, in particular the IEC 62443 standard [4] and the ANSSI guide [5]. In these approaches, the first step is the risk analysis of the systems, and the guides give no indication of which method to choose. The difficulty is to find a method that is systematic and nevertheless not too difficult to implement. Many approaches have been proposed [2], but they are often rather difficult to apply and require thorough expertise in cybersecurity. The purpose of this paper is to describe such an approach. The method is suited for performing cybersecurity risk analyses of installations composed of computer components and industrial control systems, and for giving an overall idea of the risks. The facility is decomposed into interacting sub-systems, and each sub-system can be seen as both generating attacks and being subject to attacks. Each sub-system is modelled by a finite state automaton, each state being a compromise indicator. A generic database of compromise states and attacks has been built from MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge). The scenarios are obtained by composition. The main interest of the approach is that it quickly provides a preliminary cybersecurity risk analysis of the system, obtained from the description of the elements inside each sub-system. The approach is illustrated on an example installation.

Risk analysis of emergency operations in presence of limited prior knowledge

ABSTRACT. The risks of emergency operations can expand accident losses. The emergency process can introduce uncertain factors, and the resulting risk can threaten the safe realization of the emergency goal. Considering the characteristics of emergency operations, we propose a methodology to assess their risk. In the methodology, a Bayesian network is applied to capture the risk characteristics of the emergency process. A fault tree is utilized to depict the causes of emergency failure, and fuzzy set theory is employed to determine the prior probabilities of the root nodes in the presence of limited prior knowledge. The methodology was applied to the emergency operations in a deepwater blowout accident. The risk-influencing factors of emergency operations and their correlations were identified, and the fault tree was used to assess the risk of the lower-level processes. A Bayesian network-based emergency operation model for the deepwater blowout is developed. The model captures the variability of parameters and simulates the evolution of emergency operations over time, with probabilistic updates based on field observations. Mutual information is also utilized to conduct sensitivity analysis and diagnostic reasoning on the model.
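One standard way fuzzy set theory supplies priors when failure data are scarce is to elicit triangular fuzzy estimates from experts, aggregate them, and defuzzify. The abstract does not specify the membership functions or aggregation scheme used, so the sketch below shows one common recipe (averaged triangular numbers, centroid defuzzification) with invented expert values:

```python
def triangular_centroid(a, b, c):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c):
    a = lower bound, b = modal value, c = upper bound."""
    return (a + b + c) / 3.0

def fuzzy_failure_probability(expert_estimates):
    """Aggregate expert triangular estimates by averaging each vertex,
    then defuzzify to a crisp prior probability for a Bayesian-network
    root node."""
    n = len(expert_estimates)
    a = sum(e[0] for e in expert_estimates) / n
    b = sum(e[1] for e in expert_estimates) / n
    c = sum(e[2] for e in expert_estimates) / n
    return triangular_centroid(a, b, c)
```

For instance, two experts judging a root event at (0.001, 0.002, 0.004) and (0.002, 0.003, 0.004) yield a crisp prior of about 0.00267, which can then be entered into the network and updated with field observations.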

14:00-15:00 Session WE3B: System Reliability
Location: Atrium 2
RAM and Importance Measures Analysis of Offshore Drilling Rigs’ Cuttings Dryer

ABSTRACT. Offshore drilling operations constitute a complex system operated in extreme conditions. Their operational safety relies on a series of well barrier elements (WBE) that, alone or combined, are capable of preventing unintentional flows of fluids or gases from the formation either into another formation or to the surface. During well construction and workover maneuvers, the drilling fluid column is always employed as a well barrier, a configuration that by itself shows the relevance of the drilling fluid for overall operational safety and reliability. Besides that, the fluid column also transports the cuttings from the well, and before being pumped back into the wellbore the fluid must be cleaned of the carried cuttings. This operation is done in stages, in a process that, although seemingly simple, is responsible for several hours of downtime, since its unavailability forces the drilling unit to stop operations. This paper presents a Reliability, Availability, and Maintainability (RAM) analysis of an actual cuttings-dryer configuration of a drillship. The methodology consisted of a functional analysis, in which the technical and operational characteristics of the system were identified, followed by modeling of the system using the Reliability Block Diagram (RBD) technique. Since traditional RBDs do not permit the analysis of failure-repair transitions, it was necessary to additionally apply Monte Carlo Simulation (MCS) to assess availability and maintainability. Finally, a multicriteria Importance Measures analysis was performed, considering five different approaches: Birnbaum, Criticality, RRW (Risk Reduction Worth), RAW (Risk Achievement Worth), and Fussell-Vesely. The analysis shows that the fluid cleaner, centrifuge, and catch tank are the most critical components of the system. It also indicates that monthly preventive maintenance should be performed in order to lessen the likelihood of failure.
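The combination of an RBD with Monte Carlo simulation of failure-repair transitions can be illustrated minimally. The sketch below is not the paper's model: it assumes exponential failure and repair times and invented MTTF/MTTR values, simulates a single component's alternating renewal process, and combines independent components analytically for a series RBD structure:

```python
import random

def simulate_availability(mttf, mttr, horizon, seed=0):
    """Estimate a single component's steady-state availability by
    simulating an alternating failure/repair (renewal) process with
    exponential up and down times over a long horizon."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon:
        up = rng.expovariate(1.0 / mttf)          # time to failure
        up_time += min(up, horizon - t)
        t += up
        if t >= horizon:
            break
        t += rng.expovariate(1.0 / mttr)          # repair duration
    return up_time / horizon

def series_availability(component_avail):
    """RBD series structure: the system is up only while every
    independent block is up, so availabilities multiply."""
    a = 1.0
    for ai in component_avail:
        a *= ai
    return a
```

For a component with MTTF 100 h and MTTR 10 h, the simulated availability converges to the analytic value MTTF/(MTTF+MTTR) = 0.909..., and chaining such estimates through `series_availability` gives the system-level figure an importance-measure ranking would then dissect.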

Knowledge-based system modelling to enhance Design for Reliability process: an application to LNG industry
PRESENTER: Andrea Greco

ABSTRACT. Wärtsilä aims to increase customer value in the marine and energy markets through three key focus areas: energy efficiency, lifecycle optimization, and innovative solutions. In this framework, modern simulation capabilities and digital options provide an essential tool for testing the feasibility of system architectures while keeping costs at an acceptable level. In order to meet challenging emissions regulations, modern vessels require the integration and interaction of many on-board systems. The adoption of fuel-saving and low-emission technologies is pushing towards ever more complex and expensive design solutions. Simulation models ensure affordable and effective assessment of systems integration, as well as investigation of failure modes and hazardous situations, in place of expensive and time-consuming laboratory tests. This paper describes how Wärtsilä relied on system simulation to investigate the feasibility of a hybrid configuration to be implemented on an LNG vessel. The investigation focuses on a solution that exploits the boil-off gas process to feed on-board auxiliary engines, reducing pollution and optimizing operating costs. Furthermore, the proposed layout includes the installation of a battery pack. The installed batteries aim to enhance power availability on board as well as to reduce thermal stresses in the auxiliary engines through a peak-shaving strategy. The proposed solution would allow both cost savings and increased on-board system reliability.

PRESENTER: Jianpeng Chan

ABSTRACT. We propose a modification of the improved cross entropy (iCE) method to enhance its performance for network reliability assessment. The iCE method is an adaptive sampling approach that employs a smooth transition from the nominal density to the optimal importance sampling (IS) density and updates a chosen parametric distribution model through cross entropy minimization (Papaioannou et al., 2019). The efficiency and accuracy of the iCE method are largely influenced by the choice of the parametric model. In the context of the reliability of systems with binary component states, the obvious choice of parametric model is the multivariate Bernoulli distribution. In systems with highly reliable components, the parameters of the Bernoulli model often converge to 0 due to the lack of failure samples. This problem is known as the "zero count problem" or "sparse data problem" in the context of maximum likelihood estimation (Murphy, 2012). To circumvent it, an accurate yet efficient algorithm termed the Bayesian cross entropy (BCE) method is proposed. In this approach, instead of employing a weighted maximum likelihood estimator to update the distribution model, the posterior predictive distribution is derived. The information from the samples generated at previous levels can be further exploited in BCE by introducing a mixed prior. A set of examples is used to illustrate the efficiency and accuracy of the proposed method.
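
The zero-count issue and the posterior-predictive fix can be sketched in a few lines (a Jeffreys-style Beta(0.5, 0.5) prior per component is assumed here for illustration; the paper's mixed prior is richer, and the samples below are invented):

```python
# Illustrative sketch of the "zero count problem" and a Bayesian fix.

def mle_bernoulli(samples, i):
    # Weighted-MLE analogue: collapses to 0 when component i never fails.
    return sum(s[i] for s in samples) / len(samples)

def posterior_predictive(samples, i, a=0.5, b=0.5):
    # Beta-Bernoulli posterior predictive: (a + k) / (a + b + n) > 0 always.
    k = sum(s[i] for s in samples)
    return (a + k) / (a + b + len(samples))

# Binary component states (1 = failed) for a 3-component system in which
# component 2 never fails in the available samples:
samples = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 0)]
print(mle_bernoulli(samples, 2))         # 0.0 -> degenerate IS density
print(posterior_predictive(samples, 2))  # small but strictly positive
```

The posterior predictive never assigns exactly zero probability to a failure, so the importance sampling density stays non-degenerate even with sparse failure data.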

14:00-15:00 Session WE3C: Maintenance Modeling and Applications
Condition-based Maintenance with Functional Modeling: Challenges and Opportunities
PRESENTER: Mengchu Song

ABSTRACT. Effective operator decision-making in complex systems requires integrated support for both operation and maintenance, which are nevertheless mostly carried out separately nowadays. Practical applications have shown the benefits of functional modeling, in particular a methodology called Multilevel Flow Modeling (MFM), for developing intelligent reasoning platforms that can help operators quickly recognize the root cause of a failure and propose potential counteractions. The testing of the MFM-based reasoning platform in real industrial control rooms has indicated that the efficiency of corrective maintenance can accordingly be improved. In order to further reduce maintenance costs, condition-based maintenance (CBM) has also been advocated. With its robust capability for representation of and reasoning about artifacts, MFM has the potential to support condition monitoring and thus promote CBM, but it may need methodological extension, such as from the macroscopic process level to the perspective of mechanical equipment. This paper goes over the key steps in CBM, i.e., diagnostics, prognostics, and maintenance decisions, from which the possible opportunities and the applicable range of MFM are defined. In addition, the prevailing CBM approaches are reviewed. Challenges including the integration of various data and the determination of the abstraction level are highlighted. By addressing the challenges and opportunities of applying MFM as a unified knowledge base for CBM, the authors aim to develop an information system for maintenance planning, which will interact with the existing tool for operation aid to provide operators with comprehensive decision support during abnormal events.

PRESENTER: Pablo Viveros

ABSTRACT. This research presents the elaboration and computational implementation of a framework for optimizing maintenance intervention planning strategies. Our model comprises a novel integrated approach to the opportunistic grouping strategy for preventive maintenance activities originally presented in Viveros, Mena, Zio & Campos (2020), incorporating through this extension new criteria to improve applicability in real industrial environments: a technical feasibility criterion for grouping, a non-negligible repair time for preventive maintenance activities, and the application of time-window tolerances in order to stimulate preventive maintenance grouping schemes. This work also develops an optimization model based on the mixed-integer linear programming (MILP) paradigm for the implementation of the present framework. Our numerical experiments show a 39% downtime reduction in the system under analysis, considering a maximum tolerance factor of 10% for six preventive maintenance activities, demonstrating the framework's effectiveness in improving productivity and reducing fixed maintenance costs. This research aims to formulate a new proposal for efficient maintenance planning that considers realistic applicability criteria to facilitate the transfer of knowledge and its industrial application, with an approach oriented to the simulation and risk quantification of failure events in complex systems.
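
The economic effect of time-window tolerances on grouping can be illustrated with a toy greedy heuristic (this is not the paper's MILP model; all dates, costs, and the grouping rule are invented for illustration):

```python
# Toy illustration of opportunistic grouping with time-window tolerances.
SETUP_COST = 100.0                       # fixed cost paid once per maintenance stop
due_dates = [10.0, 10.8, 11.0, 20.0, 20.5, 35.0]
tolerance = 0.10                         # each activity may shift by +/-10%

def group_with_tolerance(dates, tol):
    """Greedy grouping: merge an activity into the current stop if its
    tolerance window covers the stop time; otherwise open a new stop as
    late as the window allows, to catch later activities."""
    stops = []
    for d in sorted(dates):
        lo, hi = d * (1 - tol), d * (1 + tol)
        if stops and lo <= stops[-1] <= hi:
            continue                     # executed at the existing stop
        stops.append(hi)
    return stops

stops = group_with_tolerance(due_dates, tolerance)
print(len(stops), "stops instead of", len(due_dates))
print("setup-cost saving:", (len(due_dates) - len(stops)) * SETUP_COST)
```

Even this crude heuristic shows why tolerances pay off: every merged activity saves one fixed setup cost, which is the trade-off the MILP optimizes exactly.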

Quasi-Opportunistic Inspection of a Critical System

ABSTRACT. We propose a maintenance policy in which inspections are performed mostly but not exclusively at opportunities that arise at random. The model is motivated by the inspection of pumps in artesian wells that are geographically remote. The system has three states: good, defective and failed. The system operates when it is good or defective. The failed state is immediately revealed. The defective state, which acts as a warning stage prior to failure, is revealed only by inspection. At a positive inspection (defect found) the system is replaced. Maintenance interventions at other nearby systems are opportunities. In our quasi-opportunistic policy, inspection neither uses an opportunity if it occurs too soon nor waits too long for an opportunity. The policy mimics inspections that are planned flexibly so that production stoppages or missions or other events determine the times for inspection, while accommodating statutory or safety regulations about the maximum allowable time between inspections of a system. It also generalises the delay time model. We study the behaviour of the policy numerically for a range of values of the parameters of the model. The proposed policy is always superior in cost terms to both a purely opportunistic policy and a periodic inspection policy.
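
A rough Monte Carlo sketch of such a policy, under strong simplifying assumptions (exponential sojourn times, Poisson opportunities, renewal at every inspection or failure, which the actual model does not assume; all parameter values are invented):

```python
import random

random.seed(1)

# Invented rates and costs for a simplified sketch of the policy.
LAM_DEFECT, LAM_FAIL, LAM_OPP = 0.2, 0.5, 0.4  # good->defective, defective->failed, opportunities
C_INSPECT, C_REPLACE, C_FAIL = 1.0, 10.0, 50.0
T0, T_MAX = 1.0, 6.0  # ignore opportunities before T0; inspect by T_MAX at the latest

def one_cycle():
    t_defect = random.expovariate(LAM_DEFECT)
    t_fail = t_defect + random.expovariate(LAM_FAIL)
    t = 0.0
    while t < T0:                       # take the first opportunity at or after T0
        t += random.expovariate(LAM_OPP)
    t_inspect = min(t, T_MAX)           # statutory cap on how long to wait
    if t_fail <= t_inspect:             # failure is immediately revealed
        return C_FAIL + C_REPLACE, t_fail
    cost = C_INSPECT
    if t_defect <= t_inspect:           # positive inspection: replace
        cost += C_REPLACE
    return cost, t_inspect

runs = [one_cycle() for _ in range(20000)]
cost_rate = sum(c for c, _ in runs) / sum(t for _, t in runs)
print("estimated long-run cost rate:", round(cost_rate, 2))
```

Varying T0 and T_MAX in such a simulation reproduces the trade-off the policy formalizes: neither using an opportunity that comes too soon nor waiting too long for one.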

14:00-15:00 Session WE3D: Uncertainty Analysis
Location: Panoramique
Interval Uncertainty in Logistic Regression
PRESENTER: Nicholas Gray

ABSTRACT. Logistic regression models the probability of a binary outcome given some predicting features or risk factors. As many decisions and events are binary in nature (yes/no, failed/passed, sick/well), logistic regression has many practical applications, and it is thus considered an important machine learning algorithm. Logistic regression algorithms can be used to make predictions across many different fields, from predicting whether a sports team will win a particular match [1] to the probability of lightning strikes at the Kennedy Space Center [2]. Traditionally, it has been assumed that all the values of the risk factors and outcome statuses used in logistic regression are precisely known. This assumption is valid when the sampling uncertainty or natural variability in the data is large compared to the incertitude (lack of knowledge, or epistemic uncertainty). However, in practice there can be considerable incertitude in both the independent and dependent variables used in the regression analysis, as well as incertitude in the application of the regression model. Measurement uncertainty is a common cause of epistemic uncertainty in the risk factors and is often expressed as plus-or-minus intervals. Analysis using data from combined studies with inconsistent measurement methods can even result in data sets with varying degrees of incertitude. Likewise, the outcome data can be uncertain if there is ambiguity in the classification scheme (e.g., diseased/healthy) or if the outcomes are lost or otherwise censored. In the case of uncertain classifications, the data points can be expressed as vacuous [0, 1] intervals.

The purpose of this talk is to show why analysts should account for these sources of uncertainty. We show that considering the interval uncertainties introduces upper and lower bounds on the logistic function. We suggest that uncertainty added to the logistic regression caused by uncertain data points leads to values for which classifications cannot be made, as the interval probabilities straddle decision thresholds. Such a sample would require further analysis to make a prediction; in a safety-critical or high-cost situation this could be of benefit. In general, excluding uncertain predictions from the analysis leads to improvements in the sensitivity and specificity of the predicted values.
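
Because the logistic function is monotone in the linear score, the bounds induced by interval-valued features are easy to compute for a fixed fitted model. A minimal sketch (the coefficients and data are hypothetical, not from the talk):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def interval_probability(beta0, beta, x_lo, x_hi):
    """Bounds on the logistic output when each feature is an interval:
    the sigmoid is monotone, so extremes of the linear score give the bounds."""
    z_lo = z_hi = beta0
    for b, lo, hi in zip(beta, x_lo, x_hi):
        if b >= 0:
            z_lo += b * lo
            z_hi += b * hi
        else:
            z_lo += b * hi
            z_hi += b * lo
    return sigmoid(z_lo), sigmoid(z_hi)

# One risk factor measured as 1.25 +/- 0.75 under a hypothetical fitted model:
p_lo, p_hi = interval_probability(-1.0, [0.8], [0.5], [2.0])
print(round(p_lo, 3), round(p_hi, 3))
# If the decision threshold (here 0.5) lies inside [p_lo, p_hi], no
# classification can be made for this sample without further analysis.
print("straddles 0.5:", p_lo < 0.5 < p_hi)
```

Samples whose interval probability straddles the threshold are exactly those the talk proposes to set aside for further analysis rather than force into a class.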

PRESENTER: Chuhao Jiang

ABSTRACT. Nowadays, buildings are responsible for over 30% of society's energy consumption and half of the global electricity demand [1]. In order to build a sustainable and integrated energy system, it is crucial to make buildings more intelligent, not only to minimize energy consumption while ensuring comfort, but also to provide ancillary services to system operators and balancing parties in the future smart energy market. The prerequisite for achieving the goal of the intelligent building is data collection. The accuracy, diversity, and non-repeatability of the data are key to this problem, so finding an optimization method to locate different kinds of sensors, so as to obtain good data quality while minimizing the number of sensors, plays a prominent role in data collection. This presentation first provides a short state-of-the-art review of various optimal sensor placement techniques. Based on the literature review, a greedy algorithm relying on the condition number of a Fisher information matrix [2] is presented in the methodology section. It is applied to find the optimal placement of M temperature sensors for a typical university lecture room in the west of France, for which the temperature was computed at several points through energy (EnergyPlus tool) and CFD simulations. In the results section of the presentation, the set of M indoor temperature sensors that best fits the temperature at T target points is identified. Then, an implicit model linking estimated and real temperature at some target points is designed for N occupancy scenarios. The model-related error is estimated for each of the N cases. Then, using these model uncertainties as well as the estimated temperature at the target points, an artificial neural network is applied to predict the occupancy status (absence or presence of occupants) with a certain level of confidence. Furthermore, a deeper study of varying the number M of sensors will be carried out to gain insight into prediction accuracy.
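
The greedy criterion described in the abstract can be sketched as follows (the sensitivity matrix is a random stand-in, and seeding the selection with the first few candidates is a simplifying assumption, not the presented case study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in sensitivity matrix: rows = candidate sensor locations, columns =
# model parameters (in the study these would come from the simulated fields).
S = rng.standard_normal((30, 4))

def greedy_placement(S, M):
    """Greedily add the candidate that minimizes the condition number of the
    Fisher information matrix F = S_sel^T S_sel. Seeded with the first
    n_par candidates so that F starts out invertible (a simplification)."""
    n_par = S.shape[1]
    chosen = list(range(n_par))
    while len(chosen) < M:
        best, best_cond = None, np.inf
        for i in range(S.shape[0]):
            if i in chosen:
                continue
            sel = S[chosen + [i], :]
            c = np.linalg.cond(sel.T @ sel)
            if c < best_cond:
                best, best_cond = i, c
        chosen.append(best)
    return chosen

print(greedy_placement(S, M=6))
```

A well-conditioned Fisher information matrix means no parameter direction is left nearly unobserved, which is why the condition number is a natural placement criterion.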

14:00-15:00 Session WE3E: Autonomous system safety, risk, and security
Location: Amphi Jardin
Mind the gap between automation and meaningful human control

ABSTRACT. In this paper we have analysed the challenges posed by automation in safety-critical operations. Background information and experiences have been gathered from automation in aviation, drones, metro systems, and automated cars. This information has been used to analyse and explore challenges and opportunities when introducing automation in the oil and gas industry. The exploration of automation challenges in the oil and gas industry has been done through a systematic literature review, exploration of relevant accidents, interviews, and workgroup discussions. Experiences with automation from aviation, drones, metro systems, and automated cars indicate that gradual automation in collaboration with key users has improved efficiency, safety, and user satisfaction. Automation has been a success when the operational design domain has ensured that the environment is structured and not too complex. Autonomy in complex environments is still an elusive future goal. To support the development, there is a need to learn from accidents, focus on the science of human factors, and support meaningful human intervention or control. Automation may benefit from the ability to be resilient, i.e., the ability to handle the unexpected and go to a safe state through mechanisms such as redundancy in infrastructure and the ability to manage performance safely close to performance boundaries. However, too high a level of automation, poor human-machine interfaces, and poor sensemaking may create a gap in safety performance that must be addressed. In exploring accidents involving automation, our findings indicated gaps in understanding and learning from the accidents. These gaps were due to poor focus on human factors issues (not considering human limitations and strengths), poor focus on underlying complex design, and a too strong focus on rules instead of trying to understand how and why there was a gap between work as done and work as imagined.
The interviews and workshops identified three main challenges related to automation and the interfaces to human operators and users. There is a strong technology optimism and poor focus on human limitations and strengths. Successful projects have benefited from user-centred design, clarity in responsibilities, and strengthened possibilities for meaningful human control. Learning from incidents and accidents has often missed the science of human factors and root causes from poor design, and has not minded the gap between procedures/rules (work as imagined) and work as actually done. Our main finding is to support technology use (automation) balanced by a focus on organization and meaningful human control, in order to support safe and efficient use of automation and autonomy.

A Modeling Approach to Consider the Effects of Security Attacks on the Safety Assessment of Autonomous Vehicles - An AT-CARS Extension and Use Case

ABSTRACT. Researchers and developers of autonomous vehicles are facing various challenges ranging from establishing public acceptance to meeting high reliability requirements. Due to the complexity of the autonomous system structure and its components, these challenges are often faced individually in specific areas, e.g., safety and security, or are addressed separately for each software and hardware component. The applied approaches deliver single solutions that might not consider the interdependencies between the different areas. Some common interdependencies include, for instance, the safety failure of an element that provides security measures, or the safety failure of the system due to a security attack on a safety-related component. Therefore, in this paper, we integrate these ideas, based on our previous research, into a safety analysis that considers the interdependencies between safety failures and security attacks. In particular, we implement security attack rates in our safety analysis tool, called AT-CARS, and develop failure management strategies to handle these security attacks. Furthermore, we introduce a new component to our modeling approach, the so-called hardware-security component, which provides security mechanisms for specific components. Finally, a show-case demonstrator visualizes the developed methods and tools.

Analyzing Influence of Robustness of Neural Networks on the Safety of Autonomous Vehicles

ABSTRACT. Neural networks (NNs) have shown remarkable perception performance in their application in autonomous vehicles (AVs). However, NNs are intrinsically vulnerable to perturbations, such as occurrences outside of the training sets, scene noise, instrument noise, image translation and rotation, or small changes intentionally added to the original image (called adversarial perturbations). Incorrect conclusions from the perception systems (e.g., missed objects, wrong classifications, and traffic sign misdetection or misreading) have been a major cause of disengagement incidents in AVs. In order to explore the dynamic nature of hazardous events in AVs, we develop a range of methods to analyze AV safety and security. This work is part of the project and is devoted to analyzing the influence of robustness in the NN-based perception system by using fault tree analysis (FTA). We extend the traditional FTA to represent combinations of failure causes in a multi-dimensional space, i.e., two variables that jointly influence whether the image is classified correctly. The extended FTA is demonstrated on the traffic sign recognition module of an AV, both theoretically and in practice.
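
One way to picture the multi-dimensional extension is to treat basic events as regions of a two-variable perturbation space and evaluate the gate logic over a grid. A toy sketch (all event definitions and thresholds are invented, not the paper's model):

```python
# Toy gate logic over a two-variable perturbation space.

def top_event(noise, rotation):
    # Basic events defined as regions of the (noise, rotation) space:
    e_noise = noise > 0.6                         # heavy scene noise alone
    e_rot = rotation > 15.0                       # large rotation alone (degrees)
    e_combined = noise > 0.3 and rotation > 8.0   # milder combined perturbation
    # Top event "image misclassified" as an OR over the region events:
    return e_noise or e_rot or e_combined

# Map the failure region on a coarse grid of (noise, rotation) values:
grid = [(n / 10, r) for n in range(11) for r in (0.0, 5.0, 10.0, 20.0)]
failure_region = [(n, r) for n, r in grid if top_event(n, r)]
print(len(failure_region), "of", len(grid), "grid points trigger the top event")
```

The combined event is the point of the extension: neither perturbation alone crosses its threshold, yet their conjunction still triggers the top event, which a one-dimensional fault tree cannot express.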

14:00-15:00 Session WE3F: Critical Infrastructures
PRESENTER: Pavel Rusek

ABSTRACT. The subject of this article is dedicated gas installations: gas pipelines and appliances for natural gas or propane-butane pursuant to Decree 21/1979 Coll., for use in commercial and residential buildings [1]. Their operation in the Czech Republic is governed by a number of national legislative regulations that follow international standards in the area in question. The obligations of operators and of state supervision are regulated by law. State surveillance is accompanied by regular inspections of operators of dedicated and non-dedicated gas installations, which cover not only the installations themselves but also the flue gas routes and other related installations. Inspections and revisions are carried out by accredited inspection technicians who meet the professional requirements set out in Decree No. 85/1978 Coll., on inspections, revisions and tests of gas equipment, and who hold a valid certificate issued on the basis of an examination at the Technical Inspection of the Czech Republic [2]. Nevertheless, a number of accidents occur in commercial and residential buildings, resulting in human casualties, significant material losses, and environmental damage. The article presents case studies of three selected accidents and the results of an evaluation of an accident database that summarizes the sources of risk. The most common causes of accidents include, for example, explosions of natural gas in buildings caused by damage to the gas pipeline in the adjacent area. As a result of gas penetrating into the building, an explosive mixture is created, and after initiation, e.g. by switching on an electrical appliance, an explosion occurs. For example, the gas explosion that destroyed a family house in Tursko on October 28, 2010 caused the collapse of the roof, the destruction of a third of the house, the death of one person, injuries to 4 people, and the evacuation of 20 other people by firefighters [3].
One of the causes of accidents is the fact that, according to the legislation, inspections of gas installations must be arranged by their operators. In practice, operators often fail to do so and, at the same time, do not respect the prescribed safety procedures. In conclusion, technical and legal measures are therefore proposed to increase the safety of the operation of gas installations from the design stage to the end-of-life phase.

Perception shift between the classical physical and modern digital notion of critical infrastructures of a state: elements of diagnosis based on a qualitative study

ABSTRACT. Over the last two decades, society has been radically transformed by digitalization, moving to a more globalized model with transnational interdependencies in multiple aspects, allowing the massive and wide transportation and distribution of people, materials, products, and services, as shown by Aven and Zio (2020). But this globalization of society is creating new risks that current risk management practices sometimes have difficulty identifying and managing, such as cyber-risks, social engineering, social exclusion, or reduced commitment to work, where the human factor is always underlying. We notice a shift between the classical physical and the modern digital notion of the critical infrastructures of a state. This is the case, for example, for the broadband network or cyberspace. The shift of some critical infrastructures creates new risks that should not be ignored. When a major incident hits a modern digital critical infrastructure, the handling of the crisis cannot rely on the standard approaches, and new ones need to be developed to protect the essential functions of a government, national security, the national economy, or public health, as described by Hull et al. (2006). The event could have a negative impact on other infrastructures as well, as explained by Löschel et al. (2010). In this paper, we take an interest in the role that governments should play in identifying and managing the risks of their critical digital infrastructures, in particular given the increased interdependencies with the GAFAM. We have conducted semi-directive interviews with people from the public and private sectors with direct or indirect experience of crisis management. The findings of this research have enabled us to develop a diagnosis template to better assess and respond to the risks of newly digitalized critical infrastructures.

Functional Impact Analysis for Complex Critical Infrastructure Systems
PRESENTER: Dustin Witte

ABSTRACT. The well-being of the population depends to a large extent on the services of critical infrastructures (CIs). It is a well-known fact that dependencies between CIs lead to even greater impacts on society in the case of malfunctions. To assess, understand, and mitigate these impacts, it is essential to determine and, where possible, quantify the dependencies. The approach described here consists of three steps. First, the disruptive event for which the impacts should be analyzed is defined (e.g. a blackout). In a second step, the infrastructure system is modeled. For this purpose, the system is divided into its entities according to the common CI definition. Then, the possible service levels of each entity are assessed and quantified, such as standard operation mode, emergency operation mode, or breakdown. Logically linked requirements for reaching those levels are defined, either autonomously or as an external dependency (e.g. a power generator), forming a system of dependent services. In a third step, the impacts of events are analyzed by degrading selected services and calculating the effects on other services according to the logically linked requirements. Uncertainties can be described by specifying probabilities for service levels. The development over time can be examined by evaluating the impacts of sequential time intervals, e.g. for a regional blackout, where more CIs will face difficulties over time.
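
The second and third steps can be sketched as a small dependency-propagation routine (the entities, levels, and rules below are invented examples, not the paper's model):

```python
# Entities with service levels and logically linked requirements, then
# propagation of a disruptive event through the dependency system.

LEVELS = ["breakdown", "emergency", "standard"]

# requirements[entity][level] -> list of (required_entity, minimum_level)
requirements = {
    "power_grid": {"standard": [], "emergency": []},
    "water_supply": {
        "standard": [("power_grid", "standard")],
        "emergency": [("power_grid", "emergency")],  # backup-generator case
    },
    "hospital": {
        "standard": [("power_grid", "standard"), ("water_supply", "standard")],
        "emergency": [("water_supply", "emergency")],
    },
}

def propagate(disrupted):
    """Best reachable service level of every entity, given a dict of
    externally imposed levels (the disruptive event)."""
    state = dict(disrupted)

    def level_of(entity):
        if entity in state:
            return state[entity]
        for level in ("standard", "emergency"):  # try the best level first
            reqs = requirements[entity][level]
            if all(LEVELS.index(level_of(e)) >= LEVELS.index(m) for e, m in reqs):
                state[entity] = level
                return level
        state[entity] = "breakdown"
        return "breakdown"

    for e in requirements:
        level_of(e)
    return state

print(propagate({"power_grid": "emergency"}))
```

Replacing the imposed level with a probability distribution over levels, and re-running per time interval, gives the uncertainty and time-development analyses the abstract mentions.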

14:00-15:00 Session WE3G: Oil and Gas Industry
Location: Atrium 3
Environmental monitoring in a Cuban oil storage plant to characterize hydrocarbon pollution exposure in the fence-line community
PRESENTER: David Castro

ABSTRACT. Due to industrial development and the wide application of oil in industry, large amounts of petroleum hydrocarbons are released into the environment every year. Legacy contamination continues to adversely impact a new generation of residents in fence-line communities, interacting with other risks. The goal was to characterize the hydrocarbon pollution caused by a Cuban oil storage plant as a stressor on the community living near the industry. The background of engineering research on the area of interest was reviewed, and four comprehensive geographical strata were established. A monitoring program covering different environmental matrices upstream and downstream of the plant was designed. Inside the plant, the oily residuals treatment system was evaluated over a period of 3 years. Around the industry, 19 wells were identified, almost all used to supply water for human consumption. The laboratory results regarding hydrocarbons, fats and oils, organic load, and organoleptic characteristics frequently exceeded the standard requirements for the different environmental matrices. The causal analysis suggested that the hydrocarbon contamination of the aquifer has been produced by infiltration in the unsaturated zone, related to the management of oily residuals in the plant. The results highlight the negative environmental impacts caused by the plant operations, which act as a dynamic stressor on the territory, increasing the vulnerability of its nearby community. The results were presented to stakeholders to contribute to awareness in the decision-making process.

Physics-based accelerated RDT for highly reliable equipment
PRESENTER: Eduardo Menezes

ABSTRACT. Reliability Demonstration Testing (RDT) is a reliability evaluation methodology focused on experimentally simulating system lifetime and using the test results to conclude whether a pre-specified reliability threshold is reached at the desired confidence level. At first, RDT planning entails analyzing the specific failure modes and mechanisms that may cause system failure, so that the test design adequately addresses them. Then, the test time for a given number of specimens is estimated, or the required number of units to be tested over the available test time is established. However, for highly reliable equipment, the time or the number of items that must be tested to simulate actual field conditions is rather long, and may even be unfeasible given budget and time constraints. An alternative solution for applying RDT in these situations involves using physics modeling of the failure mechanism to reduce the test time by accelerating test variables according to the underlying physical law. In this paper, we design an RDT for highly reliable equipment used in the O&G industry, considering vibration-induced fatigue failure as the main failure mechanism. The physics-based RDT includes the fatigue S-N curve in its planning, which permits test time acceleration and test cost reduction. A sensitivity analysis is carried out to assess the impact of RDT inputs, such as the acceleration level and the number of specimens tested, on the test time. The obtained results show that the proposed physics-based RDT is an effective method to support the design of efficient physical tests for equipment under development that must comply with high-reliability targets.
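
Under the Basquin form of the S-N curve, the acceleration factor and the compressed test time follow directly. A minimal sketch (the fatigue exponent, stress ratio, and target life are invented numbers, not the paper's case study):

```python
# Basquin relation N = C * S^(-b): cycles to failure scale as a power of the
# stress amplitude, so testing at higher stress compresses the required time.

def acceleration_factor(s_use, s_test, b):
    # AF = (S_test / S_use)^b under the Basquin relation
    return (s_test / s_use) ** b

def accelerated_test_time(target_life_hours, s_use, s_test, b):
    return target_life_hours / acceleration_factor(s_use, s_test, b)

# Demonstrating 50,000 h of field vibration life with a 3x stress amplitude
# and a hypothetical fatigue exponent b = 4:
af = acceleration_factor(1.0, 3.0, 4)
print("acceleration factor:", af)
print("test hours needed:", round(accelerated_test_time(50_000, 1.0, 3.0, 4), 1))
```

The sensitivity of the test time to the exponent b is exactly why the abstract's sensitivity analysis matters: a modest error in the S-N slope changes the acceleration factor by a power law.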

A Bayesian Regularized Artificial Neural Network for the estimation of the Ignition Probability in accidents in Oil & Gas plants

ABSTRACT. Within the Quantitative Risk Assessment (QRA) of Oil & Gas (O&G) plants, the estimation of the Ignition Probability (IP) following the release of flammable material in an accident (e.g., a Loss of Primary Containment (LOPC)) is commonly conducted by time-consuming and computationally demanding Computational Fluid Dynamics (CFD) simulations, for only a limited number of operational configurations and accident scenarios. In this work, we propose an Artificial Neural Network (ANN) to overcome these limitations. Specifically, a Bayesian Regularized ANN (BRANN) is developed from a limited set of operational configurations and LOPC characteristics relative to a representative onshore O&G plant.

14:00-15:00 Session WE3H: Maritime and Offshore Technology
Location: Cointreau
PRESENTER: Alf Ove Braseth

ABSTRACT. Digitalization in the maritime domain brings several potential benefits. It is expected that autonomous ships operated from a land-based center can reduce costs, risk to personnel, and environmental emissions through reduced ship speed. There is, however, a need to perform more research into this complex socio-technical system to ensure a safe and efficient operational model. One key concept is to maintain a high level of situational awareness (Endsley, 2012). Based on Endsley's three-level model of situational awareness, we ask: i) Which information is needed, and how should the information be presented, to provide an understanding of the current and future situation for the land-based operation center? ii) Which situations are particularly challenging for autonomous ships?

This is addressed through semi-structured interviews of four sea captains on passenger ferries in the outer Oslo fjord. A case of autonomous cargo ships operated from a land-based center was used to facilitate the discussions. It is performed within the project: “Land-based operation of autonomous ships (LOAS)”, financed by the Research Council of Norway (RCN). The interviews were performed through MS Teams due to the Covid-19 situation.

The results suggest that the operational center should focus on presenting the "larger situational picture", where the radar display is particularly important for building a picture of the situation. There is further a need to combine information from the ship's technical systems with the "out of the window" bridge view. The captains reported that planning ahead was mainly performed before unberthing; this should be considered for remote operations. The captains stressed that performing a safe journey is joint teamwork by the whole ship's crew. It was a concern that the "feeling for the ship" could be lost during challenging weather conditions if the crew is not onboard. Understanding the automation level was also mentioned.

We suggest that further work should focus on a user-centered integration of systems, presenting the “whole picture”, avoiding too many standalone systems. We suggest using simulator-based studies with relevant maritime scenarios. Further research should also focus on how to support “a feeling for the ship”, particularly for bad weather situations.


ABSTRACT. During recent years, technology supporting autonomous and remotely operated ships has evolved. It is expected that unmanned, autonomous cargo ships can be monitored and operated from a remote operation center (ROC) in the foreseeable future. This concept implies that a human operator only intervenes when necessary. Previous research has indicated that one primary challenge for the successful integration of advanced autonomous systems is trust [1]. A challenge often mentioned in relation to monitoring of technological systems is the degree to which user trust matches the capabilities of the system [2]. Too much trust may lead to a reduced likelihood of detecting and diagnosing errors in the system, while too little trust may result in disuse of the automation, which again may lead to degraded performance [3]. In this study, we ask: i) Which challenges and opportunities do current and future navigators perceive related to unmanned, autonomous cargo ships? ii) To which degree do current and future navigators trust conventional vs. autonomous cargo ships? And iii) To which degree is trust related to factors such as the navigators' digital competence and risk perception? These research questions were addressed through a survey developed within a research project, Land-based operation of autonomous ships (LOAS), financed by the Research Council of Norway (RCN). A case of an autonomous cargo ship was used to explain the concept of a remotely operated ship. The questionnaire was distributed to two groups. The first group consisted of navigators who work on passenger ferries in the outer Oslo fjord, in the same area where autonomous cargo ships are envisaged. The second group consisted of maritime students in their final year, representing future navigators. The answers to the questionnaires are analysed and presented in the final paper.

Human-Automation Interaction for a small autonomous urban ferry: a concept sketch

ABSTRACT. The International Maritime Organization (IMO) added Maritime Autonomous Surface Ships (MASS) to its agenda in 2019. At NTNU in Trondheim, a new Centre for Research and Innovation started in 2021 with the aim of supporting the Norwegian industry's attempts to realize autonomous shipping. One of its use cases is a small autonomous urban passenger ferry. This research area is technology centered and there is a lack of Human Factors (HF) research [3]. This is a concept paper presenting some design sketches for Human-Automation Interaction for this ferry crossing the harbor canal in Trondheim. The operating concept is simple but includes difficult problems regarding HF and interaction design implementations. (1) Design of the control room: situation awareness from different sensors, automation transparency, and how the operator understands and interacts with the automation. (2) Design of the interaction between the autonomous ferry and other vessels in the canal: how can intentions be signaled using different interfaces? (3) The interaction between the crew-less ferry and passengers: how to promote trust and security, and how to handle emergencies? The paper discusses safety and security issues and presents some possible solutions and sketches of prototype interfaces for testing.

14:00-15:00 Session WE3I: Artificial intelligence for reliability assessment and maintenance decision-making
Location: Giffard
Big data analytics for reputational reliability assessment using customer review data

ABSTRACT. Traditionally, reliability assessment is based on lifetime testing data. Such assessment methods suffer from several limitations. For example, it is in general difficult to collect enough life testing data to support an accurate reliability assessment. Further, the experimental conditions can hardly reproduce the way a consumer will use a product in practice. In the meantime, with the expansion of the Internet, many customers give feedback on products by posting reviews on websites. This constitutes a huge, easily accessible, and more realistic database that can be used to assess reliability.

In this work, we scraped reviews from a well-known e-commerce website. Machine learning models are developed to extract failure-related information from these reviews. Two kinds of information are examined in this study: (1) whether a review reports a failure and, in such a case, (2) the severity of the failure. We used natural language processing tools to process the text and developed different classification models for information extraction. The developed methods were tested on customer review data from 11 different tablets of several brands. We obtained around 85% accuracy when training and testing our models on our dataset. Hence, the machine learning-based approach we developed is a promising first step towards assessing reliability from web-based data. However, with a corpus containing only a few thousand reviews and more than 100,000 words, using text to train classification models remains a complicated task. In particular, the models developed in this paper strongly overfit despite the use of several methods designed to prevent overfitting.
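The first classification task described above (does a review report a failure?) can be sketched with a minimal bag-of-words Naive Bayes classifier. This is a hedged illustration only: the paper's actual models, features and 11-tablet review corpus are not reproduced here, and the tiny training set below is invented.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Fit a multinomial Naive Bayes on whitespace-tokenized reviews (Laplace smoothing)."""
    classes = set(labels)
    priors = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    word_counts = {c: Counter() for c in classes}
    vocab = set()
    for doc, lab in zip(docs, labels):
        tokens = doc.lower().split()
        word_counts[lab].update(tokens)
        vocab.update(tokens)
    return priors, word_counts, vocab

def predict_nb(model, doc):
    """Return the most probable class for one review."""
    priors, word_counts, vocab = model
    scores = {}
    for c, prior in priors.items():
        total = sum(word_counts[c].values()) + len(vocab)
        scores[c] = prior + sum(
            math.log((word_counts[c][t] + 1) / total)
            for t in doc.lower().split() if t in vocab)
    return max(scores, key=scores.get)

# Invented toy reviews: 1 = reports a failure, 0 = no failure
docs = ["screen died after two weeks", "battery stopped charging completely",
        "great tablet fast and light", "love the display very sharp"]
labels = [1, 1, 0, 0]
model = train_nb(docs, labels)
print(predict_nb(model, "battery died after a month"))  # → 1
```

A real pipeline would of course use a much larger corpus, TF-IDF or embedding features, and regularization against the overfitting the abstract mentions; the second task (failure severity) would be the same scheme with multi-class labels.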

Efficient Deep Learning Scheme to Evaluate the Reliability of a Passive Safety System
PRESENTER: Kyungho Jin

ABSTRACT. Passive safety systems are introduced to mitigate accidents in nuclear power plants (NPPs) even under extremely harsh conditions. One way to estimate the reliability of such a passive system is to employ Monte Carlo simulation (MCS) with a thermal-hydraulic (T-H) simulation code. Because the failure probability of a passive safety system is extremely low, a large number of simulations is needed to obtain reliable results; the long computation time of a T-H code makes it difficult to perform such a large amount of analysis. In order to reduce this computational burden, previous research has proposed frameworks combining advanced sampling techniques for MCS with a surrogate regression/classification model that replaces runs of the T-H code. This paper likewise employs a deep learning (DL) scheme as a surrogate model to minimize the running of the T-H code when evaluating the reliability of a passive system. In addition, this paper suggests an efficient scheme for generating training data so that the data sets include more of the failure cases that rarely occur during MCS. With this motivation, distributions of input parameters are first updated by constructing empirical distributions based on specific combinations contributing to system failure. In the next step, semi-supervised learning is carried out to obtain more training data without the T-H code. Using this concept, it is found that the performance of the surrogate DL model describing the decision boundary can be enhanced.
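The surrogate idea common to this and the related abstracts can be sketched generically: sample the uncertain inputs, let a cheap surrogate classify most samples, and call the "expensive" code only near the surrogate's decision boundary. Everything below is invented for illustration (a one-line analytic margin stands in for the T-H code, and a deliberately imperfect linear function stands in for the trained DL surrogate); it is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def th_code(x):
    """Stand-in for a long-running T-H simulation: the (hypothetical)
    performance margin; the system fails when it drops below zero."""
    return 3.0 - x  # failure if x > 3.0

def surrogate(x):
    """Cheap approximation of th_code, e.g. a trained regression/DL model.
    Here a deliberately imperfect linear fit stands in for it."""
    return 2.9 - x

n = 100_000
x = rng.normal(0.0, 1.0, n)      # one uncertain input parameter
margin_hat = surrogate(x)

# Run the "expensive" code only where the surrogate is uncertain
# (near its own decision boundary); trust it elsewhere.
band = np.abs(margin_hat) < 0.5
margin = np.where(band, th_code(x), margin_hat)
p_fail = float(np.mean(margin < 0.0))
print(f"estimated failure probability: {p_fail:.4f}")
print(f"expensive runs needed: {int(band.sum())} of {n}")
```

For x ~ N(0,1) and failure at x > 3, the true probability is about 1.35e-3, and the sketch recovers it with only a few hundred "expensive" calls instead of 100,000, which is the computational saving the abstract targets.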

14:00-15:00 Session WE3J: Effectiveness, Management and Reliability of Natural Risks Reduction Measures and Strategies
Location: Botanique

ABSTRACT. The “Wetropolis” flood-outreach demonstrator has been developed to enable live visualisation of so-called return periods of extreme rainfall and flooding events for the general public [1]. While initially designed for showcasing to the general public, Wetropolis received additional attention from flood professionals and academics. That attention triggered the development of a new protocol [2,3,4], comprising theoretical and graphical mathematical tools for quantifying flood-mitigation approaches and their costing, quantifying the up-scaling of flood-mitigation measures and offering a consistency check on detailed calculations in engineering and consulting. The protocol enables rapid a priori, as well as thorough a posteriori, comparisons of the efficacy of various flood-mitigation options and scenarios. As a starting point, we have revisited a concept called “dynamic flood-excess volume” (dFEV) and presented it in a novel three-panel graph comprising the (measured) in-situ river levels as a function of time, the rating curve and the hydrograph, including critical flooding thresholds and error estimates. dFEV is the amount of water in a river system that cannot be contained by existing flood defences. The new tool deliberately eschews equations and scientific jargon and instead uses a graphical display that shows, with dFEV displayed as a dynamic, hypothetical square lake two metres deep, the amount of water that needs to be contained in a river valley to stop a river from flooding. This square-lake graphic is overlaid with the various mitigation measures necessary to (dynamically) hold back or capture the floodwaters and, for the purposes of strategic quantification, how much each option will cost. The tool is designed to help both the public and policymakers grasp the headline options and trade-offs inherent in flood-mitigation schemes.
It has already led to better decision-making regarding flood defences in France and Slovenia [2], particularly where a number of alternatives are being considered. The above developments will be presented for various river-flood events, e.g., see the openly available tools [5], followed by a discussion of more recent insights in dealing with uncertainty and the (graphical) communication of multiple benefits of nature-based solutions.
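The flood-excess-volume quantity behind dFEV can be sketched numerically: integrate the discharge exceedance above a flooding threshold over time, then express the resulting volume as the abstract's hypothetical two-metre-deep square lake. The hydrograph numbers and threshold below are invented for illustration.

```python
import numpy as np

# Hypothetical hydrograph: time (s) and river discharge (m^3/s); numbers invented
t = np.array([0, 3600, 7200, 10800, 14400], dtype=float)
q = np.array([80, 180, 260, 160, 90], dtype=float)
q_threshold = 150.0   # discharge above which the river floods (assumed)

# Flood-excess volume: time-integral of the discharge exceedance (trapezoid rule)
excess = np.clip(q - q_threshold, 0.0, None)
fev = float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t)))  # m^3

# Square-lake visualisation: a hypothetical lake two metres deep
depth = 2.0
side = float(np.sqrt(fev / depth))   # side length of the equivalent square lake (m)
print(f"FEV = {fev:.0f} m^3, square-lake side = {side:.0f} m")
```

In practice the protocol derives the exceedance from measured river levels via the rating curve rather than directly from discharge, and dFEV evolves in time; this static sketch only shows the volume-to-lake conversion that makes the graphic readable for non-specialists.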


1. O. Bokhove, T. Hicks, W. Zweers and T. Kent, Hydrology Earth System Sci. 24(5) (2020).
2. O. Bokhove, M.A. Kelmanson and T. Kent, Evidence for the UK Government (2020).
3. O. Bokhove, T. Kent, M.A. Kelmanson, G. Piton and J.-M. Tacnet, Water 12(3), 652 (2020).
4. O. Bokhove, T. Kent, M.A. Kelmanson, G. Piton and J.-M. Tacnet, River Res. Appl. 35, 1402 (2019).
5. T. Kent, G. Piton and O. Bokhove (and others) (2020).

Improvement of Proportional Conflict Redistribution Fusion Rule for Levee Characterization
PRESENTER: Theo Dezert

ABSTRACT. This work addresses the problem of levee characterization for flood protection. These hydraulic works are mostly old and heterogeneous, and their rupture can lead to disastrous consequences such as human, economic and environmental losses. Reducing the risk of levee breakage requires improving their diagnosis and hence their characterization. Methodologies for the evaluation of these structures usually include geotechnical and geophysical investigation methods. Geophysical methods are mainly non-intrusive and provide physical information on large volumes of subsoil, but with potentially significant uncertainties. Geotechnical investigation methods, on the other hand, are intrusive and provide more spatially localized, but also more precise, information. These two sets of methods are complementary. The processing of data from geophysical and geotechnical investigation methods and their fusion, taking into account their imperfections and associated spatial distributions, is an essential issue for the evaluation of earthen levees. A cross-disciplinary fusion approach for the characterization of lithological materials within the structures has recently been proposed in the mathematical framework of belief functions.

In our previous works, we used the proportional conflict redistribution rule no. 6 (PCR6) proposed in the DSmT (Dezert-Smarandache Theory) framework for levee characterization. In some cases, however, this rule can generate unsatisfactory results because the uncertainty between several hypotheses (lithological materials) is overestimated after the fusion process, which is ultimately detrimental to decision making. This occurs because the PCR6 rule does not preserve the neutrality of the vacuous belief assignment, which can be judged to be counter-intuitive behavior. To overcome this problem we present an improved rule, denoted PCR6+, that preserves the neutrality of vacuous belief assignments in the fusion process. Hence, the redistribution of the partial conflict masses using PCR6+ does not overestimate the masses associated with partial uncertainties.
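The proportional-conflict-redistribution principle itself can be sketched for the simplest case: two sources and a frame of two singleton hypotheses, where PCR5 and PCR6 coincide. This is the classical rule, not the PCR6+ variant introduced above, and the example masses are invented.

```python
def pcr_fuse(m1, m2):
    """Fuse two Bayesian basic belief assignments over singleton hypotheses
    with proportional conflict redistribution (PCR5 = PCR6 for two sources)."""
    hyps = list(m1)
    fused = {a: m1[a] * m2[a] for a in hyps}     # conjunctive (agreeing) part
    for a in hyps:
        for b in hyps:
            if a == b:
                continue
            conflict = m1[a] * m2[b]             # partial conflict mass
            if conflict > 0:
                # Redistribute back to a and b proportionally to the masses involved
                fused[a] += m1[a] * conflict / (m1[a] + m2[b])
                fused[b] += m2[b] * conflict / (m1[a] + m2[b])
    return fused

# Invented example: two investigation methods rating "clay" vs "gravel"
m1 = {"clay": 0.7, "gravel": 0.3}
m2 = {"clay": 0.4, "gravel": 0.6}
fused = pcr_fuse(m1, m2)
print(fused, "sum =", sum(fused.values()))
```

Unlike Dempster's rule, no conflict mass is discarded or normalized away: each partial conflict m1(A)m2(B) goes back only to A and B, in proportion to the masses that generated it, so the fused masses always sum to one.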

As an example application of this new fusion rule, we simulate two geophysical acquisition campaigns (electrical resistivity tomography method and multiple analysis of surface waves method) and a geotechnical acquisition campaign (core drillings with particle size analysis) on an earthen structure. The objective is to compare and discuss the fusion results obtained using the methodology developed (based on original PCR6 rule) with respect to what we get with PCR6+ rule, and to demonstrate the enhancement of the levee characterization.

Prediction of runoff sediment volume utilizing the peak discharge of debris flow based on stochastic data

ABSTRACT. Predicting debris-flow runoff sediment volume and peak discharge is necessary to design active and passive protection measures in torrential streams. This study proposes a method to estimate both debris-flow runoff sediment volume and peak discharge using a probabilistic safety assessment based on about 450 debris-flow records in a Japanese survey database. Statistical analysis showed relative correlations between drainage basin area and both runoff sediment volume and peak discharge. On the other hand, there was no significant correlation between runoff sediment volume and peak discharge. An estimation method for choosing the design values of runoff sediment volume and peak discharge from the drainage basin area is proposed: it uses a 95% prediction threshold based on the survey data.
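A 95% prediction threshold from a basin-area regression, as described above, can be sketched as follows. The power-law relation, scatter and all numbers below are synthetic stand-ins for the Japanese survey records, and the one-sided normal approximation is an assumption of this sketch, not necessarily the paper's exact statistical model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the survey: drainage basin area (km^2) vs runoff
# sediment volume (m^3); a power law with lognormal scatter is assumed.
area = rng.uniform(0.1, 20.0, 200)
volume = 1500.0 * area**0.8 * rng.lognormal(0.0, 0.6, 200)

# Fit log10(volume) = a*log10(area) + b by least squares
x, y = np.log10(area), np.log10(volume)
a, b = np.polyfit(x, y, 1)
resid = y - (a * x + b)
s = resid.std(ddof=2)                 # residual standard deviation

def design_volume(basin_area_km2):
    """Upper 95% prediction threshold (one-sided, normal approximation)."""
    mean_log = a * np.log10(basin_area_km2) + b
    return float(10.0 ** (mean_log + 1.645 * s))

print(f"design volume for a 5 km^2 basin: {design_volume(5.0):.0f} m^3")
```

The same fit-plus-threshold construction applies to peak discharge versus basin area; since the abstract reports no significant correlation between volume and peak discharge, the two design values are chosen independently from the basin area.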

14:00-15:00 Session WE3K: Autonomous Driving Safety
Location: Atrium 1
Safe interaction between AVs and vulnerable road users

ABSTRACT. Advanced automated driving vehicles (AVs) are expected to radically transform road transport by improving safety, increasing traffic flow efficiency, enhancing mobility for all, and reducing road congestion, fuel usage and emissions. To facilitate deployment, research has largely focused on ensuring the safety of AV operation, investigating primarily the interaction between the human driver (or user, depending on the level of automation) and the AV. In addition to the interaction between the AV and its user, the successful deployment of AVs also depends on the exchange between AVs and vulnerable road users (VRUs), such as pedestrians and cyclists, as well as motorcyclists and people with disabilities or reduced mobility and orientation. For instance, what do cyclists and pedestrians anticipate when interacting with AVs; are AVs capable of recognizing cyclists and pedestrians in time; can AVs accurately predict the intentions of VRUs; and how should AVs communicate their intentions to VRUs? Focusing on vehicles with advanced automated driving features, that is, vehicles that allow the human driver to not execute the driving task when automation is engaged, this paper contributes to the overall discourse on the safety of automated driving vehicles in a twofold manner. First, it critically reviews the literature on the interaction between AVs and VRUs, discusses recent developments and identifies the most prominent research gaps. Second, it presents results of an international online survey on VRUs' expectations, preferences and concerns with respect to their interaction with AVs. We expect our findings not only to provide new insights on the interaction between automated driving vehicles and vulnerable road users, but also to be instrumental for policy makers and other relevant actors involved in the development of automated driving technology.

PRESENTER: Philippe Richard

ABSTRACT. Safety is a key concern for road, rail, and air transport activities. It must be continuously ensured at the technical, operational and organizational levels. At the operational level, two types of safety come into play: (i) rule-based safety, which is based on standards, procedures and technical barriers and concerns known events, and (ii) managed safety, which is currently supported by human operators, mainly by drivers in the context of the railway system, who tend to focus on unforeseeable events. In the context of autonomous trains, in which all or part of the driver's activities and tasks are transferred to an automatic driving system integrating some advanced artificial intelligence technologies, the system should consider managed safety in addition to the rule-based one. In order to identify different possibilities for its integration and to feed a reflection around this question, we revisit in this paper the concepts of regulated and managed safety in the transport domain and we analyze their future management in the context of autonomous systems. In addition, we present a study of two accidents which illustrate the impact of managed safety. The first one, a railway accident, illustrates the importance of managed safety in handling an accident to reduce or eliminate major consequences. The second one, an autonomous car accident, shows some accident consequences due to the lack of managed safety in an autonomous transportation system.

PRESENTER: Basma Khelfa

ABSTRACT. Predicting lane-change intents is crucial for automated driving. Several models and algorithms are described in the literature. In this contribution, we compare lane-change intent prediction using the rule-based MOBIL model and a data-driven decision-tree algorithm. Both approaches are based on the speed differences and spacings with respect to the four surrounding vehicles on the current and intended lanes. The data are collected from the Highway Drone (HighD) trajectory data set of two-lane German highways. From the trajectories we extract lane-keeping and lane-changing maneuvers, including lane-keeping on the right and left lane and lane-changing to the right and to the left lane. The behavior of the driver changes significantly according to the maneuver. It turns out that changing lane to the right (fold-down) is a more complex process than changing lane to the left (overtaking). Indeed, overtaking results from an interaction with the neighbors based mainly on three parameters, while fold-down requires more complex combinations. This leads to different meanings of the spacing variables with the neighboring vehicles according to the maneuver and requires analysing the maneuvers separately. The prediction errors of lane-changing and lane-keeping intents are calculated and compared for the rule-based MOBIL and decision-tree approaches. The data-based algorithm, devoid of modeling bias, can predict both overtaking and fold-down maneuvers accurately.
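The decision-tree side of this comparison can be sketched on synthetic data. The two features, the labeling rule and all thresholds below are invented stand-ins for the HighD features (the actual study uses speed differences and spacings to four neighbors, not two features); scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 1000

# Synthetic stand-in for HighD-style features (all numbers invented):
# speed difference to the leading vehicle (m/s) and spacing to it (m)
dv_front = rng.uniform(-10, 10, n)
gap_front = rng.uniform(5, 120, n)

# Assumed toy rule: overtake (label 1) when the leader is slower and close
y = ((dv_front < -2.0) & (gap_front < 50.0)).astype(int)

X = np.column_stack([dv_front, gap_front])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.3f}")
```

A shallow tree recovers axis-aligned threshold rules of this kind directly from data, which is why it is a natural data-driven counterpart to MOBIL's hand-set incentive and safety thresholds; the real comparison would evaluate on held-out maneuvers, separately for overtaking and fold-down.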

15:00-15:20Coffee Break
15:20-16:00 Session Plenary V: Plenary Session
Location: Auditorium
Safer by design concept

ABSTRACT. Because the physicochemical properties of nanoparticles are distinct from those of their bulk counterparts, the fast growth of nanotechnologies has brought new industrial and business opportunities. The field of nanotechnology has shown a huge expansion during the last decade, and the key challenge is how to take into account the potential risks to human and environmental health posed by long-term exposure to, and accumulation of, nanoparticles. Manufactured nanomaterial production is outpacing the ability to investigate environmental hazard using current regulatory paradigms, causing a backlog of materials requiring testing. Based on results from toxicological and ecotoxicological studies, researchers now have a better grasp of the relationships between nanomaterials' physicochemical characteristics and their hazard profiles. It is now expected that an integration of design synthesis and safety assessment will foster safer-by-design nanomaterials by considering both applications and implications. Multiple case studies on the safer-by-design concept will be presented.

16:10-17:30 Session WE4A: Risk Assessment
Location: Auditorium
PRESENTER: Cosetta Mazzini

ABSTRACT. The aim of this paper is to describe the activity carried out by a working group instituted within the national coordination table on Legislative Decree no. 105/2015 (1), the Italian implementation of the Seveso III directive. The scope is to provide technical support for the evaluation of safety reports of underground natural gas storage facilities, carried out by the local competent authorities, in order to pursue uniformity of evaluation throughout the national territory, taking into account plant- and site-specific territorial aspects (2). In order to frame the issue, an overview of the Italian law and legal requirements concerning safety report evaluation and the natural gas sector is given, also focusing attention on the situation of Italian Seveso establishments. The other main issues concern: information about the establishment and the company organizational structure; information on the classification of substances under the Seveso directive; industrial safety of the plants; the methodological approach for assessing the risk analysis of plants, in terms of identification of events and accident scenarios, evaluation of event and scenario frequencies, and calculation of consequences; and safety and technical systems. Some references are finally given to identify the most "critical" parameters of the different techniques for risk analysis (3) which, if not adequately evaluated, can lead to an incorrect result of the analysis itself, also taking into account the correct safety measures in order to limit the consequences of an accident scenario.

Numerical Verification of DICE (Dynamic Integrated Consequence Evaluation) for Integrated Safety Assessment

ABSTRACT. IDPSA (Integrated Deterministic and Probabilistic Safety Assessment) is an integrated method combining deterministic and probabilistic approaches so that the effect of component reliability or operator actions is reflected in the safety analysis process over time. We are thereby able to identify risk factors that may be hidden in conservative assumptions or unidentified event scenarios. DICE (Dynamic Integrated Consequence Evaluation), developed in this study, is a tool to perform dynamic reliability analysis based on a DDET (Dynamic Discrete Event Tree). It consists of a physical module supporting thermal-hydraulic simulation, an automatic/manual diagnosis module governing branching rules on the basis of the real-time status of the physical module, a reliability module that represents the availability and performance of the safety systems using reliability information, and a scheduler that acts as a comprehensive controller by managing the overall information exchange between these modules. This paper demonstrates the performance of DICE through a case study on an SBLOCA (Small Break Loss Of Coolant Accident) in an NPP (Nuclear Power Plant), and verifies its numerical accuracy by cross-checking whether the simulation results match the outcomes computed by the conventional probabilistic and deterministic methods, respectively.
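The discrete-event-tree idea underlying a DDET can be sketched by enumerating branch outcomes at each branching point and weighting each scenario by its reliability data. This is a static toy, not DICE: a real DDET branches at times dictated by the physical module, whereas here a one-line rule (with invented system names and probabilities) stands in for the thermal-hydraulic simulation.

```python
from itertools import product

# Hypothetical branching points and their failure probabilities (invented)
systems = {"high_pressure_injection": 0.01,
           "auxiliary_feedwater": 0.05,
           "operator_depressurizes": 0.10}

def core_damage(states):
    """Stand-in for the physical module: assume (hypothetically) that core
    damage occurs only if injection fails AND either backup also fails."""
    return (not states["high_pressure_injection"]) and (
        not states["auxiliary_feedwater"] or not states["operator_depressurizes"])

total, cd_freq = 0.0, 0.0
for outcome in product([True, False], repeat=len(systems)):
    states = dict(zip(systems, outcome))
    p = 1.0
    for name, ok in states.items():
        p *= (1.0 - systems[name]) if ok else systems[name]
    total += p
    if core_damage(states):
        cd_freq += p

print(f"branch probabilities sum to {total:.6f}; P(core damage) = {cd_freq:.6f}")
```

The cross-check the abstract describes amounts to verifying that such enumerated branch probabilities reproduce the static event-tree result, while the sequence timing and end states match the deterministic T-H calculations.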

Towards Risk-Based Autonomous Decision-making with Accident Dynamic Simulation
PRESENTER: Renan Maidana

ABSTRACT. Accidents involving maritime vessels can have severe consequences, i.e., high potential loss of life and environmental impact. Hence, risk assessment is essential to the safety of a vessel's operations. Risk assessment for conventional vessels can be considered well established; however, the introduction of vessels with autonomous behavior is challenging to address with traditional risk assessment methods. Generally, the traditional artificial intelligence methods used in a system to perform tasks autonomously are not risk-informed, which may later result in an accident scenario. For example, state-of-the-art autonomous navigation methods are reactive to risk, acting to avoid hazards or consequences only after identifying a potential accident scenario. By preemptively performing risk assessment and incorporating risk information in the autonomous decision-making process, we can proactively avoid an accident scenario altogether. The challenges with risk assessment of autonomous systems are present in complex and software-intensive systems in general. Dynamic Probabilistic Risk Assessment (DPRA) has been introduced to mitigate some of these challenges. In this paper, we present a novel framework for enabling simulation-based DPRA, using the Accident Dynamic Simulator (ADS) as the point of departure. We describe the concepts, structure, and constituent parts of the DPRA framework, and how it will contribute to safer autonomous decision-making in the future.


ABSTRACT. According to risk perception research, numerous factors influence risk perception, and these factors also play a role in terrorism threat assessment. However, there have been few attempts to connect risk perception research with the practice of threat assessment. This paper aims to fill this gap by examining factors associated with risk perception that can influence terrorism threat assessments, scrutinizing the psychological biases that can influence risk perception and discussing how these biases can affect threat assessments. The conclusion is that terrorism scores high on all fear factors associated with risk, and therefore, when conducting terrorism threat assessment, it is especially important to acknowledge that threat assessment is a subjective matter prone to individual biases. Similarly, biases at the societal and cultural levels must also be taken into account. It is hoped that increased awareness of how risk perception influences threat assessments can help us build a strong foundation for improved critical evaluation and better decision-making about terrorist threats.

16:10-17:30 Session WE4B: Occupational Safety
Location: Atrium 2
Index Method for Risk Assessment Using Load Lifting (Crane) and People Lifting (MEWP) Equipment

ABSTRACT. The safe use of load lifting equipment (various types of cranes) and people lifting equipment (Mobile Elevating Work Platforms, MEWPs) during the use phase is regulated in the European Union by Directive 2009/104/EC. That directive concerns the minimum safety and health requirements for the use of work equipment by workers at work (second individual directive within the meaning of Article 16 of Directive 89/391/EEC). The use phase naturally occurs after the manufacturing and placing-on-the-market phase, which in the European Union must follow the Machinery Directive 2006/42/EC, in order to guarantee the minimum safety and health requirements ("MSR") of the products themselves, in this case cranes and MEWPs. The purpose of this article is to give essential information to the employer, who can then evaluate work equipment in a conscious and complete way, considering that numerous accidents have occurred despite the equipment respecting the "MSR". To this end, an index risk assessment method is proposed for when cranes and MEWPs are used in a company or on a building site, taking into account various risk factors with reference to the place of installation and their use, working methods, maintenance conditions, etc. The proposed method follows the UNI ISO 31000:2018 standard "Risk Management - Principles and guidelines" and is based on the technique called "Consequence/Likelihood Matrix" provided by ISO 31010:2019 "Risk Management - Risk Assessment Techniques". The above-mentioned method allows evaluating the level of risk (acceptable/unacceptable) for the specific company/building-site context where cranes or MEWPs are used. If the risk is unacceptable, the employer must implement specific prevention and/or protection measures in order to protect the safety and health of the exposed workers.
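A consequence/likelihood index matrix of the kind referenced above can be sketched in a few lines. The scales, index weights and acceptability cut-off below are invented for illustration; a real assessment would calibrate them to the specific company or building-site context and to the additional risk factors the abstract lists.

```python
# Minimal sketch of a consequence/likelihood index matrix in the spirit of
# ISO 31010; all scales and the acceptability threshold are assumptions.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "serious": 3, "fatal": 4}

def risk_index(likelihood, consequence, *, acceptable_below=8):
    """Return (index, verdict) for one hazard, e.g. a crane lifting operation."""
    idx = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    return idx, ("acceptable" if idx < acceptable_below else "unacceptable")

print(risk_index("possible", "serious"))   # 3 * 3 = 9 → unacceptable
print(risk_index("unlikely", "minor"))     # 2 * 2 = 4 → acceptable
```

An "unacceptable" verdict is what triggers the prevention and protection measures the abstract mandates; the same construction serves the pressure-equipment assessment described later in this session block.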

Critical assessment of the technical standards and regulations about the energy isolation and unexpected start-up in machineries

ABSTRACT. Several international standards and regulations specify the practices and procedures necessary to remove the supply of energy and to disable machinery or equipment, thereby preventing the release of hazardous energy while employees perform servicing and maintenance activities. Among them, LockOut-TagOut (LOTO) is a common safety procedure used in industry to ensure that dangerous machinery is properly shut off and cannot be started up again prior to the completion of service or maintenance. Also, the essential health and safety requirement 1.6.3 of the Machinery Directive 2006/42/EC contains provisions about the isolation of energy sources. Although all these documents address requirements to prevent unexpected machine start-ups and to allow safe human interventions in hazardous zones, serious accidents continue to occur due to lapses and errors during these activities. Owing to these considerations, this paper compares standards and regulations dealing with unexpected machine start-ups and LOTO applications, discussing their strengths, weaknesses, and opportunities for improvement and comparing the most relevant provisions. The aim is to critically discuss the most crucial requirements that users must follow when employees could be exposed to hazardous energy while servicing and maintaining equipment and machinery.

Localization systems for Safety Applications in Industrial Scenarios

ABSTRACT. When implementing indoor localization solutions [1] for safety applications, the system requirements and performance have to be discussed with reference to existing standards, regulations and technical specifications. In particular, the requirements of the Machinery Directive and of the Safety at Work Directive shall be considered when designing a localization system integrated in assemblies of machinery for industrial applications. In this paper we investigate the properties required of an indoor localization system designed for safety applications according to the Machinery Directive (2006/42/EC). We then present a state-of-the-art analysis of the localization systems suitable for this aim. The paper focuses mainly on safety issues rather than on security issues, since the latter are already addressed in the majority of works. Among the considered technologies, specific attention is given to Radio Frequency Identification (RFID) technology. The latter is already employed in industrial scenarios to perform item inventory or item/people localization, but also to develop new systems able to improve worker safety. Early solutions foresaw the employment of active RFID technology in the Ultra-High-Frequency (UHF) band, but more recently passive RFID systems have received great attention. Indeed, passive RFID technology can guarantee high accuracy in locating operators and can therefore be effectively applied in the area of worker safety or machine safety. Passive tags do not require a power supply as in active systems, with a consequent reduction in tag weight/size and greater reliability, since checking the battery status is avoided. Moreover, the low cost allows the adoption of RFID technology in all those applications that require a large number of objects/people to identify and locate.
When addressing operator identification in the proximity of complex production units, the low cost and limited size of passive RFID tags allow the designer of the safety system to increase the redundancy of the system itself by using multiple tags for each operator. As an example, tags can be integrated within individual protection devices such as the helmet, or ad hoc "textile" tags can be integrated in work gloves or safety vests.

Risk assessment of pressure equipment during use phase

ABSTRACT. The safe use of pressure equipment (steam generators, reactors, pressure vessels, piping, etc.) during the use phase is regulated in the European Union by Directive 2009/104/EC, which concerns the minimum safety and health requirements for the use of work equipment by workers at work (second individual directive within the meaning of Article 16 of Directive 89/391/EEC). The use phase naturally occurs after the manufacturing and placing-on-the-market phase, which in the European Union must follow Directive 2014/68/EU (PED - Pressure Equipment Directive) in order to guarantee the Minimum Safety and health Requirements (MSR) of the product. The PED applies to the design, manufacture and conformity assessment of pressure equipment and assemblies with a maximum allowable pressure PS greater than 0.5 bar. The purpose of this article is to give essential information to the employer, who can then evaluate work equipment in a conscious and complete way, considering that numerous accidents have occurred despite the equipment respecting the "MSR". To this end, a risk assessment with an index method is proposed for when pressure equipment and assemblies are used in a given company, taking into account various risk factors with reference to the place of installation and their use, working methods, maintenance conditions, level of training of exposed workers, etc. The proposed method follows the UNI ISO 31000:2018 standard "Risk Management - Principles and guidelines" and is based on the technique called "Consequence/Likelihood Matrix" provided by ISO 31010:2019 "Risk Management - Risk Assessment Techniques". The above-mentioned method allows evaluating the level of risk (acceptable/unacceptable) for the specific company context. If the risk is unacceptable, the employer must implement specific prevention and/or protection measures in order to protect the safety and health of the exposed workers.

16:10-17:30 Session WE4C: Petri Nets in reliability, safety and maintenance
PRESENTER: Rundong Yan

ABSTRACT. Although the occurrence of serious nuclear power plant accidents is very rare when they do happen they cause serious physical, social and economic damage to many people. Hence, the safety and reliability of Nuclear Power Plants (NPPs) remains an important research area. In terms of the reliability of NPPs, much effort has been made in the past using classical risk assessment approaches such as Event tree analysis and Fault tree analysis. However, it is found that these conventional methods have difficulty accounting for the influences of unpredictable events, such as earthquakes and tsunamis, Francis and Bekera (2014). Resilience engineering offers a promising alternative. Different from conventional risk assessment methods that aim to predict the failure rate or reliability of the system and eliminate the root causes of the failure, resilience analysis considers the ability of the system to recover in the presence of failure. Since many extreme events, such as severe weather and earthquakes, are inevitable, resilience analysis aims at enhancing the systems’ ability to anticipate and absorb the unexpected events and adapt from the event. Also, as the complexity of NPPs increases the traditional methods are no longer adequate in modelling this complexity. Hence there is a need for a different approach. To further advance the research in this field, a detailed study of the resilience of NPP’s is conducted in this paper within the context of nuclear safety engineering. To facilitate the research, a mathematical model is constructed using Petri nets (PNs) similar to the approach adopted in the earlier work of Yan et al. (2018). The model simulates the physical and control systems of nuclear reactors with respect to the occurrence of a range of possible risks, the corresponding responses to different risks, the mitigation of the consequences of disruptions, and the recovery from abnormal conditions. 
The critical physical parameters of nuclear systems are determined through the development of a physical model, which interacts with the PN model, with information passing between them. The research shows that PN modelling is an effective tool for evaluating the resilience of NPPs. Various potential resilience measures and their quantification using the models developed are investigated. The resilience evaluation methodology established in this paper is deemed suitable for use in the design of future resilient NPPs.

PRESENTER: Thomas Dosda

ABSTRACT. This communication presents the dynamic method developed and used by Framatome for the Probabilistic Safety Assessment (PSA) of a Sodium-cooled Fast Reactor (SFR). Its purpose is to highlight different ways of overcoming some limitations of static reliability tools by moving to dynamic modelling. For Pressurized Water Reactors (PWRs), PSA models are commonly developed with fault trees and event trees. This kind of model can be qualified as "static" because it only partially takes the time factor into account. It is commonly admitted that such modelling is appropriate for PWRs owing to the relatively short mission time of safety features (a few hours), related to the fast progression of physical phenomena after an accident. When studying an SFR, one of the PSA objectives is to demonstrate with a high level of confidence that the frequency of the loss of Decay Heat Removal (DHR) is practically eliminated. For this type of reactor, the kinetics of an accident differ from those of a PWR and require the risk to be screened over a longer period, during which repairs can be taken into account. In this context, Framatome is developing a dynamic PSA based on Petri nets, with calculations performed by a statistical (Monte Carlo) method. Like a static model, the dynamic model has to take into account failures on demand, failures in operation, common cause failures, initiating events, dependencies between systems or components, human reliability and preventive maintenance. Furthermore, to limit conservatisms, the dynamic model should be as close as possible to reality, with real-time modelling of the plant configuration (repairs, number of available repairers, thermal-hydraulic situation at any time, etc.). For that purpose, a "temperature module" has been developed, which gives an image of the evolution of the primary sodium temperature at each time of the simulation.
This module abolishes the use of pre-determined success criteria and replaces them with a direct scanning of the real behaviour of the installation (evolution of the reactor coolant temperature), allowing a more realistic grace period before an adverse event (exceedance of the limit temperature) to be established. Our latest progress in dynamic PSA modelling with Petri nets allows, when repairs are not taken into account, comparison with static PSA modelling, which shows similar results. However, the robustness of dynamic PSA must be strengthened if it is, one day, to play a role in the licensing of a Nuclear Power Plant in combination with static PSA. To this end, improvements in terms of modelling accuracy and results encourage us to continue our development in the field of dynamic PSA.
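As a rough illustration of the statistical (Monte Carlo) approach described in this abstract, the sketch below estimates the probability that a temperature limit is exceeded when a single, hypothetical DHR train can fail and be repaired during the mission. All parameter values and the one-train model are invented for illustration and are not Framatome data or the paper's actual Petri net.

```python
import math
import random

# Illustrative parameters only -- not Framatome data
LAMBDA_FAIL = 1e-3   # DHR failure rate [1/h]
MU_REPAIR = 1e-1     # repair rate [1/h]
T_MISSION = 720.0    # mission time [h]
T_NOMINAL = 400.0    # sodium temperature with DHR available [degC]
T_LIMIT = 650.0      # temperature limit defining the adverse event [degC]
HEATUP_RATE = 5.0    # heat-up rate while DHR is lost [degC/h]
DT = 1.0             # simulation time step [h]

P_FAIL_STEP = 1.0 - math.exp(-LAMBDA_FAIL * DT)
P_REPAIR_STEP = 1.0 - math.exp(-MU_REPAIR * DT)

def run_history(rng):
    """One Monte Carlo history: True if the temperature limit is exceeded."""
    temp, dhr_up, t = T_NOMINAL, True, 0.0
    while t < T_MISSION:
        if dhr_up:
            dhr_up = rng.random() >= P_FAIL_STEP
        else:
            temp += HEATUP_RATE * DT          # primary sodium heats up
            if temp >= T_LIMIT:
                return True                   # adverse event: limit exceeded
            if rng.random() < P_REPAIR_STEP:  # repair restores cooling
                dhr_up, temp = True, T_NOMINAL
        t += DT
    return False

rng = random.Random(42)
n = 20_000
p_loss = sum(run_history(rng) for _ in range(n)) / n
print(f"Estimated probability of exceeding the limit: {p_loss:.2e}")
```

The key point the abstract makes appears here in miniature: success or failure is read directly from the simulated temperature trajectory rather than from a pre-determined success criterion, and repairs during the long mission time reduce the estimated frequency.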

PRESENTER: Silvia Tolo

ABSTRACT. In parallel with the development of modern technology, the research community has advocated the benefits of novel approaches able to expand the potential of existing techniques and complement traditional risk assessment methodologies. This has opened the way to a progressive shift in the traditional risk assessment philosophy, resulting in the rise of the concept of resilience in the design, operation and management of engineering systems. Rather than designing against failure, the current trend is to consider the occurrence of failure within the system design, shifting the focus to the system's ability to efficiently absorb and rapidly respond to threats rather than merely avoid them. This is far from a radical alternative to risk analysis; rather, it is one of its natural evolutions. However, the potential advantages of such a theoretical framework have not yet been matched by the availability of adequate numerical tools and methodologies targeting the challenges associated with such analyses. This study proposes a novel modelling approach for estimating the dynamic response of complex systems to safety-threatening perturbations in system variables, providing the basis for further development towards a resilience assessment framework. The proposed approach relies on the integration of Petri Nets with physical simulators able to capture the physics of the processes involved in the system's operation and its interaction with the technological installation. The framework is applied to a case study focusing on the response of a CANDU nuclear reactor to cyber incidents that prevent the correct operation of the reactor control system, resulting in a loss of core temperature regulation accident. The performance of the system, measured by adopting a probability-based metric as suggested by Yan et al. (2020), is tracked and quantified in time along with the evolution of the accident.
The value of the analysis and its results is discussed in view of their potential role in resilience analysis, as well as in terms of their generalization and applicability to other systems and engineering sectors.

RCM3 methodology applied to the cooling system of a Land Military Vehicle with the application of Colored Petri nets
PRESENTER: Enio Chambel

ABSTRACT. The main objective of this paper is to create a maintenance model for Land Military Vehicles, based on the study of reliability, maintainability, and availability throughout their life cycle. Another objective is to identify the logistical needs for the supply of spare parts for a military mission, based on the number of engine hours expected to be operated. In the example shown, the model is applied to the cooling system of a Land Military Vehicle. The Reliability Centered Maintenance 3 (RCM3) methodology is applied to the physical system, and simulations of a Colored Petri Net (CPN) model make it possible to explore distinct scenarios and investigate the performance of the system. From the application of RCM3, it can be concluded that this is a robust system: several components exhibit few failures, and the components with the most recurrent failure modes present high Mean Time Between Failures (MTBF) values. The model allows the real situation to be simulated and supports decision-makers in the dynamic formulation of the maintenance plan. It will also make it possible to identify the logistical needs for the supply of spare parts for an operational military mission, taking into consideration the number of engine hours expected to be operated.

16:10-17:30 Session WE4D: Prognostics and System Health Management
Location: Panoramique

ABSTRACT. The article describes the construction of optimizing self-maintenance solutions based on PHM systems represented as neurosingular machines [1]. The adaptation of the neurosingular machine to the tasks of optimal maintenance of engineering systems, with goal-setting determined in real time, constitutes the self-maintenance optimization solution. The work discusses how changes in the hidden symmetries of the incoming data stream prevent other data-driven models, such as artificial neural networks and Bayesian networks, from being used to construct optimizing solutions. The paper shows that different types of symmetry breaking determine the classes of system state functions, and that these classes have a hierarchical structure defining different types of hidden fault predictors in each class. The simplest example of such predictors is Hölder's strong α-singularity; these are further defined as α-singular processes. Such processes determine extreme events in the material and ultimately lead to the appearance of areas of material degradation in any engineering system. The paper describes the evolution of singular processes and hidden predictors that lead to the emergence of early predictors of failure. It is shown that the latent dynamics lead to problems associated with uncertainty in the calculation of the Remaining Useful Life (RUL). To remove this uncertainty, a physical quantity functionally related to the RUL is defined; in terms of this quantity, the distance to the failure boundary is defined in certain pseudometrics. Following the described process, an optimized self-maintenance solution is constructed. The work demonstrates the results by processing experimental data confirming the presence of α-singular processes. It also provides examples of self-maintenance solutions and RUL estimates for wind power plants, wave energy converters, operational sea state forecasting, and risk analysis of a ship propulsion system.


1. S. Kirillov, A. Kirillov, N. Kirillova and M. Pecht, №4440 (E-proceeding ESREL2020 PSAM 15 conference, 2020).

Hierarchical Multi-Class Classification for Fault Diagnosis
PRESENTER: Pablo Del Moral

ABSTRACT. The purpose of this paper is to formulate the problem of predictive maintenance for complex systems as a hierarchical multi-class classification task. This formulation is natural for equipment with multiple sub-systems and components performing heterogeneous tasks. Often, the available data describe the operation of the whole system, are collected for security or control reasons, and lack the detail necessary for accurate condition monitoring. In this setup, specialized predictive systems analyzing one component at a time rarely perform significantly better than random. However, using machine learning and hierarchical approaches, we can still exploit the data to build a fault isolation system that provides measurable benefits for technicians in the field. We propose a method for creating a taxonomy of components to train hierarchical classifiers that find the component responsible for the fault. The output of this model is a structured set of predictions with different confidences. We design our hierarchy of components so that the models at the higher levels are reasonably accurate, relegating weak performers to the bottom. We introduce a new metric to evaluate the benefits of our approach: it measures the number of tests a technician needs to perform before pinpointing the faulty component. We perform an experimental study on a real-world problem from the automotive industry, using a dataset containing historical information about the usage and repairs of multiple vehicles. We demonstrate how traditional machine learning performance metrics, like accuracy, fail to capture practical benefits, and show that our hierarchical approach succeeds in exploiting the information in the data and outperforms non-hierarchical machine learning solutions.

A closed-loop prescriptive maintenance approach for a usage-dependent deteriorating item – Application to a critical vehicle component

ABSTRACT. The digitalization of the economy in recent decades has increased both the availability and the importance of data. Within the maintenance field, new technologies are emerging, and the possibilities opened up by the Internet of Things are countless. At the same time, customers are more demanding regarding maintenance performance and added value, wanting to reduce their operating costs while having no breakdowns. Consequently, in recent years, advanced maintenance strategies such as condition-based maintenance, then predictive maintenance and, more recently, prescriptive maintenance have gathered a lot of attention. Most research on dynamic predictive maintenance focuses separately either on prognostics and residual life prediction or on post-prognosis decision-making for health management, assuming that residual life predictions are readily available. Consequently, the prognostic information is not always specifically designed for, nor fully adapted to, the subsequent decision-making step and the health management decision methods. Treating prognostic information as a given input does not always properly account for its specificities, such as its associated uncertainty. Such a situation, where the prognostics step and the health management process are not tightly interlinked, may limit the performance of the whole PHM loop; it does not allow decisions that exploit the system's different possible usage modes, choose which tasks to prioritize, and reduce the overall exploitation cost [1]. To achieve true exploitation cost optimization, several modelling and methodological issues have to be solved. • The first step is to model the reality of the degradation phenomena under different usage conditions, with a model linking the deterioration characteristics and system exploitation.
• The second step is to identify all the possible actions that can be applied to influence and control the system's deterioration, and to model how they affect its remaining useful life. • The third step is to set up a decision structure based on an optimization routine that indicates the best actions to take at a given time. A closed-loop decision-making structure is required to adapt to the dynamics of the deterioration, to the effect of the actions taken, and to the stochastic system behaviour and unexpected events; the result and effect of previous actions on the system are thus naturally considered when deciding which action to take next. This paper develops a full closed-loop decision-making process for a particular truck component, the brake pad. The contribution is an end-to-end solution providing a tool to schedule maintenance and to manage the usage of the vehicles within a fleet, considering both the prognostic and the decision-making problems. First, we model the brake-pad degradation as a Wiener process with a linear drift [2]. We then model different actions that can be applied to the vehicle, such as undergoing maintenance or following a given mission schedule with missions of different severities. Finally, we implement a genetic multi-objective optimization algorithm with a dynamic cost function to minimize the overall exploitation cost of a vehicle in the long term, and we validate our solution through simulations.
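As a hedged illustration of the first modelling step (a Wiener degradation process with linear drift, as cited from [2]), the sketch below simulates wear paths and compares the simulated mean first-passage time of a wear threshold with the analytical mean (limit − x0)/μ of the inverse-Gaussian first-passage-time distribution. All parameter values are invented and are not the paper's brake-pad data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical brake-pad wear parameters (illustrative only)
mu, sigma = 0.05, 0.10    # drift and diffusion of wear per time unit [mm]
x0, limit = 0.0, 5.0      # initial wear and wear-out threshold [mm]
dt, n_paths = 1.0, 5000

# Simulate wear paths and record the first passage of the threshold
wear = np.full(n_paths, x0)
hit = np.full(n_paths, np.nan)
t = 0.0
while np.isnan(hit).any() and t < 1000.0:
    t += dt
    wear += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    newly = np.isnan(hit) & (wear >= limit)
    hit[newly] = t

# For a Wiener process with positive drift, the first passage time is
# inverse-Gaussian distributed with mean (limit - x0) / mu
print("simulated mean lifetime:", np.nanmean(hit))
print("analytical mean lifetime:", (limit - x0) / mu)
```

The same machinery supports the closed loop: at any inspection, the remaining useful life distribution is simply the first-passage distribution restarted from the currently observed wear level, so an action that changes the drift (e.g. a less severe mission profile) directly changes the predicted RUL.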

PRESENTER: Pier Carlo Berri

ABSTRACT. Electromechanical Actuators (EMAs) for aircraft flight controls are progressively replacing hydraulic systems in safety-critical applications. Hence, simple and accurate EMA numerical models are required for the real-time health monitoring of such equipment [1], as well as more detailed and computationally intensive simulations for design and for training machine learning surrogates [2]. In order to validate these models, we developed a dedicated EMA test bench (Figure 1) intended to replicate the operating conditions experienced by common flight control actuators. The bench is highly modular, making it easy to replace components and test different EMA architectures. In order to contain the costs and time associated with development, we made extensive use of off-the-shelf hardware; most of the custom-designed parts were manufactured through rapid prototyping techniques. The test bench is able to simulate the operation of the actuator in nominal conditions and in the presence of incipient mechanical faults, namely a variation in friction and an increase in backlash in the reduction gearbox. Sensitivity to electrical fault modes will be included in a future upgrade. The output of the test bench was compared with the predictions of numerical models in nominal conditions. The results showed good agreement between the two systems, which is promising for the use of such models within real-time health-monitoring routines.

16:10-17:30 Session WE4E: Autonomous system safety, risk, and security
Location: Amphi Jardin
Hybrid Modeling for the Assessment of Complex Autonomous Systems - A Safety and Security Case Study
PRESENTER: Rhea C. Rinaldo

ABSTRACT. The automotive industry is facing various challenges with the introduction of autonomous vehicles. One significant aspect is the assessment and verification of the safety and security that legislators and the public demand. New methods and tools are needed to analyze and assess these advanced systems by considering all relevant features and parameters, such as the interdependencies of safety and security, while keeping the time effort reasonable. Hybrid models, combining fast and accurate analytical approaches with relatively slow but realistic numerical approaches, may be the answer for assessing these complex systems while overcoming state-explosion problems.

In this paper, we apply an existing hybrid model that combines an analytical and a numerical method on a complex autonomous system to perform a holistic safety and security assessment. Thereby we assess the system under two safety-relevant assessment modes, representing different fail-operational behaviors of the system. The goal is to show that the hybrid model is capable of assessing realistic system architectures while allowing the consideration of different assessment modes.

New Architecture for Determining Safety Parameters Based on Standards
PRESENTER: Ossmane Krini

ABSTRACT. In the future, more effective medicines will mean products specially tailored to the patient. This ongoing trend in the pharmaceutical industry is commonly known as "personalized medicine". In many personalized treatment concepts, a small quantity of a drug must be filled, packaged and delivered to the patient in a specific dose. In addition to processing smaller batch sizes, pharmaceutical companies must frequently switch between products and batches, which in turn increases the variety of products as well as the complexity of production. New developments in robotics allow fast and simplified changeover of machines to new products, which is not feasible in current large-scale machine production. Robotics allows a profitable representation of the packaging process even for the smallest series, up to personalized packaging.

PRESENTER: Marilia Ramos

ABSTRACT. Autonomous systems (AS) are complex systems and, as such, their operation relies on the interaction between their sub-systems (software, hardware, humans) [1]. These interactions may lead to emergent failure modes that are difficult to predict. Additionally, most AS will operate in a dynamic environment, interacting with non-autonomous and/or other autonomous systems. Anticipating the possible decisions of systems and sub-systems during these interactions is crucial for identifying and analyzing potential hazards and risks and for guaranteeing safe operation. In this context, Game Theory (GT) has been increasingly used for modeling the interactions between AS and other agents in conflicting or cooperating situations. Since its first publication in the 1940s [2], GT has been widely used in the area of Industrial Organization. More recently, several other applications have emerged, such as AS operation. In addition to modeling the interactions between autonomous and non-autonomous systems, recent applications of GT to AS also include game-theoretical approaches for algorithm testing and development, and for cyber-physical security. Yet, the application of GT to the analysis of AS operations from a risk perspective can be considered to be at an early stage. This paper discusses how GT is being applied to AS and how these applications may be leveraged in the context of risk. The discussion is based on a review of the recent literature on GT applied to AS. A search of the Scopus database using a combination of relevant keywords resulted in a total of 100 articles within the period 2015-2021. The articles were analyzed with regard to the technical domain of application, the scope of use of GT, and the type of game utilized. The review identifies opportunities for further use of GT for different aspects of AS safety, reliability, and security.

Social Engineering Exploits in Automotive Software Security: Modeling Human-targeted Attacks with SAM
PRESENTER: Matthias Bergler

ABSTRACT. Security cannot be implemented into a system retrospectively without considerable effort, so it must be taken into consideration from the very beginning of system development. The engineering of automotive software is by no means an exception to this rule. For addressing automotive security, the AUTOSAR and EAST-ADL standards for domain-specific system and component modeling provide the central foundation. The EAST-ADL extension SAM enables fully integrated security modeling for traditional feature-targeted attacks. Due to the COVID-19 pandemic, the number of cyber-attacks has increased tremendously, and about 98 percent of these are based on social engineering. Social engineering attacks exploit vulnerabilities in human behavior, rather than vulnerabilities in a system, to inflict damage, and they play a relevant but regularly neglected role in automotive software as well. The contribution of this paper is a novel modeling concept for social engineering attacks and their criticality assessment, integrated into a general automotive software security modeling approach. This makes it possible to relate social engineering exploits to feature-related attacks. To facilitate practical usage, we implemented this concept in the established, domain-specific modeling tool MetaEdit+. The tool support enables collaboration between stakeholders, calculates vulnerability scores, and enables the specification of security objectives and measures to eliminate vulnerabilities.

16:10-17:30 Session WE4F: Civil Engineering
Vehicular load hazard mapping through a Bayesian Network in the State of Mexico

ABSTRACT. Traffic counts collect information that is valuable, for example, for bridge and road design or maintenance processes. The average daily traffic (ADT) volume and the design hourly volume (DHV) are the most commonly collected traffic measures, and they are used in the design and assessment of major highways. Permanent control stations, situated at key locations of the highway network, gather data throughout the year. However, one disadvantage of traffic counts is that most counters do not measure total vehicle weight or axle load data; they record only the classification of vehicles, volume counts, average daily traffic, and annual average daily traffic. Axle loads are required, for example, as input for pavement design, the design of new bridges, and the reliability assessment of existing ones. Weigh-in-motion (WIM) systems are usually used to collect vehicle load data. The State of Mexico has 115 permanent vehicle counting stations with 745 traffic counting points in its federal highway network; however, due to the lack of WIM stations, it is not possible to generate axle load data. In this work, a Bayesian Network (BN) quantified with data from WIM stations in the Netherlands is used to describe the weight and length distributions of the heavy vehicles registered at the permanent vehicle counting stations on the State of Mexico federal highways. The Dutch and Mexican vehicle types are clustered according to similar characteristics, and synthetic WIM observations are then generated from the BN model. These synthetic data adequately model the statistical dependence between variables. The outcome is a mapping tool with a linked database, in which traffic volumes and axle loads can easily be found and compared with other highways in the network. This work shows that the methodology presented here is widely applicable and depends mostly on the assessment of vehicle type configuration.
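The core idea, generating synthetic WIM records that preserve the dependence between vehicle characteristics, can be sketched with a toy two-level model: a vehicle-class node whose children are correlated length and weight variables. All class names, distribution parameters and correlations below are invented for illustration and are not the Dutch WIM data or the paper's actual BN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical class-conditional parameters (illustrative only):
# mean/std of length [m] and gross weight [t] per vehicle class, plus a
# length-weight correlation -- the dependence a BN would encode.
classes = {
    "rigid":       {"p": 0.6, "len": (10.0, 1.0), "wt": (18.0, 4.0), "rho": 0.5},
    "articulated": {"p": 0.4, "len": (16.5, 1.2), "wt": (32.0, 6.0), "rho": 0.7},
}

def sample_wim(n):
    """Draw synthetic WIM records as (class, length, weight) tuples."""
    names = list(classes)
    probs = [classes[c]["p"] for c in names]
    out = []
    for c in rng.choice(names, size=n, p=probs):
        cfg = classes[c]
        (ml, sl), (mw, sw), rho = cfg["len"], cfg["wt"], cfg["rho"]
        z1, z2 = rng.standard_normal(2)
        length = ml + sl * z1
        # weight correlated with length via a Gaussian construction
        weight = mw + sw * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
        out.append((c, length, weight))
    return out

records = sample_wim(10_000)
arti = np.array([(l, w) for c, l, w in records if c == "articulated"])
r = np.corrcoef(arti.T)[0, 1]
print(f"length-weight correlation (articulated): {r:.2f}")
```

The synthetic records reproduce the class-conditional dependence by construction, which is what allows a counting station that observes only vehicle class and volume to be mapped to plausible weight and axle-load distributions.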

Bayesian networks for estimating hydrodynamic forces on a submerged floating tunnel

ABSTRACT. A submerged floating tunnel (SFT) is a novel structure that allows crossing waterways where immersed tunnels or bridges are not viable. However, no SFT has been built yet, mainly due to lack of experience; consequently, there are several uncertainties regarding its design and construction. An effect that should be further investigated is the structural response of the SFT under the simultaneous action of waves and currents. For this purpose, extreme values of waves and currents generated through a vine-copula model are used as input to a statistical model based on Bayesian Networks (BNs). The BNs are used to study the conditional correlation (i.e., the correlation between random variables conditioned on a given event) between the hydrodynamic forces acting on the SFT and metocean variables such as waves and currents. This methodology was applied to a case study in China for an SFT intended to be built at the Qiongzhou Strait. Moreover, the BN model was used to test twelve different configurations of the SFT, with varying submergence depths and diameter sizes. The proposed methodology can provide a more realistic estimation of the forces on the SFT by considering the dependence between the variables of interest, and it can be extended to test different configurations of the SFT and other hydraulic or maritime structures subjected to simultaneous loading.

Characterization of Long-period Ship Wave Loading and Vessel Speed for Risk Assessment for Rock Groyne Designs via Extreme Value Analysis
PRESENTER: Sargol Memar

ABSTRACT. During the last two decades, increasing vessel size in major German estuaries has led to a significant change in the local loading regime, i.e. an increased importance of ship-induced waves and currents. As a consequence, the intensity of ship-induced loads has increased considerably, resulting in damage to rock structures such as revetments, training walls, and groynes. Research into the causes of rock structure deterioration by the Federal Waterways Engineering and Research Institute (BAW) has shown that, for large ships in relatively narrow waterways, long-period primary ship wave loading has become the most significant factor in rock structure damage. Looking to the future, it can be expected that the increase in vessel dimensions will lead to an increase in ship-wave loading. For this reason, analysing long-term trends in long-period ship waves and vessel speed, in order to understand the wave-structure interaction, is of great importance. In this study, the stochastic characterization of long-period primary wave height, drawdown, and the speed of the vessel through the water at Juelssand in the lower Elbe estuary was carried out via extreme value analysis and copula modeling, and the bivariate return periods were calculated. One-parameter bivariate copulas were used to analyse the data, and the dependence pattern between the variables was investigated using five parametric copula families: Gaussian, Gumbel, Clayton, Frank, and Student's t.
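To illustrate what a bivariate return period from a one-parameter copula looks like, here is a small sketch using the Gumbel family (one of the five families the abstract names). The "AND" return period, the mean recurrence interval of both variables exceeding their marginal quantiles, is a standard construction; the quantile values and theta are invented for illustration and are not the study's fitted results.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v); theta >= 1, with theta = 1 giving independence
    and larger theta giving stronger upper-tail dependence."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-(s ** (1.0 / theta)))

def and_return_period(u, v, theta, mu=1.0):
    """'AND' return period: both variables exceed their quantiles.
    mu is the mean inter-arrival time of events (e.g. 1 year)."""
    p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_and

# Marginal non-exceedance probabilities of two hypothetical 10-year
# loading quantiles (e.g. primary wave height and drawdown):
u = v = 0.9
print(and_return_period(u, v, theta=1.0))  # independence: 1/(0.1*0.1) = 100
print(and_return_period(u, v, theta=2.0))  # dependence shortens the joint return period
```

The comparison makes the practical point of copula modelling for structure design: with dependence between wave height and drawdown, the joint extreme event recurs far more often than the independence assumption would suggest.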

Adverse Event Analysis in the Application of Drones Supporting Safety and Identification of Products in Warehouse Storage Operations
PRESENTER: Agnieszka Tubis

ABSTRACT. In recent years, we have observed an increasing use of drones in various industrial processes, including, among others, the support of logistic processes. This upward trend will continue in the coming years, largely due to the development of the Logistics 4.0 concept. Already today, the scope of logistics applications for Unmanned Aerial Systems (UAS) includes a company's internal logistics, infrastructure supervision, and last-mile deliveries in highly urbanized areas. It is therefore necessary to conduct studies assessing not only the effectiveness and efficiency of this technology but also the accompanying adverse events. The aim of the article is to present the results of an analysis of adverse events that occur when drones are used to support selected warehouse operations. This research was carried out at a selected logistics operator. The main attention is focused on disruptions related to product identification in the material flow process in the operator's warehouse. Selected quantitative and qualitative methods recommended in ISO 31010 were used for the analyses. The prepared analysis of adverse events includes: identification of adverse events in the analyzed warehouse handling process; grouping of events according to the developed classification; and assessment of the causes of adverse events in the selected logistics system. Final conclusions were then drawn from the obtained results and verified against the results of a literature review. The presented studies are preliminary and will be developed in subsequent publications. As part of further work, the authors plan to prepare a detailed risk assessment related to the use of drones in logistics processes.

16:10-17:30 Session WE4G: Asset management
Location: Atrium 3

ABSTRACT. The selection of Maintenance Significant Items (MSI) is undoubtedly one of the most important phases in the implementation of Reliability-Centered Maintenance (RCM) in any organization, being essentially a screening phase in which the number of items for analysis can be reduced and prioritized. Despite its importance, few studies currently present systematic and structured methods for the identification of MSI. Fundamentally, there are two phases in identifying the MSI in a physical asset portfolio. First, based on the study and analysis of the system, the criteria and scales are established; this phase can be carried out with the support of standards or expert knowledge, yielding objective criteria and scales. Second, the criteria are evaluated for each item. In this phase, a multicriteria decision method such as the Analytic Hierarchy Process (AHP) or the Analytic Network Process (ANP) is generally used to rank the most critical items. However, their intrinsically subjective evaluations can lead to biased results. To prevent this issue and simplify the process of defining MSI, this work proposes the use of an unsupervised method based on Principal Component Analysis (PCA). With this method, not only are the MSI of a system defined, but the importance of the criteria selected in the first phase is also assessed, based on the variability of the scores associated with each item. Thus, criteria that do not influence the result of the criticality assessment can be disregarded below a minimum value of the cumulative percentage of explained variance. To demonstrate the method, it is applied to a Brazilian hydroelectric power plant and the results are compared with those obtained from a more traditional AHP-based approach. The proposed method yields a robust MSI selection, consistent with the analyzed system.
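The PCA-based screening the abstract describes can be sketched in a few lines of Python. The item names, criteria and score matrix below are entirely hypothetical (not the hydroelectric plant's data); the sketch shows only the mechanics: standardize the criteria scores, take the first principal component as an unsupervised criticality index, and read the cumulative explained variance to judge which components (and hence criteria) can be disregarded.

```python
import numpy as np

# Hypothetical criticality matrix: rows = items, columns = criteria scores
# (e.g. safety impact, production loss, repair cost, failure frequency)
items = ["pump A", "valve B", "motor C", "seal D", "bearing E"]
X = np.array([
    [9.0, 8.0, 7.0, 6.0],
    [2.0, 3.0, 2.0, 1.0],
    [8.0, 9.0, 6.0, 7.0],
    [1.0, 2.0, 1.0, 2.0],
    [5.0, 5.0, 4.0, 5.0],
])

# Standardize the criteria, then PCA via SVD
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

# Cumulative explained variance: components beyond a chosen threshold
# contribute little and the associated criteria can be disregarded
explained = s**2 / (s**2).sum()
print("cumulative explained variance:", np.cumsum(explained))

# Orient PC1 so that a higher score means higher criticality, then rank
pc1 = Z @ Vt[0]
if np.corrcoef(pc1, X.sum(axis=1))[0, 1] < 0:
    pc1 = -pc1
ranking = [items[i] for i in np.argsort(-pc1)]
print("MSI ranking:", ranking)
```

Unlike pairwise AHP comparisons, no expert judgment enters the ranking step here; subjectivity is confined to the first phase (choosing criteria and scales), which is the robustness argument the abstract makes.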


ABSTRACT. The Seveso III Directive 2012/18/EU, implemented in Italy by a legislative decree issued in 2015 ("D.Lgs. 105/2015") (1), imposes an obligation to provide a plan for monitoring and controlling risks related to the ageing of equipment and systems that can lead to loss of containment of hazardous substances, including the necessary corrective and preventive measures. To frame the issue, an overview of Italian law, national standards and guidelines (2) concerning ageing and asset integrity is given, with attention also paid to the role of Public Authorities in addressing ageing in hazardous installations (3). The main outcomes of the analysis of some industrial accidents that recently occurred at Italian "Seveso" establishments, where ageing mechanisms were identified as a significant cause in terms of technical and organizational factors, are then presented. Starting from examples of how organizations manage these problems through specific procedures oriented to asset integrity management, a brief description of the processes and methodologies implemented is provided, together with a focus on good practices in the methods used to assess industry's response to ageing issues. In addition, the paper describes the results, lessons learned and return of experience from Safety Management System inspections conducted in Italy in the last three years, in which weaknesses emerged with reference to the ageing and asset integrity of the hazardous installations inspected (deterioration and degradation caused by corrosion, erosion, stress, and fatigue).

Benchmarking and compliance in the UK Offshore Decommissioning hazardous waste stream
PRESENTER: Sean Loughney

ABSTRACT. As an offshore installation reaches the end of its design life, a decision must be made as to whether to carry out a late-life extension or to decommission it. With the current move towards a circular economy [1], ways to decommission an installation safely and sustainably need to be developed. Part of the decommissioning process must address how to handle hazardous waste materials from the installation. Hazardous materials must be identified, transported, and processed in line with current legislation. Through the analysis of available literature, it has become apparent that there is a gap between the sustainability of the decommissioning of offshore installations and the management of hazardous waste materials. The decommissioning process produces waste that can be recycled, reused, or disposed of [2]. Calder (2019) highlighted that when an installation is being transported to an onshore yard, it is no longer under HSE jurisdiction until it reaches shore. Once an installation, or its parts, are brought onshore for dismantling, it is no longer under the permissioning regime, but an inspection-led one. Wilkinson et al. (2016) discuss the importance of communication between stakeholders and those responsible for planning the decommissioning process of an offshore installation. When tasks such as risk assessments are outsourced, it is difficult for stakeholders to judge the technically complex issues and have confidence in the final proposals. Ahiaga-Dagbui et al. (2017) echo this and go on to suggest that information and knowledge need to be more freely shared amongst operators and contractors. Together, these issues have the potential to combine and reduce the sustainability of the decommissioning process. Walker and Roberts (2013) raised a similar issue, citing a lack of knowledge sharing, trust issues and a skills deficiency. Therefore, this paper investigates the sustainability of UK offshore decommissioning activities and the management of hazardous waste.
The paper also identifies and evaluates existing gaps in regulatory compliance, to support policy makers and regulators in the waste stream of the UK offshore decommissioning process. This will increase awareness of risks, health and safety issues and compliance throughout the waste handling journey.

Analysis of failure rate of water pipes

ABSTRACT. Water supply system operation should be characterized by low operating cost and high operational reliability. Water supply system reliability involves operation under proper conditions so as to guarantee water supply to the recipients. The assessment of the reliability of the water supply system is considered in terms of ensuring the required pressure and the required water quality at any time convenient to the water recipients [1, 2]. For this reason, the analysis of water supply network failures and related issues is important in assessing the proper functioning of the water supply system, because water companies put strong emphasis on the fast removal of failures [3]. In this work, an analysis of selected factors influencing the failure removal time of water pipes is presented, using the example of a water supply system in southern Poland. The analysis of failure removal time was performed for the following factors: water pipe function, water pipe material and diameter, type of water supply network, type of failure, and season. The analysis was based on real operational data obtained from the water company. The highest failure rate was noted for the mains and the lowest for water supply connections; for the mains, the failure rate in the analyzed period significantly exceeded the recommended values. The presented analysis of failure removal time in terms of the cost of planned maintenance will allow for failure prediction and an initial estimation of the costs of failure removal, which in turn will allow for long-term budget planning in water companies. The analysis of failure removal time and of failures in the water supply network shows the complexity of the problem and the need for further research related to the modernization of the water network and the implementation of an efficient system of failure removal, to reduce the waiting time for repair to a minimum.
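The failure rate index that such comparisons rest on is commonly computed as failures per kilometre of pipe per year. A minimal sketch with made-up numbers (not the paper's data; recommended limit values vary by country and pipe function):

```python
# Failure rate = number of failures / (pipe length [km] * observation time [years]).
def failure_rate(n_failures, length_km, years):
    return n_failures / (length_km * years)

# Illustrative figures for a 5-year observation window.
mains = failure_rate(42, 28.0, 5)          # failures per km per year
connections = failure_rate(15, 60.0, 5)
```

Computed per pipe function (mains, distribution pipes, connections), material, and diameter class, these rates can then be compared against the recommended values mentioned in the abstract.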

16:10-17:30 Session WE4H: Automotive Industry
Location: Cointreau

ABSTRACT. The market of electric vehicles (EVs) is rapidly growing across the world, attributed to their unique feature of zero carbon emission. Taking the Chinese market as an example, 984,000 pure electric vehicles were sold in China in 2018, an increase of 50.8% over the same period of the previous year. This means more and more electric vehicles will be running on the road in the future. However, the reliability of these electric vehicles is still an open issue today. In particular, the reliability of the motor controller in electric vehicles is receiving more concern than ever before. On the one hand, this is because power electronic components in the controller are well known to be much less reliable than the mechanical components in other EV subassemblies. On the other hand, it is because the failure of the motor controller may lead to dangerous accidents on the road. Previously, much effort has been made to predict the reliability of the motor controller; however, a detailed investigation of its reliability issues has never been done before. In view of this, a detailed reliability study of the motor controller in pure electric vans is conducted in this paper, in consideration of the fact that more than 90% of sold commercial electric vehicles are pure electric vans. In the research, the detailed root causes of the reliability issues in the motor controller are investigated first, and based on these the failure rates of individual components (e.g. control module, driver module, communication module, and discharging module) in the controller are estimated with the aid of fault tree analysis (Yan 2017), the international standards IEC TR 62308:2004 and MIL-HDBK-217E, and the technical standards for the Chinese electric vehicle industry.
Finally, the tendency of the unreliability of the entire motor controller over the service life of the electric vehicle is estimated based on the fault tree analysis results, in order to obtain a more reliable understanding of the reliability performance of the motor controller over time. From this detailed reliability research, it has been found that the reliability performance of the motor controller degrades gradually over time, and that among the four functional modules of the controller the control module is the most vulnerable, followed by the driver module. This is due to the application of more electronic components and thinner printed lines on these modules.
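The time-dependent unreliability estimate described above can be illustrated with constant module failure rates combined through an OR gate (a series system). The rate values below are invented for illustration only; they are not the handbook-derived values of the paper.

```python
import math

# Illustrative constant failure rates (per hour) for the four modules;
# the paper derives the real values from IEC/MIL handbook data.
rates = {"control": 4e-6, "driver": 3e-6, "communication": 1e-6, "discharging": 5e-7}

def unreliability(lmbda, t_hours):
    # F(t) = 1 - exp(-lambda * t) for a constant failure rate
    return 1.0 - math.exp(-lmbda * t_hours)

def controller_unreliability(t_hours):
    # Top event (controller fails) = OR over the modules: the controller
    # survives only if every module survives.
    reliability = 1.0
    for lmbda in rates.values():
        reliability *= math.exp(-lmbda * t_hours)
    return 1.0 - reliability

F_1yr = controller_unreliability(8760.0)   # unreliability after one year
```

Evaluating `controller_unreliability` over a grid of times yields the degrading trend of the unreliability index over the vehicle's service life.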

Reliability engineering of electric vehicle powertrains: Data collection and analysis based on products in the usage phase

ABSTRACT. In recent times, climate change has had a massive impact on the choice of drive technology for newly developed vehicles. The transformation to a reliable, safe and sustainable powertrain is being pursued. The automobile industry is changing accordingly, and the market shares of electric and hybrid vehicles have increased significantly. The data of electric vehicle powertrains differ from those of combustion vehicle powertrains, for example in the types of measurable and analyzable signals, usage behavior, and the influence factors which have an impact on the components and systems of the powertrain. For the understanding and consideration of real conditions, it is important to collect field data of electric vehicles. To evaluate and improve the reliability, safety and sustainability of electric vehicle powertrains, a comprehensive field-data-based assessment is needed. In this paper, the acquisition and analysis of field data of electric vehicles regarding the reliability, safety and sustainability of electric vehicle powertrains are presented. Requirements for the system architecture as well as data generation, pre-processing and management are outlined.

Driving Factors For Driving Simulators – A Feasibility Study

ABSTRACT. Simulators are used for training purposes in many sectors where humans are required to perform in a safe and reliable manner and the costs and consequences of accidents are high (e.g., aviation, nuclear, oil and gas). Driving simulators are sparsely used in driver training, even though performing in a safe and reliable manner is without a doubt of high importance, and traffic accidents are among the most common causes of death around the world. This paper evaluates the factors influencing the development towards increased use of simulators in driver training, both enablers and barriers, discussing both the current condition and future scenarios. Four different fidelity levels in driving simulators are presented: very low, low, medium, and high; scenarios where these are used are discussed. The conclusion of the feasibility study is that there exist several potential markets for all four levels of simulator fidelity, determined particularly by demographic parameters and simulator content. The exploitation of this market depends strongly on the suppliers’ willingness to adapt their products to market-specific needs and opportunities. Many simulator solutions reduce interaction between student and instructor. However, the driving instructor is still considered important in forming the students’ holistic understanding of driving, road attitude and understanding of risk.

Simulation of Parallel Layered Air Cooling Thermal Management System for Li-ion Batteries
PRESENTER: Pengbo Zhang

ABSTRACT. Battery thermal management structure design is an important measure to ensure high reliability and long life of the battery. This paper combines the advantages of parallel ventilation and layered air channels and proposes a new parallel layered air-cooled structure to improve the heat dissipation of the battery pack. The structure uses thermally conductive partitions to divide the air channel into upper and lower parts, and adopts a design in which two independent fan channels work simultaneously. The upper and lower channels adopt reverse Z-type ventilation. Firstly, an anisotropic heat transfer model of the Li-ion battery is established based on heat generation theory. Then, combined with the heat transfer model, the influence of different flow directions on the heat dissipation performance is studied using the FLUENT simulation software. Finally, the thermal management structure proposed in this paper is improved and optimized. The results show that the maximum temperature and the maximum temperature difference of the battery pack decrease after using the parallel layered air cooling structure, and the temperature field distribution of the battery pack is obviously improved; after increasing the number of outlets, the maximum temperature of the battery pack decreases by 9.55%, the maximum temperature difference decreases by 16.56%, and the heat dissipation performance is further improved.
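Why more cooling capacity lowers the peak temperature can be seen in a zero-dimensional lumped-parameter sketch. All values below (heat generation, heat capacity, effective h·A, air temperature) are invented, and the actual study relies on 3-D FLUENT CFD rather than this toy model:

```python
# Lumped cell temperature under constant joule heating with convective
# air cooling: m*c * dT/dt = Q - h*A * (T - T_air), integrated by Euler.
def simulate(hA, Q=5.0, m_c=800.0, T_air=25.0, dt=1.0, steps=36000):
    """hA: effective convective conductance [W/K]; returns final temperature [C]."""
    T = T_air
    for _ in range(steps):
        T += dt * (Q - hA * (T - T_air)) / m_c   # explicit Euler step
    return T

T_single = simulate(hA=0.5)    # baseline cooling path
T_layered = simulate(hA=1.0)   # layered channels / extra outlet: roughly doubled hA
```

In steady state the cell settles at T_air + Q/(h·A), so doubling the effective conductance halves the temperature rise; the CFD study quantifies the analogous effect for the real 3-D geometry.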

16:10-17:30 Session WE4I: Artificial intelligence for reliability assessment and maintenance decision-making
Location: Giffard
Supplementing Fault Trees calculations with neural networks

ABSTRACT. The use of artificial intelligence algorithms is rapidly gaining ground in engineering applications, including safety engineering. In this paper, we investigate the possibility of using neural networks to supplement fault trees in safety analysis for the estimation of reliability and importance metrics. For this aim, we employ data from an existing fault tree that models cruise ship blackouts to train a neural network that uses basic-event probabilities as input and outputs the estimated top-event probability/frequency. This is done to reduce computational time, as the fault tree model has an extensive number of basic events and is thus computationally demanding. The input to the fault tree is randomly sampled from a Sobol sequence and is used to estimate the top-event probability. The resulting data cloud, which corresponds to the fault tree’s input-output pairs, is used to train the neural network. The two models, i.e. the probabilistic and the neural network model, are compared to each other in terms of accuracy and computational cost, correlated with the number of sampling points used. The fault tree is developed in Matlab/Simulink and the neural network in Python. For the case where the neural network is trained using 10,000 points, a 350-fold decrease in computational cost is observed compared to the fault tree model, while the mean absolute percentage error (MAPE) remains under 15%. Based on the results, recommendations for the application of artificial intelligence algorithms are made.
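The data-generation step (sample basic-event probabilities, evaluate the tree, collect input-output pairs for the surrogate) can be sketched on a toy tree. The three-event tree and the plain pseudo-random sampling below are stand-ins: the paper's tree models cruise ship blackouts with many more basic events, and it samples from a Sobol sequence rather than uniform pseudo-random draws.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy fault tree: top = OR(AND(b0, b1), b2), basic events independent.
def top_event_probability(p):
    p0, p1, p2 = p
    p_and = p0 * p1                  # AND gate
    return p_and + p2 - p_and * p2   # OR gate of independent events

# Sample basic-event probabilities and evaluate the tree to build the
# surrogate's training cloud (inputs P, targets y).
P = rng.uniform(0.0, 0.1, size=(1000, 3))
y = np.array([top_event_probability(p) for p in P])

# (P, y) would then be used to fit the neural network surrogate,
# replacing the expensive tree evaluation at query time.
```

For a tree of this size the exact evaluation is trivial; the surrogate pays off only when, as in the paper, the tree has enough basic events that each evaluation is expensive.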

Prioritization of culvert maintenance combining Multi Criteria Decision Models and Data Mining techniques

ABSTRACT. A current challenge of modern societies is to keep aging infrastructure systems operative and in good condition. Given limited maintenance resources, a risk-based approach to the prioritization of maintenance actions is required. Multi Criteria Decision Models are suited for this task because they allow a combination of qualitative and quantitative information about several risk indicators, which are also characterized by different unit measures. However, complex decision problems characterized by too many alternatives, attributes, and criteria are difficult to tackle. Therefore, methods are required which support and simplify the application of these models for the risk assessment of aging infrastructures. This paper proposes the integration of Multi-Criteria Decision Models with Data Mining techniques. A cluster analysis based on the K-medoids algorithm is carried out in order to reduce the number of alternatives and identify those which dominate or are dominated within the decision problem. The SMARTS model is then applied in order to aggregate single-attribute utility functions and compute the preferences over the alternatives. The proposed approach is applied to a system of aging culverts of the German waterways network. Results show that the proposed procedure allows a quick yet comprehensive and easy-to-interpret risk assessment of aging infrastructures. The method also shows great potential for considering multiple system levels and failure scenarios. These represent the next steps of this research.
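The clustering step that reduces the number of alternatives can be sketched with a plain alternating K-medoids (a simplified PAM) in NumPy; the two-dimensional "culvert condition vectors" below are invented for illustration, and the paper's attribute space is richer.

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Simplified alternating K-medoids on Euclidean distances."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)       # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):                              # re-pick medoid per cluster
            members = np.where(labels == j)[0]
            if len(members):
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Two well-separated groups of illustrative culvert condition vectors.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
medoids, labels = k_medoids(X, k=2)
```

Each medoid is an actual alternative, so the subsequent SMARTS evaluation can be run on the medoids as representatives of their clusters.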

Deep Reinforcement Learning-based maintenance decision-making for a steel production line

ABSTRACT. In Industry 4.0, the adoption of system monitoring technologies provides a large amount of data about the health of the system, which raises the challenge of adopting condition-based maintenance (CBM). Due to its capability to act on the system in real time based on condition monitoring, which leads to cost reduction and enhanced system availability and reliability, CBM has become a powerful tool for industrial competitiveness. However, to take advantage of such large volumes of data in maintenance decision-making, an important issue to be considered is the large space of states and actions, which traditional maintenance models cannot cope with. To overcome this issue, integrating emerging Machine Learning and Artificial Intelligence tools into maintenance decision-making and optimization seems promising. Therefore, this work proposes a Deep Reinforcement Learning (DRL)-based maintenance policy for a steel production line, in which maintenance decisions are made based on real-time data about the condition of the system. The production line under study uses metal scrap as the raw material for steelmaking. Before its usage, the scrap needs to be crushed in a shredder machine, which is the most crucial process. An intermediate buffer is used to keep supplying crushed scrap to the remaining stations when the machine is turned off for maintenance actions. To simulate the dynamics of the production line, two different scenarios considering distinct aspects and assumptions of the system are investigated. A DRL framework is then built for each scenario to learn, through interactions with the environment, the optimal maintenance policy minimizing the expected long-run cost rate. A numerical case study is performed to evaluate the proposed DRL policy against existing maintenance policies.
As a result, the proposed DRL policy shows a better result in terms of cost, along with an increase in system availability.
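The learn-by-interaction loop can be sketched with tabular Q-learning on a toy degradation model; this is a stand-in for the paper's deep RL (a table suffices here because the toy state space is tiny, whereas DRL is needed precisely when it is not). All costs, transition probabilities and hyperparameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Degradation states 0 (new) .. 3 (failed); actions: 0 = do nothing, 1 = maintain.
N_STATES, N_ACTIONS = 4, 2
C_MAINT, C_FAIL = -20.0, -100.0   # negative rewards (maintenance / failure costs)

def step(s, a):
    if a == 1:                     # preventive maintenance resets the machine
        return 0, C_MAINT
    if s == 3:                     # failure triggers costly corrective repair
        return 0, C_FAIL
    # otherwise the machine degrades one state with probability 0.5
    return (s + 1 if rng.random() < 0.5 else s), 0.0

# Tabular Q-learning with epsilon-greedy exploration.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
s = 0
for _ in range(50_000):
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
    s = s2

policy = np.argmax(Q, axis=1)     # learned condition-based maintenance rule
```

With these costs the learned rule lets the machine run in good states and repairs it once failed; with a larger preventive/corrective cost gap it would learn to intervene earlier.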

A Method based on Gaussian Process Regression for Modelling Burn-in of Semiconductor Devices

ABSTRACT. Burn-in is a systematic screening method to ensure the reliability of semiconductor devices. It consists of operating the manufactured devices under accelerated stress conditions, such as high temperature and voltage. The aim is to remove the devices that would fail in the initial portion of the bathtub curve of the failure rate and to estimate the corresponding Early Life Failure Rate (ELFR). In practice, performing burn-in is costly and time-consuming, particularly for new technologies. In this context, the present work aims at developing an Artificial Intelligence (AI)-based method to: i) predict the number of defective semiconductor devices within a production lot by resorting to signals measured during the production process; ii) estimate the early life failure probability of the manufactured devices. The method combines: a) dimensionality reduction by Principal Component Analysis (PCA) for the extraction of features characterizing the production process from the measured signals, and b) Gaussian Process Regression for the prediction of the number of defective devices in the lot. The method is applied to artificial data simulated to emulate burn-in process data. The obtained results show a satisfactory accuracy in the prediction of the number of defective devices in the lot and of the corresponding early life failure probability.
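The regression step can be sketched with the closed-form Gaussian Process predictive equations in NumPy. The one-dimensional input (a single PCA feature), the training points and the kernel hyperparameters are all assumptions of this sketch; the paper works with several PCA components and fitted hyperparameters.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    # Squared-exponential kernel on 1-D inputs
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

# Illustrative data: defective-device counts per lot vs one PCA feature.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([2.0, 3.0, 5.0, 4.0, 6.0])

noise = 0.1
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
x_test = np.array([1.5, 2.5])
Ks = rbf(x_test, x_train)
Kss = rbf(x_test, x_test)

mean = Ks @ np.linalg.solve(K, y_train)          # posterior predictive mean
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)        # posterior predictive covariance
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The predictive standard deviation is what makes GPR attractive here: it quantifies how confident the defective-count prediction is for a new lot, not just its point value.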

16:10-17:30 Session WE4J: Effectiveness, Management and Reliability of Natural Risks Reduction Measures and Strategies
Location: Botanique
Coupled numerical model CFD-DEM of debris flows impact to improve the vulnerability quantification of structures
PRESENTER: Rime Chehade

ABSTRACT. Debris Flows (DF) are dangerous events that can cause up to the complete destruction of structures (buildings, bridges, etc.). They represent a high risk to public safety. We developed a numerical model to evaluate the impact pressure (P) of mass flows by better describing the effect of boulders: the boulders’ size strongly influences the impact pressure, which has a considerable effect on the structure’s damage [1]. The model can help in the design of civil protection measures by quantifying the vulnerability of the structures. The proposed numerical model considers a one-way coupled granular-fluid model based on a Distinct Element Method (DEM), using separate Computational Fluid Dynamics (CFD) calculation results for the fluid phase. This model estimates the impact of the boulders at the local scale of the pillar (Fig. 1). The pressure due to the fluid phase can be added afterwards. These estimates were validated against empirical models [2]. The vulnerability of the structure depends on the intensity of the DF: P is chosen as an intensity measure that indirectly integrates the height, velocity and boulder size of the DF [3]. An improvement of the existing vulnerability function is proposed based on our simulations, considering a new parameter, the maximal diameter dmax: we are thus able to estimate the value of the pressure (and therefore the vulnerability) taking into account dmax. These results help engineers to prepare defense structures and to build advanced engineering solutions based on boulder size for the prevention and mitigation of risks caused by different flows.
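The fluid-phase contribution that "can be added afterwards" is often estimated with a hydrodynamic formulation of the type used in empirical models; a minimal sketch, with an illustrative bulk density and dynamic coefficient (not values from the paper):

```python
# Hydrodynamic estimate of the fluid-phase impact pressure,
# P = alpha * rho * v^2. Coefficients here are purely illustrative.
RHO = 1900.0        # debris-flow bulk density [kg/m^3]
ALPHA = 2.0         # empirical dynamic pressure coefficient [-]

def fluid_impact_pressure(velocity):
    """Dynamic impact pressure of the fluid phase [Pa] for a flow velocity [m/s]."""
    return ALPHA * RHO * velocity**2

p_slow = fluid_impact_pressure(2.0)
p_fast = fluid_impact_pressure(6.0)
```

Because the pressure scales with v squared, tripling the flow velocity multiplies the fluid impact pressure by nine, which is why the velocity enters the intensity measure alongside flow height and boulder size.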


Performance of the sediment control dams built after the 1999 debris-flow disaster in Vargas
PRESENTER: Jose Luis Lopez (Institute of Fluid Mechanics, Engineering College, Universidad Central de Venezuela, Caracas, Venezuela)

ABSTRACT. The state of Vargas, in northern Venezuela, was devastated by multiple landslides and debris flows in December 1999. In a period of 8 years (2001-2008), 63 check dams were built by government authorities to protect the downstream population. Their main function was to retain and sort the sediment material. A few of them were built to stabilize the river bed (consolidation dams). Basically, 37 of the structures are closed-type dams and 26 are open dams (slit, window or beam type). Regarding the construction material, 44 dams were made of gabions, 14 of concrete, three of steel pipe and two of flexible barriers (Lopez, 2020). In this paper, a critical review of the performance of the check dams is attempted, based on 20 years of experience (field observations and topographical surveys). The morphodynamic effects in the channel beds after dam construction are discussed and summarized. The dams were tested by two subsequent floods in 2005 and 2010, in which thirteen dams retained about 300,000 m3 of sediment, protecting the towns of Maiquetia and Camuri from a new disaster. However, some dams were affected by the floods. Two gabion dams in Anare were destroyed by lateral bypassing of the flows. Another gabion dam in Caraballeda collapsed by scouring of a lateral abutment (Fig. 1). Partial damage has also been observed in some structures due to abrasion of the concrete coating and removal of gabion rows by rock impacts. Most of the dams were subjected to a rapid pace of sedimentation, even the open dams, due to the large sediment yield and woody debris transported by the flows, which clogged the openings. However, a partial self-cleaning process has taken place in some of the slit and beam dams. Additionally, channel degradation has occurred in some downstream reaches caused by the “hungry water” effect, due to the interruption of bed load transport as sediment was being trapped upstream. Significant bed erosion, between 2 and 4.5 m, has been measured at the foot of a few dams.
In some cases, bed lowering has been caused by downstream gravel extraction activities. In conclusion: a) six dams and three counter dams have been totally or partially destroyed by subsequent floods; b) three dams are in imminent danger due to significant bed lowering at the foot of the structure; c) all 37 closed dams are already full of sediment, so they cannot mitigate future debris flows; d) maintenance has been poor or almost nonexistent; e) in most cases no access roads for machinery and equipment were provided to remove the sediment material retained in the dams; f) a feasibility study is required to analyze which is the best solution: removing the accumulated sediment upstream of the structures or building new dams; g) the presence of the control works (dams and channels) has created a false sense of security, and new houses have occupied the river banks. Efforts have to be made to improve land use regulations and enforce the law to prevent reoccupation of areas subject to high levels of hazard.

Fig.1. Damages in some dams of Vargas caused by: a) rock impact; b) lateral abutment scour; c) bed scour; d) lateral bypassing.

Keywords: Debris flows, Vargas, Venezuela, check dams, performance.


1. J.L. López, Aprendiendo del Desastre de Vargas, Work of incorporation to the National Academy of Engineering (2020)

Optimizing recovery strategies for interdependent lifeline systems exposed to a natural hazard

ABSTRACT. Natural hazard events can lead to large-scale failures in lifeline systems. By enabling a fast recovery of these systems following failure events, one enhances their resilience. However, this resilience also comes at a cost, e.g. if additional repair crews need to be hired. In this contribution, we present work towards the identification of a robust recovery strategy that optimizes the trade-off between the downtime and losses on the one hand and the cost of enhancing resilience on the other. We use a simplified model of the network [1] and simulate system failure events [2] based on a stochastic set of natural hazard scenarios. We employ the frameworks described in [3,4] for modelling the system recovery, and identify the relation between recovery costs and the losses associated with system downtimes. Furthermore, we investigate the influence of the interdependence between lifeline systems on the optimal recovery strategy. We illustrate the methodology with an example of interconnected power and water networks exposed to a seismic hazard.
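The core trade-off (which damaged component to repair first to minimize downtime losses) can be illustrated with a tiny single-crew scheduling sketch; the component names, durations and loss rates below are hypothetical, and the paper's frameworks handle far richer dynamics and interdependencies.

```python
from itertools import permutations

# Hypothetical damaged components: (name, repair_duration [days], loss_rate [k$/day]).
# With a single repair crew, the repair order determines cumulative losses.
components = [("substation", 3, 50.0), ("pump", 1, 80.0), ("pipe", 2, 20.0)]

def total_loss(order):
    t, loss = 0.0, 0.0
    for name, duration, rate in order:
        t += duration                 # crew finishes this repair at time t
        loss += rate * t              # component was down (losing money) until t
    return loss

best = min(permutations(components), key=total_loss)
best_names = [c[0] for c in best]
```

For a single crew this reduces to a weighted-completion-time problem (repair highest loss-rate-per-day-of-work first); with multiple crews, stochastic hazard scenarios and interdependent networks, the optimization becomes the much harder problem the paper addresses.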


1. B. Cavdaroglu, E. Hammel, J.E. Mitchell et al. Ann. Oper. Res. 203, 279 (2013)
2. P. Crucitti, V. Latora and M. Marchiori. Phys. Rev. E 69, 045104(R) (2004)
3. M. Ouyang, L. Dueñas-Osorio and M. Xing. Struct. Saf. 36-37, 23 (2012)
4. G. Zhang, F. Zhang, X. Zhang et al. IEEE Trans. Smart Grid 11:6, 4700 (2020)

Integrating Imperfect Information in the Deterioration Modeling of Torrent Protection Measures for Maintenance and Reliability Assessment
PRESENTER: Nour Chahrour

ABSTRACT. Among the several types of natural risk protection measures, check dams are the most dominant in the French mountains. Over their lifetime, while being subjected to severe phenomena such as torrential floods, the check dams’ efficacy is affected and therefore the level of protection provided by these structures is reduced. Available budgets oblige risk managers to establish priorities between the different maintenance strategies to be applied to structures that require maintenance. Modeling the dynamic deterioration of check dams and making decisions on their maintenance always depend on the amount of available data, the diverse sources of data, assumptions, etc. This paper focuses on assessing the influence of uncertain inputs on the outputs of a check dam’s deterioration and maintenance model. The end purpose of this analysis is to support check dams’ maintenance decision-making under information imperfection.

16:10-17:30 Session WE4K: Manufacturing
Location: Atrium 1
PRESENTER: Michal Cihlar

ABSTRACT. In today’s practice, molten salts are used in thermal energy storage technologies, solar tower technology [1], and in the development of fourth-generation nuclear reactors [2]. The greatest advantages of liquid salts undoubtedly include the possibility of achieving relatively high temperatures at atmospheric pressure, which leads to higher efficiency and safety of power plants using molten salts. Liquid salts have a high boiling temperature and a high density compared to water. In nuclear power plants, the following advantages are also associated with the use of molten salts: the elimination of the need for fuel fabrication, the possibility of continuous refueling and the possibility of removing fission products. Despite these indisputable advantages, some aspects of molten salts (e.g. corrosion effects, melting and solidification) are little explored. Therefore, the behavior of molten salts and the technology of their practical use is an important recent research subject. An experimental MSL device was built at the Research Centre in Řež for the purpose of studying the material properties of selected salts [3]. During the MSL loop commissioning, several minor and more significant failures and defects occurred. Analysis of the causes of these defects showed that their origin lay both in the incorrect design of the device structure and in the operating procedures [4]. On the basis of this analysis, the critical points of the device were identified, geometric modifications were made to the structure and, with the help of CFD tools, three operation modes were established: normal; abnormal; and emergency, in the event of a power outage. A risk-based design was therefore used in the construction of the new equipment. Two procedures were used to do this, both aiming to ensure process safety by reducing criticality.
The first procedure was based on an analysis of the failures of the original installation: its critical points were identified on the basis of technical principles, considering the possible phenomena that may affect the device operation, including the human factor; a checklist and an assessment scale were drawn up for the risk assessment. The second procedure was theoretical and applied CFD models. The article presents the resulting device design.

Working Situation Health Monitoring: proposal of method and case study
PRESENTER: Romain Duponnois

ABSTRACT. In a working situation on an automated assembly machine, technical drifts during operation can lead to machine dysfunctions. These dysfunctions may cause the operator supervising the machine to adapt and respond to reduce the effect of these technical drifts on the rest of the working situation. To respond to these dysfunctions the operator may expose him or herself to hazardous phenomena and thus be in a hazardous situation. (Lamy & Perrin, 2020) showed the feasibility of identifying this kind of potentially hazardous situation by observing the working situation. Here, we propose a method called Working Situation Health Monitoring (WSHM). The goal of this method is to identify these potentially hazardous situations by analyzing the potential drift of working situations and monitor the emergence of potentially hazardous situations using equipment and production data. It consists of three steps: firstly, we model the working situation studied to characterize the nominal working situation; secondly, we analyze cause-and-effect relationships between potential process drifts, potential operator responses and potentially hazardous situations; and thirdly, we construct a health indicator of the working situation based on knowledge of potentially hazardous situations identified in the second step and by equipment data. This paper also presents the application of the method to a case study (an educational automated assembly machine).


ABSTRACT. A general process of customization at the individual level is ongoing in several branches, from precision and personalized medicine (Duarte et al. 2016) to individual financial and marketing methods (Matz, 2017). Risk assessment in manufacturing has generally neglected the influence of the personal characteristics of the workers involved, strongly limiting the effectiveness of risk-based decision making. This paper is focused on the development of an innovative approach to explicitly take into account the individual characteristics of the operators (human factor) within the risk assessment of the work environment. The human factor is assessed with a set of tests applied during the working activity. The resulting method allows the definition of a personalized risk assessment based on the individual skills of each worker potentially involved (Human Capability) and the characteristics of the task (Workload). Preliminary results showed a relevant influence of the human factor on the risk and consequently allowed the identification of areas of intervention not highlighted by the more traditional approach. A first line of development is a more detailed evaluation of the rules that govern the interaction between Human Capability and Workload. A more detailed analysis of the importance of each single index in relation to the probability of human error would make the model more accurate. This could be done at the level of the single task, identifying for each operation which variables (memory, dexterity, etc.) are more stressed and which are less solicited. A second line of development could focus on shifting from practical tests to the most recent tools for individual physical performance monitoring and visual behavioral monitoring. This will allow a migration from discontinuous to continuous data collection, with a consistent upgrade in terms of representativeness of the results.
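One possible, purely illustrative rule for combining Workload and Human Capability (the abstract does not specify its aggregation rule, so the worst-case demand-to-capability ratio below is an assumption of this sketch, as are all the scores):

```python
# Hypothetical personalized risk index: per-dimension workload demand divided
# by the worker's capability score; the worst dimension drives the index.
def personalized_risk(workload, capability):
    """Ratio-type index: > 1 means task demands exceed the worker's capability."""
    assert workload.keys() == capability.keys()
    ratios = [workload[k] / capability[k] for k in workload]
    return max(ratios)

workload = {"memory": 3.0, "dexterity": 4.0, "attention": 2.0}
worker_a = {"memory": 4.0, "dexterity": 5.0, "attention": 4.0}
worker_b = {"memory": 2.0, "dexterity": 5.0, "attention": 4.0}

risk_a = personalized_risk(workload, worker_a)   # within capability
risk_b = personalized_risk(workload, worker_b)   # memory demand exceeds capability
```

The point of the example is the personalization: the same task yields different risk levels for different workers, which is exactly the effect the traditional, worker-agnostic assessment misses.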

PRESENTER: Max Radetzky

ABSTRACT. In general, wear is caused by high mechanical forces acting on the cutting tool and the high temperatures generated by these forces. The mechanical and thermal loads depend on various parameters, such as the material of the cutting tool, the material of the workpiece, the cutting speed, the feed rate, the cutting fluid and the ambient conditions. To estimate the mechanical and structural reliability of the cutting tools, images of the teeth from different stages (after the cutting process and after pre-grinding) are analyzed. Depending on the sawn material, a rating can thus be made of the degree of wear and the remaining service life. The color gradient of the images is determined and statistically evaluated in order to draw conclusions about the type of wear and the defects of the cutting tool. The tool geometry of the teeth is recorded from different perspectives. Through the formation of characteristic values, a simple evaluation method for tool reliability is presented. This research work contributes to the image-based analysis of machining processes and of the wear behavior of cutting tools, using statistical methods that allow comparability. In this process, important knowledge is gathered with regard to the degradation behavior and the prediction of the service life of cutting tools.
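The gradient-based image statistics idea can be sketched on synthetic edge profiles. Everything here is an assumption of the sketch, not the paper's method: a sigmoid models the intensity transition at a tooth edge (a worn, blunted edge transitions more gradually than a sharp one), and the peak gradient magnitude serves as the characteristic value.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic grayscale tooth-edge images: a sigmoid intensity profile whose
# steepness stands in for edge sharpness, plus a little sensor noise.
def edge_image(sharpness, size=64):
    x = np.linspace(-1.0, 1.0, size)
    profile = 1.0 / (1.0 + np.exp(-sharpness * x))   # sigmoid edge profile
    img = np.tile(profile, (size, 1))
    return img + rng.normal(0.0, 0.005, img.shape)

def peak_gradient(img):
    # Characteristic value: maximum gradient magnitude over the image.
    gy, gx = np.gradient(img)
    return float(np.hypot(gx, gy).max())

sharp_tooth = peak_gradient(edge_image(sharpness=40.0))
worn_tooth = peak_gradient(edge_image(sharpness=5.0))
```

Tracking such a characteristic value over successive images of the same tooth gives a simple degradation indicator from which a remaining-service-life rating could be derived.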

19:00-23:59 Gala Evening

Cocktail at the Jean Lurçat Museum then

Dinner at Greniers Saint-Jean