
08:30-10:10 Session TU1A: Risk Assessment
Location: Auditorium
PRESENTER: Mathias Eidem

ABSTRACT. Norwegian authorities have an ambition to develop the E39 road as a continuous Coastal Highway Route between Kristiansand and Trondheim without ferries. The western coast of Norway is characterized by deep and long fjords cutting into the mountain landscape. These fjords are to be crossed, but this is challenging since the fjords are long, up to 1350 meters deep, and 5 km wide at favourable crossing points. Floating bridges are probably suitable to cross the fjords, but the coast and the fjords are exposed to significant ship traffic. Some of the new bridges along E39 will, when realized, be the world's largest bridges of their kind, and critical to future communication in Norway. Hence, the bridges need to be designed such that the risk from severe allisions by the ship traffic passing the bridge is within acceptable limits. The Norwegian rules for bridge engineering demand that the design ship(s) be assessed in a separate risk analysis, where the design ship size and mass, the ship's speed at collision, and the associated accidental actions (impact energy and momentum) are determined such that the risk acceptance criteria for the bridge crossing are fulfilled. The risk assessment includes both frequency analysis and impact analysis. Previous ship allision research has not focused on floating bridges. Long and slender structures, like a 5 km long floating bridge, face other challenges than fixed bridges. This paper reviews and discusses a risk assessment process suitable for floating bridges. We present an overall applied methodology for risk assessment of ship allisions against floating bridges. We argue that ship allision risk may be a major contributor to the total risk.
The risk assessment should pay more attention to the impact side, to understand how the bridge responds to a ship impact, such as the distribution of the impact energy absorbed in global deformations and the impact energy dissipated through crushing of pontoons or the bridge girder.


ABSTRACT. In a market with great demand for differentiated products and services, the search for technologies that guarantee more efficient processes grows. The advent of Industry 4.0 makes it possible for companies to invest in innovations in the hope of boosting their business. In this context, the concept of reliability, as a measure of the expected delivery of the planned results in a given period of time and under specific conditions, becomes essential to reduce the risks that threaten a project in the context of Industry 4.0. This study proposes to identify risk factors that can threaten the success of a project in this scenario. As a methodological approach, a literature review to identify risk factors and a case study of a real project with a food manufacturer were carried out, using Bayesian Belief Networks (BBN) in the analyses. The main risk factors identified were non-compliance with good practices or specific legislation, low guarantee of efficiency of the solutions proposed by the project, impossibility of integration between obsolete systems and components and 4.0 technologies, and errors in project cost planning. This research is valuable for industrial managers who need to anticipate the difficulties in implementing Industry 4.0 projects.

Functional Safety Assessment of Distributed Predictive Heating and Cooling Systems for Electric Delivery Vehicles

ABSTRACT. In modern sustainable transportation, thermal management systems control energy from several sources (internal combustion engines, hybrid and pure-electric motors and their inverters) as well as waste heat from energy storages and chargers controlled by battery management systems. In battery electric vehicles (BEVs), the main source for generating heat or cold is electrical power from the battery. The challenge, in particular in the case of highly dynamic driving profiles, is a well-adapted heat distribution controller to compensate for or dissipate high temperature differences from source to sink components. Several applications have been introduced to use optimally synergized thermal and energy consumption, such as regenerative braking, cabin comfort, and cold and hot storage for delivery services. Here we propose an innovative and predictive thermal management system with related cooling and heating elements to intelligently reduce overall energy consumption based on vehicle driving profiles. To ensure the safety of the thermal management functionalities, the safety analysis approach needs to be systematically designed and compliant with international safety standards. This paper provides an assessment methodology. Classical inductive and deductive system analysis methods are involved in the analysis and are interconnected to determine functional safety requirements for the overall thermal management system. Safety margins are identified to deploy waste energy without deteriorating the battery system. The outcomes of the analysis show which functionalities are key to controlling all identified potential hazards. In addition, reliability requirements regarding intelligent management and sensor capabilities are identified. As an example, the approach is applied to a small electric delivery vehicle.

PRESENTER: Romano Giovanni

ABSTRACT. For practical storage and transport reasons, methane is widespread and commonly used in gaseous form in industrial plants: accidental high-pressure unignited methane jets are a typical scenario investigated during industrial risk assessment. In a typical industrial plant, it is common for flammable gas release scenarios to affect one or more obstacles, such as buildings or equipment (e.g., columns, tanks, pipe racks), around the leak. When a flammable gas release interacts with an obstacle, its behavior can change significantly, and with it the relevant damage areas, as demonstrated by historical accident experience and recent works in the literature (Bénard et al., 2016 (1); Colombini and Busini, 2019 (2)). The study of the behavior and safety effects of a high-pressure jet affecting one or more obstacles inside an industrial plant is not so common in the literature. This work investigates how a series of obstacles in different configurations influences the jet cloud extent and, consequently, the hazardous areas. Varying the dimension, shape, and relative position of the obstacles, the effect of the flow involving multiple obstacles was systematically studied through an extensive Computational Fluid Dynamics (CFD) analysis. For different methane releases, in terms of upstream pressure and accidental hole size, the main achievement of this work is a simple criterion able to derive engineering correlations of practical use and determine the limits of the influence of multiple obstacles on the jet. The situations in which a specific obstacle inside an industrial plant does not influence the high-pressure methane jet behavior are also identified, thereby allowing models and/or geometries to be simplified in a CFD simulation.


1. P. Bénard, A. Hourri, A. Angers, and A. Tchouvelev, "Adjacent Surface Effect on the Flammable Cloud of Hydrogen and Methane Jets: Numerical Investigation and Engineering Correlations", International Journal of Hydrogen Energy, 41, 18654-18662 (2016).
2. C. Colombini and V. Busini, "Obstacle Influence on High-Pressure Jets Based on Computational Fluid Dynamics Simulations", Chemical Engineering Transactions, 77, 811-816 (2019).

Comparative Risk Assessment and External Costs of Accidents for Passenger Transportation in Switzerland
PRESENTER: Matteo Spada

ABSTRACT. Holistic assessments of mobility technologies and systems cover the environmental, economic and social dimensions of sustainability. Commonly, two approaches are used for the comparative assessment of technologies: the estimation of total (internal plus external) costs, and Multi-Criteria Decision Analysis (MCDA), e.g. Hirschberg (2016). One of the central social indicators used for total costs and MCDA is the impact on health caused by accidents. Generally, when focusing on risks related to passenger transportation, only vehicle accidents are considered, neglecting other types of risks, such as those related to the production of the fuel used. Therefore, a comprehensive risk assessment, and the related external costs, considering all the potential accident risks associated with all transportation modes is missing.

The aim of this study is to present a comparative risk assessment for passenger transportation. In particular, it considers, for each type of transportation (cars, buses, trains, etc.), the combination of the vehicle accident risk and the accident risk related to the production and use of the fuel in different drivetrains (e.g., internal combustion, batteries, etc.). Based on the method described in Spada and Burgherr (2020), this study uses a multi-dimensional accounting method to assess the import-adjusted fatality rates for the different fuels, and a Bayesian model is developed to assess accident risk in the current scenario (2020). The analysis is performed for the case study of Switzerland, considering historical observations for the upstream energy chains collected in PSI's ENSAD for the time period 1992-2020 (e.g., Kim et al. (2018)) and trade data collected from the IEA (e.g., IEA (2019)), while vehicle accident data are collected from FEDRO. Once the accident risk indicators are estimated for the different means of transportation and drivetrains, the external costs of accidents are assessed and compared with previous studies in the literature (e.g., BfS (2019)).

08:30-10:10 Session TU1B: Mathematical Methods in Reliability and Safety
Location: Atrium 2
PRESENTER: Arne Bang Huseby

ABSTRACT. Within the field of reliability, multistate systems represent a natural extension of the classical binary approach. Repairable multistate systems quickly become too complex for exact analytical calculations. Fortunately, however, such systems can be studied efficiently using discrete event simulations. In the binary case, importance is usually measured using the approach by Birnbaum. Several authors have extended the notion of importance measures to multi-state systems, with the component state processes modelled as homogeneous semi-Markov processes. Such processes typically reach stationary states very quickly. Thus, most properties of the system can be analysed using asymptotic distributions, which typically are determined by the mean waiting times and the transition matrix of the embedded Markov chain. In the present paper we focus on systems where the components are subject to seasonal variations or aging. In order to model this we use a generalized trend-renewal model. When the component processes are not homogeneous, the analysis should cover the entire time frame, not just the asymptotic properties. This makes comparison of importance more complicated. Several numerical examples are included in order to illustrate the methodology.
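For reference, the binary Birnbaum measure mentioned above can itself be estimated by discrete event style simulation. The sketch below is a minimal, hypothetical illustration (a 2-out-of-3 system with invented component reliabilities), not the paper's multi-state, trend-renewal extension:

```python
import random

def structure(x):
    # example structure function: 2-out-of-3 system works if >= 2 components work
    return 1 if sum(x) >= 2 else 0

def birnbaum(p, i, n=100_000, seed=1):
    """Monte Carlo estimate of the Birnbaum importance of component i:
    B_i = P(system works | X_i = 1) - P(system works | X_i = 0)."""
    rng = random.Random(seed)
    diff = 0
    for _ in range(n):
        x = [1 if rng.random() < pj else 0 for pj in p]
        x[i] = 1
        up = structure(x)
        x[i] = 0
        diff += up - structure(x)
    return diff / n

p = [0.9, 0.8, 0.7]  # invented component reliabilities
print([round(birnbaum(p, i), 3) for i in range(3)])
```

For this 2-out-of-3 system the exact values are B_i = p_j + p_k - 2 p_j p_k, i.e. 0.38, 0.34 and 0.26, so the simulated estimates can be checked directly.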

Dynamic grouping maintenance policy for the road infrastructure
PRESENTER: Ikram Najeh

ABSTRACT. The quality and availability of road infrastructure play a very important role in ensuring safe and convenient transportation. Current human needs oblige us to optimize the use of our travel resources. The existing road infrastructure must therefore be developed and maintained to address the availability needs induced by the new uses of mobility. Consequently, the maintenance of the road infrastructure must be optimized. In the literature of the last decade, several algorithms for grouping maintenance actions have been proposed (1) (2) (3). At the last ESREL conference we proposed a maintenance policy for the road markings and pavement cracks of the road infrastructure, considering it as a four-component system: median strip line, emergency line, broken center line, and pavement. The proposed strategy is based on the individual optimal maintenance plan of each component. Then, over a finite planning horizon, the scheduled maintenance actions are grouped together both to ensure the proper functioning of the system and to minimize the cost of maintenance (4). To improve this previous work, a new long-term horizon (30 years) dynamic grouping maintenance strategy is proposed, which dynamically takes new monitoring data into account. This new algorithm is applied to the maintenance optimization of the road infrastructure, using the Long Term Pavement Performance database (limited to Texas infrastructure data).

Designing Reliability-Informed Customer Surveys
PRESENTER: Neda Shafiei

ABSTRACT. Because new products enter the market very rapidly, estimating their reliability is challenging due to insufficient historical data. Customer survey data can be used as prior information in a Bayesian analysis, integrated with product returns, reliability tests, and other reliability data sources, to improve reliability estimation. Customer surveys are usually designed for purposes other than reliability estimation. Therefore, extracting reliability information from these surveys may be hard or impossible. Even when possible, the extracted reliability information contains significant uncertainties. This study provides an approach for using a reliability-informed customer survey and analyzing the collected data. This paper describes the critical elements of a reliability-informed survey. A generic and flexible mathematical model is then proposed, which utilizes the critical elements to estimate the life distribution. The model converts the various applicable stress profiles into usage cycles/times and estimates the life distribution. The parameters of the life distribution model are estimated through the maximum-likelihood estimation method and Bayesian analysis. The proposed approach is generic and can be used to estimate the life distribution and the reliability of a product at different stress levels and times. A case study is presented in which the approach is applied to a device using a simulated dataset.
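The maximum-likelihood step can be sketched generically. The snippet below assumes a Weibull life distribution fitted to complete (uncensored) usage-time data; these are illustrative assumptions only, since the abstract's model also handles stress-profile conversion and Bayesian estimation, which are omitted here:

```python
import math
import random

def weibull_mle(data, tol=1e-9):
    """ML estimates (shape k, scale lam) for a complete Weibull sample,
    solving the profile-likelihood equation for k by bisection
    (the equation's left side is monotone increasing in k)."""
    n = len(data)
    mean_log = sum(math.log(x) for x in data) / n

    def g(k):
        num = sum(x**k * math.log(x) for x in data)
        den = sum(x**k for x in data)
        return num / den - 1.0 / k - mean_log

    lo, hi = 1e-3, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x**k for x in data) / n) ** (1.0 / k)
    return k, lam

# demo on simulated Weibull(shape=2, scale=10) lifetimes via inverse transform
rng = random.Random(7)
sim = [10.0 * (-math.log(1.0 - rng.random())) ** (1 / 2.0) for _ in range(2000)]
k_hat, lam_hat = weibull_mle(sim)
print(k_hat, lam_hat)
```

With 2000 simulated lifetimes the estimates land close to the true shape 2 and scale 10, which is the kind of sanity check a simulated-dataset case study enables.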


ABSTRACT. In reliability engineering, we need to understand system dependencies, cause-effect relations, identify critical components, and analyze how they trigger failures. Three prominent graph models commonly used for these purposes are fault trees (FTs), decision trees (DTs), and binary decision diagrams (BDDs).

These models are popular because they are easy to interpret, they are used as a communication tool between stakeholders of various backgrounds, and they support decision-making processes. Moreover, these models help to quantify and understand real-world problems as they allow computing reliability metrics, finding minimum cut sets, deriving logic rules, and displaying dependencies.

Nevertheless, it is rather unclear how these graph models compare with one another. Thus, in this paper, we present an overview and a systematic comparison based on their (i) purpose & application, (ii) structural representation, (iii) analysis, (iv) construction, and (v) benefits & limitations. Furthermore, we use a running example to showcase the models in practice.

Our results show that FTs, DTs, and BDDs differ substantially, especially in terms of purpose and the information encoded in the graph structures. However, we found that DTs and BDDs share a similar type of elements, information propagation, and induction process, and that BDDs and FTs are more suitable for modeling cause-effect relationships. Finally, in order to take advantage of each type of model, we provide conversion methods to translate between them.
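One of the analyses named above, finding minimum cut sets of a fault tree, can be illustrated with a toy sketch. The tree and events below are invented, and real tools use BDD- or MOCUS-style algorithms rather than brute-force enumeration:

```python
from itertools import combinations

# toy fault tree (invented): TOP = AND(OR(A, B), OR(A, C))
EVENTS = ["A", "B", "C"]

def top(failed):
    return ("A" in failed or "B" in failed) and ("A" in failed or "C" in failed)

def minimal_cut_sets():
    # brute force: every failure combination that triggers the top event...
    cuts = [set(c) for r in range(1, len(EVENTS) + 1)
            for c in combinations(EVENTS, r) if top(set(c))]
    # ...kept only if no proper subset is itself a cut set
    return [s for s in cuts if not any(t < s for t in cuts)]

print(minimal_cut_sets())  # the minimal cut sets here are {A} and {B, C}
```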

Misspecification analysis of a gamma- with an inverse Gaussian-based perturbed degradation model by using a new expectation maximization particle filter algorithm
PRESENTER: Nicola Esposito

ABSTRACT. Gamma and inverse Gaussian degradation processes are often considered equivalent, though this is not true. For this reason, the misspecification of these models is a problem of concern [1]. The aim of this paper is to evaluate whether and how the presence of measurement error affects this model misspecification issue. This specific issue was preliminarily investigated in [2]. Mainly due to numerical problems encountered in retrieving the MLEs of the model parameters, the study performed in [2] was restricted to only two simulated datasets. In fact, computing the likelihood functions of the considered perturbed models, which are not available in closed form, requires intensive numerical methods that both increase the computational burden and exacerbate convergence issues of the numerical algorithms used to maximize the likelihood. This preliminary study indicated that the presence of measurement error increases the risk of a wrong diagnosis. Yet, it did not permit saying whether, and to what extent, a misspecification causes severe consequences in terms of prognostics and reliability assessment. In this paper, we extend the study presented in [2] in two directions. We propose a new sequential Monte Carlo EM algorithm, which greatly simplifies the ML estimation task, and we present the results of a vast Monte Carlo study developed by taking advantage of it. The risk of incurring a misspecification is evaluated as the percentage of times the Akaike information criterion leads to selecting the wrong model. The severity of a misspecification is evaluated in terms of its impact on reliability and remaining useful life estimates.


1. Tseng, S. T. and Yao, Y. C., "Misspecification Analysis of Gamma with Inverse Gaussian Degradation Processes", in Chen, D. G., Lio, Y., Ng, H., Tsai, T. R. (Eds.), Statistical Modeling for Degradation Data, ICSA Book Series in Statistics, Springer, Singapore, 2017.
2. Castanier, B., Esposito, N., and Giorgio, M., "Misspecification Analysis of a Gamma- with an Inverse Gaussian-Based Degradation Model in the Presence of Measurement Error", in Baraldi, P., Di Maio, F., and Zio, E. (Eds.), e-proceedings of the 30th ESREL Conference and 15th PSAM Conference, 1-6 November 2020, Venice, Italy.
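The AIC-based selection step in the abstract above can be sketched for the unperturbed (error-free) case. The closed-form fits below (Minka's approximation for the gamma shape, exact MLEs for the inverse Gaussian) and the simulated exponential dataset are illustrative assumptions, not the paper's particle-filter method:

```python
import math
import random

def fit_gamma(x):
    """Maximized gamma log-likelihood (Minka's closed-form shape
    approximation; exact conditional MLE for the scale)."""
    n = len(x)
    m = sum(x) / n
    s = math.log(m) - sum(math.log(v) for v in x) / n
    k = (3.0 - s + math.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    theta = m / k
    return sum((k - 1.0) * math.log(v) - v / theta for v in x) \
        - n * (math.lgamma(k) + k * math.log(theta))

def fit_invgauss(x):
    """Maximized inverse Gaussian log-likelihood (exact MLEs)."""
    n = len(x)
    mu = sum(x) / n
    lam = n / sum(1.0 / v - 1.0 / mu for v in x)
    return sum(0.5 * math.log(lam / (2.0 * math.pi * v ** 3))
               - lam * (v - mu) ** 2 / (2.0 * mu ** 2 * v) for v in x)

def aic_select(x):
    # both models have two parameters, so comparing AIC = 2*2 - 2*logL
    # reduces to comparing the maximized log-likelihoods
    return "gamma" if fit_gamma(x) > fit_invgauss(x) else "inverse Gaussian"

# simulated gamma increments (shape 1, i.e. exponential with mean 2)
rng = random.Random(42)
data = [-2.0 * math.log(1.0 - rng.random()) for _ in range(500)]
print(aic_select(data))
```

On this exponential sample the gamma model is selected, as expected; the hard cases studied in the paper arise for shapes where the two densities are nearly indistinguishable and measurement error blurs them further.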

08:30-10:10 Session TU1C: Maintenance Modeling and Applications
Extension of the concept of importance to multi-state systems with binary components

ABSTRACT. In this paper the concept of importance of a component in a complex system is generalized from an attribute of a single component to an attribute of a group of components that can fail or be repaired simultaneously (e.g., when cascading failures can occur). It is assumed that the considered system is a multi-state one, its states are partially ordered, and its operation can be modeled by a Markov chain. The group importance is defined as the probability that the simultaneous failure or repair of all components in a group G results in a transition from state a to state b, where a<b or b<a. It is referred to as the importance of G to a transition from a to b. The paper's main results are formulas expressing the rates of transitions between the system states in terms of the above-defined importances. It is also demonstrated how the obtained transition intensities can be applied to compute a number of practically important reliability parameters of the considered system. For better understanding, the presented theory is illustrated with the example of a simple three-state power supply system.
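To ground the Markov-chain setting, the sketch below computes the stationary distribution of a small three-state power-supply model from its transition-rate matrix. The states and rates are invented for illustration and are not taken from the paper:

```python
def stationary(Q):
    """Stationary distribution of a CTMC with generator Q: solve pi Q = 0
    with sum(pi) = 1, replacing one balance equation by normalization."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # balance equations
    A[-1] = [1.0] * n                                    # normalization row
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# states: 0 = failed, 1 = degraded, 2 = fully working (rates invented)
fail_21, fail_10 = 0.01, 0.05   # degradation / failure rates per hour
rep_01, rep_12 = 0.2, 0.5       # repair rates per hour
Q = [
    [-rep_01,             rep_01,             0.0     ],
    [ fail_10, -(fail_10 + rep_12),           rep_12  ],
    [ 0.0,                fail_21,           -fail_21 ],
]
pi = stationary(Q)
print(pi)
```

Because this example is a birth-death chain, detailed balance gives pi = (1/205, 4/205, 200/205), i.e. a long-run probability of about 0.976 of being fully working, which the solver reproduces.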

A virtual simulation originated data supplement model of maintenance time test based on multi-stage iteration and a neural network

ABSTRACT. Maintenance time reflects the level of a product's maintainability design and is a quantitative parameter that must be considered in the finalization, evaluation, and use stages of a product. At present, the maintenance time of each stage is mainly obtained by statistical testing, and a specified sample size of time data is then used to judge whether the product meets the maintainability design requirements. However, due to complex systems, high test costs, too many test stages, and long test cycles, the time data from the finalization, evaluation, and use stages of aviation products cannot reach the minimum sample size required by common statistical methods, which makes it difficult to carry out maintainability verification of aviation equipment. At the same time, the simulation time data and historical time data of different stages obtained by virtual maintenance methods are not used in maintainability verification, resulting in a waste of resources. Therefore, this paper proposes a maintenance time verification data supplement model based on virtual simulation and multi-stage iteration. First, the model trains a neural network on the virtual simulation data and maintainability-related information to obtain training data, so as to supplement the time data in the finalization stage. Then, according to the time test data in the finalization stage, the model updates the simulation time data symmetrically and in reverse to complete the supplement and update of the maintenance time data in the finalization stage. As the stages progress, the time test data of each stage and the training data of the previous stage are used to complete the supplement and update of that stage's time data.
The model can not only train the time and maintainability data of virtual maintenance with a neural network to supplement the time test data of the finalization stage, but also continuously supplement and update the maintenance time data of the three stages through the multi-stage iterative model. In summary, the model fits the actual development process of the product and can significantly reduce maintenance verification costs and shorten the development cycle.

Modelling of Condition-based Inspections and Deterministic Maintenance Delays for Bridge Management

ABSTRACT. An efficient transportation system is a prerequisite for the overall development of a modern society, and considerable resources are devoted to its construction, operation, and maintenance. In Norway, more than 7 billion Norwegian kroner are spent each year to ensure the required performance of the road network. With an increased number of ageing constructions in recent years, almost half of these expenses went to maintenance activities. As a vital element of the Norwegian road network, a total of over 18,000 bridges are distributed across the country. Their maintenance strategy can be classified as condition-based, where periodic inspections are carried out based on predefined rules and maintenance decisions are made based on the inspection findings. In view of the large stock of bridges, it is sometimes difficult to follow all inspection plans due to the limited budget and resources, and the operators are struggling with backlogs. An optimized inspection policy is therefore of great value to conduct fewer inspections without increasing the risk. Many studies have addressed this issue. However, according to the review paper by De Jong and Scarf (2020), the majority of these studies focused on periodic inspections and fewer studies considered aperiodic inspection policies. Based on the studies by Arismendi et al. (2019) and Laskowska and Vatn (2019), this paper investigates the modelling of condition-based inspection intervals with a multi-state Markov process. A case study is presented based on empirical data from the Norwegian Public Roads Administration, the agency responsible for planning, building, operating, and maintaining the national and county road networks in Norway. The resulting system failure rates and long-run expected costs are compared with those of the current strategy.

A framework to analyze reliability and maintainability for maintenance strategies considering usage variables
PRESENTER: Tomas Grubessich

ABSTRACT. The representation of the life and repair behavior of equipment is a fundamental topic for the definition of maintenance strategies. The information obtained from these representations is essential to different analyses, such as the decision-making process related to the replacement of equipment, continuous improvement processes, and the design of representative indicators of the system, among others, all directed towards the definition of the organization's maintenance strategies. However, based on the authors' experience, difficulties most often arise when defining the possible states through which an asset has passed in a given period. The possible states to be considered could be, for example, equipment in operation, operational detention, and failure. Errors in the identification of equipment states over time can lead to an erroneous representation of the life and repair behavior, which is a fundamental input for a correct definition of the maintenance strategy. Therefore, this paper presents a framework whose main objective is to guide the process of determining the possible states of a piece of equipment over time, taking into account the different variables that represent its utilization in the best possible way. With this work, it is expected to achieve an effective definition of the life and repair behavior of the equipment, identifying the best data sources and considering the operational characteristics of the system. In order to measure the potential of this framework, part of its implementation is shown applied to a system in a copper mine in Chile. The step-by-step process of the framework is presented, with special attention to its characteristics, advantages, and requirements, in order to verify how the results achieved accord with the objectives of the framework.

08:30-10:10 Session TU1D: Reliability, Availability and Maintainability of Safety systems
Location: Panoramique
Challenges in reliability estimation of modified technology using information from qualification testing – An offshore well integrity solenoid valve case
PRESENTER: Jon T. Selvik

ABSTRACT. Continuous improvement is a main principle in modern risk and safety management. To demonstrate technology improvement, for example related to reliability performance, testing could be performed as part of qualification activities. When testing modified drilling and well technology, the designer and technology provider already have a basis from the original technology regarding reliability performance and improvement potential. In addition, there might also be a strong incentive to demonstrate some target reliability or safety integrity level, which might influence the reliability demonstration and estimation. In this paper, we refer specifically to qualification testing of a solenoid valve as part of a well integrity verification system as an example case, where a main objective is to identify and discuss challenges related to the test information collected and used to estimate the reliability of modified technology. A main issue discussed is the trade-off between demonstrating acceptable levels with high confidence and the cost of testing. As part of this, we address statistical biases such as HARKing and the file drawer problem, and the use of Bayesian updating for their mitigation. We also give some reflections on the uncertainty of reliability estimates that build on such tests in safety system calculations per ISO/TR 12489:2013, which could challenge the usefulness of the results.
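The Bayesian updating mentioned above can be sketched generically with a conjugate beta-binomial model for a per-demand failure probability; the prior and test counts below are invented for illustration and are not taken from the valve case:

```python
# Conjugate beta-binomial updating of a per-demand failure probability q:
# prior q ~ Beta(a, b); after n independent test demands with f failures,
# the posterior is q ~ Beta(a + f, b + n - f).
def update(a, b, n, f):
    return a + f, b + (n - f)

def posterior_mean(a, b):
    return a / (a + b)

a0, b0 = 1.0, 199.0                   # invented prior, mean 0.005
a1, b1 = update(a0, b0, n=300, f=0)   # 300 failure-free test demands
print(posterior_mean(a1, b1))         # 0.002: belief shifts toward lower q
```

The design trade-off the abstract raises is visible here: pushing the posterior (and its upper credible bound) below a target with high confidence requires a large number of failure-free demands, which is exactly what drives the cost of testing.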

The Safety Integrity Level of mitigation functions

ABSTRACT. Most companies have standardized Safety Integrity Levels (SIL) according to IEC 61511 for mitigation functions, e.g. emergency shutdown. In most cases a rationale for the choice of these SILs is lacking, other than field experience.

Mitigation functions differ from process safety functions in that they prevent the escalation of incidents, rather than preventing their occurrence. And whilst process safety functions are normally aimed at preventing a specific incident (e.g. failure of a vessel due to overpressure), emergency functions can mitigate the consequences of many different incidents.

LOPA is often used to determine the SIL of process safety functions and is typically performed under the assumption that the mitigation functions are working, i.e. that incidents do not uncontrollably escalate. Mitigation functions also violate the assumption of independence between safety functions. LOPA may therefore not be suitable to determine the SIL of mitigation functions.

This paper discusses the difficulties of SIL assessment for mitigation functions of process installations and proposes a method for this purpose, illustrated by an example from the subsea domain. The results show that the standard assumptions for the SIL of mitigation functions are reasonable.

On the importance of using realistic data for safety system calculations
PRESENTER: Stein Hauge

ABSTRACT. The use of realistic failure data is an essential part of any quantitative reliability analysis of safety functions. It is also one of the most challenging parts and raises several questions concerning the suitability of the data, the assumptions underlying the data and what uncertainties are related to the data.

The IEC 61508 and IEC 61511 standards (refs. 1 and 2) set out requirements for safety instrumented systems (SIS) across all relevant lifecycle phases, and have become the leading standards for SIS specification, design, implementation, and operation. IEC 61508 is a generic standard common to several industries, whereas IEC 61511 has been developed especially for the process industry.

A fundamental concept in both IEC 61508 and IEC 61511 is the notion of risk reduction: a large risk reduction requires a high safety integrity level (SIL) and a correspondingly low probability of failure on demand (PFD). It is therefore important to apply realistic failure data in the design calculations, since too optimistic failure rates may suggest a higher risk reduction than what is obtainable in operation. In other words, the predicted risk reduction, calculated for a safety function in the design phase, should, to the degree possible, reflect the actual risk reduction experienced throughout the operational phase; see also ref. 3. This is further emphasized in IEC 61511-1 (subclause 11.9.3), which states that the applied reliability data shall be credible, traceable, documented, and justified, and shall be based on field feedback from similar devices used in a similar operating environment.

The paper discusses challenges that arise when collecting and applying field data from operational experience, including how to identify and treat systematic failures such as repeating failures, bad actors, and common cause failures, and how to incorporate the effect of diagnostic coverage (DC). Guidance is provided on the use of failure data for different applications, such as design calculations versus operational follow-up. The paper is based on extensive reviews of more than twenty thousand SIS maintenance notifications from the Norwegian petroleum industry, documented in the new revision of the PDS data handbook (ref. 3).
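The link between failure rates and achieved risk reduction discussed above can be made concrete with the standard low-demand approximation for a single (1oo1) channel, PFD_avg ≈ λ_DU·τ/2, where λ_DU is the dangerous undetected failure rate and τ the proof-test interval (see IEC 61508-6). The numbers below are invented for illustration:

```python
# Standard low-demand approximation for a single (1oo1) channel.
def pfd_avg_1oo1(lambda_du_per_hour, test_interval_hours):
    return lambda_du_per_hour * test_interval_hours / 2

def sil_from_pfd(pfd):
    # IEC 61508 low-demand SIL bands for average PFD
    for sil, (lo, hi) in {4: (1e-5, 1e-4), 3: (1e-4, 1e-3),
                          2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}.items():
        if lo <= pfd < hi:
            return sil
    return 0

pfd = pfd_avg_1oo1(2e-6, 8760)   # invented lambda_DU, yearly proof test
print(pfd, sil_from_pfd(pfd))    # about 0.00876, i.e. within the SIL 2 band
```

The same arithmetic shows why optimistic rates matter: halving the assumed λ_DU halves the predicted PFD and can move a function one SIL band up on paper without any change in the installed equipment.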

References
1. IEC 61508, "Functional safety of electrical/electronic/programmable electronic (E/E/PE) safety-related systems", Parts 1-7, Edition 2.0, 2010.
2. IEC 61511, "Functional safety - Safety instrumented systems for the process industry sector", Parts 1-3, Edition 2, 2016.
3. "Reliability data for safety equipment", PDS data handbook, 2021 edition, SINTEF, 2021.

The benefit of ISO/TR 12489 for reliability modeling and calculation of safety systems, illustrated by oil and gas applications
PRESENTER: Florent Brissaud

ABSTRACT. Safety systems are widely used to protect industrial installations against undesired events. They are traditionally classified according to their design (conventional or instrumented safety systems) or their mode of functioning (demand or continuous mode of operation). In any case, they should be designed to reach sufficient probabilities of success, leading to acceptable safety levels for the protected installations. Relevant methods and tools are therefore needed for this purpose.

The technical report ISO/TR 12489, issued in 2013 by the ISO/TC67/WG4/PG3 involving eleven countries, provides guidelines for reliability and safety system analysts in the oil and gas industries. ISO/TR 12489 is in line with IEC 61508 in dealing with the functional safety of safety-related systems, and aims to close the gap between the state of the art and the application of probabilistic calculations for safety systems in any industry. After gathering the relevant definitions and raising the typical challenges, the technical report explains how to solve them. It also analyses how simplified formulae can be established for simple safety systems and how the common standardized models - reliability block diagrams, fault trees, the Markovian approach and Petri nets - may be used to deal with more complex situations. Moreover, ISO/TR 12489 details approaches mentioned in IEC 61508:2010 Part 6 Annex B for SIL-related calculations. It also provides guidelines on the multiple safety systems mentioned in IEC 61511 ed. 2.

The proposed paper presents the benefits of applying ISO/TR 12489 in industry, notably: the identification and explanation of weaknesses encountered when implementing IEC 61508 and its derived standards (e.g. IEC 61511); the consolidation of the simplified approaches; the demystification of the systemic approaches, with a demonstration that they are simpler to implement than the "simplified" formulae; the identification of difficulties, together with warnings and extensive solutions to overcome them; detailed explanations of the solutions proposed to reliability engineers; the development of typical examples, from simple to complex safety systems, that compare the various approaches and illustrate how to use them; and the development of the evaluation of the spurious failure frequency. These benefits are illustrated using examples of safety systems in gas production and transmission applications.
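To make the notion of "simplified formulae" concrete, the following sketch shows two textbook low-demand PFDavg approximations of the kind consolidated in such guidelines (the specific formulas below are the common IEC 61508-6-style approximations, and all numeric values are assumptions for illustration):

```python
# Hedged sketch of typical "simplified formulae" for low-demand architectures
# (IEC 61508-6-style approximations; values are assumptions, not from the paper).

def pfd_1oo1(lambda_du, tau):
    """Single channel: average PFD over the proof test interval tau."""
    return lambda_du * tau / 2.0

def pfd_1oo2(lambda_du, tau, beta=0.0):
    """Redundant pair: independent double failure plus a beta-factor CCF term."""
    independent = (lambda_du * tau) ** 2 / 3.0   # both channels failed
    ccf = beta * lambda_du * tau / 2.0           # common cause contribution
    return independent + ccf

lambda_du, tau, beta = 5.0e-7, 8760.0, 0.05
print(f"1oo1:           {pfd_1oo1(lambda_du, tau):.2e}")
print(f"1oo2 (no CCF):  {pfd_1oo2(lambda_du, tau):.2e}")
print(f"1oo2 (beta=5%): {pfd_1oo2(lambda_du, tau, beta):.2e}")
```

Running this shows the common cause term dominating the redundant architecture by more than an order of magnitude, which is precisely the kind of pitfall that the more rigorous systemic models (fault trees, Markov, Petri nets) make explicit.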

Impact of Imperfect Proof Testing on the Performance of Safety Instrumented Functions

ABSTRACT. Periodic proof testing (PT) is critical in providing adequate assurance that Safety Instrumented Functions (SIFs) deliver the required risk reduction throughout their lifecycle. The purpose of the PT is to detect dangerous undetected failures (λDU) that cannot be detected by diagnostics. However, it is recognized that not all failures can be detected by diagnostics or PT; some will only be identified either at equipment overhaul or when a demand is placed on the SIF. The fraction of failures detected by the proof test is referred to as the proof test coverage factor (PTC). This paper defines proof test coverage, identifies considerations that can affect the PTC, proposes methods for determining the PTC for greenfield and legacy equipment, and shows how the PTC can affect the average probability of failure on demand (PFDavg). The paper concludes that imperfect proof testing can have a significant impact on the designed risk reduction requirements and on the suitability of the defined proof testing method when the PTC is not considered. Therefore, a theoretical and pragmatic approach should be adopted, considering the proof testing methods prescribed in the safety manual and their predefined PTC for the selected operation mode. Consideration should also be given to the persons responsible for writing and conducting the proof tests and their ongoing competency requirements.
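A common way to quantify this effect splits λDU into a fraction cleared at each proof test and a residual fraction cleared only at overhaul; the sketch below uses that standard two-interval approximation with invented rates and intervals (not the paper's data) to show how imperfect coverage inflates PFDavg:

```python
# Illustrative sketch (assumed values): imperfect proof test coverage splits
# lambda_DU into a part cleared at each proof test (interval tau) and a
# residual part cleared only at overhaul (interval T), inflating PFDavg.

def pfd_avg_with_ptc(lambda_du, tau, t_overhaul, ptc):
    covered = ptc * lambda_du * tau / 2.0                  # found by proof test
    residual = (1.0 - ptc) * lambda_du * t_overhaul / 2.0  # found only at overhaul
    return covered + residual

lambda_du = 1.0e-6    # dangerous undetected failures per hour (assumed)
tau = 8760.0          # annual proof test interval (hours)
t_overhaul = 87600.0  # 10-year overhaul interval (hours)

for ptc in (1.0, 0.9, 0.7):
    print(f"PTC = {ptc:.0%}: PFDavg = {pfd_avg_with_ptc(lambda_du, tau, t_overhaul, ptc):.2e}")
```

With these assumed numbers, dropping coverage from 100% to 90% roughly doubles PFDavg, because the uncovered 10% of failures accumulates over the much longer overhaul interval — the core point the abstract makes.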

08:30-10:10 Session TU1E: Human Factors and Human Reliability
Location: Amphi Jardin

ABSTRACT. Human operators play a key role in the safe and successful conduct of maritime and aviation transport operations, and human error is often reported as a contributor to maritime and aviation accidents. The implementation of human-informed design considerations is therefore essential to improve safety and operational performance in both sectors, especially in the maritime sector, where there is no established framework for systematically considering human factors at the design stage. To address this gap, the SAFEMODE project brings together key experts from the aviation and maritime sectors. SAFEMODE aims to deliver a framework that includes human factors considerations and enables designers to make risk-informed decisions. The methodological approach of SAFEMODE builds upon four areas: the collection and analysis of accident data; the development of a toolkit for human performance assurance; the development of human factors-based risk models; and the creation of a framework to support risk-informed design. The types of safety events considered in SAFEMODE include collision and grounding for the maritime sector, and runway collision, taxiway collision and wake vortex during the en-route flight phase for the aviation sector. This paper provides insight into the efforts conducted as part of the SAFEMODE project to assess the human contribution to risk, and into the benefits of applying these models to support risk-informed decisions in design and operations.

PRESENTER: Luca Podofillini

ABSTRACT. In state-of-the-art Human Reliability Analysis (HRA), the analysis of operator (or, more generally, personnel) tasks is scenario- and plant-specific, informed by the plant design, thermo-hydraulic analyses, procedural guidance, and so forth. Observations of crew performance in simulators also inform a plant-specific HRA, for instance on the timing of operator actions, the critical decision points, and emerging failure modes. Yet actual performance evidence from simulators (e.g. numbers of successes, failures, and response difficulties) is not used, mostly because very few failures are observed in the highly reliable operator performance. At the Paul Scherrer Institute, the Risk and Human Reliability Group is investigating methods to incorporate plant-specific performance evidence into the plant's HRA. The idea is to go beyond failure counts and adopt performance measures such as crew situation awareness and task performance measures, e.g. from ref. 1. In this paper, the concept is implemented via a Bayesian Belief Network. Numerical examples demonstrate how the human error probability (HEP) can be informed by performance measures, going beyond the simple counting of (rare) failures, even in the case where no failures are observed. In the demonstration, the model addresses a single performance measure (lack of situation awareness); future work will extend it to multiple performance dimensions. A foundational assumption of the model is that a relationship exists between the HEP and the performance measure. In particular, a linear relationship is assumed between the logarithm of the HEP and the measure, with some noise to represent uncertainties. Another future challenge will be to calibrate this relationship.
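The core mechanism — updating a human error probability (HEP) from a performance measure via an assumed log-linear link, even with zero observed failures — can be sketched with a simple grid-based Bayes calculation. This is not the paper's Bayesian Belief Network, and every number below (grid, link coefficients, noise, trial counts) is an invented assumption:

```python
# Toy grid-Bayes sketch of the idea (all numbers assumed, not the paper's BBN):
# the HEP is linked to a performance measure m via log10(HEP) = a + b*m + noise,
# so an observed measure updates the HEP even when no failures are observed.
import math

heps = [10 ** (-e / 10.0) for e in range(5, 41)]  # candidate HEPs, 10^-0.5 .. 10^-4
prior = [1.0 / len(heps)] * len(heps)             # flat prior over the grid

a, b, sigma = -4.0, 3.0, 0.5   # assumed log-linear link and noise level

def likelihood(hep, m_obs, n_fail=0, n_trials=20):
    # Gaussian likelihood of the measure implied by this HEP via the link...
    mu = (math.log10(hep) - a) / b
    g = math.exp(-0.5 * ((m_obs - mu) / sigma) ** 2)
    # ...combined with a binomial likelihood of the (rare) failure counts.
    binom = math.comb(n_trials, n_fail) * hep ** n_fail * (1 - hep) ** (n_trials - n_fail)
    return g * binom

def posterior_mean_hep(m_obs):
    w = [p * likelihood(h, m_obs) for h, p in zip(heps, prior)]
    return sum(h * wi for h, wi in zip(heps, w)) / sum(w)

print(f"small SA deficiency (m=0.2): HEP ~ {posterior_mean_hep(0.2):.1e}")
print(f"large SA deficiency (m=0.9): HEP ~ {posterior_mean_hep(0.9):.1e}")
```

Even with zero failures in twenty trials, a worse situation-awareness measure pulls the posterior HEP upward — the behavior the abstract describes as "going beyond failure counts".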

PRESENTER: Rossella Bisio

ABSTRACT. Disciplines like human reliability assessment (HRA) and human factors engineering (HFE) depend significantly on empirical data. A large number of approaches and methods have been developed in response to specific demands from safety-critical industries and evolving technologies, particularly in control centers. Over many years of activity, the Halden Reactor Project [1] has collected a remarkable amount of data on human performance in the context of nuclear power plant safety. A considerable amount of data is produced at the Halden Human Machine Laboratory (HAMMLAB) [2], a full-scale human-in-the-loop research simulator. Other data have been collected at plant training simulators in experiments, usability studies, and verification and validation exercises. As is common in many empirical fields, not all collected data are reported in papers, conferences, and client reports, as these focus on answering specific research questions by analyzing, in aggregate, the data collected for the study. Details on individual tasks, operators/crews, and detailed contextual conditions are seldom provided. Data collection, a very costly activity, occurs with a particular set of research questions in mind, adopting a research and analysis methodology suited to the study. The consequence is that most of the information produced, like the submerged part of an iceberg, remains invisible and largely underexploited. At the same time, technological developments like data analytics are bringing to market tools that facilitate data sharing, analysis, and visualization. Hence, we now have the opportunity to share empirical human performance data to address past and newly arising research questions, posed by the increasing pace of digitalization that is constantly bringing new ways of organizing work, re-designing human-automation interaction, and responding to ever more stringent safety and efficiency requirements.
HRA and HFE need to devise new approaches to setting up human performance studies and quicker methods to analyze data, in order to increase the quantity (and quality) of data produced and to exploit, when suitable, data and findings produced by others. At the Halden Project we have started to build a human performance repository and are moving toward testing how new tools could facilitate better exploitation of data and findings. In this paper we present the challenges we have faced along this path and how we have approached them. The challenges can be grouped into the following categories: the plurality of methods that define the target data, the integration into the human performance research process, the population of the repository, and the data security and privacy requirements for international data exchange.


ABSTRACT. In the past twenty to thirty years, organizations have changed dramatically, and these changes, in addition to technological changes such as the use of augmented reality (AR), introduce new system risks. Post-normal accident theory describes organizations as more globalized and digitalized, formed as networks of organizations, which can lead to post-normal accidents such as network failure accidents. In addition, it states that strategies have become more financialized, organizational structures more networked, and technology and tasks more digitalized and standardized. These organizational factors also affect human performance. Organizations and humans constitute the social part of socio-technical systems. Metamodels should provide the modeling elements required for modeling human and organizational factors in new AR-equipped socio-technical systems, yet current metamodels do not consider the factors that would lead to post-normal accidents. In this paper, we elaborate the theory of post-normal accidents and extract the influencing factors that lead to them. We also consider global distance, including geographical, temporal and cultural distances, as an influencing factor on human performance. We then use the extracted influencing factors to extend the modeling elements in our previously proposed conceptual metamodel for modeling AR-equipped socio-technical systems. Our extended metamodel can be used by analysis techniques to perform risk assessment for AR-equipped socio-technical systems.

Evaluating Electroencephalogram Channels using Machine Learning Models for Drowsiness Detection
PRESENTER: Plínio Ramos

ABSTRACT. The oil and gas (O&G) industry has suffered several catastrophic accidents over the years, many of which are attributed to human factors. Indeed, human operators continue to play a central role in performing complex tasks in which cognitive functions influence their performance, especially in emergency situations. However, operators may not respond effectively when affected by tiredness, such as mental fatigue and/or drowsiness. Therefore, the development of a drowsiness detection system is desirable for industries dealing with safety-critical tasks, such as control rooms in the O&G context. In this paper, we analyze two electroencephalogram (EEG) channels: Fz and Pz. The data are processed using distinct time-domain signal representations and analyzed with four well-known machine learning (ML) classification techniques in order to identify patterns related to drowsiness in different subjects performing monotonous tasks. All analyses and comparisons use a real, public database for human drowsiness. For most of the subjects analyzed, the ML models achieved a balanced accuracy (BA) greater than 95% when considering information from the Pz channel only, which makes the drowsiness detection system less invasive and opens the possibility of using it in an actual environment.
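Since the abstract's headline result is stated in terms of balanced accuracy, a minimal sketch of that metric clarifies why it is preferred here: drowsy and alert epochs are typically imbalanced, and plain accuracy rewards a classifier that never flags drowsiness. The labels below are made up for illustration:

```python
# Minimal sketch of balanced accuracy (BA): the mean of sensitivity and
# specificity, robust to class imbalance between "drowsy" (1) and "alert" (0)
# epochs. The example labels are invented, not from the paper's database.

def balanced_accuracy(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2.0

# 8 alert epochs, 2 drowsy epochs: an "always alert" classifier scores 80%
# plain accuracy, but only 50% balanced accuracy.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_naive = [0] * 10
print(balanced_accuracy(y_true, y_naive))  # prints 0.5
```

A BA above 95%, as reported for the Pz-only models, therefore implies both high detection of drowsy epochs and few false alarms, not just a favorable class ratio.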

08:30-10:10 Session TU1F: Health monitoring and predictive maintenance of offshore systems
Internet of Underwater Things to monitor offshore wind turbine fields
PRESENTER: Fekher Khelifi

ABSTRACT. The Internet of Underwater Things (IoUT) has gained great popularity [1]. The real-time supervision requirements of both ecosystems and offshore infrastructures explain this worldwide interest. Large offshore wind projects have now been launched off the French coasts; among them, the first farm will be installed by 2023 offshore St Nazaire harbour, providing an opportunity to establish an IoUT in an industrial environment. This paper presents some preliminary results of the Blue IoT - Eolia project, whose objectives are to design and test an underwater acoustic network dedicated to the supervision of subsea infrastructures and of the environmental parameters needed to ensure the safety and reliability of wind turbines. Typical sensors (force and pressure sensors, inclinometers, thermometers, etc.) and different acoustic modems are tested to illustrate the capacity of such underwater networks to accommodate various parameters. Strategies to avoid collisions in the different acoustic channels are developed using a Medium Access Control (MAC) layer, with some results and analyses presented in the paper [2,3]. Finally, a first experiment in a natural environment, conducted during summer 2021 (in the river Erdre at Nantes, France), is detailed, illustrated and compared with the literature [4,5]. A demonstration underwater network is expected to be installed during 2022 in real conditions on SEM-REV, the marine renewable multi-technology field test site of Centrale Nantes, located offshore Le Croisic (France).

PRESENTER: Fabien Caleyron

ABSTRACT. This paper presents a model-based approach to monitoring loads in wind turbine components. The workflow, presented in Figure 1, is composed of an offline training phase and a diagnosis phase and is fed with data from SCADA and possibly from sensor time series. Different tools dedicated to the approach are presented:
• Surrogate modeling from dynamic simulations, in order to link available input data (typically 10-min SCADA statistics and sensor time series) to required outputs (fatigue or extreme loads). These surrogate models are used in the diagnosis phase to evaluate the loads in the required component.
• Condensation of sensor time series, in order to generate scalar features to be used as inputs to the workflow. These features extract relevant information from the time series, typically to characterize the stochastic loading from wind and waves.
• Data assimilation, with an online parameter inference approach based on a Bayesian filtering method such as the Ensemble Kalman Filter. Data assimilation can be used to characterize uncertain parameters related to the structure (static or quasi-static behavior) or to the loading (dynamic behavior).
Different applications are presented, such as fatigue load estimation at wind turbine blade roots, validated with on-site measurements, and structural and environmental parameter estimation performed with a data assimilation approach.

Incorporating reliability assessment in the design development & optimization of floating structures

ABSTRACT. Offshore wind turbines are exposed to fluctuating environmental loads and have to deal with aero-hydro-servo-elastic coupled dynamics. Such complex engineering systems need thorough design processes, as well as sophisticated monitoring and maintenance approaches. The current development trend towards floating support structures for offshore wind turbines makes maintenance and repair work more difficult. This is because, on the one hand, access to, transfer of personnel to, and work of technicians on floating systems are complicated and entail additional hazards. On the other hand, floating wind turbines can be located further from shore, which significantly reduces the available weather windows for offshore work. Thus, researchers and industry must not only focus on efficient maintenance strategies but also put more emphasis on the design process of such offshore structures, focusing on reliable systems ab initio.

For reliability assessments of highly complex engineering systems, the combination of different techniques - building on approaches for creating approximate system representations and subsequent reliability analysis and calculation methods - is most promising. The development and assessment of floating wind turbine systems, however, also requires numerical modeling to correctly represent and simulate the fully coupled dynamics. The Modelica® library for Wind Turbines, MoWiT, developed at Fraunhofer Institute for Wind Energy Systems IWES, allows for component-based modeling and can be coupled to a framework, programmed in Python, for automated simulation and optimization. To incorporate the computationally intensive reliability assessment within the highly iterative optimization process, a methodology is developed by which approximate models, in the form of response surfaces, are created ahead of the optimization for a few potential system geometries out of the entire optimization design space; corresponding response surfaces for any other system design are derived based on an interpolation approach; and, finally, the reliability is determined time-efficiently within the optimization procedure, using Monte Carlo simulation.

This methodology is applied to a floating wind turbine to obtain a reliability-based optimized support structure, accounting for uncertainties in environmental conditions directly within the design development and ensuring that the structure, including the mooring lines, fulfills certain reliability constraints. Furthermore, the numerical framework allows developing digital twins by optimizing the numerical model based on measurements. Such digital twins are highly suitable to assess the system condition and estimate, e.g., the damage or remaining lifetime.
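The final reliability step described above — Monte Carlo sampling over an inexpensive response surface instead of the coupled simulation — can be sketched as follows. The polynomial surrogate, the input distributions, and the capacity value are all invented stand-ins, not the MoWiT models:

```python
# Hedged sketch (invented surrogate and distributions, not the MoWiT models):
# once a response surface replaces the coupled simulation, failure probability
# is estimated by cheap Monte Carlo sampling of the uncertain environment.
import random

def response_surface(hs, v):
    """Assumed polynomial surrogate: peak mooring line tension (kN)."""
    return 800.0 + 120.0 * hs + 4.0 * v + 6.0 * hs * v

def monte_carlo_pf(capacity_kn, n=100_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        hs = rng.lognormvariate(0.7, 0.35)  # significant wave height (m), assumed
        v = rng.weibullvariate(10.0, 2.0)   # mean wind speed (m/s), assumed
        if response_surface(hs, v) > capacity_kn:
            failures += 1
    return failures / n

print(f"Pf ~ {monte_carlo_pf(2500.0):.3e}")
```

Each sample costs one polynomial evaluation rather than one coupled aero-hydro-servo-elastic simulation, which is what makes reliability constraints affordable inside an iterative design optimization loop.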

PRESENTER: Gaëtan Blondet

ABSTRACT. The monitoring of offshore structures addresses crucial economic and environmental issues. Simulations are performed to estimate the behavior of offshore structures. However, simulations are based on assumptions about operating conditions, which are not always sufficient to capture all the complexity of the real time-dependent environment and structural properties. Efficient monitoring and maintenance operations rely on data gathered from sensors, which provide the real conditions and behavior of the structure. Simulations may be too expensive and time-consuming to estimate the updated lifespan from a large dataset of measurements. To address these issues, Phimeca and Principia have developed a measurement-based methodology to obtain rapid and updated predictions of a structure's lifespan. This methodology was successfully applied to the monitoring of a riser of an FPSO (Floating Production Storage and Offloading unit). First, the episodes of motion measured on the FPSO over 2 years were used to simulate the damage to the riser. The time series describing these episodes were transformed by applying a combination of machine learning methods, so that the problem dimension was drastically reduced: from initially thousands of variables, only two explain the simulated damage efficiently. Finally, an emulator (metamodel) was trained and validated to be embedded into the FPSO and to estimate the lifespan of the structure during its life. The dimension reduction makes the emulator numerically light enough to be embedded into the structure and to be compatible with constrained systems. This methodology enables offshore structure owners and operators to obtain fast, updated lifespan predictions from embedded lightweight software, without delays due to complex simulations. Decisions can thus be made faster to ensure structural and environmental safety in case of a dangerous event.


ABSTRACT. The Nantes French Maritime Academy (ENSM) has developed computer calculations defining berthing criteria for a boat with a low-friction fender, i.e. a fender allowed to move with the wave. The aim is now to focus on a boat with a high-friction fender, i.e. one able to grip against the boat landing. The objective is to understand how various offshore wind turbine monopiles or floaters mask a maintenance boat from an incident wave coming from the opposite side. The work performed comprises: 1. The calculation of the efforts of a monochromatic wave applied against the boat in the case where she can heave (low friction) or not (high friction). 2. The formula used for calculating the friction coefficient of the fender against the boat landing. 3. The comparison between the results for access against a boat landing with a low-friction or high-friction fender.

08:30-10:10 Session TU1G: Nuclear Industry
Location: Atrium 3
Probabilistic modeling in a Bayesian framework of nuclear containment buildings structural tightness
PRESENTER: Donatien Rossat

ABSTRACT. Structural tightness constitutes one of the main functions of French 1300-1450 MWe nuclear power plant reactor buildings; it is provided by a reinforced and prestressed concrete inner wall without a steel liner, and an outer wall ensuring protection against external effects. Tightness is evaluated through the measurement of the inner wall leakage rate during periodic pressurization tests; the leakage rate should not exceed a regulatory threshold value. Under several physical processes related to concrete ageing and operating loads, structural tightness may evolve during the operational phases. In this context, the development of a numerical methodology for assessing the time evolution of containment building tightness will contribute to improved anticipation of potential repair works, as well as decision aid in the framework of containment structure maintenance.

In recent years, many studies have been performed in order to provide physical modeling approaches for the thermo-hydro-mechanical and leakage (THML) behaviour of concrete in large containment structures. These approaches involve numerous uncertain parameters. However, in-situ observations of the structure's response are available in significant quantity, which allows an inverse uncertainty quantification in a Bayesian framework. It combines a prior state of knowledge and noisy observations of the system response in order to derive an a posteriori state of knowledge summarizing the available information at a given moment of the structure's life. In this context, a Bayesian probabilistic modeling strategy is proposed, aiming at assessing the structural tightness of containment buildings based on a physical THML modeling strategy. The forecasts of mechanical and leakage responses are sequentially updated using Markov Chain Monte Carlo algorithms. In order to accelerate these algorithms, sparse polynomial chaos expansion surrogate models are built. The risk of exceeding regulatory leakage rate thresholds is then assessed in a reliability-based framework. The proposed approach makes it possible to deal with a complex physical model with numerous uncertain parameters, and with several types of in-situ observation data of varying quantity and measurement noise. Application results for a 1:3 scale nuclear containment building mock-up show accurate leakage forecasts with a significant reduction of uncertainties throughout the structure's operational phase.
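The Bayesian updating step — combining a prior with noisy observations via Markov Chain Monte Carlo — can be sketched on a deliberately tiny stand-in problem. The leakage model, data, prior, and noise level below are all invented for illustration (the actual THML model and surrogate are far richer):

```python
# Toy Metropolis-Hastings sketch of Bayesian updating (model, data, and prior
# are invented): a scalar permeability-like parameter k is inferred from noisy
# synthetic leakage observations at several test pressures.
import math, random

def leak_model(k, pressure):
    return k * pressure ** 2            # assumed stand-in physical model

pressures = [1.0, 2.0, 3.0, 4.0]
observed = [2.1, 8.3, 17.8, 32.5]       # synthetic data generated near k = 2
sigma_obs = 0.5

def log_post(k):
    if k <= 0:
        return -math.inf
    lp = -0.5 * ((k - 1.0) / 2.0) ** 2  # weak Gaussian prior centred on 1
    for p, y in zip(pressures, observed):
        lp += -0.5 * ((y - leak_model(k, p)) / sigma_obs) ** 2
    return lp

def metropolis(n=20000, seed=0):
    rng = random.Random(seed)
    k, lp = 1.0, log_post(1.0)
    samples = []
    for _ in range(n):
        cand = k + rng.gauss(0.0, 0.1)      # random-walk proposal
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:
            k, lp = cand, lp_cand
        samples.append(k)
    return samples[n // 2:]                 # discard burn-in

post = metropolis()
print(f"posterior mean k ~ {sum(post) / len(post):.2f}")
```

The data pull the posterior from the prior mean of 1 toward the value that generated the observations, with the spread of the retained samples quantifying the remaining uncertainty; the paper's surrogate models exist precisely because each real log-posterior evaluation would otherwise require a full THML simulation.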


ABSTRACT. Small modular reactors (SMRs) are smaller than conventional reactors and are designed to produce up to 300 MWe. Very small modular reactors (vSMRs), of about 10 to 50 MWe, are a type of SMR. These reactors can be manufactured and assembled at the factory and then transported to the place where they will be installed, which makes them cheaper and faster to build. Plants equipped with this type of reactor can be used in water desalination, heat generation, and energy production in remote locations. As with other nuclear installations, risk analysis is necessary to assist in the design and implementation of vSMRs. Level 2 Probabilistic Safety Assessments (PSAs) of nuclear power plants (NPPs) identify the pathways, magnitude, and frequency of radionuclide release through containment during an accident, usually based on an event tree analysis (ETA). The end states of the event tree provide significant insights into accident prevention and mitigation, pointing to measures with great potential to improve the design and operation of the vSMR. This paper describes the Level 2 PSA of a vSMR, based on ETA, on NUREG/CR-2300 vol. 1, and on the Level 1 PSA of a generic NPP in Low Power and Shutdown operating modes, taking into account the methodological aspects described in IAEA TECDOC 1144. A model was developed with the Computer Aided Fault Tree Analysis System (CAFTA) to quantify the risk during a loss-of-cooling accident in the generic vSMR in Low Power and Shutdown operating modes.
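The quantification behind an event tree analysis reduces to a simple rule: each sequence frequency is the initiating event frequency multiplied by the branch probabilities along its path, and the end states partition that frequency. The sketch below illustrates this with an invented initiating event and invented barriers (not the paper's vSMR model):

```python
# Minimal sketch of event tree quantification (frequencies and branch
# probabilities are invented, not the paper's vSMR data): each sequence
# frequency = initiating frequency x product of branch probabilities.

initiating_freq = 1.0e-4  # loss-of-cooling events per year (assumed)

# (mitigating function, probability of failure) along the tree, assumed
barriers = [("isolation", 0.01), ("spray", 0.05), ("containment", 0.02)]

def event_tree_sequences(init_freq, barriers):
    sequences = {}
    for outcome in range(2 ** len(barriers)):
        freq, path = init_freq, []
        for i, (name, p_fail) in enumerate(barriers):
            failed = bool(outcome >> i & 1)
            freq *= p_fail if failed else (1.0 - p_fail)
            path.append(f"{name}={'F' if failed else 'S'}")
        sequences["/".join(path)] = freq
    return sequences

seqs = event_tree_sequences(initiating_freq, barriers)
worst = "isolation=F/spray=F/containment=F"
print(f"all-barriers-fail sequence: {seqs[worst]:.1e} /yr")
print(f"sum over all sequences:     {sum(seqs.values()):.1e} /yr")
```

The sequence frequencies sum back to the initiating frequency, and in a Level 2 PSA the branch probabilities for each top event would themselves come from fault tree models (as built here with CAFTA).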


ABSTRACT. The operational policy of nuclear facilities, including research reactors, established and practiced by the utility is required to give safety the utmost priority, overriding the demands of production and project schedules [1]. The fault tree analysis approach has been used as a standard probabilistic safety assessment (PSA) tool for the safety evaluation of nuclear facilities; however, concerns have been raised about its capability to treat the coupling that may arise from the dynamic interaction between the control system, human operators, and the controlled process. In this paper, an operational vulnerability identification procedure is proposed for nuclear facilities characterized by a high level of interactive coupling between human operators and digital control systems. In the procedure, Systems-Theoretic Process Analysis (STPA) [2] is used to model the operational process of the systems (control structure), and the interactions between subsystems (control actions and feedback) in the model are identified based on the operating procedures and design specifications. The unsafe control actions (UCAs) related to these interactions are derived with a standardized failure taxonomy. To evaluate the feasibility of the proposed procedure, it was applied to an example system, namely the cold neutron source (CNS) system of the High-Flux Advanced Neutron Application Reactor (HANARO) in the Republic of Korea [3]. As a case study, the UCAs which may lead to a spurious trip of the HANARO reactor due to instability of the hydrogen pressure in the CNS during operation were derived. The UCA lists were examined by HANARO operators, and HANARO experts found important trip scenarios caused by unsafe interaction between human operators and other systems that may require potential procedure or design improvements.

Evaluation of Risk Dilution Effects in Dynamic Probabilistic Risk Assessment of Nuclear Power Plants
PRESENTER: Kotaro Kubo

ABSTRACT. Probabilistic risk assessment (PRA) is an effective way to identify risks at nuclear power plants and is used by various agencies. Dynamic PRA has attracted much attention because it enables analysts to perform a more realistic assessment by reducing assumptions and engineering judgments. Risk dilution is an effect whereby increasing the uncertainty of an input parameter leads to a decrease in the calculated risk [1]. This effect has been reported in the field of risk assessment for the geological disposal of nuclear waste; however, little has been reported on this effect in dynamic PRA of nuclear power plants. In this study, we focus on the risk dilution effect caused by the correlation between parameters and by the parameter distribution in dynamic PRA. Specifically, a dynamic PRA of station blackout (SBO) in a boiling water reactor (BWR) was performed using a dynamic PRA tool called RAPID (Risk Assessment with Plant Interactive Dynamics) [2] and a severe accident analysis code called THALES-2 (Thermal-Hydraulic Analysis of Loss of Coolant, Emergency Core Cooling and Severe Core Damage Code, Version 2) [3]. We evaluated the risk dilution caused by the following two items: (1) the correlation of the failure timing of the high-pressure core injection (HPCI) pump and the reactor core isolation cooling (RCIC) pump; (2) the distribution of the alternating-current (AC) power recovery timing. As a result, it was found that risk dilution caused a difference of about 10 to 20% in the conditional core damage probability (CCDP) in the SBO scenario. The results obtained in this study suggest that risk dilution may occur in dynamic PRA. The findings are useful for future use of dynamic PRA in regulatory decision-making. Keywords: Probabilistic risk assessment, Probabilistic safety assessment, Dynamic PRA.


References
1. R. Wilmot and P. Robinson, The issue of risk dilution in risk assessments. In proceedings of the OECD/NEA Workshop "Management of Uncertainty in Safety Cases and the Role of Risk" (2004).
2. X. Zheng, H. Tamaki, T. Sugiyama and Y. Maruyama, Severe accident scenario uncertainty analysis using the dynamic event tree method. In proceedings of the 14th International Conference on Probabilistic Safety Assessment and Management (PSAM14) (2018).
3. M. Kajimoto, K. Muramatsu, N. Watanabe, M. Funasako and T. Noguchi, Development of THALES-2: a computer code for coupled thermal-hydraulics and fission product transport analyses for severe accidents at LWRs and its application to analysis of fission product revaporization phenomena. In proceedings of the International Topical Meeting on Safety of Thermal Reactors (1991).
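The risk dilution effect the abstract studies can be reproduced with a deliberately simple numerical toy (all values invented, unrelated to the paper's RAPID/THALES-2 analysis): if core damage occurs when AC power recovery takes longer than a critical time, and the nominal recovery time already exceeds that time, then widening the recovery-time distribution moves probability mass below the threshold and lowers the computed conditional core damage probability:

```python
# Numerical toy illustrating risk dilution (all values invented): widening the
# AC-recovery-time distribution around a mean that exceeds the critical time
# *decreases* the computed conditional core damage probability (CCDP).
import math

def normal_sf(x, mu, sigma):
    """Survival function P(X > x) of a normal distribution."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

t_critical = 6.0   # hours until core damage without AC power (assumed)
mu_recovery = 7.0  # mean AC recovery time, above the critical time (assumed)

for sigma in (0.5, 1.0, 2.0, 3.0):
    ccdp = normal_sf(t_critical, mu_recovery, sigma)
    print(f"sigma = {sigma:.1f} h -> CCDP = {ccdp:.3f}")
```

The CCDP falls monotonically as sigma grows (from about 0.98 at sigma = 0.5 h to about 0.63 at sigma = 3 h here), which is why simply inflating input uncertainty in a dynamic PRA can make the plant look safer than a best-estimate analysis would.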

Multi-Step Prediction Algorithm for Critical Safety Parameters at Nuclear Power Plants Using BiLSTM and AM

ABSTRACT. In abnormal or emergency situations at nuclear power plants (NPPs), operators are expected to recognize the situation and sometimes to take effective safety measures quickly. In these situations, appropriate situation awareness affects the effective mitigation of events. In particular, Level 3 situation awareness, i.e., the prediction of future plant behavior, is critical, yet it is one of the most difficult tasks for operators. To support the operators' situation awareness, some studies have proposed prediction algorithms using artificial intelligence techniques. However, those methods focused on single-step prediction and thus could not perform long-term prediction. Multi-step prediction is known to be difficult and challenging because of the lack of information and the accumulation of uncertainty and error. In this light, this study proposes an algorithm using a sequence-to-sequence network that combines bidirectional long short-term memory (BiLSTM) and an attention mechanism (AM) for multi-step parameter prediction. The AM is an excellent method for handling serialized data, as in speech recognition, machine translation, and part-of-speech tagging; it makes the neural network focus more on crucial temporal information by assigning higher weights. BiLSTM is used for extracting temporal features of the time-series data. The suggested algorithm is tested to demonstrate its prediction performance. The algorithm is implemented using a compact nuclear simulator for a Westinghouse 930 MWe NPP.

08:30-10:10 Session TU1H: Aeronautics and Aerospace
Location: Cointreau
Research on Reliability Management and Control Method of Aeronautical Product Development Based on Systems Engineering Management Plan

ABSTRACT. The SEMP (systems engineering management plan) is a management plan that controls the entire product development process. The development risk of existing aerospace products is closely related to their complexity and maturity, which are mainly reflected in requirements demonstration, design analysis and test verification. Therefore, based on SEMP theory, this paper puts forward a quality control scheme covering these three aspects and establishes a management and control method for quality characteristics of aerospace product development, such as reliability. It can help development enterprises manage and control their product development process more comprehensively, thereby reducing development risk and improving the overall quality of aviation products. The method can also provide a new approach for the development and control of future aviation products.

AUTOSAFE: Automatic Fault Tree Synthesis for Cyber-Physical Systems

ABSTRACT. Safety analysis is a key pillar in demonstrating the compliance of aircraft and other complex cyber-physical systems with the safety requirements of the certification authorities. Fault tree analysis is a well-established and accepted methodology for this purpose. However, with the growing complexity of systems, consisting of software and complex electronic hardware and their inter-dependencies, it is becoming increasingly challenging and costly to conduct fault tree analysis manually. It is not only time consuming and error prone, but the quality (e.g. consistency and correctness) of the analysis is also highly dependent on the ability of the individual engineer. To address this issue, the software tool AUTOSAFE is being developed to automate the fault tree generation process. In AUTOSAFE, a domain-specific model is used to model the system's hardware structure and functions, as well as the failure propagation, and an algorithm automatically generates fault trees. Key issues for the trustworthiness of generated fault trees are completeness and understandability. Completeness is addressed with semi-automated inclusion of external events from an external events database into the automatically generated fault tree. Understandability is addressed with a novel requirements model and strict naming conventions that are automatically applied during fault tree generation. In addition, a web-based tool architecture provides multi-user modeling. AUTOSAFE will not only decrease the time required to develop fault trees but also improve their consistency and correctness. In this paper, the concept and methods of the AUTOSAFE tool are introduced. Additionally, the workflow of system modelling, failure propagation modelling, and auto-generation of the fault tree is demonstrated with an exemplary system study.
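To illustrate the principle (not AUTOSAFE itself, whose modelling language and algorithm are not detailed here), a failure-propagation description can be traversed backward from a top event to synthesize a fault tree. The toy sketch below assumes a purely series system, where a component's output is lost on an internal fault OR on the loss of any of its inputs:

```python
# Hypothetical miniature of the idea: each component declares which inputs
# feed its output; traversing these declarations backward from a top event
# yields an OR-structured fault tree with internal faults as basic events.

def build_fault_tree(component, system):
    """Return a nested ("OR", children) fault tree for loss of `component`'s output."""
    internal = f"internal failure of {component}"
    inputs = system.get(component, [])
    if not inputs:
        return internal                      # basic event: source component
    # Output is lost if the component fails internally OR any input is lost
    return ("OR", [internal] + [build_fault_tree(i, system) for i in inputs])

# Toy system: sensor -> controller -> actuator
system = {
    "actuator": ["controller"],
    "controller": ["sensor"],
    "sensor": [],
}
tree = build_fault_tree("actuator", system)
```

A real tool must of course also handle AND-redundancy, shared causes and external events, which is exactly where automated consistency pays off over manual construction.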

PRESENTER: Alise Midtfjord

ABSTRACT. Contamination of runway surfaces with snow, ice, or slush poses potential economic and safety threats for the aviation industry during the winter season. The presence of these materials reduces the available tire-pavement friction needed for retardation and directional control, as discussed in Giesman (2005) and Klein-Paste et al. (2012). In order to activate appropriate safety procedures, pilots need accurate and timely information on the actual runway surface conditions.

Previous research on how available runway friction is affected by weather conditions and runway contamination has mainly been limited to engineering- or physics-based models. The complexity of the physical relationships controlling the surface friction, and their dependency on each other, makes this a difficult task. Machine learning methods have on several occasions proven able to simplify and model complex physical phenomena with good accuracy when domain knowledge is included. In this paper, we build a model using the state-of-the-art boosting algorithm XGBoost, introduced by T. Chen and C. Guestrin (2016), to predict runway conditions using weather data and runway reports. The model is trained to predict the runway surface conditions represented by the tire-pavement friction coefficient. This coefficient is estimated using flight data from the Quick Access Recorder of Boeing 737-600/700/800 NG airplanes, as further described in Midtfjord and Huseby (2020). Physical knowledge of the relationship between runway friction and environmental factors is included in the model by incorporating weather trends over time and engineered environmental variables. Our model is compared to a system currently in use at several Norwegian airports, a scenario-based model introduced by Huseby and Rabbe (2012). That model was created based on meteorological and runway knowledge and was further developed in Huseby and Rabbe (2018) using a decision-theoretical approach. The machine learning model is tested and compared using cross-validation, and the results show the strong ability of machine learning to find and use patterns to model physical phenomena.
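Since XGBoost itself is a large library, the sketch below only illustrates the boosting idea on which it rests: regression trees (here, single-split stumps) fitted sequentially to residuals, damped by a learning rate. All data and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split stump minimizing squared error on a 1-D feature."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def boost(x, y, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: fit stumps to current residuals."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(rounds):
        stump = fit_stump(x, y - pred)   # fit the negative gradient (residual)
        pred += lr * stump(x)
        stumps.append(stump)
    base = y.mean()
    return lambda q: base + lr * sum(s(q) for s in stumps)

# Toy data: friction coefficient dropping with contamination depth
x = np.linspace(0, 10, 40)
y = 0.8 - 0.05 * x
model = boost(x, y)
```

XGBoost adds regularization, second-order gradients and engineered tree construction on top of this loop, but the residual-fitting structure is the same.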

A Data-Driven Method of Predicting Hard Landing Based on RFECV and XGBoost

ABSTRACT. An aircraft hard landing means that the impact load of the landing gear on the ground exceeds the specified limit at the moment of landing. As a common type of flight accident, hard landings pose a serious threat to flight safety. The mass production and storage of flight data is also a challenge for existing analysis methods. In this study, we took the flight parameter data recorded during the landing process as the research object and established a hard landing prediction model based on historical data and machine learning. First, we established a flight parameter selection method combining data cleaning and recursive feature elimination with cross-validation (RFECV), aiming to identify the key parameters that affect the landing state and to quantify parameter importance. Second, we built a hard landing prediction model based on the ensemble learning method XGBoost. The evaluation metrics show that the proposed method can effectively predict aircraft hard landings and thus provide safety warnings. The trained model can be used in flight practice: when the key parameters of the aircraft are input, the model can assist the pilot in adjusting the aircraft attitude and prevent the occurrence of a hard landing.
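The RFECV step can be illustrated with a simplified stand-in. Instead of tree-based importances, the toy sketch below ranks features by least-squares coefficient magnitude and keeps eliminating the weakest feature while the cross-validated error does not degrade; the data are synthetic, and only features 0 and 3 carry signal:

```python
import numpy as np

def cv_score(X, y, k=5):
    """Mean squared error of a least-squares fit under k-fold cross-validation."""
    idx = np.arange(len(y))
    errs = []
    for f in range(k):
        test = idx[f::k]                       # interleaved folds
        train = np.setdiff1d(idx, test)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((X[test] @ w - y[test]) ** 2))
    return np.mean(errs)

def rfecv(X, y):
    """Drop the feature with the smallest |coefficient| while CV error improves."""
    keep = list(range(X.shape[1]))
    best = cv_score(X[:, keep], y)
    while len(keep) > 1:
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        trial = [f for i, f in enumerate(keep) if i != np.argmin(np.abs(w))]
        score = cv_score(X[:, trial], y)
        if score > best:                       # elimination hurts: stop
            break
        keep, best = trial, score
    return keep

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
selected = rfecv(X, y)
```

The selected set retains the two informative flight parameters; with XGBoost the same loop would simply rank features by gain-based importance instead of coefficient size.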

How to enhance safety culture and safety management at airports – the Safety Culture Stack

ABSTRACT. Safety culture relates to how an organization values and prioritizes safety: for example, in terms of shared norms about the importance of safety, management commitment to safety, and systems for ensuring organizations capture and learn from incidents. Research in aviation has tended to focus on the norms and values within specific industry sectors: for example, airlines, air traffic control, and airports. Yet a systems approach to safety management recognizes that safety is often a product of interactions between the different organizations that need to collaborate in order to deliver a service. For aviation, this means that the safety cultures of different organizations within the aviation system should not be studied in isolation: rather, an approach should be taken where organizations are considered together. This is most obviously the case at an airport, where many key players – the airport authority, airlines, air traffic services and ground handling services – must interact. At a regional airport this may entail fifty or so companies who must collaborate to ensure safe and efficient operations; at a larger international airport it can easily be two hundred and fifty organisations. Surveying every single organization's safety culture independently would be an arduous and possibly never-ending task, and would miss the essential interactions and safety interfaces where the roots of accidents and resilience are often found. A more integrative, encompassing, and yet agile approach is needed.

In 2016, utilizing the validated EUROCONTROL methodology used to measure safety culture in air traffic management, an adapted version of the safety culture survey was distributed across airlines (cabin crew, pilots, support staff), air traffic control (air traffic controllers, engineers, support staff), airport staff, ground handling companies, and other key participants (e.g., emergency services) at one of London's regional airports (London Luton Airport). Focus groups with staff and management from the whole spectrum of airport activities were also used to dive deeper into the issues raised by the survey, leading to an Action Plan for improving the safety culture of the airport as a whole. A number of these actions aimed to help certain airport companies learn from others who exhibited best practices in key areas such as Just Culture, Reporting and Learning, etc. The Luton Safety Stack, as it has become known, has transformed 'the way safety is done' at the airport: competing companies (e.g. ground handling services) now work together for safety; each company, no matter how small, feels it has a 'voice' in safety discussions; and a Just Culture Framework encompassing more than seventy organisations at the airport has been implemented. What started out as a safety culture initiative has also changed a number of safety management processes related to incident management and risk reduction, as well as collaborative use of safety-related equipment and resources, and harmonized procedures across the airport for all ground handling activities.

The Safety Stack process has been applied successfully to a second UK airport, and is scheduled for application to two international airports in 2021. The paper will focus on how the Stack is formed, how it works, and the tangible benefits seen so far at the two airports that now have an operational Safety Stack.

08:30-10:10 Session TU1I: Model Based Safety Assessment
Location: Giffard
Multi-core processor: Stepping inside the box
PRESENTER: Kevin Delmas

ABSTRACT. The last decade has seen the emergence of multi-core and many-core processors. They have become mainstream in the embedded market, and there is no doubt that the next generation of aircraft will rely on these technologies. The impact of relying on multi-core processors, especially with regard to safety assessment, is a major issue. Indeed, currently, a processor is considered as a black-box component where any single internal failure leads to the loss of all executed software. Carrying this assumption over to multi-core processors would lead to an over-dimensioning of the overall platform, and there would therefore be no benefit in using them. We believe that it is necessary to open the box and see such a processor as a sub-system. We introduce a formal modeling framework capturing the main characteristics of software/hardware failure propagation. This framework is applied to a simplified UAV control use-case.

Model-Based Safety Assessment of an Insulin Pump System with AltaRica 3.0
PRESENTER: Julien Vidalie

ABSTRACT. The safety analysis of systems is a classical field of engineering, with well-known processes and tools (e.g. fault trees, event trees, reliability block diagrams, or combinations of them). Nevertheless, the increasing complexity of systems has to be handled with appropriate approaches and tools: for example, the so-called Model-Based Safety Assessment (MBSA) approach. The objectives are, firstly, to consider the system at a higher level, in order to be close to the description of system architectures, and secondly, to be more precise about behavioral features. AltaRica 3.0 is such an MBSA solution. It is an object-oriented, formal modelling language dedicated to probabilistic risk and safety analyses of complex technical systems [1]. It comes with a versatile set of assessment tools to compute safety and risk indicators on models: compilers to fault trees, stepwise and stochastic simulators, etc. In this publication, we present the modelling and assessment of an insulin pump [2]. Also called an "artificial pancreas", this system is a typical example of a cyber-physical system, and it helps reduce the constraints that diabetic patients face because of their illness. The system is composed of a continuous glucose sensor, which monitors the patient's glucose rate, a computing module that sends orders, and a dosing device delivering insulin to the patient. These three components form a closed loop and communicate through a wireless connection. The safety of this system has to be finely analyzed, since failures may result in severe harm to the patient. We show how the expressive power of AltaRica 3.0 eases the modelling activity, e.g. by using modelling patterns for monitored systems. The safety assessment is then carried out with different kinds of tools, depending on whether the static or dynamic view of the designed model is considered.

Efficient Modeling of large Markov chains models with AltaRica 3.0
PRESENTER: Michel Batteux

ABSTRACT. Markov chains are one of the modeling formalisms used in reliability engineering. Although powerful from a mathematical point of view, one of their big issues is the design of models for large-scale systems. In fact, designing a Markov chain of a system with several components, each of which may be in several states, is a substantial amount of work, and there are no structural constructs to design such a model efficiently (e.g. composition, synchronization, etc.). In this article, we present how the AltaRica 3.0 modeling language can be used to design large (discrete- or continuous-time) Markov chains efficiently. We consider an example composed of several production units of a petrochemical plant. Such a system is composed of combinations of series-parallel components: parallel lines with components connected in series. Each component may be in different states, taking into account degradations and failures. Each line has different operational modes, e.g. in operation, standby or under maintenance. Stochastic or deterministic events make the components change state and the lines change mode. The number of required lines depends on the demand and also follows stochastic delays. Finally, there is a limited number of repairers, each of whom can repair only one line at a time. We show that the design of the model is very efficient thanks to the advanced structural constructs of the AltaRica 3.0 modeling language. We also present how the model can easily be extended with new lines. Finally, we use the assessment tools available for AltaRica 3.0, e.g. the stochastic simulator, to evaluate the availability of the system over a given mission time.
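Independently of AltaRica, the indicator computed at the end (availability over a mission time) can be illustrated with a minimal stochastic simulation of a single repairable unit, i.e. a two-state continuous-time Markov chain with exponential failure and repair times. The rates and mission time below are arbitrary toy values:

```python
import numpy as np

def simulate_availability(lam, mu, t_mission, n_runs=2000, seed=42):
    """Monte Carlo estimate of the average availability of a repairable unit
    (exponential failures with rate lam, exponential repairs with rate mu)."""
    rng = np.random.default_rng(seed)
    up_time = 0.0
    for _ in range(n_runs):
        t, up = 0.0, True
        while t < t_mission:
            # Draw the dwell time in the current state, truncated at mission end
            dwell = rng.exponential(1.0 / (lam if up else mu))
            dwell = min(dwell, t_mission - t)
            if up:
                up_time += dwell
            t += dwell
            up = not up                       # alternate failed/repaired
    return up_time / (n_runs * t_mission)

lam, mu = 0.01, 0.1   # failure and repair rates, per hour
est = simulate_availability(lam, mu, t_mission=1000.0)
steady = mu / (lam + mu)   # steady-state availability, for comparison
```

For a single unit the estimate can be checked against mu/(lam + mu); the point of AltaRica-style structural constructs is precisely that multi-line, multi-mode systems with shared repairers admit no such closed form and must be simulated from a compositional model.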

AltaRica 3.0 modeling pattern for production systems availability assessment

ABSTRACT. The design and assessment of production models is often delicate because the production of a unit may depend not only on its internal state but also on flows circulating upstream and downstream of the production line. In this article, we present a modeling pattern to address this issue. It consists in using controllers working on three flows: a diagnosis flow moving forward from source units to a controller, a command flow moving backward from the controller to the source units, and finally a production flow moving forward from source units to target units. The production depends on the command, which itself depends on the diagnosis. We present the implementation of this new modeling pattern using AltaRica 3.0, a high-level formal modeling language dedicated to risk and performance assessment. We demonstrate, by means of an example of a production system, the ability of this pattern to represent such a system. Finally, we show how to evaluate the system's availability over a given period of time using the assessment tools available for AltaRica 3.0, e.g. the stochastic simulator.
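A plain-Python rendering of the three-flow pattern may help fix ideas (this is an illustration of the pattern, not AltaRica code): each source reports a diagnosis, the controller converts the diagnoses into commands that fill the demand greedily, and production then follows the commands:

```python
# Illustrative sketch of the diagnosis/command/production pattern.

class Source:
    def __init__(self, capacity, working=True):
        self.capacity, self.working = capacity, working
    def diagnosis(self):                      # forward flow: state report
        return self.capacity if self.working else 0.0
    def produce(self, command):               # forward flow: actual output
        return min(command, self.diagnosis())

class Controller:
    def __init__(self, demand):
        self.demand = demand
    def commands(self, diagnoses):            # backward flow: setpoints
        remaining, cmds = self.demand, []
        for d in diagnoses:
            take = min(d, remaining)          # fill demand greedily
            cmds.append(take)
            remaining -= take
        return cmds

sources = [Source(5.0), Source(5.0, working=False), Source(5.0)]
ctrl = Controller(demand=8.0)
cmds = ctrl.commands([s.diagnosis() for s in sources])
output = sum(s.produce(c) for s, c in zip(sources, cmds))
```

Here the failed source reports zero capacity, so the controller shifts its share of the demand to the remaining line: exactly the command/diagnosis coupling the pattern is meant to capture.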

PRESENTER: Kester Clegg

ABSTRACT. To gain the benefits of MBSA within MBSE, traditional fault logic models need to align closely with the system's functional decomposition. To achieve this in a consistent fashion, we have developed a new SysML safety profile based on the functional blocks provided to us by system design. Functional deviations derived from the FHA (which is modeled in SysML using the same profile) are propagated between blocks through ports defined by the system engineers, and the associated fault logic (including hardware-generated base events) is modeled in standard internal block definition diagrams. Functional failures represent only part of the failure model (which will always include events and scenarios outside the functional requirements), but this ensures that, for functional deviations at least, fault logic models follow the functional hierarchy and use the same design blocks as the system engineers. It means that specification changes to functional blocks or ports can be picked up and flagged to the safety team as requiring inspection, and it enables direct traceability within the SysML repository of derived safety requirements from the PSSA / FHA through to the fault logic used to demonstrate acceptable mitigation of risk. We demonstrate the new profile's use in the context of a gas turbine control system design, and discuss the advantages and shortfalls it presents in an industrial setting.

08:30-10:10 Session TU1J: Natural Hazards
Location: Botanique
Exploring Sensitivity Analysis to Support Urban Flood Risk Prioritization under a Multidimensional Perspective

ABSTRACT. Flood accident reports worldwide have demonstrated that many factors (such as climate change, urbanization, and population density) act as catalysts aggravating flood impacts in urban systems. Decision models should therefore account for these complexities, with decision makers' (DMs') preferences taking multiple criteria into account to plan adaptive measures and reduce urban flood damages from a long-term perspective. Given this context, sensitivity analysis (SA) is a useful tool that helps DMs rely on the recommendations provided by quantitative models. In practice, modeling this problem is a complex task due to the uncertainty inherent in defining parameters, and SA reports allow DMs to base their decisions on simulation analysis. Thus, this paper undertakes a global SA based on a multidimensional decision model, which uses Multi-Attribute Utility Theory and Decision Analysis, to prioritize urban areas according to flood risk. The SA simulated two groups of parameters: (i) the uncontrolled factors, i.e., the influence of climate effects and urban population growth on the estimation of rainfall patterns and flood consequences, respectively; and (ii) the DM's preferences, meaning the utility functions and the compensatory relations among criteria. The SA report uses statistical and graphical visualization tools to show how robust the original risk ranking is with respect to the parameter settings. Furthermore, additional discussions detail how DMs can benefit from this analysis when implementing strategic decisions against floods.

U.S. National Risk Index - A Foundational Review
PRESENTER: Seth Guikema

ABSTRACT. The U.S. recently created the National Risk Index, a tool intended to provide a nation-wide assessment of natural hazard risks. The method provides county- and census-tract-level estimates of economic losses, a measure of social vulnerability, and a measure of community resilience. These three measures are then combined into a "risk" estimate. This paper provides a critical perspective on the National Risk Index, reviewing it in three areas: (1) the conceptualization and definition of risk used, (2) what is actually being measured, and (3) how those quantities are actually measured. We find that there is significant room for improvement in all three areas.

Large scale landslide and flooding hazard susceptibility assessment using semi-automated frequency ratio (FR) model
PRESENTER: Lena Schäffer

ABSTRACT. As extreme weather events become more frequent and the world's population grows, an increasing number of built areas and critical infrastructure networks are challenged by natural hazards like heavy rain, urban flooding or landslides. At the same time, the quantity and quality of remote sensing data delivering earth observation products is continuously increasing, and such data are widely accessible. With the help of high-resolution, open-access data and fast engineering approaches, new options arise to investigate objects at risk. This study presents a semi-automated fast engineering approach, deploying only open-access tools and data, to create a large-scale hazard susceptibility assessment. The model objectives include the ability to rapidly identify critical areas, from which the exposure of critical infrastructures to hazards can then be derived. A bivariate frequency ratio (FR) model is applied for flood and landslide susceptibility mapping on two study sites within the German federal state of Bavaria. Flood and landslide conditioning factors are selected based on performance criteria. For improved comparability of the results, different normalization approaches are used. The resulting hazard susceptibility maps are validated in both cases against hazard inventories and by statistical analysis of the area under the receiver operating characteristic curve (AUROC). Furthermore, the susceptibility is partitioned into five defined zones. The results lead to the following conclusions: (i) the model is able to produce overall sufficient predictive accuracy, (ii) a higher number of parameters does not necessarily lead to enhanced model performance, and (iii) a higher resolution of the digital elevation model (DEM) can significantly improve the predictive performance. Moreover, the automation is a large benefit for the preparation and validation of the model, independently of the employed resolution.
Finally, the possibility of upscaling is discussed by considering varying environmental characteristics. The FR model is expected to prove a very efficient tool for local government administrators, researchers and planners constructing flood or landslide mitigation schemes.
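The bivariate FR statistic itself is simple: for each class of a conditioning factor, it is the share of inventory (hazard) cells falling in the class divided by the share of all cells falling in the class, with FR > 1 indicating higher susceptibility. A minimal sketch with made-up raster data:

```python
from collections import Counter

def frequency_ratio(factor_class, hazard):
    """FR per class: share of hazard cells in the class divided by the
    share of all cells in the class.

    factor_class: list of class labels, one per raster cell.
    hazard:       list of 0/1 flags (1 = recorded landslide/flood cell).
    """
    n_cells = len(factor_class)
    n_hazard = sum(hazard)
    cells = Counter(factor_class)
    haz = Counter(c for c, h in zip(factor_class, hazard) if h)
    return {c: (haz.get(c, 0) / n_hazard) / (cells[c] / n_cells) for c in cells}

# Toy raster: slope classes for 10 cells, 4 of them hazard-inventory cells
slope = ["low", "low", "low", "low", "low",
         "steep", "steep", "steep", "steep", "steep"]
event = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
fr = frequency_ratio(slope, event)
```

A susceptibility map is then obtained by summing (possibly normalized) FR values of each cell's classes across all conditioning factors; the semi-automation in the study lies in repeating this per factor and validating against the inventory.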


ABSTRACT. The increasing population concentration in coastal cities exacerbates the vulnerability of infrastructure to hurricane damage. The subject of this paper is the vulnerability of residential mid/high-rise buildings (MHRB), 4 stories or higher, to hurricane-induced interior and content damage from wind-driven rain ingress. A new physics-based methodology was developed to extend previous work on low-rise buildings [1]. The methodology combines estimates of impinging and surface run-off wind-driven rain, envelope defects and breaches, interior water distribution and propagation, and component cost analyses to produce realistic estimates of interior and contents damage in mid/high-rise buildings. The physics of the mechanisms of rainwater ingress, distribution and propagation provides the basis for a probabilistic vulnerability model (PVM). At the heart of the PVM is a Monte Carlo simulation engine which runs simulations for combinations of wind speed and direction for a variety of building classes. Key parameters (component capacity, water ingress, etc.) are treated as random variables. The resulting vulnerability and fragility curves, when used in a catastrophe model, can lead to improved loss projections and facilitate the evaluation of the effectiveness of mitigation measures. The paper will describe the model, its similarities to and differences from a companion residential low-rise model (< 4 stories), and present vulnerability functions. It will also describe its different variables with their uncertainty, and provide an estimate of the overall uncertainty attached to the process.

PRESENTER: Adriana Pacifico

ABSTRACT. This paper discusses Italian seismic risk under the assumption that the existing building portfolio is replaced by new code-conforming structures. The seismic risk is quantified, at the municipality scale, via the evaluation of the failure rate per building class. This requires: (i) the probability that the structures fail for a given ground motion intensity value, that is, the fragility functions; and (ii) the hazard curves resulting from probabilistic seismic hazard analyses. The adopted fragility functions come from the Italian research project RINTC – Rischio Implicito delle Strutture progettate secondo le NTC, in which a large set of buildings was designed for three sites representative of different levels of seismicity. The Italian municipalities were accordingly divided into three seismic classes, and it was assumed that the fragility functions from RINTC are representative of newly designed (residential) structures, according to a replacement criterion established to associate the structural typologies of the existing buildings with those considered in the project. The failure rates per building typology were computed first, combining the structural fragility functions and the computed hazard curves. Then, the failure rates were averaged over the building typologies and the percentages of soil conditions characterizing each municipality. The results, presented in the form of maps, show that the fragility of masonry structures has the main impact on the maps, which are also affected by the identification of the hazard and soil classes of the sites.
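Ingredients (i) and (ii) are combined in the standard way: the failure rate is the fragility integrated against the decrements of the hazard curve. The sketch below uses a hypothetical power-law hazard curve and a lognormal fragility, not the RINTC values:

```python
import numpy as np
from math import erf

def failure_rate(ims, hazard, median, beta):
    """Failure rate: lognormal fragility integrated against the (negative)
    slope of the hazard curve, lambda_f = -integral of P(f|im) dH(im).

    ims:    increasing intensity-measure grid
    hazard: annual exceedance rates at ims (decreasing)
    median: median capacity of the fragility; beta: lognormal std
    """
    z = np.log(ims / median) / (beta * np.sqrt(2.0))
    frag = 0.5 * (1 + np.array([erf(v) for v in z]))   # lognormal CDF
    dh = -np.diff(hazard)                              # hazard decrements
    fmid = 0.5 * (frag[:-1] + frag[1:])                # midpoint fragility
    return float(np.sum(fmid * dh))

ims = np.linspace(0.01, 3.0, 300)        # e.g. spectral acceleration in g
hazard = 1e-3 * (0.1 / ims) ** 2         # hypothetical power-law hazard curve
rate = failure_rate(ims, hazard, median=1.0, beta=0.4)
```

The municipality-scale maps in the paper are, in effect, this integral repeated per building typology and site hazard/soil class and then averaged with the typology weights.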

08:30-10:10 Session TU1K: Accident and Incident Modeling
Location: Atrium 1
Man to Machine (MTM) Accident Model Based on Multiple Regression Analysis of Process Industry Machineries through a Scientific Questionnaire Design

ABSTRACT. Accidents in process industries are increasing rapidly, affecting both employees and productivity. The use of safety-critical equipment in process industries increases the risk of major accidents. A questionnaire was designed to evaluate the risk of machinery-related accidents to employees in a 2700 MW coal-fired power plant. It comprises 31 statements covering machinery reliability, equipment aging and operational control method elements, plus an open-ended statement for suggestions from the respondents chosen in the workplace. A reliability analysis was then performed, and a Cronbach's alpha value of 0.9103 was recorded for the questionnaire, indicating its internal consistency. Data analysis shows that the accident level is higher in the area of machine reliability, and that employees have a lower awareness of machinery reliability compared to equipment aging and operational control methods. A Man to Machine (MTM) model is designed based on these results, capturing the relationship between employee and equipment using Multiple Regression Analysis (MRA). The model involves the accident root causes of a process equipment item and the major hazards in the area of the equipment. An R2 value of 0.559 is obtained from the MRA statistics, indicating that the machine reliability statements contribute most to the increase in the accident rate of the power plant. The MTM model is tuned to support the reduction of accidents related to machine reliability and to increase awareness among employees.
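The reported reliability statistic can be reproduced on any response matrix: Cronbach's alpha compares the sum of item variances to the variance of the total score. The sketch below uses made-up Likert responses, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of row totals
    return k / (k - 1) * (1 - item_var / total_var)

# Toy data: 5 respondents answering 4 Likert-type statements
answers = [[4, 5, 4, 4],
           [2, 2, 3, 2],
           [5, 4, 5, 5],
           [3, 3, 3, 3],
           [1, 2, 1, 2]]
alpha = cronbach_alpha(answers)
```

Values above roughly 0.9, as in the study's 0.9103, indicate that the statements measure a common underlying construct consistently.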

Safety in Road Tunnels: Analysis of Fire Accident Location inside the Gran Sasso Tunnel in Italy
PRESENTER: Fabio Borghetti

ABSTRACT. The aim of this research was to analyze the fire events that occurred inside the Gran Sasso unidirectional highway tunnel in Italy. The Gran Sasso tunnel consists of two parallel one-way tubes with a length of about 10100 meters. The tunnel, managed by Strada dei Parchi S.p.A., is one of the 14 tunnels belonging to the A24 and A25 motorways subject to the application of the European Directive 2004/54/EC. This Directive requires a Quantitative Risk Analysis (QRA) for tunnels belonging to the Trans-European Road Network that are longer than 500 meters; the QRA for road tunnels has to be based on both traffic and accident data (European Directive, 2004). This study analyzed the fires and incipient fires that occurred in the Gran Sasso tunnel during the period 2007-2020, considering the left tube, which consists of an uphill section of about 4500 meters (with slope between +1.6% and +2%) and a slightly downhill section (-0.2%) for the remaining part of the tube. The fire events involved both light and heavy vehicles. For each fire event, the possible causes and its location along the tunnel were analyzed. Of the 12 events analyzed (6 fires and 6 incipient fires), 11 occurred in the uphill section while only one occurred in the downhill section. The data were compared with literature studies that consider the different zones of tunnels where accidents occur (Pervez et al., 2020). The results of this research can be used for the implementation of risk analyses according to the European Directive (Borghetti et al., 2019; Beard and Carvel, 2005), but they can also represent a useful tool for tunnel managers to evaluate a possible increase in the number, and changes to the location, of fire detection devices and/or fire suppression/control systems.

References
Beard, A., Carvel, R. (2005). The Handbook of Tunnel Fire Safety. Thomas Telford Publishing, UK.
Borghetti, F., Cerean, P., Derudi, M., Frassoldati, A. (2019). Road Tunnels: An Analytical Model for Risk Analysis. PoliMI SpringerBriefs, Springer. ISBN: 978-3-319-49516-3, ISSN: 2282-2577.
European Directive (2004). 2004/54/EC of 29 April 2004 on Minimum Safety Requirements for Tunnels in the Trans-European Road Network.
Pervez, A., Huang, H., Han, C., Wang, J., Li, Y. (2020). Revisiting freeway single tunnel crash characteristics analysis: A six-zone analytic approach. Accident Analysis and Prevention, 142. doi: 10.1016/j.aap.2020.105542.

Time to Failure Estimation of Cryogenic Liquefied Gas Tanks Exposed to a Fire
PRESENTER: Federico Ustolin

ABSTRACT. Fuels that are gaseous at atmospheric conditions can be liquefied to increase their density for both storage and transportation. There are two main ways to liquefy a gas: by increasing its pressure (e.g. propane) or by reducing its temperature (e.g. liquid hydrogen, LH2, and liquefied natural gas, LNG). The latter method converts the fuel into a cryogenic fluid, so extremely well insulated tanks (double-walled with a vacuum jacket) are required to limit boil-off gas formation. Tanks of this type are already employed in a few applications, such as the maritime sector, and their use is expected to grow in the near future. These hazardous materials can lead to major accidents even in the liquid phase, so a risk assessment is strictly necessary. For instance, a fire might be accidentally ignited in the vicinity of the cryogenic tank. Such an event may cause a loss of integrity of the vessel, with a consequent catastrophic rupture and many undesired consequences. For this reason, estimating the potential time to failure (TTF) of the container in the worst-case scenario is critical: it can support the intervention and training of emergency responders as well as the planning of evacuation procedures. Computational fluid dynamics (CFD) codes are very accurate tools for determining the pressure build-up and hence the tank TTF; on the other hand, CFD is usually complicated to set up and computationally demanding. In this study, an analytical model based on well-known thermodynamic equations was developed to estimate the heat transfer between the cryogenic tank and the surrounding fire in the worst-case scenario. The thermal conductivity of the double-walled tank insulation, and its modification under the operating conditions, is one of the most complex and critical parameters to evaluate.
Different uncertainties regarding the vessel insulation are highlighted in the manuscript. The outcomes of the model were validated against experimental results. Additional experimental tests are necessary to thoroughly validate the model and to understand the behaviour of cryogenic vessels exposed to a fire. Such tests will be conducted for LH2 within the Safe Hydrogen Fuel Handling and Use for Efficient Implementation (SH2IFT) project.
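As a rough illustration of the kind of analytical estimate the abstract describes, the sketch below integrates a lumped-parameter energy balance until a burst pressure is reached. The conduction-limited heat flux, the linear pressure-per-energy coefficient `dp_du`, and all numerical values are hypothetical assumptions for illustration, not the authors' model:

```python
# Illustrative lumped-parameter estimate of tank time to failure (TTF)
# under an engulfing fire. All parameters are hypothetical placeholders.

def time_to_failure(q_fire, area, k_ins, thickness, t_fire, t_tank,
                    dp_du, p_start, p_burst):
    """Integrate the heat inflow through the insulation until the
    burst pressure is reached; returns the TTF in seconds."""
    p, t, dt = p_start, 0.0, 1.0
    while p < p_burst:
        # heat flow limited by conduction through the insulation layer
        q = min(q_fire, k_ins * area * (t_fire - t_tank) / thickness)
        p += dp_du * q * dt       # assumed linear pressure rise per joule
        t += dt
        if t > 1e6:               # no failure within the time horizon
            return float('inf')
    return t
```

Increasing `thickness` (better insulation) lengthens the estimated TTF, which is the qualitative behaviour targeted by the study's focus on insulation conductivity.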


ABSTRACT. This paper presents the experimental validation of an innovative CFD approach, called SBAM ("Source Box Accidental Model"), developed in ANSYS Fluent and aimed at a more efficient characterisation of accidental high-pressure gas releases in congested environments (e.g. offshore Oil&Gas, nuclear plants). The experimental setup, the methodology and a preliminary comparison between CFD and experimental data are described. The campaign was carried out in the SEASTAR-WT wind tunnel, built at the Environment Park in Turin (Italy) and completed at the beginning of October 2020. This subsonic, open-cycle tunnel, with a total installed power of approximately 100 kW, provides air speeds between 0 and ~8 m/s in the test chamber. A 1:10 scaled Oil&Gas platform mockup, equipped with flow and gas sensors, was built and installed inside the wind tunnel, making it possible to reproduce, through a custom scaling procedure, conditions of dynamic similarity with the real cases. Preliminary tests were performed to calibrate the tunnel and to characterise sensor behaviour and accuracy. The core of the campaign was devoted to a set of gas releases meant to validate the concentrations and velocities predicted by the CFD model. For most of the case studies, first results show that the normalised concentration profiles are in good agreement with the CFD simulations. New tests are ongoing to also validate absolute concentration values and to improve the understanding of the physical phenomena in such a complex setup. The activity was funded by the Italian Ministry of Economic Development (MiSE) and carried out at the SEADOG laboratory of the Politecnico di Torino.

A STAMP-Game model for accident analysis of an oil spill and explosion accident

ABSTRACT. Accidents in oil and gas storage and transportation arise from complex socio-technical factors. Conventional accident models treat accident evolution as chains of events, which limits their ability to analyze systems of increasing complexity and coupling. Accidents in complex systems can instead be investigated from the viewpoint of systems engineering. The System-Theoretic Accident Model and Processes (STAMP) is widely used to provide insights into accident causation and risk prevention. At the same time, game theory can be adopted in accident analysis to depict the competition and cooperation between stakeholders, since the stakeholders in STAMP can be regarded as players in a game. This paper provides a new perspective for analyzing accidents in oil and gas storage and transportation by integrating STAMP and game theory (the STAMP-Game model), with a case study of the oil spill and explosion accident in Dalian, China on July 16, 2010. The STAMP analysis uncovered the in-depth causal factors of the accident. Based on the STAMP results, game theory was applied to analyze the roles that government and companies played in the Dalian accident. Our results demonstrate that the STAMP-Game model is feasible for causal investigation, risk prevention, and the control of accidents in oil and gas storage and transportation.

10:10-10:25 Coffee Break
10:25-11:45 Session TU2A: Risk Assessment
Location: Auditorium
Critical Success Factors for Risk-Based Inspection of Corrosion-Loop Pipelines

ABSTRACT. Petrochemical companies use equipment like vessels, reactors, furnaces, heat exchangers and pumps during the production of petroleum fuels, oil & gas and other chemicals. Plants also use pipelines to transport the various process mediums within and outside the petrochemical plant. Equipment, vessels and pipelines that are used in plants are susceptible to deterioration due to a variety of damage mechanisms, depending on the fabrication materials, process mediums and process parameters. Risk-based inspection (RBI) is an engineering methodology or tool that determines and ranks the risk of failure associated with the operation of physical assets. RBI can be applied to individual equipment and pipelines or to similar pipelines that have been grouped and collectively referred to as corrosion-loops.

A literature review indicated that the critical success factors (CSFs) for the successful implementation of RBI of corrosion-loops have not yet been standardized. This research focused on identifying and ranking the CSFs for implementing RBI of corrosion-loops. Some 27 CSFs for risk-based inspection were identified from the literature. A questionnaire was developed and sent to RBI stakeholders, asking them to rank the importance of the 27 factors for a successful corrosion-loop RBI program.

Questionnaires were e-mailed to potential respondents and 231 completed questionnaires were received. The CSF ranked most important by the respondents was "data collection", and the second most important was "record keeping". The 27 factors were grouped into eight categories, of which "data collection and integration" was identified as the most important. A comparison of the critical factors of single-equipment RBI and corrosion-loop RBI showed that "data collection" is the most important factor in both RBI types.

The creation of corrosion-loops for pipelines simplifies the task of managing the maintenance of numerous individual pipelines and reduces the cost of maintenance by linking the extent of inspection required to the risk ranking of each corrosion-loop. The results of this study show maintenance and reliability engineers which RBI factors are most important for the successful implementation of a pipeline corrosion-loop RBI program. The results could be useful for similar processing plants but cannot be generalized to all processing plants.

On the meaning and use of the plausibility concept in a risk analysis context

ABSTRACT. The plausibility concept has gained increasing attention in recent years in risk analysis settings. A number of definitions exist, most of which interpret plausibility as an expression of uncertainty. The concept is frequently referred to in scenario analysis and emerging risk contexts, which are characterized by large uncertainties. The difficulty of assigning probabilities in such cases has led some to claim that plausibility, by offering a purely qualitative approach, is a more suitable tool for measuring uncertainty. However, a proper clarification of what the plausibility concept means in a risk analysis context is missing. Furthermore, current definitions lack a clear distinction between the meaning of plausibility per se, and how it is measured. The present paper aims to rectify these issues by i) reviewing and discussing how the plausibility concept is interpreted and used in the literature, ii) providing a suggested interpretation of the concept in a risk analysis context, and iii) giving our recommendations on how the practical application of the plausibility concept can be enhanced by drawing on contemporary risk science, specifically with regards to highlighting the knowledge and surprise dimensions of risk.

COVID-19 pandemic: Analyzing of restrictions, medical care and prevention measures in Germany and Japan
PRESENTER: Stefan Bracke

ABSTRACT. In 2021, the COVID-19 pandemic continues to challenge the globalized, networked world. Restrictions on public life and lockdowns of varying severity define life in many countries. This paper focuses on the second COVID-19 pandemic phase (10-01-2020 to 02-15-2021). Transferring methods used in reliability engineering to the analysis of the occurrence of infection, this study continues previous research; cf. Bracke et al. (2020). Weibull distribution models are used to evaluate the spreading behavior of COVID-19. A key issue of this study is the impact of measures on the spread of infection, comparing different characteristics of lockdown (e.g. light or hard). To this end, the occurrence of infection in a normed time period after lockdown measures is analyzed using the example of Germany. In addition, a comparison of spreading speed with the first phase of the pandemic (January to April 2020) outlines the dynamic development of COVID-19. In a further step, the occurrence of infection with COVID-19 is put into the context of other common infectious diseases in Germany, such as influenza or norovirus, and differences in the spreading behavior of COVID-19 are outlined. The application of Weibull distribution models, with their easily interpretable parameters, yields additional information compared to classical infection models such as the SIR model (Kermack and McKendrick 1927). The various impacts of different lockdown measures are pointed out, as well as the infectiousness of COVID-19 compared to well-known infectious diseases.
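For readers unfamiliar with the reliability-engineering approach referred to above, the sketch below fits a two-parameter Weibull model to a cumulative occurrence curve via the classic linearization of the Weibull CDF. The data and the known saturation level are synthetic assumptions for illustration; the paper uses reported German infection data:

```python
import numpy as np

def fit_weibull(days, cum_cases, saturation):
    """Least-squares fit of ln(-ln(1-F)) = b*ln(t) - b*ln(eta)."""
    F = cum_cases / saturation            # empirical CDF of occurrence
    x, y = np.log(days), np.log(-np.log(1.0 - F))
    b, a = np.polyfit(x, y, 1)            # slope b = shape parameter
    return b, np.exp(-a / b)              # (shape, characteristic time eta)

# synthetic saturation curve with known parameters (hypothetical data)
days = np.arange(1, 61)
cum = 1000.0 * (1.0 - np.exp(-(days / 30.0) ** 2.0))
shape, scale = fit_weibull(days, cum, saturation=1000.0)
# recovers shape ~ 2.0 and scale ~ 30 days
```

A larger fitted shape parameter indicates a faster spreading dynamic, which is how lockdown phases can be compared against each other.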

10:25-11:45 Session TU2B: Risk Analysis and Safety in Standardization
Location: Atrium 2
Effect of sunlight exposition on withstanding capability of thin polycarbonate sheets

ABSTRACT. The design of guards plays a fundamental role in the safety of machinery. One purpose of safety guards is to mitigate the risks of ejection of workpieces or tool parts. The ISO 14120:2015 standard represents the state of the art for the design, construction and selection of guards, used both to prevent access to tools and moving parts and to protect people from ejected objects. The withstanding capability of a guard is tested through a single impact of a projectile of standardised shape hitting the surface of a flat plate perpendicularly. Whether or not the panel is perforated determines the suitability of a given material, with a given thickness, for the construction of protective panels. The aim of this study is to analyse the influence of ageing on polycarbonate guards stored for a long time before use. During the storage period, the material may be exposed to different environmental conditions, the worst of which involves outdoor exposure to direct sunlight. To verify the possible effect of ageing due to irradiation from sunlight, this influence was simulated using UV fluorescent lamps in an accelerated-test machine. The ageing cycle chosen for the simulation involves continuous exposure under constant lamp irradiation and temperature, without condensation cycles. The accelerated test was designed to obtain ageing equivalent to one year of exposure of a panel placed vertically and stored outdoors under direct sunlight, but without rain on it. The reference solar irradiation conditions were chosen to replicate the amount of energy typical of central Italy. The conditions fixed for the accelerated ageing tests were worse than in real cases and are fully explained in the paper.
The withstanding capability of the aged panels is tested through impact tests on two sets of panels: a set of aged panels and a set of the same material without ageing, in order to compare the results and highlight possible effects of deterioration. The tests were performed using the gas cannon of the INAIL laboratories in Monte Porzio (Rome). The withstanding capabilities of the panels were analysed with the well-known Recht & Ipson equation, using a probabilistic method described in Landi et al. [1]. The data obtained from the tests, which will be presented, show that for this type of polycarbonate panel no ageing effect can be claimed: the energy absorbed by the aged and unaged panels remains essentially constant.
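For reference, the Recht & Ipson ballistic model mentioned above relates, in its common form, the residual velocity $v_r$ of the projectile to its impact velocity $v_i$ and the ballistic limit $v_{bl}$ of the panel:

```latex
v_r = a\left(v_i^{\,p} - v_{bl}^{\,p}\right)^{1/p}
```

where $a$ and $p$ are fitting parameters (classically $p \approx 2$); the energy absorbed by the panel then follows from the difference between the impact and residual kinetic energies, which is how the aged and unaged sets can be compared.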

Withstanding capacity tests of roller covers, bellows and aprons as guards for machines

ABSTRACT. Roller covers, bellows and aprons are generally used to protect machines from coolants and swarf, and to protect people from noise, but they are usually not designed to protect against impacts due to the ejection of workpieces or tool parts. In addition, these protections are often designed ad hoc for a specific machine because of strict space requirements. Consequently, manufacturers design many specific roller covers, bellows and aprons with heterogeneous characteristics. Even if their initial purpose is protection against the effects of coolants, swarf and noise, these components may have to be considered guards in the sense of the Machinery Directive (2006/42/EC), given their overall design and specific tasks in the life of the machine (e.g. maintenance, setting). In the first part of this paper, we discuss the safety requirements of these protections with non-homogeneous characteristics. We show that it is very hard to meet the standard requirements, and that one is often obliged to limit or interpret normative documents in order to obtain reproducible tests. In the second part of the paper, tests performed on a roller cover made of solid aluminium strips hinged with plastic connectors are presented. We propose an adapted version of the standardized test of ISO 14120:2015, Annex B, in order to obtain repeatable tests able to determine the withstanding capabilities of such components when used as guards. Finally, using high-speed images of the impacts, we discuss the impact phenomenology of these components.

Safety of Machinery – Risk Estimation for Technical Failures of a Gravity-loaded Axis

ABSTRACT. Risk estimations in occupational safety are usually only qualitative, so that a scaled comparison with other risks is often not possible, although risk comparisons are indispensable in the legally prescribed risk assessment of machines. In the following, a general possibility for scaling risks in occupational health and safety is explained for a typical safety function (SF) in machine tools: the safe standstill of a gravity-loaded axis (GLA). For simplicity, the considered causes of dangerous failures of this safety function are an electronic control (A) in combination with a mechanical component (B). For the former, the probability density function PDF-A is assumed to be an exponential distribution; for the latter, PDF-B is a Weibull distribution. For a GLA, we take it for granted that the two failure modes are connected in parallel. Thus, if only one of the two components A or B fails, a hazardous event, such as a lowering movement of a tool spindle by gravity, is still prevented by the other component. Only when both components fail must significant hazards be assumed, such as crushing and shearing. By connecting A and B in parallel, the probability of failure of the SF at a time t can be determined from the integrals of the two probability density functions: F-SF(t) = f[CDF-A(t), CDF-B(t)]. For scaling the probability that a hazard H-SF can actually arise from a failure of the above safety function, the parameter "O", the so-called "occurrence probability of a hazard" according to ISO 12100, is used. For this purpose, a uniform distribution is assumed, so that O always has the same value, independent of all conceivable situations. Consequently, at a time t, the time-dependent hazard is H-SF(t) = O · F-SF(t). For a dangerous failure to become a hazard, however, a person must be exposed to the danger, i.e.
the person needs to approach the hazard to such an extent that an injury could occur. This would be the case, for example, if an operator were to perform manual operations in the work area of a machine tool under a GLA, e.g. manually changing a workpiece in order to clamp another raw part after completing the previous one. Such operations recur on machine tools. Their frequency distribution can be described with a Poisson distribution, from which an expected value of the mean frequency of exposure events can be derived. As for the occurrence of a hazard H-SF due to a failure of the safety function assumed above, independence between the failure and the presence of the operator in the hazardous area is assumed. This results in a multiplication of the two probabilities (or expected frequencies). The relative hazard exposure Ex is composed of the frequency of exposure events NEx and their respective duration DEx as the product Ex = NEx · DEx, with probability P(Ex). A log-normal distribution is assumed for DEx, so that there are many short activities (with low exposure durations), fewer moderate activities and very few activities of long duration. Thus, the mean expected frequency of hazard exposures Ex is the product of the expected values of a Poisson distribution and a log-normal distribution. The last element between a hazardous event and the occurrence of an injury is the so-called controllability, with the parameter "C". The expression "1-C" is directly connected to the expected frequency of injuries. For this "inverse controllability" (i.e. the non-controllability), a Gaussian distribution is taken as a basis, and the non-controllability is assumed to be independent of the other risk elements. It follows that the Gaussian distribution must be related to the result of the previous operation, which is done using its mean (1-C)average.
For simplicity, the severity of an injury is assumed here to be a scalar with value "S", ignoring the fact that the severities of machine tool injuries are typically distributed, in their frequencies, between minor and severe injuries over four powers of ten. That is, for a total of approximately 10,000 injuries, there is typically one fatality.

This paper answers the question of how often injuries are to be expected, on average, under the assumptions made, within a given time frame (e.g. a service life of 10 years or, for comparison, 30 years). This produces a plausible result that can be compared with other occupational risks. A consideration of scatter and confidence intervals would go beyond the scope of this paper; a follow-up paper is in preparation.
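The chain of risk elements described above can be sketched numerically. The structure follows the abstract (exponential control and Weibull mechanics in parallel, occurrence probability O, exposure as frequency times mean duration, mean non-controllability), but every parameter value below is a hypothetical placeholder:

```python
import math

# Illustrative expected-injury computation for the gravity-loaded-axis
# safety function; all parameter values are hypothetical placeholders.

def failure_prob(t, lam=1e-6, beta=1.5, eta=2e5):
    """Parallel connection: both the electronic control (exponential)
    and the mechanical component (Weibull) must fail."""
    cdf_a = 1.0 - math.exp(-lam * t)               # CDF-A(t)
    cdf_b = 1.0 - math.exp(-((t / eta) ** beta))   # CDF-B(t)
    return cdf_a * cdf_b                           # F-SF(t)

def expected_injuries(hours, occurrence=0.1, n_ex_per_hour=2.0,
                      mean_duration_h=0.01, non_controllability=0.3):
    h_sf = occurrence * failure_prob(hours)        # H-SF(t) = O * F-SF(t)
    exposure = n_ex_per_hour * hours * mean_duration_h   # mean of Ex
    # independence of failure, presence and (non-)controllability:
    return h_sf * exposure * non_controllability
```

Multiplying by the scalar severity S would complete the scaled risk figure that the paper compares with other occupational risks.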

An innovative integrated smart system for the safe management of de-energization in maintenance activities of assemblies of machinery

ABSTRACT. In the Machinery Directive 2006/42/EC, the risk management of maintenance activities is particularly important, because operators face several hazardous situations. Safe entrance into the danger zones of machinery assemblies often requires isolation from energy sources and the dissipation of stored energy, to prevent the release of energy, unexpected start-up of the machinery and harmful incidents. In this context, conventional, well-known procedures such as Lock Out/Tag Out (LOTO) are widely applied by employers. Their actual effectiveness in preventing incidents depends strongly on correct application by the operators, and serious events still occur. We therefore present a novel smart system, based on the Industry 4.0 paradigm, which supports the operators and guides the procedure step by step, with the aim of mitigating the error probability and the consequent risk. The proposed smart system exploits Radio Frequency IDentification (RFID) technology to measure the real-time position of workers through a synthetic-array method. A cloud-based software supervises and manages all activities. Based on the tracking results, it communicates the safe procedures step by step to the operators on remote devices such as smartphones, and receives feedback about their correct execution. In this way, the de-energization procedure can be carried out safely, with a low risk of error for all operators involved in the maintenance activity.

10:25-11:45 Session TU2C: Adaptative Optimization of Maintenance Strategies for Complex Systems
Grouping maintenance strategies optimization for complex systems: A constrained-clustering approach
PRESENTER: Maria Hanini

ABSTRACT. Maintenance actions are critical tasks that ensure the availability of industrial systems and improve their operating safety. However, maintenance faces numerous challenges and is no longer limited to guaranteeing availability: it has become a strategic concern and must meet imposed quality, safety, and cost requirements. The problem of finding optimal grouping strategies for maintenance activities is NP-hard. It is well studied in the literature, where various economic models and optimization approaches have been proposed.

While most models in the literature use heuristics, such as evolutionary algorithms, to locate cost-reducing grouping strategies, they do not take into consideration context-specific constraints that can arise within each system. Moreover, for large complex systems, heuristic approaches are not scalable and cannot guarantee convergence to a feasible solution. Therefore, we propose a new scalable and adaptive optimization algorithm to group maintenance activities in multi-component complex systems. Our constrained-clustering-based approach takes the specific constraints into consideration and provides optimal grouping strategies in negligible time.

Numerical simulations are performed with a dynamic simulator to evaluate the proposed optimization algorithm on different systems. Our results show the efficiency of the constrained-clustering algorithm, which converges in negligible time even for large complex systems.
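As a toy illustration of constraint-aware grouping (not the authors' constrained-clustering algorithm), the greedy sketch below merges maintenance activities with nearby due dates into shared interventions whenever the saved set-up cost outweighs the date-shift penalty, with a maximum group size standing in for a context-specific constraint:

```python
# Toy grouping of preventive-maintenance due dates: consecutive
# activities are merged into one intervention when sharing a set-up
# saves more than the date-shift penalty costs, subject to a maximum
# group size (a hypothetical stand-in for context-specific constraints).

def group_activities(dates, setup_cost, shift_penalty, max_group):
    dates = sorted(dates)
    groups = [[dates[0]]]
    for d in dates[1:]:
        g = groups[-1]
        old_centre = sum(g) / len(g)
        new_centre = sum(g + [d]) / (len(g) + 1)
        extra_shift = (sum(abs(x - new_centre) for x in g + [d])
                       - sum(abs(x - old_centre) for x in g))
        # merge iff one set-up is saved and the size constraint holds
        if len(g) < max_group and shift_penalty * extra_shift < setup_cost:
            g.append(d)
        else:
            groups.append([d])
    return groups
```

For example, `group_activities([10, 11, 12, 50, 51], 5.0, 1.0, 3)` groups the first three activities into one intervention and the last two into another.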

Fault detection in a multi sensors context by 3D descriptor method

ABSTRACT. Monitoring an asset in an industrial context is a real challenge today, as data are increasingly available and computing power becomes cheaper over time. However, to use data from different sensors to detect anomalies of any kind, one usually has to consider each whole time series individually, or the values of several time series at a particular moment. In this article, we propose an adaptation of 3D object descriptors to the detection of unknown faults in a multi-sensor context, for feature extraction. Classical outlier detection methods, such as Local Outlier Factor and isolation forests, are then applied. This allows us to detect an upcoming, unknown problem on an asset monitored by several sensors. To our knowledge, this problem has not been completely solved yet, and it opens new opportunities in class-imbalance contexts. Final performances confirm the interest of the proposed approach in a real-time industrial context, and suggest a new way of extracting features in the preprocessing of multiple time series.
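To make the outlier-detection stage concrete, here is a minimal k-NN density-ratio score in the spirit of the Local Outlier Factor (a simplified stand-in for the library implementations the abstract refers to); the feature vectors are random placeholders for the 3D-descriptor features:

```python
import numpy as np

# Simplified density-ratio outlier score: a point whose mean k-NN
# distance is much larger than that of its neighbours scores >> 1.
def outlier_scores(X, k=5):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-distances
    reach = np.sort(d, axis=1)[:, :k].mean(axis=1)   # mean k-NN distance
    idx = np.argsort(d, axis=1)[:, :k]        # indices of the k neighbours
    return reach / reach[idx].mean(axis=1)    # own density vs neighbours'

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(100, 3))  # dense "healthy" cluster
fault = np.array([[8.0, 8.0, 8.0]])           # isolated anomaly
scores = outlier_scores(np.vstack([normal, fault]))
# the last point (the anomaly) receives by far the largest score
```

In the paper's setting, `X` would hold the descriptor features extracted from the multi-sensor time series rather than random data.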

Simulation of complex system based on optimization methods for Maintenance scheduling
PRESENTER: Michel Batteux

ABSTRACT. Industrial systems are subject to faults and failures of their components, which can lead to system unavailability and increased intervention costs. A suitable maintenance strategy is therefore a good way to reduce such costs and to increase the availability of the system. Combining different kinds of maintenance policies on the components of a system can be effective; nevertheless, it has to be analyzed carefully in order to find the optimal maintenance strategies for the system according to specified criteria (e.g. availability, cost). In this publication, we illustrate how the combination of a simulation tool based on stochastic discrete event systems and an optimization algorithm can be used to find (one of) the best maintenance strategies. The simulations are driven by an optimization algorithm. We propose a solution that optimizes system availability and cost under system-maintenance constraints, using an exact mathematical formulation. A stochastic simulator performs calculations according to parameters provided by the optimization algorithm, which plans preventive maintenance schedules. The optimization algorithm provides the optimal maintenance scenario, defined by the kinds of maintenance to apply and the suitable schedules. The experiments show that the simulation-based optimization algorithm gives more flexibility to the decision maker.

Optimal Planning of Preventive Maintenance Tasks on Electric Power Transmission Systems
PRESENTER: Miguel Anjos

ABSTRACT. We present a mathematical optimization-based approach to transmission maintenance scheduling for electric power systems that maximizes the energy transported by the system while meeting target times for maintenance tasks. The optimization model integrates constraints from possible urgent corrective maintenance events so the network meets the N-1 security criterion while in preventive maintenance. We illustrate the capabilities of this approach using the benchmark IEEE 24-bus Reliability Test System.

10:25-11:45 Session TU2D: Prognostics and System Health Management
Location: Panoramique
CNN based analysis of grinded surfaces

ABSTRACT. The optical perception of high-precision, fine-grinded surfaces is an important quality feature of these products. The manufacturing process is rather complex and depends on a variety of process parameters (e.g. feed rate, cutting speed) which have a direct impact on the surface topography. Therefore, the durable quality of a product can be improved by an optimized configuration of the process parameters. By varying process parameters of the high-precision fine grinding process, a variety of cutlery samples with different surface topographies were manufactured. The surface topographies and colourings of the grinded surfaces are measured with classical methods (roughness measuring device, gloss measuring device, spectrophotometer). To improve on the conventional methods, a new image-processing approach is needed for a faster and more cost-effective analysis of the produced surfaces. For this reason, different optical techniques based on image analysis have been developed over the past years. Fine-grinded surface images were therefore generated under constant boundary conditions. The gathered image material, in combination with the classically measured surface topography values, is used as training data for machine learning analyses. In this study, the image of each grinded surface is analyzed with respect to its measured arithmetic average roughness value (Ra) using Convolutional Neural Networks (CNNs), a type of machine learning algorithm particularly suited to image analysis. For the determination of an appropriate model, a comprehensive parameter study is performed. The approach of optimizing the algorithm results and identifying a reliable and reproducible CNN model that operates well independently of the choice of the randomly sampled training data is presented. The classification is part of the development of a condition monitoring tool for the fine grinding process of knives.

Method of Calibration Period Determination for Temperature Chamber based on Risk Analysis
PRESENTER: Xinrui Zhang

ABSTRACT. The temperature chamber is the main equipment for temperature testing, and its accuracy affects the quality and credibility of the test results. Currently, temperature chambers are calibrated at regular intervals, but too long a calibration period results in too high a calibration risk, while too short a period results in excessive calibration costs. To provide a theoretical basis for a scientific and reasonable calibration cycle, the following method is proposed. First, a combined prediction model integrating information from similar products is proposed, in which a Grey Model GM(1,1) and an autoregressive moving-average model of optimal order are used for the overall prediction, a Markov model is used for residual prediction, and combined weighting is used to obtain the calibration data. The Euclidean distance between the specific temperature chamber and similar chambers is calculated, the prediction model of the specific chamber is determined on the basis of a similarity function, and the drift trend of the performance parameter over the next calibration period is predicted. Secondly, the probability density function of the calibration parameter is used to build a reliability model: a fitting method establishes the reliability function over a fixed period and, considering the ageing of the chamber, a hybrid failure-rate evolution model is established by combining the actual usage time and the number of calibration cycles. From the existing reliability function and failure rate, the reliability function over the prediction period is derived, and the specific change of the calibration period is refined accordingly. Finally, the effectiveness of the proposed strategy is verified through a case study.
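The grey-model building block of the combined prediction can be sketched as follows. This is a minimal GM(1,1) forecaster only (the ARMA, Markov-residual and similarity-weighting steps described above are omitted), demonstrated on a hypothetical geometric drift series:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey Model GM(1,1): fit dx1/dt + a*x1 = b on the accumulated
    series and return `steps` forecasts of the original series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulating generation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)       # inverse accumulation
    return x0_hat[len(x0):]

# hypothetical drift of a chamber's parameter between calibrations
drift = [2.0 * 1.05 ** k for k in range(8)]
next_drift = gm11_forecast(drift, steps=1)[0]   # close to 2.0 * 1.05**8
```

Comparing the forecast drift against the tolerance of the calibrated parameter is then what drives the shortening or lengthening of the calibration period.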


ABSTRACT. Health representation factors are parameters that represent health status. A health measurement model can be established through a mathematical description of health status, so as to express the current health status of products objectively and accurately. First, this paper analyzes the modeling principle of health degree and studies three different forms of equipment health measurement models. Then, through comparative analysis, the applicability of these models is summarized. Determining the weights of the health representation factors is an important part of an equipment health measurement model; however, most weight-determination methods involve expert experience to varying degrees. In order to eliminate the subjective factors that may lead to inaccurate results, this paper analyzes the health-status recognition rate of the equipment health measurement model under different weights, using objective operating data of equipment in different and in identical health states, based on the relationship between the weights and the measured health status. Finally, this paper establishes a difference measurement model to determine the weights of the health representation factors, verifies the feasibility of the model through an example, and outlines the next improvements.


ABSTRACT. Printed circuit boards (PCBs) are very important parts of almost every electronic device, since they are the most frequently used interconnection technology for components in electronic products. For this reason, the reliability of PCBs is very important. To guarantee the quality of a PCB, the individual manufacturing steps have to be monitored. One of the most important manufacturing steps is the tinning of the copper pads using hot air leveling. The importance of this process lies in the fact that oxidization of the copper pads has to be avoided and the solderability of the PCB needs to be ensured; uncoated copper pads must therefore be prevented. In a previous study, a model for the automatic detection of uncoated copper pads was developed using a patch-based classifier. The model showed good results, but the approach was not suitable for implementation in an industrial application. In the present study, an instance segmentation approach was used to tackle the problem. The basis of the model is an R-CNN with mask prediction that has to be trained using image data. A PCB inspection unit was employed to generate the image data. Due to the high cost of instance labeling, an approach adopted from active learning was used as a strategy to select images that extend the training dataset efficiently. Evaluation of the developed model shows that the great majority of defects is detected correctly. The future goal is to improve the model so that it can be included in an automatic fault detection system as part of an online quality control unit in the manufacturing process of PCBs.

10:25-11:45 Session TU2E: Risk management for the design, construction and operation of tunnels
Location: Amphi Jardin
PRESENTER: Rebecca Nebbia

ABSTRACT. Workers’ exposure to asbestos minerals plays an important role in the Occupational Safety and Health (OS&H) risks typical of tunneling. The distribution of asbestos minerals in rock formations is often highly irregular, since their possible formation during the metamorphic process depends on pressure, temperature, host rock composition and the structural framework [1]. This makes a dedicated risk assessment and management approach necessary. Many case studies of risk assessment and management in tunneling are available in the literature, so the aim of this study is to define how they apply to formations containing asbestos minerals and to organize the control measures in hierarchical order. Initially, an extended literature review was carried out, in accordance with the PRISMA statement [2], to select and discuss the current solutions used to manage possible workers’ exposure. Subsequently, the various solutions (e.g. special excavation techniques, control of airborne pollutants at the source, ventilation systems, and pollutant control of underground areas through physical separation by means of brattices and water curtains) were ranked by order of priority and their real effectiveness discussed. The selection of solutions and the priority order were obtained taking into account the safety requirements of the construction of the base tunnel of the Turin–Lyon high-capacity railway line. The study showed that, despite technological progress, some widely used control measures for managing carcinogenic pollutants still lack exhaustive scientific support.

PRESENTER: George Sisias

ABSTRACT. As a rule, tunnels are considered safe road infrastructures. There are several reasons for their increased level of safety: they are thoroughly inspected and monitored, drivers are more careful when passing through them, and they are unaffected by open-road weather conditions. From this point of view, tunnels are generally considered safer than the open road network in terms of accident rates. Nevertheless, when an accident occurs inside a tunnel, the constrained space can magnify its impact and casualties. Undoubtedly, fire events are the greatest threat to road tunnel systems, and destructive experiences such as the Mont Blanc fire in France (1999) or the Yanhou fire in China (2014) are indicative of the severity of such incidents. The use of automated deep learning and data mining algorithms that can provide accurate detection, frequency patterns and concentration predictions of dangerous goods passing through tunnels is a significant factor in restricting fire incidents. To achieve automated detection, a post-processing image detection tool has been developed that identifies and marks the passage of dangerous goods through tunnels. This tool receives input from toll camera images and offers timely information on vehicles carrying dangerous goods, since such vehicles are signaled with a proper ADR label number (ADR vehicles). Knowing the exact number of ADR vehicles, along with the substances they carry, at any particular time, followed by classification and association rules linking them to fire incident occurrences, can lead to effective management of the passage of such vehicles and consequently to effective preventive management of fire incidents in tunnels.


ABSTRACT. Since road tunnels may expedite transport and communication, their numbers are increasing worldwide (PIARC, 2016). Research has shown that tunnel safety is a matter of great importance, with emergency situations such as a fire carrying the risk of great loss of life (Ntzeremes and Kirytopoulos, 2018). Recent studies have shown that, while driver behavior is one of the decisive factors in road tunnel accidents, drivers hardly receive proper education about the particularities of the tunnel environment, and they also exhibit deficiencies in dealing with emergency situations inside a road tunnel (Kirytopoulos et al., 2017, 2020). As a means of tackling this challenge, the aim of the present research is to develop a software tool based on the concept of “serious games” to educate and inform potential users about the specific rules and behavioural patterns that should govern driving through tunnels. The initial step was the determination of the basic instructions that a user must be familiar with while driving through tunnels. The proper behavioural patterns were gathered from the relevant standards, and the specific needs for education (i.e. areas where tunnel users lack knowledge) were explored through previous studies such as Kirytopoulos et al. (2020). After identifying the most important instructions, the research proceeded with the development of an innovative training tool consisting of a game environment that simulates, from a first-person perspective, the task of driving through a tunnel. Within this environment, various scenarios were developed with the aim of evaluating users’ knowledge as well as educating them. Moreover, it will enable future simulation experiments allowing the extraction of information on users’ decisions, as well as on attention-grabbing elements in the environment.
The ultimate aim of this research is to further increase safety within road tunnels, focusing on driver behavior as one of the most crucial parameters.

PRESENTER: Tonja Knapstad

ABSTRACT. This paper aims to explore how, and what kind of, learning is emphasized in the use of virtual reality to develop fire evacuation knowledge among participants. Experiences from several large tunnel fires in Norway in recent years highlight the need for deeper knowledge of tunnel fire evacuation. Virtual reality technology has developed rapidly in recent years and is regarded as a highly useful tool among fire evacuation researchers. Its advantage is that it gives participants experience of fire and smoke conditions without exposing them to real risk. With a theoretical understanding of learning as a concept that can be approached and understood from different perspectives, a literature study was set up to examine learning in virtual reality. Our understanding of learning is based on a combination of cognitive and sociocultural perspectives. Virtual reality technology provides a safe context for behavioral challenges in different constructed and simulated tunnel fire scenarios. The complexity of real fire evacuation situations may, however, challenge the use of virtual reality as a validation tool for learning transfer and behavior in real situations.

10:25-11:45 Session TU2F: Probabilistic vulnerability estimation, lifetime assessment and climate change adaptation of existing and new infrastructure

ABSTRACT. Coastal bridges are crucial traffic components and vital to the social economy. However, they face growing risks from hurricane-induced surges and waves due to increasing temperature and humidity, rising sea levels, and the amplification of hazard intensities. Understanding how to mitigate the potential risks to coastal bridges from natural hazards is a critical step toward reliable transportation systems, yet the corresponding adaptation measures have seldom been discussed. This study conducts a comprehensive analysis of the effects of different adaptation measures to help bridges resist hurricane-induced risks under climate change scenarios. The long-term losses associated with different retrofit measures are evaluated considering deep uncertainty in future climate change. Different retrofit measures are investigated and compared, including inserting air venting holes, enhancing connection strength, and elevating bridge structures. Specifically, a Computational Fluid Dynamics (CFD) model is established to compute wave-induced forces on the coastal bridge. Vulnerability curves are derived based on the deck unseating failure mode, and long-term losses are assessed considering the stochastic occurrence of hurricanes and climate change scenarios. The effects of retrofit adaptations on reducing long-term losses are examined and compared within the proposed framework. The study results in systematic evaluations of different adaptation measures, which could support optimal and robust designs of new coastal bridges and modifications of existing ones.

Effect of climate change on railway maintenance: a systematic review

ABSTRACT. According to international reports and publications, the effects of climate change can cause rail failures, consequently disrupting travel schedules and causing unforeseen delays. Extreme heat, cold, and snowfall are among the most important climate conditions affecting normal railway operation. Thus, to determine the appropriate maintenance strategy that reduces the social and economic impact of repair interventions, it is fundamental to identify and predict potential points of failure in advance. This paper compares and analyses published studies on railway failures that measure the effects of weather on rail defects.

Pressure Distribution Patterns Between the Ballast and the Concrete Slab in Railway Trough Bridges

ABSTRACT. In Sweden, a substantial number of railway bridges are approaching their intended lifespans and are planned to be replaced. However, it is sustainable neither from a financial nor from an environmental perspective to replace these bridges if they are still sound and safe. Thus, an evaluation of their actual capacity is required with the aim of extending their lifespans. A way to obtain a more accurate capacity is to determine the loads that are acting on them. The available literature points out the lack of experimental investigations on sleeper–ballast contact pressure, as well as on the stress distribution along and across the ballast. Consequently, railway bridge design has been based on traditional rather than rational assumptions, which can be quite conservative. In this paper, a review of models for evaluating stress patterns on the surface of the slab of ballasted concrete bridges is carried out. Then, a simplified finite element model of a concrete trough bridge, a common type of structure in Sweden, is used in a parametric analysis aimed at understanding how the identified pressure distribution patterns affect the performance of this type of structure. Finally, with the purpose of studying how some parameters influence bridge safety, a probabilistic reliability analysis is performed. The reliability index beta (β) is obtained using the polynomial response surface method and its value is compared for different boundary condition scenarios. The sensitivity factors for the considered random variables are also compared and analyzed. Results show that the assumed support condition and pressure pattern have a significant impact on the capacity, failure mode and probability of failure of this type of structure.
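The final step, obtaining a reliability index β from a polynomial response surface, can be sketched in miniature. The limit state, distributions and sample sizes below are invented for illustration; the paper's bridge model is far more detailed.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Invented limit state: resistance r minus a mildly nonlinear load effect in s
def g(r, s):
    return r - s - 0.05 * s**2 / r

def quad_features(X):
    r, s = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), r, s, r**2, s**2, r * s])

mu, sd = np.array([8.0, 5.0]), np.array([1.0, 1.0])   # normal basic variables

# Step 1: design of experiments and quadratic response surface fit
X = mu + sd * rng.standard_normal((200, 2))
coef = np.linalg.lstsq(quad_features(X), g(X[:, 0], X[:, 1]), rcond=None)[0]

# Step 2: Monte Carlo on the cheap surrogate to estimate Pf and beta
Xmc = mu + sd * rng.standard_normal((200_000, 2))
pf = np.mean(quad_features(Xmc) @ coef < 0.0)
beta = -NormalDist().inv_cdf(pf)
print(f"Pf ≈ {pf:.3f}, beta ≈ {beta:.2f}")
```

The surrogate makes the many limit-state evaluations cheap; in the paper each evaluation would be a finite element run, which is exactly why the response surface method is used.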

Design of structures in the changing climate

ABSTRACT. An important task is to evaluate the potential impact of climate change on construction works, mainly bridges and other structures with longer design lives. The aim is to analyse how anticipated changes in the European climate could affect the assessment of design weather parameters, as far as they may be justified for reliability-based design, including the partial factor design approach for structures according to the Eurocodes, based on current knowledge of projection models of the future climate in Europe. The design weather parameters are analysed considering common types of structures that might be particularly sensitive to variations in those parameters. The biggest contributors to the inherent uncertainty in climate projections include natural variations in climate due to solar activity, future emissions of greenhouse gases and other harmful substances, and uncertainties related to decisions on the effective reduction of greenhouse gas emissions. These uncertainties make it rather difficult to provide firm recommendations concerning design parameters for actions on structures with respect to climate change on a regional scale. It is, however, possible to indicate certain trends in selected basic variables that influence models of climatic actions on structures, environmental actions, or the degradation of materials, e.g. carbonation of concrete or steel corrosion. The climatic data on which the current generation of the Eurocodes is based are mostly about 20 years old, with some exceptions due to recent updates of national data. The second generation of the Eurocodes is expected to be nationally implemented by 2025, and it is foreseen that the climatic maps in the Eurocodes should be revised. The partial factors for climatic actions should be further calibrated taking into account the characteristics of those actions. A potential enhancement factor for the consideration of climate change, if needed, should be specified in connection with the relevant partial factor of a climatic action.
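The idea of an enhancement factor for a climatic action can be illustrated with a Gumbel model of annual maxima, where the characteristic value is the 50-year return value. The location and scale parameters below are invented, not taken from any Eurocode map.

```python
import math

def gumbel_return_value(mu, beta, T):
    """Characteristic value of an annual-maximum climatic action
    with return period T years, under a Gumbel (type I) model."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Invented parameters for a present and a projected future climate
present = gumbel_return_value(mu=1.0, beta=0.2, T=50)
future = gumbel_return_value(mu=1.1, beta=0.25, T=50)
print(f"50-year value: {present:.2f} -> {future:.2f}; enhancement factor {future / present:.2f}")
```

If a climate projection shifts the location or scale of the annual maxima, the ratio of the future to the present characteristic value is one plausible form such an enhancement factor could take.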

10:25-11:45 Session TU2G: Oil and Gas Industry
Location: Atrium 3
Decisions in conditions of uncertainty involving the development of offshore oil fields: a proposal of a framework for a Decision Support System

ABSTRACT. The oil and gas industry often faces important decisions in environments with a high level of uncertainty. Decisions involving the development of offshore fields, with the possibility of introducing new technologies, are a typical problem in this industry and generate great interest due to the large financial resources involved. Analyses of questions of this type sit between two fields with great synergy: decision analysis (DA) on the one hand, and risk analysis (RA) on the other. However, despite this synergy, these topics are usually studied and developed independently. It is reasonable to say that RA has developed giving greater priority to aspects related to risk, such as probabilities of occurrence and consequences, whereas DA has developed prioritizing the evaluation of alternatives’ outcomes, the psychological aspects of the decision, and the decision criteria, among other aspects. Considering the responsibility involved in offshore oil field development decisions, this work presents a proposal for the basic architecture of a Decision Support System covering both aspects, risk and decision analysis. The purpose of the system is to support the executives responsible for these decisions, providing an environment with an adequate set of information and suggestions and presenting prescriptive proposals that consider the various options, the uncertainties involved, the different states of nature, the consequences of each alternative, and the executive’s attitude toward risk.


ABSTRACT. During a wastewater control activity, in a small monitoring well located outside a crude oil extraction/storage plant, water polluted with hydrocarbons was found inside a sewage pipeline linked to the Water Treatment System (WTS) consortium. The same finding was made inside the WTS plant by the operators, and in the ground area around the plant. Subsequent investigations of the oil storage tanks confirmed that a loss of containment had occurred from the bottom of tank D. The slow migration of the oil through the ground led to the pollution of a layer of 26,000 square meters, from the top surface down to the groundwater level, with a consequent severe environmental impact. Almost 400 tons of crude oil are estimated to have been released into the environment. All plant activities were suspended for 90 days, during which inspection, checking and monitoring took place. The event had a slow, long evolution, was not immediately evident, and was discovered only months after the start of the release, while trying to recover some released oil from the groundwater during a monitoring/control activity. After a detailed reconstruction of the facts, it was confirmed that the oil was released from the storage tank bottom, due to corrosion phenomena, and migrated easily through the ground along a fast underground drainage path unknown to the company. The accident is still under investigation, but it is possible to highlight some critical issues linked to its causes: • faults in the identification/analysis of the event: no scenarios with environmental impact were adequately considered in the company risk analysis; • faults in operational control and in maintenance procedures, also with respect to the Best Available Techniques (BAT) defined for oil storage tanks; • faults in considering lessons learnt from the analysis of past experience, essential in both the Seveso and IED contexts.
Important Safety and Environmental Management System improvements have been carried out after Seveso and IED inspections. To avoid similar situations, the accident also highlighted the need to find ways to improve communication between Seveso and IED control activities and to adopt common approaches to the operation of an establishment with respect to both safety and environmental issues.

PRESENTER: Deshai Botheju

ABSTRACT. Given the critical significance of the “process safety culture” concept within industrial safety management, this paper aims to identify and discuss the factors that can cause positive safety cultures to degrade over time. The concept of “safety culture” has been defined in different ways (Botheju et al, 2015; Botheju & Abeysinghe, 2015); for the purpose of this paper, “process safety culture” can simply be explained as the embedded psychosocial and cultural medium within which the decision makers who make important process safety design decisions have to operate. Through long-term experience, the authors have seen how some of the world’s best safety cultures have evolved over time, and how those same cultures eventually began to degrade due to various causes. The primary aim of this article is to recognize the factors leading to the degradation of good safety cultures. The methodology adopted in this study is to critically analyze observations made during real-life safety management work. Economic pressure, the absence of major accidents over a long duration, the impact of globalized human resources, a lack of regulatory willpower to intervene, and natural degradation in the absence of reinforcing actions are recognized as the key factors needing attention. A number of brief case examples are used to emphasize the points presented in this paper. This work can serve as a useful experience transfer for any resource person involved in designing or managing industrial safety systems and management tools, or for any stakeholder directly connected to the process industry sector.

Automatic fault trees generation and analysis for thousands of gas transmission units

ABSTRACT. GRTgaz owns and operates the longest high-pressure natural gas transmission network in Europe. The industrial assets of GRTgaz include more than 32,000 km of pipes, 26 compression stations, about 4,800 shut-off stations and more than 5,000 pressure reduction stations (notably for supplying public distribution networks and industrial consumers). In particular, a pressure reduction station can have one or two lines, each line having one or two pressure regulators, plus safety devices (shutdown valves and/or safety relief valves) and other items (filters, manual valves, gas meters…). In addition, each type of device exists in different models (technology, manufacturer, sizes, parameters…), so that there are probably no two stations that can be assumed alike. Since GRTgaz must transport natural gas on behalf of all its customers while ensuring optimum safety, cost and reliability, risk assessments need to be performed for thousands of these stations.

To perform the safety and reliability assessment of all gas pressure reduction stations efficiently, a process has been developed for automatically generating and analyzing fault trees. Two undesired events are considered per station: pressure too low (including loss of gas supply) and pressure too high. The fault trees model these events considering the specific architecture of each station and the characteristics of its devices. The input data are: the list of all stations with their architecture and the references of their devices; the list of all devices with their characteristics, including ages; the list of all failures observed over the last 15 years; and the maintenance policy. These data make it possible to estimate reliability parameters, using Weibull distributions, for the required failure modes of each device according to its characteristics (from 3 to 5 failure modes, depending on the type of device). Previous developments were dedicated to making all these inputs available within a digital platform for data visualization.

The input data are then exported to Excel files, where further developments have been implemented in Visual Basic for Applications. First, fault trees are generated for each station according to its architecture, by creating one “.dag” file (readable by several fault tree software tools) per undesired event and per station. The basic events are parametrized by the Weibull parameters defined for the corresponding devices, the ages of the devices, and the preventive maintenance periods that allow the failure modes to be detected. Then, a specific development has been made within a commercial fault tree analysis tool in order to launch any number of fault trees (“.dag” files) at once – their access paths being given in a single “.txt” file – with given calculation parameters (e.g. the average frequency of undesired events over the next 5 years). The results are then compiled in a “.csv” file, with one line per analyzed fault tree and the requested results in columns.

The resulting risk assessments are now used for identifying the most critical stations in terms of safety and availability, optimizing periods of preventive maintenance, and prioritizing investments in terms of asset renovation.
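The Weibull parametrization of basic events described above can be illustrated by computing the time-averaged unavailability of a dormant failure mode that is revealed only by periodic preventive maintenance. The shape/scale values and maintenance periods below are illustrative, not GRTgaz data.

```python
import numpy as np

def weibull_cdf(t, shape, scale):
    """Probability that the (dormant) failure mode has occurred by time t."""
    return 1.0 - np.exp(-(t / scale) ** shape)

def mean_unavailability(shape, scale, tau, n=10_000):
    """Time-averaged unavailability of a failure mode that is only revealed
    and repaired by preventive maintenance every `tau` hours."""
    t = np.linspace(0.0, tau, n)
    return weibull_cdf(t, shape, scale).mean()

# Invented parameters for one regulator failure mode (hours)
shape, scale = 1.3, 120_000.0
for tau in (8_760, 17_520, 35_040):   # 1-, 2- and 4-year maintenance periods
    print(f"tau = {tau:>6} h -> mean unavailability ≈ {mean_unavailability(shape, scale, tau):.2e}")
```

Basic-event probabilities of this kind, combined through each station's fault tree, are what makes it possible to trade maintenance period against the frequency of the undesired events.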

10:25-11:45 Session TU2H: Maritime and Offshore Technology
Location: Cointreau
Safe Speed for Maritime Autonomous Surface Ships - The Use of Automatic Identification System Data

ABSTRACT. Introduction: All vessels are required by law to proceed at a safe speed while at sea. However, there is no accepted method of determining what value of speed can be considered safe. One way of determining safe speeds in different conditions could be to use Automatic Identification System (AIS) data to create a safe speed model that maritime autonomous surface ships (MASS) could follow. Objectives: To investigate whether a MASS can determine the safe speed without human support by utilizing historic AIS speed data of other vessels; and further, whether AIS and visibility data show a strong relationship between visibility and vessel speed, and whether vessels generally reduce speed in restricted visibility. Methods: AIS and visibility data were collected and merged for an area off Western Norway for the period between 27 March 2014 and 31 December 2020. A simple linear regression was calculated and supplemented by two graphical methods for revealing relationships between two variables. Results: A significant regression equation between visibility and speed was found, but the relationship was not strong. Average transit speed was highest when visibility was below 1,000 meters. Conclusion: The problem of quantifying the safe speed of a vessel in different conditions does not seem to be solvable by using only historic AIS data to create a model of normalcy that a MASS can follow.
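The core of the Methods section, a simple linear regression of vessel speed on visibility, can be sketched on synthetic data, since the study's merged AIS/visibility dataset is not reproduced here. The generated relationship is deliberately weak, echoing the reported result of a significant but not strong regression.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the merged AIS/visibility records: a weak positive
# dependence of speed on visibility, buried in large natural variability
n = 2_000
visibility_m = rng.uniform(100.0, 20_000.0, n)                   # metres
speed_kn = 12.0 + 5e-5 * visibility_m + rng.normal(0.0, 2.0, n)  # knots

# Simple linear regression: speed = slope * visibility + intercept
slope, intercept = np.polyfit(visibility_m, speed_kn, 1)
r = np.corrcoef(visibility_m, speed_kn)[0, 1]
print(f"slope = {slope:.2e} kn/m, intercept = {intercept:.2f} kn, r = {r:.2f}")
```

A slope can be statistically significant with a large sample even when the correlation coefficient is small, which is consistent with the study's conclusion that AIS data alone do not yield a usable safe-speed model.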


ABSTRACT. Around 70% of maritime accidents have been attributed to crew unsafe acts (Chauvin, Lardjane et al. 2013). Human unsafe acts may be predicted and analyzed through the identification and assessment of factors that influence human error, namely performance shaping factors (PSFs). This paper presents a hybrid early-warning system that integrates a detection model for procedure violations and a prediction model for unsafe acts resulting from skill-based errors, decision errors, and perceptual errors (Figure 1). The early-warning system utilizes voluminous datasets collected from multi-source sensors, including on-body wearable sensors, video cameras and microphones placed on board. The prediction model is specific to each human error type and is established using PSFs covering personal characteristics, task characteristics, environmental conditions, and ship characteristics. The PSFs and their relative importance are collected from a literature review, a review of historical maritime accident reports, and questionnaires among experienced seafarers. A set of indicators is further proposed to rate the PSFs for quantifying the probability of unsafe acts. The hybrid early-warning system is planned to be installed and validated on a cruise ship to detect, predict, and provide early warning of crew unsafe acts, in order to prevent accidents. The National Key Research and Development Program of China (grant number 2019YFB1600602) is acknowledged as the main sponsor of the project.

Sources of LNG Bunkering Leak Frequencies

ABSTRACT. A cleaner future for maritime transport relies on the ability to refuel ships with liquefied natural gas. Risk assessments of this LNG bunkering are sensitive to the likelihood of accidental leaks. While several sources offer leak frequencies for LNG transfer, few acknowledge their uncertainties. This paper investigates the available sources and synthesizes them to estimate an uncertainty distribution for LNG bunkering leak frequencies.

The paper first compares the leak frequencies that have been used in published LNG bunkering quantitative risk assessments and guidance documents, finding that they vary over 3 orders of magnitude. It then traces the original sources of the leak frequencies, which are rarely acknowledged, and sometimes incorrectly transcribed. Understanding of the original source, together with the judgments (and sometimes errors) that have been added to it, is necessary to appreciate the quality of any leak frequency. The paper proposes a set of criteria indicating high-quality leak frequencies, and uses them to rank the quality of the available sources. This provides a way of combining the available sources to estimate an uncertainty distribution for the leak frequency.

Until improved leak frequency models are available, this “wisdom of the crowd” estimate provides a better understanding of the likelihood of leaks than any single existing approach. It also highlights the importance of uncertainties when evaluating the need for additional safety measures in LNG bunkering.
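One way to read the "wisdom of the crowd" synthesis is as a weighted fit in log-frequency space, since the sources span orders of magnitude. The frequencies and quality weights below are invented stand-ins, not the paper's ranked sources.

```python
import numpy as np
from statistics import NormalDist

# Invented per-operation leak frequencies from five hypothetical sources,
# spanning roughly three orders of magnitude, with quality weights from
# a ranking of the sources (higher = better quality)
freqs = np.array([1e-6, 5e-6, 3e-5, 1e-4, 8e-4])
weights = np.array([1.0, 3.0, 2.0, 3.0, 1.0])

# Treat log10(frequency) as normally distributed and fit it with the weights
logf = np.log10(freqs)
mu = np.average(logf, weights=weights)
sigma = np.sqrt(np.average((logf - mu) ** 2, weights=weights))
z95 = NormalDist().inv_cdf(0.95)
lo, hi = 10 ** (mu - z95 * sigma), 10 ** (mu + z95 * sigma)
print(f"median ≈ {10**mu:.1e}, 90% interval ≈ [{lo:.1e}, {hi:.1e}] per operation")
```

Carrying the full interval, rather than a single point value, into the quantitative risk assessment is what lets the uncertainty propagate into decisions about additional safety measures.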

10:25-11:45 Session TU2I: AI for safe, secure and dependable operation of complex Systems
Location: Giffard
New probabilistic guarantees on the accuracy of Extreme Learning Machines: an application to decision-making in a reliability context

ABSTRACT. This work investigates new generalization error bounds on the predictive accuracy of Extreme Learning Machines (ELMs). Extreme Learning Machines are a special type of neural network that enjoys an extremely fast learning speed thanks to the convexity of the training program. This feature makes ELMs particularly useful for online learning tasks. A new probabilistic bound on the accuracy of ELMs is derived using scenario decision-making theory, which allows the solutions of data-based decision-making problems to be equipped with formal certificates of generalization. The resulting certificate bounds the probability of constraint violation for future scenarios (samples). The bounds hold non-asymptotically and distribution-free, and therefore quantify the uncertainty resulting from the limited availability of training examples. We test the effectiveness of this new method on a reliability-based decision-making problem. A data set of samples from a benchmark problem on robust control design is used for the online training of ELMs and for empirical validation of the bound on their accuracy.
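The convex training step that gives ELMs their speed, a fixed random hidden layer followed by a least-squares solve for the output weights, can be sketched as follows. The toy regression target is illustrative; it is not the robust control benchmark used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=100):
    """ELM training: fix a random hidden layer, then solve a convex
    least-squares problem for the output weights only."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # fixed random feature map
    out = np.linalg.lstsq(H, y, rcond=None)[0]  # the only trained parameters
    return W, b, out

def predict_elm(X, W, b, out):
    return np.tanh(X @ W + b) @ out

# Toy regression target standing in for a reliability response surface
X = rng.uniform(-1.0, 1.0, (400, 2))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2
W, b, out = train_elm(X[:300], y[:300])
resid = predict_elm(X[300:], W, b, out) - y[300:]
print(f"hold-out RMSE ≈ {np.sqrt(np.mean(resid ** 2)):.3f}")
```

Because only the output layer is trained, retraining on each new batch is a single linear solve, which is what makes ELMs attractive for the online setting the paper targets.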

Safety of Autonomous Ships - Interpreting High Confidence Mistakes of Deep Neural Networks using Heat Maps
PRESENTER: Erik Stensrud

ABSTRACT. Deep Neural Networks (DNNs) are used for image recognition in safety-critical functions of autonomous cars and ships. Car accidents have exposed DNNs’ lack of robustness to irregular events such as unusual image objects and scenes. A misclassification with a high score, which we term a high confidence mistake, is of particular concern for autonomous ships, where we foresee a remote, land-based human operator in the loop who can intervene if warned: a high confidence mistake will not generate a warning to the human operator. To assess the safety of the classifier, we need, as a minimum, to understand why the classifier fails. This study evaluates the Layer-Wise Relevance Propagation (LRP) heat mapping method applied to maritime image scenes. The method is evaluated on a classifier trained using transfer learning to classify marine vessels into one of four vessel categories. As part of this, test images have been manipulated to deliberately provoke failures in the classification module. The resulting heat maps have then been used to investigate the causes of the failures. The results suggest that heat maps help us better understand which features are relevant for the classification, which is an important first step. Further research is, however, required to provide an assurance framework for assessing the safety level or assisting in debugging a DNN.
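The LRP idea, redistributing a prediction's relevance backwards through the network in proportion to each input's contribution, can be sketched for a single dense layer with the epsilon rule. The layer weights below are made up, not taken from the trained vessel classifier.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """LRP epsilon-rule for one dense layer: redistribute the output relevance
    R_out onto the inputs a in proportion to their contributions a_j * w_jk."""
    z = a @ W + b                         # pre-activations of the layer
    s = R_out / (z + eps * np.sign(z))    # stabilized relevance per output unit
    return a * (W @ s)                    # relevance assigned to each input

# Tiny made-up layer: three inputs, two outputs; all relevance starts on output 0
a = np.array([1.0, 2.0, 0.0])
W = np.array([[1.0, -0.5],
              [0.5, 1.0],
              [2.0, 0.0]])
b = np.zeros(2)
R = lrp_epsilon(a, W, b, R_out=np.array([1.0, 0.0]))
print(R, R.sum())   # inactive input gets zero relevance; total is conserved
```

Applying such a rule layer by layer, back to the input pixels, yields the heat maps the study uses to explain high confidence mistakes.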

Contrastive Feature Learning for Fault Detection and Diagnostics

ABSTRACT. A multitude of faults can occur in operating systems. Some pose a safety-critical problem instantaneously and therefore require immediate intervention. Others evolve slowly and only require maintenance if they are particularly pronounced. Due to the vast variety of faults, it is not efficient to react to each detected fault with the same maintenance action. Instead, maintenance intervention planning needs to take into account which type of fault is detected and how severe it is. This allows maintenance to be planned efficiently: premature maintenance downtime can be prevented, and intervention time can be reduced, e.g. by preparing required spare parts. To achieve this, a fault diagnostics model is required that can accurately isolate different faults and determine their severity. Yet, specific challenges apply when learning data-driven fault diagnostics models. First, an operating asset is exposed to varying operating conditions and external influencing factors that cannot be controlled or known in advance. This results in high variability of the condition monitoring data within the healthy condition that is not caused by faults. A data-driven model might raise false alarms for inherently unknown variations in the data if these were not part of the training distribution. Adding complexity to the task is that the data often lacks the detailed labeling needed to diagnose a fault. For example, the operator might not distinguish between different fault types or fault severities in their maintenance reports. In that case, detailed information on the fault type and its severity is lacking when training the corresponding models, which essentially makes this an unsupervised learning task. Unsupervised fault diagnostics is often approached by clustering a low-dimensional feature representation of the data, and Auto-Encoders (AEs) are often used to learn a compact feature representation.
Yet, the objective when training an AE is to fully reconstruct the input signal, i.e. to pass all information about the data through the feature layer, including variations related to changing operating conditions. This makes AEs sensitive to changing operating conditions at inference time. Contrastive learning poses an interesting alternative for extracting features that explicitly aims to capture semantic meaning. The feature space is optimized with the triplet loss such that similar data points lie closer to each other than dissimilar ones. In its supervised implementation, similar data points are those with the same label, whereas dissimilar data points are those with different labels. Hence, the triplet loss is explicitly designed to cluster data in the feature space according to class labels, resulting in a compact feature representation.

In this work, we propose contrastive learning for the task of defect diagnostics. Our work is the first that applies the triplet loss to PHM applications. Further, we adapt the triplet loss to the case where no refined labeling is available. The resulting feature representation of the data shows to be particularly suited for defect identification under the limitation that certain operating conditions have not been observed in the training dataset. Our evaluation is conducted on the CWRU Bearing benchmark dataset.

AI Factory – A framework for digital asset management
PRESENTER: Ramin Karim

ABSTRACT. Advanced analytics empowered by Artificial Intelligence (AI) contributes to the achievement of global sustainability and business goals. It will also contribute to the global competitiveness of enterprises through the enablement of fact-based decision-making and improved insight. The digitalisation process currently ongoing in industry, and the corresponding implementation of AI technologies, requires availability and accessibility of data and models. Data and models are considered digital assets (ISO55K) that impact a system’s dependability during its whole lifecycle. Digitalisation and implementation of AI in complex technical systems, such as those found in the railway, mining, and aerospace industries, is challenging. From a digital asset management perspective, the main challenges can be related to source integration, content processing, and cybersecurity. However, to effectively and efficiently retain the required performance of a complex technical system during its lifecycle, there is a need for appropriate concepts, methodologies, and technologies. With this background, Luleå University of Technology, in cooperation with a number of Swedish railway stakeholders – fleet managers, railway undertakings, infrastructure managers and Original Equipment Manufacturers (OEM) – has created a universal platform called ‘the AI Factory’ (AIF). The AIF concept has further been specialised for the railway industry as the AI Factory for Railway (AIF/R). Hence, this paper aims to provide a description of findings from the development and implementation of the AI Factory (AIF) in the railway context. Furthermore, the paper provides a case-study description used to verify the developed technologies and methodologies within AIF/R.

10:25-11:45 Session TU2J: Resilience Engineering
Location: Botanique
Logistics of critical supply and resilience during the Covid-19 pandemic in Norway

ABSTRACT. The Covid-19 pandemic in Norway has challenged the logistics of critical supplies such as food, fuel, and necessary medical supplies. To understand and document how different actors of the transport sector handled the logistic challenges as they developed during the pandemic, a research project was executed with the objective of gathering information as the pandemic evolved in 2020. With a regional scope, mainly focusing on the situation within Mid-Norway, key research questions have been: (1) How are demand and logistics impacted by the pandemic (especially for critical supplies)?; (2) What is the impact on and between different transport modes (e.g. air, sea, road, rail), and their ability to operate as normal?; and (3) How is Norwegian import and export activity impacted? Logistics of critical supplies has been identified as a critical area in national risk assessments – but has not been prioritized in actual action plans. Hence, this paper presents the results, describing a limited literature review on the risks of a pandemic to critical supplies, and the outcome of systematic interviews among key actors involved in transportation and logistics. Interviews were carried out in the period from April to June 2020, with analysis and documentation of results completed by the end of August. By that time, 11 candidates had been approached for in-depth interviews, and 9 candidates for more informal conversations. All candidates were approached at least twice to document possible effects of the pandemic over a defined period.

How Far Is It from Fully Automatic Operation to Unmanned Driving: Comparison of Operating Resilience of Fully Automatic Operation Systems with Communication Based Train Control Systems

ABSTRACT. Recently, the Fully Automatic Operation (FAO) system has become so widely utilized globally, especially in China, that trains equipped with it are even called “unmanned driving trains” in many media outlets, as the FAO system can provide automatic operation for the whole journey from outbound to inbound. However, at present the FAO system rarely runs in driverless mode, i.e. at GOA4 level, in China. To examine operators’ reservations in detail, this paper evaluates the performance of FAO systems under abnormal conditions, in comparison with the traditional CBTC (Communication Based Train Control) system, given FAO’s obvious advantages under normal circumstances. The study introduces the concept of resilience and a set of system-based metrics to describe the impacts of disruptions and the evolution of system performance. Multi-agent discrete-event simulation (DES) models of the train operation process are proposed for resilience calculation. The calculated resilience of FAO and CBTC in selected degraded scenarios is compared on the basis of the real layout of a metro line in Beijing. It is held that the lack of autonomous perception and decision-making ability of trains in dynamic, uncertain environments is the bottleneck for FAO systems running at GOA4.
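One common system-based resilience metric of the kind referred to here is the ratio of actual to nominal performance over the observation window. The sketch below, with a hypothetical disruption-and-recovery profile, illustrates the idea; the profile values are ours, not from the study.

```python
def resilience(performance, dt=1.0, nominal=1.0):
    """Area-under-curve resilience: actual delivered performance divided
    by the nominal performance over the observation window."""
    actual = sum(performance) * dt
    total = nominal * len(performance) * dt
    return actual / total

# Hypothetical performance profile: a disruption at t=3 drops performance
# to 40 % of nominal, followed by a linear recovery (illustrative values).
profile = [1.0, 1.0, 1.0, 0.4, 0.55, 0.7, 0.85, 1.0, 1.0, 1.0]
r = resilience(profile)  # ≈ 0.85
```

Comparing such a ratio between FAO and CBTC runs of the same degraded scenario gives a single-number resilience comparison.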

Research on Resilience Evaluation Method of Train Operation Control System Based on Random Failure

ABSTRACT. In order to avoid the one-sidedness of resilience evaluation for the train operation control system under a single failure scenario, the Monte Carlo simulation method is adopted in this work to simulate the randomness of the disturbed equipment and the degree of performance degradation, so as to provide a probabilistic resilience evaluation method for the train operation control system. Taking the Beijing Yanfang line as an example, failure data from the Signal System and the Integrated Supervision Control System over the past three years were collected, and the fault frequency of each equipment item is taken as the probability of that equipment suffering a disturbance. Meanwhile, the probability distribution function of failure recovery time is determined for each equipment item. In order to quantify each performance state, a "state virtual value" is set for each performance level of the equipment. The output data flow weight of the equipment equals the "state virtual value", and the change in the total data flow weight of the system represents the fluctuation of system performance. Using Zobel’s resilience measure, after 100,000 Monte Carlo-based disturbance simulations, the estimated resilience value of the train operation control system is 0.9896. Furthermore, separate simulations were performed for each equipment item to obtain multiple cumulative probability distribution curves of system resilience. These show that when CI, ATP or ZC is disturbed alone, the resilience of the system fluctuates greatly, and lower resilience values may appear. When ATO, ISCS or the Axle Counter is disturbed, the system resilience is relatively stable, and the resilience value remains above 0.97.
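A minimal sketch of such a Monte Carlo resilience estimate, using Zobel's measure for a triangular disruption-and-recovery profile, might look as follows. The equipment probabilities, performance losses and recovery times are invented for illustration and are not the Yanfang-line data.

```python
import random

def zobel_resilience(loss, recovery_time, horizon=100.0):
    """Zobel's predicted resilience for a triangular disruption profile:
    one minus the normalized area of the performance-loss triangle,
    clamped at zero for very severe events."""
    return max(0.0, 1.0 - (loss * recovery_time) / (2.0 * horizon))

# Hypothetical equipment data: (disturbance probability per run,
# performance loss, mean recovery time). Illustrative values only.
EQUIPMENT = {
    "CI":  (0.2, 0.8, 30.0),
    "ATP": (0.3, 0.7, 20.0),
    "ATO": (0.3, 0.2, 10.0),
}

def estimate_resilience(n_runs=10_000, seed=42):
    """Average system resilience over random disturbance scenarios."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        r = 1.0
        for prob, loss, mean_rt in EQUIPMENT.values():
            if rng.random() < prob:                  # item is disturbed
                rt = rng.expovariate(1.0 / mean_rt)  # random recovery time
                r = min(r, zobel_resilience(loss, rt))
        total += r
    return total / n_runs
```

The paper's 100,000-run estimate corresponds to `estimate_resilience(n_runs=100_000)` with the real fault-frequency and recovery-time distributions substituted in.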


ABSTRACT. Integrative system analysis requires a tool that facilitates both an investigation of systems from a holistic perspective and research that scrutinizes particular aspects of a specific system while retaining a holistic understanding. This paper proposes such a tool – a novel, multi-perspective kaleidoscope that constitutes a conceptual framework for integrative system analysis. This theoretical frame of reference synthesizes the results of a previous literature analysis that yielded a detailed conceptualization of the terms system, infrastructure and governance in the realm of critical infrastructure protection (CIP). The multi-perspective kaleidoscope for integrative system analysis (KISA) considers four perspectives: system, infrastructure, process and governance. These four perspectives are founded on three layers that mirror the ability of the perspectives to adjust the specific focus to the micro, meso or macro level of a system of interest. The presented KISA model contributes a systemic perspective that can guide the exploration of complex issues in society to acquire beneficial, multi-faceted knowledge and a multi-perspective understanding. The integrative system perspective originated by this study will be a valuable tool for a variety of assessments in the context of CIP and beyond.

10:25-11:45 Session TU2K: Economic Analysis in Risk Management
Location: Atrium 1

ABSTRACT. Real systems contain potential vulnerabilities which can be exploited by adversaries attempting to disrupt system operations. The highly aggregated macroeconomic Gordon-Loeb (G-L) optimization model for cybersecurity investments assumes a known system-level return on the security investment, measured by the corresponding macroeconomic utility. A missing link in the application of the G-L model to a specific system is the derivation of this macroeconomic utility from the system-specific microeconomic model for cybersecurity investments. This microeconomic model should identify the optimal mixture of investments in the elimination/mitigation of specific vulnerabilities, given the aggregate level of security investments. In this paper we report on work in progress on developing a microeconomic optimization model for security investments in systems with monotonic structures, which produces the system-specific macroeconomic utility to be used in the G-L model. Our analysis reveals intricacies of the problem which deserve further investigation. In particular, we demonstrate that the system structure critically affects the sensitivity of the optimal investment to the exogenous risk, which justifies and quantifies earlier phenomenological observations. We discuss practical implications of this phenomenon for economically efficient risk management and system design.
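To make the macroeconomic side concrete: in the classic Gordon-Loeb model, a breach-probability function maps investment z to residual risk, and the optimal z maximizes the expected net benefit. The sketch below uses one commonly cited G-L breach-probability family (the class-I function) with illustrative parameters; the paper's system-specific microeconomic derivation is not reproduced here.

```python
def breach_prob(z, v, alpha=1.0, beta=1.0):
    """Gordon-Loeb class-I security breach probability function:
    vulnerability v reduced by security investment z."""
    return v / (alpha * z + 1.0) ** beta

def net_benefit(z, v, loss):
    """Expected net benefit of the investment: reduction in expected
    breach loss minus the cost of the investment itself."""
    return (v - breach_prob(z, v)) * loss - z

def optimal_investment(v, loss, step=0.01):
    """Grid search for the investment level maximizing the net benefit."""
    return max((i * step for i in range(int(loss / step) + 1)),
               key=lambda z: net_benefit(z, v, loss))

# With alpha = beta = 1 the optimum is sqrt(v * loss) - 1 analytically,
# and for the G-L function classes it never exceeds (1/e) * v * loss.
z_star = optimal_investment(0.65, 100.0)  # ~ sqrt(65) - 1 ≈ 7.06
```

A system-specific macroeconomic utility of the kind the paper derives would replace `breach_prob` in this scheme.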

PRESENTER: Valery Lesnykh

ABSTRACT. The paper deals with the problem of assessing the effectiveness of inspection control activities. An analysis of various criteria for evaluating the effectiveness of control activities at hazardous industrial facilities is carried out. The analysis showed that the value of prevented damage can be used as one of the indicators of effectiveness. One approach to assessing the prevented damage builds on a pattern identified in the Heinrich-Bird pyramid. Examples of using this approach in solving various practical safety and reliability problems are given. A methodological approach to the assessment of prevented damage as one of the indicators of the effectiveness of inspection control activities, using the methodology of building the Heinrich-Bird pyramid, is proposed. In addition to the traditional 4-level classification of events in the field of industrial safety, it is proposed to introduce a 5th level related to the identification of inconsistencies as a result of inspection and control activities. Identified inconsistencies are prerequisites for events of the 4th classification level. Based on the analysis of statistical data on events at hazardous industrial facilities in the gas industry, a theoretical relationship between events of classification levels 1-5 (1-3-30-300-3000) is proposed. It is assumed that the elimination of identified inconsistencies (level 5) can "potentially" lead to the prevention of events at levels 1-4. A formula is proposed for calculating the prevented damage (direct and indirect), taking into account the ratio between events of different levels and the level of elimination of identified inconsistencies. Estimated calculations of the total prevented damage at hazardous industrial facilities in the gas industry were performed. The calculations showed the adequacy and practical significance of the proposed approach based on the Heinrich-Bird pyramid.
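The prevented-damage idea can be sketched by scaling the 1-3-30-300-3000 ratios down to the number of eliminated level-5 inconsistencies. The unit damage values below are hypothetical, and the linear proportionality is an assumption of this sketch rather than the authors' exact formula.

```python
# Theoretical ratios between events of classification levels 1-5
# proposed in the abstract: 1 : 3 : 30 : 300 : 3000.
PYRAMID_RATIOS = {1: 1, 2: 3, 3: 30, 4: 300, 5: 3000}

def prevented_damage(n_eliminated, damage_per_level, elimination_level=5):
    """Estimate the damage prevented by eliminating n_eliminated
    inconsistencies at the bottom level of the pyramid: scale the
    expected number of events at each higher level down proportionally
    and sum their unit damages (direct plus indirect)."""
    base = PYRAMID_RATIOS[elimination_level]
    total = 0.0
    for level, ratio in PYRAMID_RATIOS.items():
        if level < elimination_level:
            expected_events = n_eliminated * ratio / base
            total += expected_events * damage_per_level[level]
    return total

# Eliminating 3000 level-5 inconsistencies "potentially" prevents
# 1 level-1, 3 level-2, 30 level-3 and 300 level-4 events.
damage = {1: 1e6, 2: 1e5, 3: 1e4, 4: 1e3}  # hypothetical unit damages
total_prevented = prevented_damage(3000, damage)
```

With real gas-industry unit damages, summing over the four upper levels in this way yields the total prevented damage the abstract refers to.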


ABSTRACT. Economics drives two major evolutionary trends in networked system design/operation. The first trend is movement towards the boundary of the system capacity/operational region, where all system resources are fully utilized, by matching expected demand with available resources through demand pricing and resource provisioning. The second trend is an increase in system interconnectivity, allowing for enlargement of the capacity/operational region through dynamic resource sharing. This trend is driven by the incentive to mitigate both the unavoidable variability of the exogenous demand and the limited reliability of system components by dynamic load balancing. However, numerous recent systemic failures in various performance-oriented networked systems, e.g., power grids, cloud, and financial systems, have demonstrated that these economic benefits often heighten the risk of cascading failures/overloads. These empirical observations pose the problem of systemic risk management while maintaining economic viability. We model a networked system by a Markov process with locally interacting components, where interactions are due to the transfer of individual component failure/overload risk to neighboring components. We analyze this Markov process under a mean-field approximation and the natural assumption that risk transfer results in aggregate risk amplification. We show that risk propagation is described by a positive operator. We argue that a system operating point should be close to the triple point on the system phase diagram, where the following three regions converge: two regions where the normal (respectively, failed/overloaded) equilibrium state is globally stable, and a region where these equilibrium states coexist as locally stable, i.e., metastable. We demonstrate that the systemic risk of cascading failures/overloads at this operating point can be naturally quantified in terms of the Perron-Frobenius, i.e., leading, eigenvalue of the corresponding linearized risk propagation operator and the corresponding eigenvector. This allows us to state and discuss the optimization problem of managing the systemic risk of cascading failures/overloads subject to maintaining system economic viability. The solution to this problem yields the Pareto optimal frontier of the feasible systemic risk vs. economic efficiency region. We illustrate our results on an example of spontaneous recovery in networked systems. In the future, in addition to applying the proposed methodology to specific networked systems, we plan to investigate the possibility of dynamic assessment/management of systemic risk with the ultimate goal of dynamic contagion containment.
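The role of the Perron-Frobenius eigenvalue as a cascade-criticality indicator can be illustrated with a simple power iteration on a linearized risk-propagation matrix; the matrix entries below are invented for illustration.

```python
def power_iteration(matrix, n_iter=200):
    """Leading (Perron-Frobenius) eigenvalue and eigenvector of a
    non-negative square matrix via power iteration."""
    n = len(matrix)
    v = [1.0 / n] * n
    lam = 1.0
    for _ in range(n_iter):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                  # max-norm estimate of the eigenvalue
        v = [x / lam for x in w]
    return lam, v

# Hypothetical linearized risk-propagation matrix for a 3-component
# system; off-diagonal entries encode risk transfer between neighbors.
A = [[0.5, 0.2, 0.0],
     [0.2, 0.5, 0.2],
     [0.0, 0.2, 0.5]]
lam, v = power_iteration(A)
# lam < 1: perturbations decay (subcritical); lam > 1: cascades grow.
```

The distance of `lam` from 1 is then a natural knob in the risk-vs-efficiency trade-off the abstract describes.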

Contagion model for multi-layer financial networks considering heterogeneous liquid assets

ABSTRACT. The liquid assets of a financial institution consist of cash, loans, realizable stocks and bonds. In research on risk contagion caused by liquid assets, i.e. counterparty default risk, all liquid assets are regarded as the same kind, and the heterogeneity of liquid assets and its influence are therefore ignored (Gai et al., 2010; Elliott et al., 2014). The systemic risk assessment methods for multi-layer financial networks proposed in recent studies can analyze the heterogeneity of the settlement time of liquid assets (Aldasoro and Alves, 2018; Montagna and Kok, 2016; Poledna et al., 2015); however, they can hardly capture the heterogeneity whereby the prices of some kinds of assets (stocks and bonds) may fall with the market price while others (cash and loans) do not. In this paper, we propose a model to describe risk contagion in a multi-layer financial network consisting of different liquid asset types. Firstly, we construct a multi-layer financial network with M layers and N financial institutions; each layer represents a kind of asset, and every financial institution trades with the others on each layer. Secondly, we describe two kinds of contagion in the model: counterparty default risk and devaluation of liquid assets. Finally, the difference between the contagion results of the proposed model and a model that regards all liquid assets as the same kind is compared through simulation. We find that the heterogeneity of liquid assets increases the extent of contagion. The proposed model can analyze the effect of price fluctuations for heterogeneous assets, will further support the study of contagion mechanisms in heterogeneous financial networks, and offers guidance for making reasonable macroprudential regulation policy.
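As a single-layer baseline for the counterparty-default channel, a threshold cascade in the spirit of Gai et al. (2010) can be sketched in a few lines; extending the state to M layers and adding price-driven devaluation would follow the paper's model. The exposure and equity figures are invented.

```python
def default_cascade(exposures, equity):
    """Single-layer counterparty default cascade: bank i defaults when
    its losses on exposures to already-defaulted counterparties reach
    its equity buffer. exposures[i][j] = exposure of bank i to bank j."""
    n = len(equity)
    defaulted = [equity[i] <= 0 for i in range(n)]
    changed = True
    while changed:                      # iterate until no new defaults
        changed = False
        for i in range(n):
            if defaulted[i]:
                continue
            loss = sum(exposures[i][j] for j in range(n) if defaulted[j])
            if loss >= equity[i]:
                defaulted[i] = True
                changed = True
    return defaulted

# Three banks: bank 0 starts insolvent; bank 1's large exposure to it
# wipes out bank 1's equity, while bank 2 survives. Figures are invented.
exposures = [[0, 0, 0],
             [5, 0, 0],
             [0, 1, 0]]
equity = [0.0, 4.0, 2.0]
cascade = default_cascade(exposures, equity)  # [True, True, False]
```

Running such a cascade per asset layer, with mark-to-market losses added for stocks and bonds, gives the heterogeneous multi-layer comparison described above.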

11:50-12:30 Session Plenary III: Plenary Session
Location: Auditorium
Extending the Service Life of Civil and Marine Structures: Role of Monitoring, Probabilistic Life-cycle Management and Risk-based Decision Making

ABSTRACT. Structural deterioration can pose tremendous risk to the functionality, serviceability, and safety of civil and marine structures, considerably limiting their service life. To extend the life-cycle of existing structures under deterioration, rational life-cycle management should be conducted accounting for various uncertainties arising from loads, resistance, and modeling. Compared to conventional inspection methods that are sometimes disruptive and costly, structural health monitoring provides a novel and cost-efficient approach to reducing uncertainties and ultimately facilitating the decision-making process for realizing structural longevity. In this plenary lecture, recent accomplishments in the integration of monitoring, probabilistic life-cycle management and risk-based decision making for extending the service-life of civil and marine structures are presented.

12:30-14:00Lunch Break
14:00-15:00 Session Panel V: TC12 on risk analysis and safety of large structures and components
Location: Auditorium
TC12 on risk analysis and safety of large structures and components

ABSTRACT. The European Structural Integrity Society (ESIS) is a leading association of scientists and researchers who deal with failures, fracture mechanics, structural integrity and related topics such as reliability and safety. ESIS has a number of Technical Committees specialized in different topics, including TC12 on Risk analysis and safety of large structures and components, coordinated by Aleksandar Sedmak & Snežana Kirin (Belgrade, Serbia), José A.F.O. Correia & Abílio M.P. De Jesus (Porto, Portugal) and Vladimir Moskvichev & Elena Fedorova (Krasnoyarsk, Russia). Ongoing collaborative research covers Fracture Mechanics Applied to the Risk Analysis and Safety of Technical Systems, Degradation Theory of Long Term Operated Materials, Fatigue Evaluation in Offshore and Onshore Structures, Fatigue Analysis in Bridge Structures, Modeling of Offshore Structures, the International Symposia on Risk Analysis and Safety of Complex Structures and Components, and the Workshop on Risk-based Fracture Mechanics Analysis.


Aleksandar Sedmak, Faculty of Mechanical Engineering, University of Belgrade, Serbia

Snežana Kirin, Innovation Center of the Faculty of Mechanical Engineering, Belgrade, Serbia

Nenad Milošević, Innovation Center of the Faculty of Mechanical Engineering, Belgrade, Serbia

14:00-15:00 Session Panel VI: Reliability & Safety, State of the Art and evolution, incl. Urban Air Mobility and newest Aerospace disruptive challenges
Location: Amphi Jardin
Reliability & Safety, State of the Art and evolution, incl. Urban Air Mobility and newest Aerospace disruptive challenges
PRESENTER: Clement Audard
  • Part 1 – General introduction to Aerospace

ABSTRACT. Reliability and Safety are two key words when it comes to flying. Airplanes are intrinsically safe, much more so than any other transportation mode, and are indeed the safest means of transportation. How did the industry reach that level of engineering expertise, making the dream of Icarus an almost daily routine?

Panel head : Yves Morier


  • Part 2 - Urban Air Mobility and newest Aerospace disruptive challenges

ABSTRACT. While Urban Air Mobility projects are booming, the aim of this session is to recall the fundamentals when it comes to carrying passengers: safety first ... How can the new players, who do not necessarily have aerospace expertise and thus the benefit of a century of accumulated experience, address these topics, including the newest technologies such as Li-ion batteries, hydrogen, autonomous (non-piloted) flights, etc.?

Panel head : Clement Audard


Panelists :

14:00-15:00 Session Panel VII: Model-Based Safety Assessment approach: Increase trust in models
Model-Based Safety Assessment approach: Increase trust in models

ABSTRACT. Description

Model-Based Safety Assessment (MBSA) has existed for many years now and the advantages of this approach are well known. However, modelers are sometimes confronted with the same old questions: Is the model really valid? Are the results correct? Those questions often originate from a lack of knowledge of the MBSA approach but reveal an important issue: how to prove the validity of models and results?

The following topics will be addressed:

1. Modeling process

2. Training & Communication

3. Model representativeness

4. Model-Checking

5. Tools & Software

6. Documentation & Capitalization


The aim of this panel session is to give modelers leads for proving the validity of their models. An example of a modeling process will be proposed. Based on this process, a few solutions will be presented:

- Plan MBSA to define inputs needed, objectives and development specification

- Model-Based System Engineering (MBSE) / MBSA coupling initiatives (S2C, S2ML...) and other methods to ensure the representativeness of the model with respect to the corresponding system

- Model-checking for the verification of the consistency of the model

- Validation activities (step-by-step simulation...)

- Provide documentation with model for explanation

- MBSA supporting activities (training & communication) and resources (database, existing patterns...)

- Possibilities offered by MBSA tools & languages to assist modelers


MILCENT Frédéric, Naval Group

Panel list

BATTEUX Michel, IRT SystemX



14:00-15:00 Session Panel VIII: COVID-19 pandemic: Risk analytics
Location: Panoramique
COVID-19 pandemic: Risk analytics

ABSTRACT. Description

Since December 2019, the world has been confronted with the COVID-19 pandemic, caused by the coronavirus SARS-CoV-2. The COVID-19 pandemic, with its incredible speed of spread, shows the vulnerability of a globalized and networked world. The first months of the pandemic were characterized by a heavy burden on health systems and severe restrictions on public life in many countries, such as educational system shutdowns, public traffic system breakdowns or comprehensive lockdowns. The focus of the panel is the discussion of risk and safety analytics regarding the analysis of several control strategies, or combinations of them, such as restrictions, medical care actions and medical prevention activities.


The COVID-19 pandemic continues to this day. The impact of the pandemic continues to influence life in various countries around the world. Methods from reliability and safety engineering can help with the estimation of risks and can provide a foundation for finding proper actions to control the pandemic.


Bracke, Stefan, University of Wuppertal, Chair of Reliability Engineering and Risk Analytics, Gausstrasse 20, 42119 Wuppertal, Germany

Structure: Two impulse speeches/presentations, followed by a discussion of current trends and risk and safety methodologies

Panel list

Bracke, Stefan, University of Wuppertal

Van Gulijk, Coen, University of Huddersfield


15:00-15:20Coffee Break
15:20-16:20 Session TU3A: System Reliability
Location: Auditorium
Operation strategy optimization for two-unit warm standby systems considering periodic active switching
PRESENTER: Senyang Bai

ABSTRACT. Concerning the operation of standby systems in most ground systems, the standby units usually switch to the operating state after the active units fail. However, a periodic active switching strategy that differs from this common strategy is employed for the gyroscope standby system in satellite engineering. To address this problem, this paper proposes an operation strategy optimization model for two-unit warm standby systems considering periodic active switching. The model comprehensively considers periodic switching and perfect switching and, based on virtual age theory, derives the reliability function and mean time to failure (MTTF) of the two-unit warm standby subsystem, which can be applied to units or systems with arbitrary time-to-failure distributions. Then, with MTTF maximization as the optimization goal, the optimal periodic switching interval is determined. Finally, a case study of a gyroscope warm standby subsystem is provided to illustrate the applicability of the model, and a sensitivity analysis is carried out to identify useful conclusions.
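A Monte Carlo sketch can illustrate how the switching interval tau affects mean lifetime under such a strategy. The Weibull parameters, standby aging rate and time step below are illustrative assumptions, and the paper's analytical reliability/MTTF derivation is not reproduced here.

```python
import random

def simulate_lifetime(tau, shape=2.0, scale=100.0, standby_rate=0.3,
                      dt=0.1, rng=random):
    """One realization of a two-unit warm standby system with periodic
    active switching (roles swap every tau while both units are alive)
    and perfect switching on failure. Each unit fails when its virtual
    age reaches a Weibull-distributed failure age; the standby unit
    accumulates virtual age more slowly. Parameters are illustrative."""
    fail_age = [rng.weibullvariate(scale, shape) for _ in range(2)]
    age, alive, active, t = [0.0, 0.0], [True, True], 0, 0.0
    while any(alive):
        if alive[0] and alive[1]:
            active = int(t / tau) % 2     # periodic role switch
        else:
            active = alive.index(True)    # only the survivor can operate
        for i in (0, 1):
            if alive[i]:
                rate = 1.0 if i == active else standby_rate
                age[i] += rate * dt
                if age[i] >= fail_age[i]:
                    alive[i] = False
        t += dt
    return t

def mttf(tau, n_runs=200, seed=1):
    """Monte Carlo estimate of the mean time to system failure."""
    rng = random.Random(seed)
    return sum(simulate_lifetime(tau, rng=rng) for _ in range(n_runs)) / n_runs
```

A grid search over candidate intervals, e.g. `max(taus, key=mttf)`, then plays the role of the analytical optimization of the switching interval.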

NATO Dependability Standard: Recent publications and future works
PRESENTER: Hervé du Baret

ABSTRACT. This paper presents the latest ADMPs (Allied Dependability Management Publications) from NATO group AC/327 (NATO Life Cycle Management Group). The latest publications are STANREC 4174, ADMP-01, ADMP-02 and ADMP-03. STANREC 4174 is the umbrella document that lists recommended practices regarding dependability for military programs conducted by NATO countries. It mostly refers to IEC standards. The ADMP group has elaborated some documents in order to address specific military needs. The last symposium paper on NATO dependability standards dates back to 1993. Nonetheless, those standards have evolved much since that time. The present paper describes how these standards have evolved since the 1990s and presents the most important aspects of STANREC 4174, ADMP-01, 02, and 03. Three documents (STANREC 4174, ADMP-01, and ADMP-02) were issued in 2014 and will thus be reviewed and updated by the ADMP group in the coming years. Military systems are increasingly complex; however, civilian standardization organizations (IEC, SAE, ECSS...) produce high-quality dependability standards. Therefore, an in-depth analysis is continuously conducted to identify military aspects of dependability not covered by civilian standards. This paper describes NATO publications that address this gap for military systems. Possible evolutions include: improving STANREC 4174 with more standards and comments, developing the progressive assurance process and the dependability case, and guidance for conducting dependability activities in a contracting authority. STANREC and ADMP documents are available free of charge. All dependability experts from NATO countries (especially from procurement agencies) involved in military programs are welcome in the ADMP group.

A seamless Functional Hazard Analysis for a fuel cell system supported by Spreadsheets
PRESENTER: Axel Berres

ABSTRACT. The development of safety-critical systems requires hazard identification during early development phases. For this purpose, Preliminary Hazard Lists or a Preliminary Hazard Analysis can be used. Innovative systems can be found by considering those hazards together with smart systems engineering [IN15]. During development, the problem may arise that the information exchange is based on documents only [LP15]. If document-based development is unavoidable, an approach is required to enable a seamless data exchange among development tools [BM10]. We describe a seamless approach used in FLHYSAFE during the Preliminary System Safety Assessment. Due to the widespread use of Excel, a spreadsheet template was used to perform the Functional Hazard Analysis (FHA). We show how the FHA was integrated into the model-based development used in the project. In addition, results of the seamless lean FHA as well as findings from system development are described.

15:20-16:20 Session TU3B: Risk Analysis and Safety in Standardization
Location: Atrium 2
Manually clamping workpieces - Identification of safety-relevant parameters

ABSTRACT. In most cases, inadequate workpiece clamping is the cause of released workpieces in machine tools (Kesselkaul Meyer 2016). The standstill clamping force is decisive for the safe machining of workpieces that are clamped with three-jaw chucks. This static clamping force (before machining) is composed of the sum of a) the minimum necessary clamping force (to prevent release due to process loads) and b) the necessary clamping force surcharge to compensate for the loss of clamping force that occurs due to centrifugal forces during a turning process. Time-dependent parameters (due to wear, dirt, etc.) such as the efficiency of the three-jaw chuck are particularly important for safe workpiece clamping (VDI 3106 2004). In addition, design-related parameters as well as the operating mode can influence or limit the output clamping force. In this paper, static clamping experiments are carried out with critical parameters that are nevertheless permissible according to the specified range of application. In the clamping experiments, clamping forces were first measured as a function of different clamping diameters. This was followed by stiffness measurements to explain the subsequently measured limit tilting moments. It is shown that the standstill clamping force varies depending on the operating condition, the operating mode and the clamping diameter, which is relevant for clamping and user safety. In order to identify and evaluate the influencing parameters, statistical evaluation is crucial in addition to measured-value acquisition. Based on the statistical evaluation, significant effects could be determined. The strength and direction of the effects are shown in diagrams. The results enable users to predict the maximum achievable standstill clamping force before the clamping process occurs. The determined clamping force losses can be used for the design or calculation of the required standstill clamping force, and higher clamping and user safety can be achieved. Finally, a method is provided for identifying the actual degree of utilization of the three-jaw chuck with regard to safety-relevant parameters.


ABSTRACT. The structural set-up and key characteristics of regulatory regimes for risk management vary between countries, technical domains and application areas. While sharing the focus on risk-informed decision-making as a primary vehicle for managing risk, its implementation through regulations comes in different forms, ranging from a highly prescriptive (“hard”) approach including explicit requirements on key risk management components, to “softer” approaches requiring risk to be managed without defining how. The former represents a high level of standardization by stating how to achieve something, e.g. by explicitly defining component design, choice of method, plausible assumptions, modelling tools, and evaluation criteria. In contrast, performance-oriented “soft” regulations use functional requirements focusing on what to achieve without prescribing how to achieve it (i.e., a low level of standardization). In practice, however, even softer regulations may refer to prescriptive standards through guidelines or recommendations, retaining a high level of standardization.

Risk regulations are, like other aspects of society, subject to continual development. Increased standardization is observed in diverse risk domains such as land-use planning, terrorism and security risk management, cyber security, and disaster risk management. This development is contested, and whether a low or a high level of standardization is most beneficial for adequate risk management is debated.

This paper presents an overview of standardization of risk in scientific literature. The literature is identified through a scoping study and the impact of different approaches to standardization in risk regulations is analyzed and assessed. The aim of this paper is to provide insights related to the arguments, effects, and experiences of using standards or standardized approaches for managing risk.

The preliminary results indicate that effects of standardization in risk regulations are not extensively covered in research. In the peer-reviewed publications that are available, different ways of standardization in risk regulations are described and the selected approach in terms of high or low level of standardization is justified. Tentative findings suggest that intellectual reasoning and logical argumentation are the primary means of justification for the level or form of standardization applied in risk regulations. More robust arguments (e.g., empirical evidence in the form of effect measurements) for the selected risk regulatory approach seem scarce. The paper intends to contribute to further development of the pivotal knowledge base for judging the appropriate level of standardization in risk regulations.

Experimental investigation of the kink effect by impact tests on polycarbonate sheets
PRESENTER: Nils Bergström

ABSTRACT. In machine tools, machine guard windows provide an insight into the working process of the machine and protect the user against possible ejection of parts during operation, such as chips, tools and workpiece fragments [1, 2]. To ensure the safety of the machine operator, impact tests can be used to determine or verify the impact resistance of the machine guard. Polycarbonate is the most commonly used material for machine guard windows due to its high toughness compared to other transparent materials. In general, an increase in sheet thickness results in improved impact resistance. However, the studies of CORRAN ET AL. (1983) [3] show that an increase in sample thickness can lead to a reduction in impact resistance. The authors called this phenomenon the Kink Effect. This contribution focuses on the investigation of the Kink Effect for monolithic polycarbonate sheets up to a thickness of 18 mm and a standard lathe projectile with a mass of 2.5 kg. Experiments were carried out to compare the material behavior of polycarbonate sheets under projectile impact for the dimensions of 300 mm (height) x 300 mm (width) and 500 mm (height) x 500 mm (width). The experiments were further evaluated using the method of RECHT & IPSON (1963) [4]. Furthermore, explicit dynamic impact simulations were performed to enable the investigation of “close to edge” impacts.

1. DIN EN ISO 23125, Machine tools safety - turning machines (2015).
2. DGUV Deutsche Gesetzliche Unfallversicherung, Schutzscheiben an Werkzeugmaschinen der Metallverarbeitung, DGUV-Information FB HM-040 (2018).
3. R.S.J. Corran, P.J. Shadbolt, C. Ruiz, Impact loading of plates - an experimental investigation, International Journal of Impact Engineering 1 (1983).
4. R.F. Recht, T.W. Ipson, Ballistic perforation dynamics, Journal of Applied Mechanics, 384-390 (1963).
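The Recht & Ipson (1963) evaluation mentioned in the abstract models the residual velocity after perforation as v_r = a · (v_i^p - v_bl^p)^(1/p), commonly with p = 2, where a is a mass-ratio factor and v_bl the ballistic limit. A minimal sketch of fitting these two parameters to impact/residual velocity pairs follows; the data values are hypothetical, not from the paper.

```python
# Hedged sketch of a Recht & Ipson fit: parameters a and v_bl are found by a
# coarse grid search (least squares). Velocity data are hypothetical.

def residual_velocity(v_i, a, v_bl, p=2.0):
    return a * max(v_i**p - v_bl**p, 0.0) ** (1.0 / p)

# Hypothetical (impact, residual) velocity pairs in m/s from perforating shots.
data = [(60.0, 22.0), (70.0, 38.5), (80.0, 50.0), (90.0, 60.0)]

# Coarse global grid scan; this avoids depending on a good starting point.
best = None
for a100 in range(50, 101):           # a in [0.50, 1.00]
    for vbl10 in range(400, 600):     # v_bl in [40.0, 60.0] m/s
        a, v_bl = a100 / 100.0, vbl10 / 10.0
        sse = sum((vr - residual_velocity(vi, a, v_bl)) ** 2 for vi, vr in data)
        if best is None or sse < best[0]:
            best = (sse, a, v_bl)

sse, a_hat, vbl_hat = best
print(f"fitted a={a_hat:.2f}, ballistic limit v_bl={vbl_hat:.1f} m/s")
```

The fitted ballistic limit is the quantity of interest when comparing sheet thicknesses for the Kink Effect.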

15:20-16:20 Session TU3C: Degradation analysis and modelling for predictive maintenance
Condition-based maintenance for systems with degradation processes and random shock under warranty

ABSTRACT. A condition-based maintenance strategy is developed for systems subject to two dependent causes of failure: degradation and random shocks. A degradation threshold shock model for such systems has been developed under warranty, considering both repair service and replacement service. We investigate the relationships between random shocks and the degradation process, which are modeled by a time-scaled covariate factor, as well as the relationships between various degradation processes. Fatal shocks, which immediately require replacement service for the system, and nonfatal shocks, which require repair service for failed parts, are both considered. A nonfatal shock has two direct effects on the degradation level: a gradual acceleration of degradation and a sudden jump. There are separate degradation limits for repair service and for replacement service. The degradation level may jump to the degradation limit for replacement service and/or may increase to the degradation limit for repair service. In this study, we consider not only the degradation process but also the random shock model, and decision variables are determined for the degradation threshold shock model. The total expected cost is minimized to determine an optimal maintenance cycle and an optimal length of the warranty period for the warranty cost analysis. Additionally, warranty service times for repair service and replacement service are considered, with a warranty service time limit, to increase customer satisfaction. Assuming that the system deteriorates with age, we illustrate the proposed approach using numerical applications and investigate the influence of the relevant parameters on the optimal maintenance policy.
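The cost-minimization idea behind such a degradation threshold shock model can be sketched with a toy Monte Carlo simulation. This is a deliberate simplification with hypothetical parameters, not the paper's model: degradation grows by gamma-distributed increments, random shocks add sudden jumps, and cost rates are compared for different preventive thresholds.

```python
# Toy Monte Carlo sketch of threshold optimization under degradation + shocks.
# All thresholds, costs, and distribution parameters are hypothetical.

import random

random.seed(1)

L_FAIL = 10.0               # failure threshold (corrective replacement)
C_PREV, C_CORR = 1.0, 5.0   # preventive vs corrective replacement cost

def one_cycle(l_prev, shock_rate=0.1, jump=1.5):
    """Simulate one renewal cycle; return (cost, cycle_length)."""
    x, t = 0.0, 0
    while True:
        t += 1
        x += random.gammavariate(0.5, 1.0)   # gradual degradation increment
        if random.random() < shock_rate:     # nonfatal shock arrives
            x += jump                        # sudden degradation jump
        if x >= L_FAIL:
            return C_CORR, t                 # failure: corrective replacement
        if x >= l_prev:
            return C_PREV, t                 # preventive replacement

def cost_rate(l_prev, n=4000):
    cycles = [one_cycle(l_prev) for _ in range(n)]
    return sum(c for c, _ in cycles) / sum(t for _, t in cycles)

for l_prev in (6.0, 7.0, 8.0, 9.0):
    print(f"preventive threshold {l_prev}: cost rate {cost_rate(l_prev):.3f}")
```

Scanning the preventive threshold and picking the minimum cost rate mirrors, in miniature, the optimization of the maintenance cycle described in the abstract.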

Modeling multivariate degradation processes with time-variant covariates and imperfect maintenance effects
PRESENTER: Xiaolin Wang

ABSTRACT. This article proposes two types of degradation models that are suitable for describing multivariate degrading systems subject to time-variant covariates and imperfect maintenance activities. A multivariate Wiener process is constructed as a baseline model, on top of which two types of models are developed to meaningfully characterize the time-variant covariates and imperfect maintenance effects. The underlying difference between the two models lies in the way of capturing the influences of covariates and maintenance: The first model reflects these impacts in the degradation rates/paths directly, whereas the second one describes the impacts by modifying the time scales governing the degradation processes. In each model, two particular imperfect maintenance models are presented, which differ in the extent of reduction in degradation level or virtual age. The two degradation models are then compared in certain special cases. The proposed multivariate degradation models pertain to complex industrial systems whose health deterioration can be characterized by multiple performance characteristics and can be altered or affected by maintenance activities and operating/environmental conditions.
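The contrast between the two model types can be sketched in a few lines. This is a minimal illustration under assumed functional forms, not the authors' exact models: a covariate either scales the drift of a bivariate Wiener degradation process (model 1) or warps its time scale (model 2), and imperfect maintenance removes a fraction rho of the accumulated degradation.

```python
# Sketch of the two covariate-handling ideas for a bivariate Wiener process.
# Drifts, diffusions, covariate path, and rho are all hypothetical.

import random
random.seed(7)

def simulate(model, steps=200, dt=0.1, rho=0.4, maint_every=50):
    mu, sigma = [0.8, 0.5], [0.3, 0.2]     # baseline drifts / diffusions
    x = [0.0, 0.0]                          # two performance characteristics
    for k in range(1, steps + 1):
        z = 1.0 + 0.5 * (k * dt > 10)       # time-variant covariate (step change)
        if model == "rate":                 # model 1: covariate scales the drift
            eff_dt, drift = dt, [m * z for m in mu]
        else:                               # model 2: covariate warps the time scale
            eff_dt, drift = dt * z, mu
        for i in range(2):
            x[i] += drift[i] * eff_dt + sigma[i] * random.gauss(0, eff_dt**0.5)
        if k % maint_every == 0:            # imperfect maintenance action
            x = [xi * (1 - rho) for xi in x]
    return x

print("rate model :", [round(v, 2) for v in simulate("rate")])
print("time model :", [round(v, 2) for v in simulate("time")])
```

The degradation-level reduction used here is one of the two imperfect-maintenance variants the abstract mentions; the virtual-age variant would instead shift the process clock backwards.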

Degradation modelling for predictive maintenance under various operating and environmental conditions
PRESENTER: André Cabarbaye

ABSTRACT. This paper extends recently presented work [1] on estimating the reliability of components subject to wear from the results of degradation tests carried out under various stress conditions. Based on non-stationary and accelerated Lévy processes (Wiener and gamma processes), adjusted by hybrid (global/local) optimisation tools capable of overcoming local optima, the reliability is then estimated under various operating and environmental conditions, considering that the level of degradation must remain below an acceptable threshold, starting from the level observed in the current state.

This work is currently being extended with other Lévy processes that are better able to represent the variability of degradation phenomena, such as the variance gamma process, first introduced in the financial industry to model non-monotonic, discontinuous stochastic evolutions. However, despite being more flexible, fitting such three- or four-parameter models is not easy, because the likelihood function presents many local optima due to the Bessel function appearing in its expression. Hybrid optimisation thus appears more suitable than the local techniques currently in use.
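The variance gamma construction referred to above can be sketched directly: a Brownian motion with drift is evaluated at a gamma-distributed random time, yielding non-monotonic, jumpy increments. The parameter values below are illustrative only, and the (difficult) Bessel-function likelihood fit is not attempted here.

```python
# Sketch of variance gamma increments via gamma subordination of Brownian
# motion. Parameters theta (drift), sigma (volatility), nu (subordinator
# variance) are illustrative, not fitted values.

import random
random.seed(3)

theta, sigma, nu = 0.5, 0.4, 0.6

def vg_increment(dt):
    """One variance gamma increment over dt: Brownian motion at a gamma time."""
    g = random.gammavariate(dt / nu, nu)          # gamma time change, mean dt
    return theta * g + sigma * (g ** 0.5) * random.gauss(0, 1)

increments = [vg_increment(1.0) for _ in range(20000)]
mean = sum(increments) / len(increments)
print(f"sample mean per unit time: {mean:.3f} (theory: theta = {theta})")
```

A hybrid fit would combine a global scan over (theta, sigma, nu) with a local refinement, as the abstract advocates.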

Likewise, the effect of maintenance actions, considered as a reduction in degradation levels through a rejuvenation factor or by other means, can be added to the modelling in order to assess their effectiveness and optimise the equipment replacement period.

Several application cases will be presented in the communication.

15:20-16:20 Session TU3D: Prognostics and Health Management: From Condition Monitoring to Predictive Maintenance
Location: Panoramique

ABSTRACT. The new generation of instruments in the field of medical robotics aims to use devices that are less and less invasive for the patient. However, some of these microrobots are still under development and must undergo several tests in order to obtain the certifications required for use on patients. Indeed, one of the main tests to be validated is the accurate determination of their reliability and remaining useful life (RUL), in order to ensure optimal performance during the surgical procedure. This work proposes an approach for degradation modeling of a microrobot dedicated to intracorporeal surgeries. For this purpose, simulated degradation data are collected from a four-bar compliant mechanism that reproduces the behavior of a flexure hinge. Since the flexure hinge is a critical element of the microrobot, and since measurements of the evolution of its performance, and therefore of its degradation, are available, we propose a data-driven degradation model based on the normal life distribution to assess the reliability and the RUL. On the one hand, once the failure-time distribution is known, the reliability at any time can be estimated; on the other hand, the RUL indicates the moment at which the system starts to fail and no longer works properly. Accordingly, preventive actions could be deployed in advance to keep the flexure hinge from reaching the failure threshold. In summary, a data-driven model within a Prognostics and Health Management study for lifetime estimation is presented for the first time for a microrobot dedicated to intracorporeal surgeries. The proposed approach is based on the microrobot's simulated degradation data and basic probabilistic and statistical tools for selecting a degradation model.
This work opens a window for future studies on real degradation analysis when the tests are carried out with the proposed sample, as well as for more advanced approaches to estimating the remaining useful life of microrobots dedicated to intracorporeal surgeries and similar devices.
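With a normal failure-time distribution, the reliability and a mean-residual-life style RUL follow in closed form from the standard normal CDF. The sketch below uses hypothetical parameter values (mean and standard deviation in cycles), standing in for values that would be fitted to the simulated degradation data.

```python
# Reliability R(t) and mean residual life for T ~ Normal(mu, sigma).
# mu and sigma are hypothetical, not the paper's fitted values.

import math

mu, sigma = 1000.0, 150.0   # hypothetical failure-time parameters (cycles)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reliability(t):
    """R(t) = P(T > t)."""
    return 1.0 - Phi((t - mu) / sigma)

def mean_residual_life(t):
    """E[T - t | T > t], via the truncated-normal mean identity."""
    z = (t - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return sigma * phi / (1.0 - Phi(z)) - sigma * z   # = E[T | T > t] - t

for t in (800.0, 1000.0, 1100.0):
    print(f"t={t:.0f}: R={reliability(t):.3f}, mean RUL={mean_residual_life(t):.0f}")
```

Crossing a reliability target (e.g. R(t) falling below 0.9) then triggers the preventive action described in the abstract.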

State of Health Estimation for Lithium-ion Battery by Incremental Capacity Based ARIMA - SVR Model
PRESENTER: Akash Basia

ABSTRACT. With the increasing use of lithium-ion (Li-ion) batteries in electric vehicle (EV) applications, it is imperative to use them sustainably. The circular economy suggests reusing or repurposing end-of-life (EoL) EV batteries in less demanding applications. The State of Health (SoH) is an essential indicator for decisions on reusing or repurposing EoL Li-ion EV batteries. Conventional SoH estimation often requires capacity measurement from the battery's full charge to the cut-off state, which is quite challenging. Incremental capacity analysis can improve estimation efficiency by extracting features from which to estimate SoH. In this paper, we propose an Incremental Capacity (IC) based SoH estimation system for Li-ion batteries. The model employs a Kalman filter and a finite-differencing method for measurement-noise attenuation. A novel method that combines Support Vector Regression (SVR) and the Autoregressive Integrated Moving Average (ARIMA) model is used to model the relationship between IC and SoH. Once the online capacity is evaluated, more accurate SoH prognostics and Remaining Useful Life estimates can be made. A model database is created using SVR to add robustness to the SoH estimation algorithm. The basic block diagram of the model is shown in Fig. 1. A use case is created on the NASA AMES open-source battery data (Saha and Goebel, 2007). The case study shows that, compared with previously published methods, the proposed model obtains more accurate SoH prediction results.
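The incremental-capacity step can be illustrated with a few lines of code: from a (voltage, capacity) charge curve, IC = dQ/dV is obtained by finite differencing after smoothing, and the IC peak serves as an SoH feature. This is a hedged sketch only: the charge curve below is a synthetic sigmoid, a moving average stands in for the paper's Kalman filter, and the ARIMA-SVR stage is not reproduced.

```python
# Sketch of IC-curve extraction: smooth Q(V), then central-difference dQ/dV.
# The voltage/capacity data are synthetic, not battery measurements.

import math

# Synthetic charge curve: capacity Q [Ah] vs voltage V, with a plateau near
# 3.7 V that produces an IC peak.
V = [3.0 + 0.002 * i for i in range(501)]            # 3.0 .. 4.0 V
Q = [1.0 / (1.0 + math.exp(-(v - 3.7) / 0.05)) for v in V]

def moving_average(x, w=9):
    """Simple smoother standing in for the noise-attenuation filter."""
    half = w // 2
    return [sum(x[max(0, i - half): i + half + 1]) /
            len(x[max(0, i - half): i + half + 1]) for i in range(len(x))]

Qs = moving_average(Q)                                # noise attenuation step
ic = [(Qs[i + 1] - Qs[i - 1]) / (V[i + 1] - V[i - 1]) # central difference dQ/dV
      for i in range(1, len(V) - 1)]

peak = max(ic)
v_peak = V[1 + ic.index(peak)]
print(f"IC peak {peak:.2f} Ah/V at {v_peak:.3f} V")
```

In the proposed system, features such as this peak height and position would feed the ARIMA-SVR model that maps IC behaviour to SoH.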

Defining Degradation States for Diagnosis Classification Models in Real Systems based on Monitoring Data

ABSTRACT. As the complexity of modern engineering systems increases, data-driven approaches have become valuable tools to aid maintenance decision-making. However, raw data collected from monitoring sensors require a comprehensive and systematic preprocessing to separate healthy from faulty states before their use in data-driven models. Frequently, anomaly detection models implemented for this purpose are based on statistical relationships or rule-based thresholds, rather than on information provided from maintenance logs related to the internal operation of the system. In this work, we propose a framework to establish a link between the recorded sensor data behavior and the system's degradation processes. In particular, this framework aims to obtain a labeled degradation dataset from raw monitoring data to train a diagnosis classifier for a system with multiple failure modes. A dataset obtained from two years of sensor monitoring and reported failure logs of a copper mining process line is used to exemplify the framework. Different machine learning classifiers are presented for each failure mode, individually and combined. Results show that the degradation labeling procedure is effective, and classifiers obtain up to 95% accuracy for the detection task for a two-class problem. Cross-comparison of the classifiers per failure mode allows the identification of problematic classes, showing the benefits of addressing each failure mode individually rather than for the entire system simultaneously.
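The key preprocessing step of such a framework, linking recorded sensor data to logged failures, can be sketched minimally: samples within a fixed horizon before each logged failure are labeled as degraded for that failure mode, and everything else as healthy. The timestamps, modes, and horizon below are hypothetical, and the downstream classifiers are not reproduced.

```python
# Sketch of degradation labeling from failure logs (our own minimal version).
# Sample times, failure logs, and the 24 h horizon are hypothetical.

def label_samples(times, failure_logs, horizon=24.0):
    """times: sample timestamps (hours); failure_logs: {mode: [failure_times]}.
    Returns one label per sample: 'healthy' or 'degraded:<mode>'."""
    labels = []
    for t in times:
        label = "healthy"
        for mode, fails in failure_logs.items():
            if any(0.0 <= f - t <= horizon for f in fails):
                label = f"degraded:{mode}"
        labels.append(label)
    return labels

times = [float(t) for t in range(0, 200, 10)]        # hourly-ish samples
logs = {"bearing": [95.0], "seal": [180.0]}          # reported failure times
labels = label_samples(times, logs)
print(labels)
```

The labeled dataset produced this way is what would then be split per failure mode to train one classifier each, as the abstract recommends.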

15:20-16:20 Session TU3E: Risk management for the design, construction and operation of tunnels
Location: Amphi Jardin
Barriers and Drivers for Safety Related Innovation Within the Norwegian Tunneling Industry
PRESENTER: Henrik Bjelland

ABSTRACT. Working on innovative processes in the tunneling industry is interesting but demanding. This paper presents the outline of a regional program for developing new solutions and becoming a national and international knowledge centre on tunnel safety. The Capacity Boost Tunnel Safety (KATS) project aims to increase competitiveness and value creation in the Norwegian regional tunnel safety industry. The project is a cooperation between private and public businesses, authorities and research communities, building on a strategy of developing and using research-based knowledge to improve tunnel safety, both nationally and internationally. However, safety, and especially fire safety related to major incidents, is challenging to communicate. Norway has not experienced major tunnel accidents; in fact, no one has been killed by heat loads or smoke intoxication in Norwegian road and railway tunnels. The growth in tunneling has led to a greater need for expertise, particularly within tunnel safety. Rogaland is developing some of the most complex tunnel systems in the world. New solutions must therefore be found, and new value chains understood. This paper analyses the background state of knowledge of KATS and its first three years of activities to assess barriers and drivers for developing innovative solutions in the tunnel safety business. The core safety considerations are part of the analysis, in which we introduce prerequisites for active involvement by the important stakeholders in the tunnel industry.

Capacity Boost Tunnel Safety – Using the SSM Approach to Increase Impact
PRESENTER: Tone Iversen

ABSTRACT. Capacity Boost Tunnel Safety (KATS) is a capacity-enhancing project that aims to increase competitiveness and value creation in the Norwegian tunnel safety industry. KATS builds on an acknowledgement that improving tunnel safety is a task characterized by considerable complexity. A major goal of the project is to make lasting improvements, and we strongly believe that developing new joint activities is a means of achieving this. Developing R&D applications for project funding with relevant partners is one example of such joint activities. Although quite specific, this task has the potential to be quite complex. We argue that it can be considered a messy situation, due to the many involved actors with possibly conflicting goals and the difficulty of defining both the problem and the solution. This led us to Soft Systems Methodology (SSM), which was developed as a learning tool to make better sense of complex, messy situations where the search for a clear-cut problem and a single optimal solution is futile. In this paper, we report on our experience of exploring SSM as a tool to gain a better understanding of what KATS should prioritize in the coming three years. Our exploration uses as a case the problematic situation of developing research and development concepts that improve the industry's position for future projects. We conclude that creating rich pictures contributes to an increased understanding of our knowledge gaps. Furthermore, SSM was useful in this context and is worth developing further, with KATS (as a whole) defined as the problematical situation that needs improvement.

An Overview of Asset Management Best Practices, Challenges and Risk in the Norwegian Oil & Gas and Tunnel Industry

ABSTRACT. Road tunnels are going through a major digital transformation as technology evolves. In terms of application, Asset Management is a relatively new concept in the tunnel industry, whereas the Norwegian Oil & Gas (O&G) industry is considered more mature in adopting it. According to the National Road Database, over 1200 road tunnels exist in Norway (Statens Vegvesen, 2021). Even marginal improvements in Asset Management practices may significantly enhance safety, cost, and time management. This large potential motivates the exploration of best practices in Norway's O&G and tunnel industries, aiming to transfer knowledge and strive for improvement. This paper is designed to (a) review the literature on best practices in the O&G and tunnel industries in Norway and globally, (b) compare local and European laws, standards, and regulations in both sectors, and (c) present the challenges faced in the past by both industries. The conclusion addresses gaps and opportunities, leading to further exploration as part of future research. The review shows that best practices for tunnels are largely documented by the World Road Association and the Conference of European Directors of Roads, with limited input from the Norwegian authorities. Norway's tunnel industry faces numerous challenges, such as nonconformance to minimum safety requirements and maintaining equipment in extreme climate conditions. In contrast, the O&G industry has worked extensively on best practices for quite some time; these are well documented and discussed regularly at O&G conferences. The O&G industry's greatest challenge is fluctuating oil prices, which drive innovation to optimize asset performance at the lowest cost. Environmental pressure is also under serious consideration and remains an ongoing challenge for the O&G industry.
To conclude, both industries are exposed to unique challenges. Although they share similarities, the main difference lies in their operating context. O&G performance is directly related to cost, and practices are therefore forced to develop; the tunnel industry's performance, on the other hand, is measured against the level of public service. Overall, the best practices from both industries hold great potential, and adopting them can greatly enhance Asset Management performance.

15:20-16:20 Session TU3F: Probabilistic vulnerability estimation, lifetime assessment and climate change adaptation of existing and new infrastructure
Quantitative assessment of the impact of climate change on creep of concrete structures

ABSTRACT. Although creep of concrete structures is mainly a serviceability problem, it can lead to severe consequences. For instance, the collapse of the Koror–Babeldaob Bridge in Palau in 1996 can be attributed, at least partly, to excessive creep deformations (Bažant et al., 2011). The progressive collapse of Roissy Charles de Gaulle Airport is another example where excessive creep deformations contributed to more severe consequences affecting structural safety (Daou et al., 2019a; 2019b; El Kamari et al., 2015). Recent studies of climate-related risks and impacts on infrastructure, e.g. Nasr et al. (2019), indicate that climate change can affect the creep of concrete structures in the future. The current study demonstrates how this effect can be quantitatively assessed. For this purpose, the Eurocode creep model (CEN, 2004) (i.e., the fib Model Code 1999 model) is used, considering both the historical and future climatic conditions in Skåne, Sweden. In addition, a common-practice approach is considered, in which the historical relative humidity is used and the effect of temperature is neglected (i.e., the default temperature implicitly assumed in the Eurocode creep model, 20 °C, is used). The assessment accounts for the uncertainties originating from climate modelling as well as creep modelling for three different greenhouse gas (GHG) emissions scenarios: RCP2.6, RCP4.5, and RCP8.5. The results of the assessment show that although climate change affects creep of concrete structures, this effect is overshadowed by the large uncertainties resulting from creep modelling.

Keywords: Climate change, Long-term deformations, Creep, Creep coefficient, Infrastructure safety.


1. Z. P. Bažant, M. H. Hubler, and Q. Yu. ACI Structural Journal, 108(6), 766-774 (2011).
2. CEN. Eurocode 2: Design of concrete structures - Part 1-1: General rules and rules for buildings, EN 1992-1-1. Part 2: Concrete bridges, EN 1992-1-2 (2004).
3. H. Daou, W. Salha, W. Raphael, and A. Chateauneuf. Case Studies in Construction Materials, e00222, 10 (2019).
4. H. Daou, W. Raphael, A. Chateauneuf, and F. Geara. Procedia Structural Integrity, 22 (2019).
5. Y. El Kamari, W. Raphael, and A. Chateauneuf. Case Studies in Engineering Failure Analysis, 3 (2015).
6. A. Nasr et al. Sustainable and Resilient Infrastructure, DOI:10.1080/23789689.2019.1593003 (2019).
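The kind of uncertainty propagation described in the abstract can be sketched with the notional creep coefficient of EN 1992-1-1 Annex B (the form below is valid for fcm ≤ 35 MPa and, as in the common-practice approach, omits the temperature adjustment). The concrete parameters and the relative-humidity distributions standing in for climate-model ensembles are hypothetical.

```python
# Monte Carlo sketch: propagate uncertain relative humidity through the
# EN 1992-1-1 notional creep coefficient phi_0 = phi_RH * beta(fcm) * beta(t0).
# fcm, h0, t0 and the RH distributions are illustrative values only.

import math
import random
random.seed(11)

fcm, h0, t0 = 28.0, 200.0, 28.0     # MPa, mm, days (illustrative)

def phi0(rh):
    """Notional creep coefficient for fcm <= 35 MPa, no temperature term."""
    phi_rh = 1.0 + (1.0 - rh / 100.0) / (0.1 * h0 ** (1.0 / 3.0))
    return phi_rh * (16.8 / math.sqrt(fcm)) * (1.0 / (0.1 + t0 ** 0.20))

# Historical vs projected climate: annual-mean RH as normal distributions
# (means/spreads hypothetical, standing in for climate-model ensembles).
scenarios = {"historical": (80.0, 3.0), "RCP8.5-like": (74.0, 5.0)}
for name, (m, s) in scenarios.items():
    sample = [phi0(min(99.0, max(40.0, random.gauss(m, s)))) for _ in range(20000)]
    mean = sum(sample) / len(sample)
    print(f"{name}: mean phi_0 = {mean:.3f}")
```

Comparing the scenario distributions of phi_0, and then of the creep-model error on top, is the essence of the assessment reported in the abstract.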

Influence of concrete’s mechanical properties on the cracking of concrete dams
PRESENTER: Adrian Ulfberg

ABSTRACT. Analytical methods for assessing the structural stability of concrete dams are often too simple and thus conservative in their predictions. Without the actual foundation geometry, the capacity for some rigid-body failure modes is underestimated. This is problematic when deciding upon remediation activities for a dam that is considered unstable, and may divert restoration activities from where they are most impactful. In a previous study by Sas et al. (2019), where a section of an existing dam was scaled down and tested experimentally, the model indicated that several areas were experiencing large stresses, potentially leading to failure. This raised the research question of whether a different type of failure would occur for different material properties. Therefore, this paper applies a probabilistic numerical approach, through finite element analysis, to evaluate dam stability based on randomization of material properties such as modulus of elasticity, tensile strength, compressive strength, and fracture energy. The variation of these material properties did not affect the failure mode, which was consistent across a broad range of material strengths.
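The property-randomization step can be sketched as follows. The distributions are our own assumptions, not the paper's: compressive strength is sampled lognormally and the dependent properties are then derived through EC2/Model-Code-style relations, so that each FE run receives a physically consistent parameter set.

```python
# Sketch of sampling correlated concrete properties for probabilistic FE runs.
# The lognormal parameters are hypothetical; the derived relations follow
# EC2 / fib Model Code style formulas.

import math
import random
random.seed(5)

def sample_concrete():
    fc = random.lognormvariate(math.log(38.0), 0.10)   # fcm [MPa], hypothetical
    ft = 0.30 * (fc - 8.0) ** (2.0 / 3.0)              # fctm from fck ~ fcm - 8
    E  = 22.0 * (fc / 10.0) ** 0.3                     # Ecm [GPa]
    Gf = 0.073 * fc ** 0.18                            # fracture energy [N/mm]
    return fc, ft, E, Gf

samples = [sample_concrete() for _ in range(1000)]
fc_mean = sum(s[0] for s in samples) / len(samples)
print(f"mean fcm = {fc_mean:.1f} MPa; first sample = "
      + ", ".join(f"{v:.2f}" for v in samples[0]))
```

Each sampled tuple would parameterize one nonlinear FE analysis; the failure mode observed across the sample set is then compared, as in the paper.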

The indirect impact of flooding on the road transport network: A case study of Santarém region in Portugal

ABSTRACT. The indirect impacts of flooding on transportation networks include, among others, consequences of the service disruption for the users. Indirect impacts are of a wider scale and with a longer incidence in time than direct impacts. The key aspect for the quantification of indirect impacts of flooding is the assessment of the disruption of the transportation service, with social and economic consequences. In this work, a traffic model for a pilot zone is constructed for accurate quantification of the functionality of the network after the failure of infrastructure components such as road segments and bridges. A mesoscopic simulation, which is capable of building a road network model, assigning trip paths with the impact of road closures, and evaluating travel time and vehicle volume redistribution in a given disruption scenario, was used to identify the traffic disruption in the face of flood events. Modelling outputs from a case study in the Santarém region of Portugal indicate which roads are more congested in a day. A comparison between the baseline and a flood scenario yields the impacts of that flood on traffic, estimated in terms of additional travel times and travel distances. Therefore, simulating and mapping the congestion can largely facilitate the identification of vulnerable links.
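The core comparison, travel times before and after a flooded link is closed, can be shown on a toy graph. This is a minimal sketch, not the mesoscopic simulation used in the paper: the network and travel times (minutes) are made up, and shortest paths stand in for the full traffic assignment.

```python
# Toy disruption quantification: Dijkstra shortest travel time on a small road
# graph, before and after a flooded link (B-D) is closed. All data hypothetical.

import heapq

graph = {  # node -> [(neighbor, travel_time_min)]
    "A": [("B", 10), ("C", 15)],
    "B": [("A", 10), ("D", 12), ("C", 5)],
    "C": [("A", 15), ("B", 5), ("D", 20)],
    "D": [("B", 12), ("C", 20)],
}

def shortest_time(g, src, dst):
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in g[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

baseline = shortest_time(graph, "A", "D")
flooded = {u: [(v, w) for v, w in adj if {u, v} != {"B", "D"}]
           for u, adj in graph.items()}          # link B-D closed by flood
disrupted = shortest_time(flooded, "A", "D")
print(f"A->D: {baseline} min baseline, {disrupted} min with B-D closed "
      f"(+{disrupted - baseline} min)")
```

Summing such additional travel times over all origin-destination pairs, with congestion effects, is what the mesoscopic model does at network scale.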

15:20-16:20 Session TU3G: Asset management
Location: Atrium 3
PRESENTER: Oscar Aranda

ABSTRACT. With the continuous evolution of production systems towards cyber-physical systems and the existence of completely digital production systems (e.g., banking and e-commerce), supporting coding production systems have arisen to satisfy the need for new code that keeps these systems updated and competitive in an ever-evolving context. IT projects for digital factories are managed through Agile methodologies, which allow volatile requirements to be managed (D.J. Reifer, 2002). As mentioned by Vladimir and Nikita (2018), because agile methodologies assign only a small amount of time to grooming and planning, it is a challenge to provide reliable estimation and planning for large-scale high-tech projects; thus, as stated by Heikkilä et al. (2015) and noted by Boehm and Turner (2005) and Dikert et al. (2016), there is an obvious gap in the research on release planning in large-scale agile software development organizations. This situation makes it difficult to obtain accurate production planning, which in turn makes it difficult to quantify the consequences of a change in production prioritization, so the company does not have all the information needed to make strategic decisions. To address this challenge, a methodology is presented in which the development teams of a company are interpreted as a production system, noting their similarities to and differences from a physical production system. Afterwards, the main reasons for development stoppages are presented, determined through several interviews with development teams. Finally, performance indicators are introduced to describe the functioning of the system in a global manner (Grubessich T.
et al., 2019), and an example is presented to forecast code release dates and the probability of achieving them, thus making available the information a company needs to prioritize IT resources and optimize digital product releases, and thereby the earnings related to the digital systems.
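The forecasting idea at the end of the abstract can be sketched with a Monte Carlo burn-down model. This is our own minimal illustration with hypothetical rates, not the paper's methodology: remaining story points are consumed by a team whose weekly throughput is random and occasionally halted by the stoppage causes identified in the interviews.

```python
# Monte Carlo release-date forecast for a development "production system".
# Backlog size, throughput distribution, and stoppage probability are
# hypothetical placeholders.

import random
random.seed(42)

def weeks_to_release(backlog=120.0, stop_prob=0.15):
    """Weeks until the backlog (story points) is burned down."""
    weeks = 0
    while backlog > 0:
        weeks += 1
        if random.random() < stop_prob:        # development stoppage this week
            continue
        backlog -= random.gauss(10.0, 3.0)     # weekly throughput (points)
    return weeks

runs = sorted(weeks_to_release() for _ in range(5000))
p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
print(f"median release in {p50} weeks; 85% confidence by week {p85}")
```

The resulting percentiles give exactly the kind of "release date with an achievement probability" the abstract says management needs for prioritizing IT resources.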


ABSTRACT. Recent developments in sensor technology and systems for connecting digital and physical systems, often associated with the terms Industry 4.0 and cyber-physical systems, are expected to bring substantial changes to how maintenance and asset management will be conducted in the coming years. Most of the research related to Industry 4.0 and maintenance have focused on technical aspects, and less attention has been given to how to organize and manage maintenance in order to take advantage of the new possibilities offered by the fourth industrial revolution. While many claims have been made about the potential improvements related to maintenance that can be achieved from implementing Industry 4.0, empirical studies suggest that industry practitioners are struggling to realize these improvements. There are also signs that there exists overall a poor understanding of how to implement Industry 4.0. The contribution of this paper is to address these socio-technical challenges with a multidisciplinary framework for the implementation of Smart Maintenance. The framework is divided into three levels: strategic, tactical, and operational, and is influenced by lean production, systems engineering and maintenance management.

Visual Inspection Performance in Aircraft Maintenance

ABSTRACT. Introduction The continued safety of air transport relies heavily on maintenance technicians being able to observe any defects during planned visual inspections. These defects include mechanical damage or disrepair, as well as any loose objects in the aircraft. These inspections are highly proceduralised with the exception of one aspect: evidence for how the fundamental task of visually observing aircraft defects should actually be conducted.

Method In order to investigate the accuracy of visual inspections, N=100 aircraft maintenance technicians were recruited. Under randomised controlled trial conditions, N=48 control participants conducted aircraft pre-flight inspection tasks using their normal custom and practice. Two types of fixed-wing aircraft were used, as well as one helicopter. All these aircraft were in constant use and maintained by the aviation organisation involved. The number of observable defects on each aircraft under analysis had been pre-identified in order to obtain the percentage of defects identified by each participant. The aircraft used contained the types of defects that maintenance technicians would routinely come across. This included researcher-simulated hazards such as unsecured work tools or deliberately loosened screws or split pins. This ensured that real-world pre-flight inspection visual behaviour was investigated.

N=52 experimental participants were randomly selected and trained in the use of a novel visual search behavioural algorithm which the researchers call systematic visual search. This method promotes a meticulous and exhaustive search of the aircraft under analysis by precisely proceduralising visual search strategy. This is achieved by using an iterative eye scan pattern applied to all areas of the aircraft with a fixed order of application. In addition, the effect of practicing systematic visual search was investigated by users additionally repeating inspections a further three times. In doing so, evidence for a visual performance reliability ceiling was arrived at.

Results It was demonstrated that by using systematic visual search, the percentage of defects observed in the first trial increased from a mean of 35.70% for the N=48 control group participants to a mean of 55.55%. This increase in defects observed by the experimental group was highly significant and represented a large effect size. A further noteworthy finding was that the mean time taken to conduct inspections increased from 26 minutes and 27 seconds for the control group to 49 minutes and 24 seconds for the experimental group, indicating greater cognitive effort and visual diligence during the inspection task. In order to investigate the effect of practicing systematic visual search, the N=52 experimental participants were tasked with conducting an additional three visual inspection trials using the method. After the third trial, the N=52 systematic visual search users observed 69.78% of defects, which dropped slightly to 68.10% after the fourth trial (N=18 due to Covid). These results suggest that practicing the method produces a rapidly learnt visual search behaviour, with visual inspection performance plateauing at just under 70%.

Conclusions and Implications: This research has revealed that current human performance when tasked with observing defects for aircraft safety has limitations, which can be reduced by using systematic visual search. Until now, observation rates for in-situ defects have not received sufficient academic attention. This aspect of human performance and reliability, namely visually identifying all observable safety-critical hazards and defects, remains of crucial importance for continued airworthiness. In addition, conducting inspections for observable hazards is a widespread daily practice in the safety community, and adequately observing hazards during visual inspections has now been empirically demonstrated to be an error-prone task. Therefore, improving visual performance will be beneficial wherever safety professionals are required to conduct inspections.

15:20-16:20 Session TU3H: Railway Industry
Location: Cointreau
STPA-based Safety analysis of virtual Coupling Scenarios
PRESENTER: Yi-ming Yang

ABSTRACT. Virtual coupling combines computer, communication, and automatic control technologies; it must consider conditions at multiple levels, such as lines, trains, signals, and transportation, and relax the original operational constraints, for example through enhanced perception, in order to meet its functional and performance targets. Compared with common urban rail transit systems, virtual coupling scenarios place higher requirements on safety. Leading indicators can give early warnings when any aspect of the system starts to deviate from its intended behaviour, preventing a major impact on the safe operation of the system. At present, research on leading indicators for rail transit systems is still immature. The System-Theoretic Process Analysis (STPA) method searches for accident causes based on an extended accident causal model. Based on the resulting causal scenarios and safety constraints, we propose relevant leading indicators and show that they can effectively observe and prevent risks, addressing the problem that current risk analysis results are difficult to use directly.

A Framework for Definition of Operational Design Domain for Safety Assurance of Autonomous Train Operation

ABSTRACT. In recent years, research on next-generation railways has focused on enhancing the autonomy of unmanned trains with AI techniques that identify, analyze, and safely handle uncertain environments and emergencies. Due to the limitations of sensors and machine learning algorithms, it is essential to clearly and completely specify the operating conditions under which the train control system is designed to work, and to ensure that trains always operate safely within this domain. For autonomous road vehicles, this domain was first formalized as the operational design domain (ODD), but there its definition is often limited by the uncontrolled environment and complicated traffic situations. In urban rail transit, a clear definition is made possible by the closed operating environment and organized operation. This article therefore gives a definition and structure for the ODD of rail transit to describe its safety constraints and assumptions. The factors relevant to the identification of the ODD are described, and a framework for defining hazardous scenarios of train operation based on ODD semantics is proposed. Further, a case of ODD identification and analysis for a typical scenario in an unmanned train control system is used to demonstrate its contribution to safety assurance.

Formal modeling of a new On-board Train integrity System ETCS Compliant
PRESENTER: Insaf Sassi

ABSTRACT. Railway signaling systems are in continuous progress to cope with the evolution of the railway industry and its needs. Among the objectives is to increase the capacity of the European rail network, in particular by enabling moving-block operation in a cost-effective way. To this aim, trackside integrity monitoring equipment (track circuits, axle counters) has to be replaced by on-board modules which must ensure that the train is moving safely and remains intact during its journey, i.e., that no wagon is lost. In fact, a lost vehicle may lead to train collision scenarios; therefore, continuous supervision of train integrity is needed. Using an on-board control-command system for the train integrity functionality transfers more responsibility for train operation safety from infrastructure managers to railway operators. In this context, a new on-board train integrity (OTI) function, compliant with the European Train Control System (ETCS), is proposed in the X2RAIL-2 and X2RAIL-4 projects to help tackle these new challenges. The functional specifications of the OTI are established in a dedicated deliverable [1] as a list of requirements expressed in natural language and semi-formal models (high-level UML State Machines (SM) and a set of Sequence Diagrams (SD)). However, their lack of formal semantics opens the way to ambiguity of interpretation, which may lead to safety issues. The contribution presented in this paper is the development of formal models of the new OTI system to ensure the completeness and correctness of the specifications. Formal verification techniques, which are highly recommended for the engineering of safety-critical systems [2], can then be employed for analysis of the specifications. In this context, model checking is brought into play to automatically check various types of properties, such as safety properties.
This automatic formal verification technique allows for exhaustively checking the system behavior based on the timed automata notation [4] supported by the UPPAAL tool [3], and provides safety evidence.
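The exhaustive exploration that model checking performs can be illustrated with a toy example. The sketch below (Python, not UPPAAL's timed-automata notation, and with a hand-invented abstraction rather than the X2RAIL OTI models) enumerates every reachable state of a tiny integrity monitor and checks a safety-related reachability property:

```python
from collections import deque

# Hypothetical, hand-coded abstraction of an on-board train integrity
# monitor: a state is (integrity_status, report_status). The real OTI
# models are timed automata checked with UPPAAL; this sketch only shows
# the principle of exhaustive state-space exploration.
TRANSITIONS = {
    ("intact", "idle"): [("intact", "reported"), ("lost", "idle")],
    ("intact", "reported"): [("intact", "idle"), ("lost", "reported")],
    ("lost", "idle"): [("lost", "alarm")],        # wagon loss raises an alarm
    ("lost", "reported"): [("lost", "alarm")],
    ("lost", "alarm"): [],                        # train is stopped
}

def reachable(initial):
    """Breadth-first exploration of the full state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

states = reachable(("intact", "idle"))
# Sanity checks: the alarm state is reachable from any wagon loss,
# and the whole state space (5 states) has been visited.
assert ("lost", "alarm") in states
print(f"{len(states)} reachable states explored")
```

A real check would phrase such properties as UPPAAL queries over timed automata; the point here is only that every state is visited, so a violated safety property cannot be missed.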

References: 1. Deliverable 4.1 (2020). Train integrity concept and functional requirements specifications. Technical report, Shift2Rail, X2RAIL-2 WP4 project. 2. Mohamed Ghazel, Formalizing a Subset of ERTMS/ETCS Specifications for Verification Purposes, Transportation Research Part C - Emerging Technologies, Elsevier, vol. 42, pp. 60-75, 2014. 3. Gerd Behrmann, Alexandre David, and Kim G. Larsen, A Tutorial on Uppaal 4.0, update of November 28, 2006. 4. Rajeev Alur and David L. Dill, A Theory of Timed Automata, Theoretical Computer Science, no. 126, pp. 183-235, 1994.

15:20-16:20 Session TU3I: Model Based Safety Assessment
Location: Giffard
"K6 Telecom", a component library for dynamic RAMS analysis of communication networks
PRESENTER: Anthony Legendre

ABSTRACT. The paper presents “K6 Telecom”, a new library developed for the KB3 platform. This program, developed in 2020, is a prototype version that enables RAMS (Reliability, Availability, Maintainability and Safety) analysis of communication networks. Components can be adjusted or added according to the specifications of the system of interest. K6 Telecom emerges from: • a first prototype made in 2017 by Thomas Chaudonneret and José Sanchez Torres [1], • experience acquired during the development of the K6 2.0 library for the analysis of critical electrical networks [2], • feedback from its use in operational studies over the last 5 years. In particular, this new tool brings new components that cover many use cases on communication networks. The library provides the foundations for communication network modeling (internal company telecommunication networks, VoIP, WAN networks, etc.). Our MBSA (Model Based Safety Analysis) approach is central to EDF's business in electrical systems and now in communication networks. It enables quantitative analysis of dynamic models, with results such as the likelihood of undesired events, the contribution of failure sequences to an undesired event, and the estimated benefit of architectural solutions for reliability and availability. To illustrate the contributions and limits of the K6 Telecom library, we present a case study derived from a real study: a telecommunication system between communication service transmitters and receivers. The steps of this study are: 1) modeling the communication network in KB3; 2) configuring the components; 3) defining the dynamic reconfiguration (when components change state according to failure/communication/repair occurrences in the system); 4) validating the basic behavior with a step-by-step simulator, using a dedicated visualization defined in the library; 5) quantifying the reliability and availability, and identifying all failure sequences for the chosen undesired events, using the Figseq tool [3]. The results, obtained by propagation in the Markov graph generated by the KB3 platform, yield a list of failure sequences including the dynamic reconfigurations related to active components.
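The final step quantifies dependability measures by propagation through the generated Markov graph. As a minimal illustration of that idea, independent of KB3 and Figseq (whose model formats are not shown in the abstract), a hand-built birth-death Markov chain gives the steady-state unavailability of a connection carried by two redundant channels; all rates are hypothetical:

```python
# Hypothetical rates (per hour): each of two parallel channels fails at
# rate LAM and is repaired at rate MU; the connection is lost only when
# both channels are down. Birth-death chain on the number of failed
# channels: 0 -> 1 -> 2.
LAM, MU = 1e-3, 0.1

# Steady-state probabilities from detailed balance:
#   p1 = p0 * (2*LAM)/MU      (either channel fails first)
#   p2 = p1 * LAM/(2*MU)      (second failure; both repair crews active)
r1 = 2 * LAM / MU
r2 = LAM / (2 * MU)
p0 = 1 / (1 + r1 + r1 * r2)
p1, p2 = p0 * r1, p0 * r1 * r2
unavailability = p2

print(f"steady-state unavailability: {unavailability:.2e}")
```

Tools like Figseq additionally enumerate the failure sequences leading to the undesired state; this sketch only reproduces the probabilistic propagation.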

Keywords: Component Library, Electrical networks, Communication networks, RAMS, Failure sequences.

References 1. J. Sanchez-Torres and T. Chaudonneret, “Reliability and Availability of an industrial wide-area network”, LambdaMu21, Reims, 2018. 2. A. Legendre et al., “Interest of the K6 2.0 tool for the RAMS analysis of critical electrical networks”, LambdaMu22, 2020. 3. M. Bouissou, Y. Dutuit and S. Maillard, “Chapter 7: Reliability analysis of dynamic phased mission systems: comparison of two approaches”, Modern Statistical and Mathematical Methods in Reliability, vol. 10, pp. 87-104, 2005.

Benefits of graphical animation of advanced AltaRica 3.0 models

ABSTRACT. Most system engineers today use graphical representations of their system models. This is important for understanding and communicating not only about the system, but also about the model itself. AltaRica 3.0 is a high-level formal modeling language dedicated to probabilistic risk and safety analyses of complex technical systems. It combines a powerful mathematical framework, Guarded Transition Systems, with a set of structural constructs stemming from object-oriented and prototype-oriented programming languages. Models written in AltaRica 3.0 are textual. So, following the adage that “a good image is worth a thousand words”, graphical representations of these models are necessary for good communication, but the reference remains the text. Moreover, beyond simple graphical representations, the graphical simulation of models can be used to perform virtual experiments on systems, helping to better understand system behavior. In this paper, we show how to model graphical representations using the high-level modeling language GraphXica. GraphXica has the same structural constructs as AltaRica 3.0. It enables the description of graphical primitives (lines, rectangles, circles, etc.) and their animations (color change, scale, rotation, translation, etc.). Moreover, we illustrate how GraphXica graphical models can be coupled with AltaRica 3.0 models, and how their animations communicate with the stepwise simulator of AltaRica 3.0. Finally, we demonstrate, by means of an example, the benefits of graphical animation of textual AltaRica 3.0 models for performing virtual experiments by visualizing incident or accident scenarios.

PRESENTER: Pavel Krcal

ABSTRACT. The way in which we model a system affects the way in which we can perform its dependability analyses. Even though the system under study is the same, one gets rather different results from a fault tree model, a Markov chain, Petri nets, or differential equations. Conversely, the purpose of an analysis, and perhaps a focus on particular aspects of system behavior, points towards a specific modeling formalism. If we have a description of components, their failures, behavior, and interactions with other components (a knowledge base), then building a model and defining analyses become two orthogonal tasks. Modeling includes selecting system components and connecting them according to the actual dependencies. Defining an analysis means describing the properties of interest and the set of behaviors and/or interactions that shall be included in the analysis. As long as these behaviors are specified in the knowledge base, the model does not need to change. This structure of encapsulating the possibly complex dependability-relevant functioning of components and their interactions into knowledge bases, and of using these pre-defined components in models of specific systems, is used in some tools, such as RiskSpectrum ModelBuilder (KB3), with a graphical interface for modeling, complemented by import possibilities and support for various analysis tools. We describe the exact mechanism that allows us to decouple modeling from analysis and exemplify its power on several cases related to scenarios occurring in industrial practice. We also illustrate, on some of the examples, how different aspects of the same system can be captured in a single model. An analyst can afterwards decide which features or aspects of the real-life system shall be considered in the analysis, and which ones can be screened away or abstracted by a simplifying mathematical description that enables a more efficient analysis algorithm.
The examples include: a spent fuel pool system with a special role of repairs under a long mission time, also used to illustrate the modeling of deterministic failure and grace times arising from physical aspects of the system; a production system where productivity is as important as reliability, showing how stand-by components and different repair strategies can be modeled and taken into account in dependability calculations; a system that cannot be repaired during its mission time, where cold spares are essential for safety; and a thermohydraulic safety system model enabling analyses beyond standard fault trees.

15:20-16:20 Session TU3J: Seismic reliability assessment
Location: Botanique
Methodology on the combination of seismic correlation coefficient for probabilistic seismic risk assessment

ABSTRACT. Seismic fragility assessment is a procedure that combines random variables of response and capacity to produce the relationship between failure probability and seismic intensity. To evaluate the probabilistic seismic safety of a critical system, the failure probability of the system is calculated from the seismic fragility of its components according to the accident scenario. Evaluations of the probability of simultaneous failure of two or more components have usually assumed that the failure probabilities of the components are independent. However, there should be a correlation, because several random factors share the same cause, such as similar seismic properties, and the multiple-failure probability can differ depending on the correlation; neglecting seismic correlation can be unconservative. Therefore, a practical methodology for fragility assessment with seismic correlation is needed, and the correlation coefficient for each random variable should be evaluated. In nuclear power plants, to prevent core damage, two or more major safety-related components performing the same function are installed to ensure redundancy. These components will be correlated with each other, and the correlations will change with the random variables. In this study, components inside the auxiliary building of an example nuclear power plant were selected in order to evaluate the seismic correlation coefficient. Several random variables related to the seismic response were selected for numerical evaluation of the seismic correlation coefficient, which can be estimated by numerical analysis. The correlation coefficient of each random variable was estimated by evaluating the floor response spectra at the component locations. The overall correlation coefficient was obtained by combining the correlation coefficients of the individual random variables, and it was compared with the method that does not separate variables.
As a result, the method of evaluating the correlation coefficient by a combination procedure was validated, and the effect of modeling fidelity was discussed.
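To make the combination step concrete, the sketch below applies one common convention from seismic PSA, a variance-weighted combination of per-variable correlation coefficients; the specific rule and all numbers are illustrative assumptions, not necessarily those used in the paper:

```python
import math

# Illustrative combination of per-variable correlation coefficients into
# an overall correlation between the seismic responses of two redundant
# components. Assumed rule (variance-weighted; one common convention in
# seismic PSA, not necessarily the paper's): with independent lognormal
# variables i of log-std beta_i and per-variable correlation rho_i,
#   rho_total = sum(rho_i * beta_i**2) / sum(beta_i**2)

betas = [0.25, 0.30, 0.20]   # hypothetical log-std of spectral shape,
                             # damping, and structural response
rhos  = [1.0, 0.5, 0.8]      # hypothetical per-variable correlations

beta_total = math.sqrt(sum(b * b for b in betas))
rho_total = sum(r * b * b for r, b in zip(rhos, betas)) / beta_total**2
print(f"beta_total={beta_total:.3f}, rho_total={rho_total:.3f}")
```

Variables with larger log-standard deviation dominate the combined coefficient, which is why estimating each beta_i (here via floor response spectra) matters.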

PRESENTER: Lukas Bodenmann

ABSTRACT. Earthquakes can cause widespread damage to the built environment, disrupting the function of many residential buildings to provide safe housing capacities and thus, potentially inducing severe long-term societal consequences. While performance-based seismic design of new buildings and risk-informed seismic retrofitting of vulnerable buildings form the backbone of long-term risk mitigation strategies, adequate post-earthquake decision-making is a crucial enabler of rapid post-earthquake recovery. Rapid recovery significantly improves the short-term resilience of communities after an earthquake. However, decision-making in the aftermath of earthquake events is difficult: there is intense time pressure to act and scarce information on the severity and the spatial distribution of damage to support the decision on what to do first. Early damage estimates produced using regional earthquake risk models and rapid earthquake intensity data are useful. These damage distributions are derived from estimates of the spatial distribution of instrumental or macro-seismic intensity measures, and typological building seismic vulnerability functions. While the precision of the former depends, amongst other issues, on the density of seismic network stations and the region-specific geological knowledge, the typological classification of buildings often involves attribution models that are built upon correlations between exposure data, such as building height, age and value, which are available in public databases, and typological seismic fragility and damage classes. As such, typological attribution models are approximate and may suffer from limited applicability within the region hit by an earthquake. Hence, typological attribution locally adds to the uncertainties resulting from the average representation of multiple buildings forming a building class and the simplified regional building vulnerability models. 
Employing probabilistic machine learning tools, the framework presented in this study leverages the continuous inflow of inspection data to dynamically update the initial regional earthquake risk predictions, simultaneously updating the functions that govern typological attribution and the damage states of buildings. Hence, while completing rapid visual inspection of all affected buildings may take several weeks, the limited information becoming available in the first days after an earthquake helps to constrain the regional damage model uncertainties. This leads to more reliable rapid estimates of building losses in terms of habitability and repair effort, and of their respective spatial distribution. The framework is demonstrated on a case-study region in Switzerland subjected to a fictitious earthquake scenario.
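The updating mechanism can be illustrated in miniature. Assuming, hypothetically, that the regional model provides prior damage-state probabilities for a building class and that inspections arrive as simple counts, a conjugate Dirichlet-multinomial update captures the basic idea; the paper's probabilistic machine learning framework is considerably richer:

```python
# Illustrative (hypothetical priors and counts) conjugate update of the
# damage-state distribution of one building class. The regional risk
# model supplies prior probabilities; early inspection results update
# them as they stream in.

prior_probs = {"none": 0.55, "moderate": 0.30, "severe": 0.15}
prior_strength = 20  # pseudo-count expressing trust in the regional model

# Dirichlet prior parameters:
alpha = {s: p * prior_strength for s, p in prior_probs.items()}

# Early inspection results for this class (counts of inspected buildings):
inspections = {"none": 3, "moderate": 9, "severe": 4}

posterior = {s: alpha[s] + inspections[s] for s in alpha}
total = sum(posterior.values())
posterior_probs = {s: a / total for s, a in posterior.items()}
print(posterior_probs)
```

With only 16 inspected buildings, the estimate already shifts noticeably toward the observed damage pattern, which is the effect the abstract describes for the first days after an event.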


ABSTRACT. A civil structure’s life-cycle may entail performance degradation, due to point-in-time events such as earthquakes and/or continuous aging, as well as the enforcement of maintenance and/or retrofitting policies. The current performance-based approach to life-cycle analysis requires consistent modelling of the uncertainty involved in degradation and healing. Recent research efforts, based on the theory of age- and state-dependent stochastic processes, have explored modelling structural deterioration [1] and structural recovery [2]; only a few were devoted to a unified approach addressing the two jointly [3,4]. The presented study, focussing on the case of structures subjected to a possibly damaging major seismic event (i.e., the mainshock), employs a discrete-time, discrete-state Markovian process to model both damage accumulation during the aftershock sequence and damage restoration; i.e., the resilience of the structure. To this aim, the paper first discusses the considered phenomena acting on the structure, especially the peculiarities of the recovery as observed after recent major seismic sequences; then, it shows how a Markov chain, already adopted to model seismic damage accumulation [5], can be adapted to also describe the resilience curve. Finally, a single transition matrix is developed to describe the combined effects of both damage progression due to aftershocks and recovery. An illustrative application, calibrated on data from the Italian L’Aquila seismic sequence of 2009 (mainshock magnitude 6.3) and referring to a code-conforming reinforced-concrete building, shows the capabilities of the holistic model.
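The single-transition-matrix idea can be sketched as follows; all transition probabilities here are hypothetical, not the L'Aquila-calibrated values. Upward moves represent aftershock damage, downward moves represent repair, and one matrix carries both:

```python
# Illustrative discrete-time Markov chain on damage states
# DS0 (intact) .. DS3 (collapse). A single transition matrix combines
# aftershock-driven damage progression (moves right) and recovery/repair
# (moves left), in the spirit of the unified model. Probabilities are
# hypothetical weekly values; each row sums to 1.
P = [
    [0.97, 0.02, 0.008, 0.002],  # from DS0
    [0.10, 0.85, 0.040, 0.010],  # from DS1: repair to DS0 or worsen
    [0.00, 0.12, 0.850, 0.030],  # from DS2
    [0.00, 0.00, 0.050, 0.950],  # from DS3: reconstruction to DS2 only
]

def step(dist, P):
    """One transition: dist_{t+1}[j] = sum_i dist_t[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [0.0, 0.0, 1.0, 0.0]  # building left in DS2 by the mainshock
for _ in range(52):          # one year of weekly steps
    dist = step(dist, P)
print([round(p, 3) for p in dist])
```

Plotting the probability of being at or below a given damage state over time yields the resilience curve the abstract refers to.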

References 1. Iervolino I, Giorgio M, Chioccarelli E. Gamma degradation models for earthquake-resistant structures. Struct Saf 2013;45:48–58. 2. Sharma N, Tabandeh A, Gardoni P. Resilience analysis: a mathematical formulation to model resilience of engineering systems. Sustain Resilient Infrastruct 2018;3:49–67. 3. Tao W, Lin P, Wang N. Optimum life-cycle maintenance strategies of deteriorating highway bridges subject to seismic hazard by a hybrid Markov decision process model. Struct Saf 2021;89:102042. 4. Iervolino I, Giorgio M. Stochastic modeling of recovery from seismic shocks. 12th Int. Conf. Appl. Stat. Probab. Civ. Eng. ICASP 2015, 2015 5. Iervolino I, Giorgio M, Chioccarelli E. Markovian modeling of seismic damage accumulation. Earthq Eng Struct Dyn 2016;45:441–61.

15:20-16:20 Session TU3K: Accelerated Life Testing & Accelerated Degradation Testing
Location: Atrium 1
Life Prediction and Test Verification of Bearings Based on Wiener Degradation Model and Bayes Method

ABSTRACT. Bearings are core components of space moving parts, and their working accuracy and performance may gradually degrade under long-term continuous operation, which affects the stable operation of the spacecraft. It is therefore important to predict the life of bearings to guarantee spacecraft reliability. In this paper, a data-driven life prediction method for bearings is proposed. First, a degradation model based on the Wiener process is constructed, and expressions for the probability density function and mean of the residual life are obtained. Second, a prior estimation method for the hyperparameters based on the EM (Expectation-Maximization) algorithm is proposed, and the Bayes method is used to estimate and update the parameters of the degradation model and the residual life in real time. Finally, a test rig is designed and a life test of bearings is carried out to obtain the failure times of various specimens. The life of the specimens is predicted with the presented method, and the error is analyzed using the test results, which verifies the effectiveness of the method.
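The Wiener degradation model can be sketched as follows, with hypothetical parameters rather than the paper's bearing data. For a linear-drift Wiener process, the first passage time of the failure threshold follows an inverse Gaussian law with mean W/mu, which a short Monte Carlo run reproduces:

```python
import math
import random

# Wiener degradation sketch (all parameters hypothetical): the
# degradation path is X(t) = MU*t + SIGMA*B(t), and the component fails
# when X(t) first exceeds the threshold W. The first-passage time is
# inverse Gaussian with mean W/MU; this mean residual life is what the
# Bayes step updates online as monitoring data arrive.
random.seed(1)
MU, SIGMA, W, DT = 0.5, 0.3, 10.0, 0.01

def first_passage():
    """Simulate one degradation path until it crosses the threshold."""
    x, t = 0.0, 0.0
    while x < W:
        x += MU * DT + SIGMA * math.sqrt(DT) * random.gauss(0, 1)
        t += DT
    return t

sims = [first_passage() for _ in range(1000)]
mc_mean = sum(sims) / len(sims)
print(f"MC mean life {mc_mean:.2f} vs analytic {W / MU:.2f}")
```

In the paper's setting, MU and SIGMA are themselves random with EM-estimated hyperpriors; here they are fixed to keep the first-passage mechanism visible.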

Parameter estimation of accelerated lifetime testing models using an efficient Approximate Bayesian Computation method
PRESENTER: Mohamed Rabhi

ABSTRACT. Accelerated lifetime testing (ALT) is a common way to estimate reliability under use stresses through extrapolation of failure data collected at severe stress levels. Estimating the model parameters from experimental observations is of paramount importance for making good predictions. In this context, ALT model parameter estimation has been approached in the literature with various classical methods. The graphical method is arguably the easiest and most straightforward, but it presents several shortcomings. The most commonly used method is maximum likelihood estimation (MLE), because it has several desirable properties. However, in some circumstances the likelihood function is intractable and cannot be formulated in closed form, or the estimation fails to converge, mainly when the sample size is small. This is often the case in ALT, because of the high cost of tests and testing time. Furthermore, with root-finding algorithms, the estimated parameters depend strongly on the initial values and risk being trapped in local optima. In particular for ALT models, which involve numerous parameters, the MLE computation becomes memory-consuming. To overcome these issues, likelihood-free methods, also called Approximate Bayesian Computation (ABC), were developed by replacing the likelihood evaluation in Bayesian inference with other features and metrics. Nevertheless, in its basic form, ABC still suffers from a low acceptance rate in sampling. To overcome this issue, a new variant of ABC based on an ellipsoidal nested sampling technique (ABC-NS) is employed. It ensures important speed-ups and provides a good approximation of the posterior distributions. In this paper, a brief introduction to the ABC-NS method and its algorithmic implementation is given. In the second part, ABC-NS is applied to infer ALT models using real data, and the obtained results are discussed.
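For intuition, the sketch below runs ABC in its basic rejection form on a synthetic ALT problem; ABC-NS replaces exactly this naive sampler with ellipsoidal nested sampling to raise the acceptance rate. The model, prior, and data are illustrative assumptions:

```python
import random

# Basic rejection ABC for a toy accelerated-life model; the ABC-NS
# variant in the paper accelerates this naive sampler. Assumed model:
# exponential lifetimes with rate b * stress, true b = 0.02, two stress
# levels, 30 units each. Data are synthetic.
random.seed(0)
TRUE_B, STRESSES, N = 0.02, [50.0, 80.0], 30

def simulate(b):
    return [random.expovariate(b * s) for s in STRESSES for _ in range(N)]

observed = simulate(TRUE_B)
obs_mean = sum(observed) / len(observed)

accepted = []
while len(accepted) < 500:
    b = random.uniform(0.001, 0.1)            # flat prior on the rate factor
    sim_mean = sum(simulate(b)) / len(observed)
    if abs(sim_mean - obs_mean) < 0.1:        # distance on a summary statistic
        accepted.append(b)

post_mean = sum(accepted) / len(accepted)
print(f"posterior mean of b: {post_mean:.3f}")
```

The low acceptance rate of this loop (most prior draws are discarded) is precisely the inefficiency that motivates nested-sampling variants such as ABC-NS.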

Solid State Device (SSD) End to End (e2e) Reliability prediction, characterization and control
PRESENTER: Mohd Azman Latif

ABSTRACT. Reliability failures often occur at the last stage of new product introduction (NPI) development and require significant man-hours and cost to resolve in task-force mode, disrupting the product qualification schedule and priorities. The failures are due to marginality in design, components, and assembly processes; if they are discovered late in the schedule, the solutions are limited to the assembly process, because design and component changes require major re-qualification. A highly technical team, drawn from all the key stakeholders, is embarking on reliability prediction from the beginning of NPI development: identifying critical-to-reliability parameters, performing full characterization to embed margin into product reliability, and establishing controls to ensure that product reliability is sustained in mass production. The paper discusses a comprehensive development framework that comprehends the SSD end to end, from design to assembly, and is able to predict and validate product reliability at an early stage of new product development. Predictable product reliability early in development enables on-time qualification-sample delivery to customers, optimizes product development validation, makes effective use of development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters early allows focus on increasing the product margin, which increases customer confidence in product reliability.

16:30-17:30 Session TU4A: System Reliability
Location: Auditorium
Evaluating the application availability of Intelligent Optical Networks based on the Network Evolution Model

ABSTRACT. Due to the rapid development of the Internet and the emergence of 5G application scenarios, Intelligent Optical Networks (IONs) have been proposed as a backbone transport network structure that can establish and remove application connections dynamically and gives the optical transport network flexible recovery capability. IONs can provide different restoration performance classes for applications within Service Level Agreements (SLAs), so that the higher the class, the more available the application. Thus, availability evaluation of the applications is an integral part of ensuring application quality and avoiding penalties for violating the SLAs. However, the availability assessment of applications in IONs is complex due to their dynamic behaviors and heterogeneous features. Current availability evaluation models fail to effectively characterize the attributes of intelligent optical network applications, and they also fail to establish a dynamic correlation mechanism between applications and network infrastructure resources. In order to quantitatively evaluate ION applications and provide a reference for service providers, this study proposes a new application availability evaluation model based on network evolution, built from four elements: modeling the evolution objects, covering both the physical network layer and the application layer; generating the evolution conditions that capture the transition process of network component states; designing the evolution rules that reflect the dynamic mechanism of protection and restoration strategies for applications; and counting the cumulative outage duration of an application over the evolution process to calculate its availability.
Taking an ASON framework-based network and two types of applications with actual SLAs as a case study, the model is first verified and validated against the results of exact methods, and then its efficiency in evaluating the availability of large-scale networks and multiple applications is discussed. The results show that the model is not only accurate in estimating the availability of traditional static applications in small networks, but also effective in obtaining the availability of ION applications with dynamic behaviors, especially for large-scale networks with applications under different SLAs. Furthermore, the model is versatile for availability analysis, in that network components can be failure-prone with arbitrary failure probability distributions and maintainability schemes, and the applications' protection schemes can be described as any combination of network evolution rules.
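The core of the evolution-based evaluation, accumulating application outage time while component states evolve, can be sketched with a deliberately small example (two disjoint routes, hypothetical rates; not the ASON case study):

```python
import random

# Minimal network-evolution sketch: an application survives as long as at
# least one of two disjoint routes is up. Component states evolve event
# by event; application outage time accumulates whenever both routes are
# down, and availability = 1 - outage / horizon. Rates are hypothetical.
random.seed(7)
LAM, MU, HORIZON = 0.01, 0.5, 1_000_000.0  # per-route failure/repair rates

def simulate():
    t, outage = 0.0, 0.0
    up = [True, True]
    while t < HORIZON:
        # competing exponential clocks: up routes may fail, down ones repair
        rates = [LAM if u else MU for u in up]
        total = sum(rates)
        dt = random.expovariate(total)
        if not any(up):                       # application down until next event
            outage += min(dt, HORIZON - t)
        t += dt
        # pick which route changes state, proportionally to its rate
        i = 0 if random.random() < rates[0] / total else 1
        up[i] = not up[i]
    return 1.0 - outage / HORIZON

avail = simulate()
print(f"estimated application availability: {avail:.6f}")
```

The same event loop generalizes to arbitrary failure distributions and protection rules, which is the flexibility the abstract claims for the evolution-rule formulation.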

Adaptive faults diagnosis and reasoning method based on MFM
PRESENTER: Huizhou Liu

ABSTRACT. Multilevel Flow Modeling (MFM) is an abstract hierarchical model of a complex process industrial system, based on the system goals and the functions that achieve them. In a specific MFM model, there are two different modes of influence between functions and objectives: in the first mode, the failure of a component or function immediately leads to the failure of the final objective, while in the second mode it does not. In other words, sub-goals and components differ in their importance to the overall or specific goals. In this paper, we propose an adaptive fault diagnosis and reasoning method combining MFM with an improved Analytic Hierarchy Process (AHP). First, the MFM model is established, which already supports preliminary fault diagnosis from signals and process parameters. Second, the improved AHP algorithm is introduced to calculate the weights of components and functions under typical failures. Finally, the importance of components and functions is ranked; this extends the MFM modeling method with quantitative analysis capability, which is of great significance for assisting fault diagnosis and risk management. To illustrate the structure and execution of the proposed method, a case study on the lubrication system of a plunger pump is analyzed, which verifies the feasibility of the method for fault diagnosis and quantitative evaluation.
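The AHP weighting step can be illustrated with the classical geometric-mean method; the pairwise judgments below are hypothetical, and the paper's improved AHP may differ in detail:

```python
import math

# AHP weight derivation sketch (geometric-mean method) for hypothetical
# pairwise importance judgments among three functions of a lubrication
# system. A[i][j] states how much more important function i is than j.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

n = len(A)
g = [math.prod(row) ** (1 / n) for row in A]    # row geometric means
s = sum(g)
w = [x / s for x in g]                          # normalized weights

# Consistency check: estimate lambda_max from A @ w, then
# CI = (lambda_max - n) / (n - 1); small CI means consistent judgments.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lam - n) / (n - 1)
print(f"weights={[round(x, 3) for x in w]}, CI={CI:.4f}")
```

The resulting ranking of weights is what would then be attached to the MFM functions to prioritize diagnosis.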

16:30-17:30 Session TU4B: Mathematical Methods in Reliability and Safety
Location: Atrium 2

ABSTRACT. In modern reliability theory, there have been many different approaches to modeling stochastic deterioration for optimising maintenance. Up until the 1990s, maintenance models were usually based on the lifetime distributions of the components. However, this approach has the disadvantage of being binary, in the sense that it only tells us whether a component is functioning or not. In order to remedy this, failure rate functions were introduced. Failure rate functions model ageing in a more satisfactory way than lifetime distributions. However, failure rates cannot be observed for a single component, see Noortwijk, and are therefore not tractable in practical applications. To mitigate this, a theory for modeling deterioration via stochastic processes was developed. Various processes have been suggested, such as Brownian motion with drift and compound Poisson processes (CPPs). Since CPPs have a finite number of jumps in any finite time interval, they are appropriate for modeling usage and damage from sporadic shocks, see Noortwijk. To model gradual ageing, the gamma process is currently the most commonly used stochastic process in maintenance modeling, see e.g. Grall et al. and Newby and Dagg.

However, none of these processes are able to capture jump clustering. To allow for clustering of jumps (failure events), we suggest an alternative approach in this paper: to use self-exciting jump processes to model stochastic deterioration of components in a system where there may be clustering effects in the degradation. Self-exciting processes excite their own intensity, so a large shock is likely to be followed by another shock within a short period of time. Hence, self-exciting processes allow for jump clustering. Furthermore, self-exciting processes may have both finite and infinite activity. Therefore, we suggest that these processes can be used to model degradation both by sporadic shocks and by gradual wear. We illustrate the use of self-exciting degradation with several numerical examples. In particular, we use Monte Carlo simulation to estimate the expected lifetime of a component with self-exciting degradation. As an illustration, we also estimate the lifetime of a bridge system with independent components with identically distributed self-exciting degradation.
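
A minimal sketch of the Monte Carlo idea, assuming a Hawkes-type intensity with exponential decay and exponentially distributed damage increments (all parameter values are invented, not taken from the paper):

```python
import math
import random

random.seed(42)

def simulate_lifetime(mu=0.5, alpha=0.8, beta=1.5, mean_jump=1.0,
                      threshold=10.0, t_max=200.0):
    """One sample path: shocks arrive with a self-exciting (Hawkes-type)
    intensity; each shock adds an exponential damage increment.
    Returns the first time cumulative damage exceeds the threshold."""
    t, damage = 0.0, 0.0
    events = []
    while t < t_max:
        # Intensity now: baseline plus exponentially decaying excitation.
        # It only decays between events, so this value bounds the future.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)           # candidate next event time
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:     # Ogata thinning: accept?
            events.append(t)
            damage += random.expovariate(1.0 / mean_jump)
            if damage >= threshold:
                return t
    return t_max  # censored at the horizon

# Monte Carlo estimate of the expected lifetime
n = 2000
estimate = sum(simulate_lifetime() for _ in range(n)) / n
print(round(estimate, 2))
```

With branching ratio alpha/beta < 1 the process is stable; clusters appear because each accepted shock temporarily raises the intensity.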


ABSTRACT. Empirical results from fatigue data, particularly on steels, ceramics, and titanium alloys, suggest that specimens tested below a particular stress level are unlikely to fail. This limiting stress level is called the ‘fatigue limit’ or ‘threshold stress’. When a fatigue limit exists, the S-N curve, a log-log plot of cyclic stress (or strain amplitude) S versus the median fatigue life N, often exhibits curvature at lower stress levels. Moreover, the standard deviation of fatigue life is often modeled as a monotonically decreasing function of stress. Thus, in the presence of a fatigue limit, the relationship between fatigue life and stress can be better modeled by including a fatigue-limit parameter in statistical models (Pascual and Meeker 1997, 1999). Nevertheless, fatigue behavior is extremely complex, and multiple failure modes may be present in a fatigue test because of mechanical, structural and environmental influences, so it is no longer appropriate to describe fatigue data by a single distribution. In this paper, we propose a mixture fatigue-limit model to extend the general fatigue-limit models and improve their performance by identifying potential multiple failure modes from observations. First, we follow the basic idea of formulating the log fatigue life as a linear function of the difference between stress and fatigue limit (on the log scale). Then we assume the fatigue life follows a mixture distribution at each stress level. Finally, we use EM-algorithm-based steps (Wu 1983) to estimate the model parameters. In the E-step, we update the posterior probability of each observation. In the M-step, we first estimate the fatigue limit by optimizing the likelihood of a parametric survival regression model, and then the remaining parameters. The customized EM steps are repeated until the convergence criterion is met.
For the simulated datasets, we study the convergence of the log-likelihood at each iteration and the effects of test length and sample size on the estimation. For the nickel-base superalloy data, comparison of AIC values shows that the two-component mixture fatigue-limit model is superior to assuming a single failure mode. We also explore the lower and upper confidence bounds for the 0.05 quantile of fatigue life, which is of interest to engineers. The analyses of both cases demonstrate that the proposed model is effective and robust, and can provide engineering insights.
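
The E-step/M-step cycle can be sketched on a toy two-component mixture of log-lifetimes (synthetic data, Gaussian components with a common standard deviation, and no fatigue-limit parameter — a deliberate simplification of the paper's model):

```python
import math
import random

random.seed(0)

# Synthetic log-lifetimes from two hypothetical failure modes (means 2.0, 4.0)
data = ([random.gauss(2.0, 0.3) for _ in range(150)] +
        [random.gauss(4.0, 0.3) for _ in range(150)])

def norm_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Deliberately poor initial guesses
pi1, m1, m2, s = 0.5, 1.0, 5.0, 1.0
for _ in range(50):                        # EM iterations
    # E-step: posterior probability that each point belongs to component 1
    r = [pi1 * norm_pdf(x, m1, s) /
         (pi1 * norm_pdf(x, m1, s) + (1 - pi1) * norm_pdf(x, m2, s))
         for x in data]
    # M-step: update the mixing weight, means and common std deviation
    n1 = sum(r)
    pi1 = n1 / len(data)
    m1 = sum(ri * x for ri, x in zip(r, data)) / n1
    m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
    s = math.sqrt(sum(ri * (x - m1) ** 2 + (1 - ri) * (x - m2) ** 2
                      for ri, x in zip(r, data)) / len(data))

print(round(m1, 1), round(m2, 1), round(pi1, 2))
```

With well-separated components, the posterior weights quickly lock onto the two modes and the estimated means recover the generating values.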

16:30-17:30 Session TU4C: Maintenance Modeling and Applications
PRESENTER: Jackson Lee

ABSTRACT. Aging is inherent to mechanical systems, such as aging wind turbines [1]. Thus, maintenance is performed to return the generator to its optimal functioning. Analyzing an aging distributed generation system (DGS) is valuable to stakeholders for whom the economic value is of greatest importance. This paper applies a Wiener process with unit-to-unit variability to model the generator's degradation path [2]. The aging DGS is then simulated via a power flow model, which outputs the energy not supplied and the operational costs [3]. A maintenance model is developed using the difference between the real and expected operational costs, and is then optimised to determine the maintenance interval that gives the lowest long-run cost rate (LRCR). The optimal interval is first determined by two widely used evolutionary algorithms, GA and PSO, as the benchmark. To avoid running the aging DGS for every candidate interval, the problem of optimizing the LRCR is converted into a model that can be solved by recent reinforcement learning algorithms [4]. For the reinforcement learning model, the environment is the uncertain renewable energy resource and load, the reward is the LRCR, and the policy is the maintenance interval. The solutions are determined for a DGS in Corvallis, Oregon, USA. Fig. 1 shows the relationship between LRCR and maintenance interval obtained from GA and PSO. The optimal interval for this DGS was around 2.73 years, returning the lowest LRCR. The results will be compared to those of GA and PSO to show why the reinforcement learning based decision system is superior.

A disassembly path planning method for mechanical products in narrow passages based on improved RRT-connect algorithm
PRESENTER: Yuning Liang

ABSTRACT. The narrow installation spaces and complex assembly relationships of mechanical products make disassembly and maintenance difficult. During the product design process, however, it is difficult for designers to judge intuitively whether a disassembly path exists in a virtual environment. For such high-dimensional configuration-space path planning problems, sampling-based algorithms such as the Rapidly-exploring Random Tree (RRT) and its variants are widely used. However, the uniform sampling strategy of these algorithms raises two problems: planning performance is significantly reduced because of the low sampling probability in narrow passages, and the orientation of the product is random. Therefore, we improve the RRT-connect algorithm for disassembly path planning in narrow passages. First, we propose a narrow-biased sampling strategy to increase the sampling density in narrow passages. We use the bridge test to identify narrow passages, and modify the extend function of the RRT-connect algorithm so that the tree extends into narrow passages with higher probability. We also propose a rotation constraint method to facilitate the disassembly and carrying of products. The rotation parameters are represented by unit quaternions and are changed only when necessary: when the sampling point is in wide space, its rotation parameters are left unchanged, and when it is in a narrow passage, the rotation parameters are set to the direction perpendicular to the bridge (i.e., parallel to the narrow passage). Our method improves the efficiency of path planning in three-dimensional narrow spaces and reduces unnecessary rotation of the product during random sampling.
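
The bridge test used to identify narrow passages can be sketched in 2D (the obstacle geometry and the segment length below are invented, and the real method works in a higher-dimensional configuration space):

```python
import random

random.seed(1)

# Toy 2D world: two rectangular obstacles separated by a narrow vertical gap
def in_obstacle(p):
    x, y = p
    return (0.0 <= y <= 1.0) and (x <= 0.45 or x >= 0.55)

def bridge_test(p, length=0.2, tries=20):
    """Return True if p lies in a narrow passage: some random short segment
    centred on p has both endpoints inside obstacles while p itself is free."""
    if in_obstacle(p):
        return False
    for _ in range(tries):
        dx = random.uniform(-length, length)
        dy = random.uniform(-length, length)
        a = (p[0] + dx, p[1] + dy)
        b = (p[0] - dx, p[1] - dy)   # mirror endpoint through p
        if in_obstacle(a) and in_obstacle(b):
            return True               # found a "bridge" spanning the passage
    return False

print(bridge_test((0.5, 0.5)))   # centre of the narrow gap
print(bridge_test((0.5, 2.0)))   # open space above the obstacles
```

Points that pass the test can then be given a higher sampling weight when the extend step of RRT-connect chooses growth directions.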

PRESENTER: Wang Luohaoji

ABSTRACT. Critical urban infrastructures such as energy grids and transport networks have the following characteristics: 1) these network-based systems have to fulfil continuous and stable transmission demand, and are complex, with many nodes and arcs; 2) either a system breakdown or maintenance will lead to regional supply shutdown or shortage. In this paper, we consider a network-based system with a single origin and a single destination. The degradation of the arcs is caused by material aging and deterioration. When some components fail, the load they carried has to be shared by the other components to meet the contracted demand. Early or potential failures remain hidden until their consequences become evident enough to be detected or monitored. On the one hand, maintenance activities require some regional shutdown; on the other hand, the system performance should be maintained above a given threshold. Because the arcs are located in different regions of the networked system, the maintenance sequence matters once maintenance durations are taken into account. The sequence of components awaiting maintenance should therefore be optimized with the aim of restoring the system performance within a given time, meaning that the demand of each OD pair can be met in that time. In this paper, a network average traffic objective is established to optimize and arrange the maintenance sequence based on two rules: 1) a capacity rule and 2) maximum network flow change (MNFC), used to assign priority within the critical infrastructure. The different sequences are then compared in terms of their contribution to network performance. Finally, this paper models a real infrastructure network and simulates the maintenance procedure to verify the feasibility of the maintenance scheme.

16:30-17:30 Session TU4D: Prognostics and Health Management: From Condition Monitoring to Predictive Maintenance
Location: Panoramique
Robust Sensor Fault Detection for Linear Parameter-Varying Systems using Interval Observer
PRESENTER: Thomas Chevet

ABSTRACT. This paper proposes a new interval observer for continuous-time linear parameter-varying systems with an unmeasurable parameter vector subject to unknown but bounded disturbances. The parameter-varying matrices are assumed to be elementwise bounded. This observer is used to compute a so-called residual interval used for sensor fault detection by checking whether zero is contained in the interval. To attenuate the effect of the system's uncertainties on the detectability of faults, additional weighting matrices and different upper and lower observer gains are introduced, providing more degrees of freedom than classical interval observer strategies. In addition, an $L_{\infty}$ procedure is proposed to tune the value of the observer gains, this procedure being easy to modify to introduce additional constraints on the estimation algorithm. Simulations are run to show the efficiency of the proposed fault detection strategy.
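
The residual check at the core of the method can be illustrated with a scalar example (the numeric values are invented; in the paper, the interval observer produces the output bounds):

```python
def residual_interval(y_meas, y_lower, y_upper):
    """Bounds on the residual (measurement minus estimated output),
    given lower/upper bounds on the observer's output estimate."""
    return (y_meas - y_upper, y_meas - y_lower)

def fault_detected(interval):
    """A sensor fault is flagged when zero is NOT inside the residual interval."""
    lo, hi = interval
    return not (lo <= 0.0 <= hi)

# Healthy sensor: measurement consistent with the predicted output interval
print(fault_detected(residual_interval(2.1, 1.8, 2.5)))
# Faulty sensor: measurement outside the interval observer's bounds
print(fault_detected(residual_interval(3.4, 1.8, 2.5)))
```

Tighter output bounds (via the tuned gains and weighting matrices) shrink the residual interval and hence make smaller faults detectable.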

Embedded Feature Importance Determination Technique for Deep Neural Networks in Prognostics and Health Management

ABSTRACT. In the last ten years, deep learning-based methods have proven to be suitable for prognostics and health management (PHM) by achieving promising results in tasks such as diagnostics, prognostics, remaining useful life (RUL) estimation and anomaly detection. Despite this, companies are still hesitant to adopt these kinds of models. One of the reasons behind this hesitance is the black-box nature of neural networks: there is no straightforward way of interpreting the results a neural network achieves. In the context of PHM, this is a key aspect. A model that accurately predicts failure states and also explains how and/or why these states are reached is necessary to determine whether the algorithm is biased, to build trust, and ultimately to learn from it. To address this issue, we propose a technique for feature importance determination embedded in deep neural networks (DNNs). The objective is that, after training, the DNN reports not only its performance metrics but also how relevant each input feature is within the model. To do this, we add a locally connected layer between the input layer and the first hidden layer, whose weights determine the importance of each feature. This layer is trained jointly with the rest of the network, so the importance values are adjusted during training to minimize the loss function. We demonstrate this approach using a dataset with vibrational data from a test rig with a ball bearing and compare it with three other techniques by ranking the features from most to least relevant and evaluating performance with the n best features. Results show that the presented approach affects performance positively, that it is able to recognize irrelevant features, and that it reaches high performance with fewer features than the other evaluated techniques.
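
The idea of trainable per-feature importance weights can be sketched outside a full DNN. Below, the "gating" layer is trained alone on synthetic linear data in which feature 1 is irrelevant (everything here is an invented toy, not the paper's network or dataset):

```python
import random

random.seed(3)

# Synthetic data: the target depends on features 0 and 2; feature 1 is unused
def sample():
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = 3.0 * x[0] + 1.0 * x[2]
    return x, y

data = [sample() for _ in range(500)]

# One trainable "importance" weight per input feature, learned jointly with
# the rest of the model (here the model is just the gating layer itself)
g = [0.1, 0.1, 0.1]
lr = 0.05
for _ in range(300):                       # plain gradient descent on MSE
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        err = sum(gi * xi for gi, xi in zip(g, x)) - y
        for i in range(3):
            grad[i] += 2 * err * x[i] / len(data)
    g = [gi - lr * gr for gi, gr in zip(g, grad)]

importance = [abs(gi) for gi in g]
ranking = sorted(range(3), key=lambda i: -importance[i])
print([round(gi, 2) for gi in g], ranking)
```

After training, the magnitude of the gate weight for the irrelevant feature collapses toward zero, which is the mechanism the abstract exploits for feature ranking.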

16:30-17:30 Session TU4E: Socio-Technical-Economic Systems
Location: Amphi Jardin
PRESENTER: Dana Prochazkova

ABSTRACT. The aim of risk management in the operation of technical facilities is their integral safety, which ensures their co-existence with their vicinity throughout their life cycles. To find the way, an original worldwide database of technical facility accidents and failures was compiled (it contains 7829 events) [1]. A detailed study of the database [2] shows that the causes of technical facility accidents and failures fall into the categories shown in Figure 1.

Fig. 1. Basic categories of risk sources associated with the technical facilities operation which lead to the failures of the coexistence of technical facilities with surrounding areas during their operation; IS = information system; PSH = personnel safety and health.

The evaluation of accident and failure studies shows [2] that an important factor is the correct performance of responsibilities at different management levels. The knowledge obtained shows that to prevent accidents and failures, it is necessary to avoid large mistakes in risk prevention, as well as the accumulation of small mistakes whose realization within a short time interval is dangerous. In view of the dynamic development of the world and of technical facilities, a "Risk Management Plan" tool was developed in agreement with [3].

1. CVUT, Database on World Disasters, Technical Entities Accidents and Failures – Causes, Impacts and Lessons Learned. Praha: CVUT, 2020.
2. D. Prochazkova, J. Prochazka, Risk Management and Settlement at Technical Facilities Operation.
3. IRM, A Risk Practitioner's Guide to ISO 31000:2018. London: IRM, 2018.


ABSTRACT. Simulation-based training is a common way of achieving effective learning for high-risk contexts. COVID-19 changed large parts of the education and training in safety and risk management at Nord University in Norway. The training and education had been based on theoretical lectures prior to simulated practical exercises at the university's emergency preparedness laboratory, NORDLAB. Here, academic staff, mentors, facilitators, and students cooperated before, during and after exercises in order to provide an optimal learning context. However, this cooperation required close contact, which suddenly ended when COVID-19 hit, due to infection control. This created challenges, including how to uphold the students' learning outcome based on the theoretical foundation of Kolb's (2014) experiential learning, on which NORDLAB is built, while changing the form of training at short notice. The learning context was changed to net-based training and exercises using the Zoom software, with all participants geographically spread all over Norway.

Thus, our research question was: which challenges in the use of simulation and lab exercises in safety management education during COVID-19 are central, and how could they be solved?

The challenges identified were 1) students' lack of ability to actively participate in face-to-face learning activities, 2) mentors' lack of technological flexibility, 3) inability to share the lab's simulation technology over Zoom, 4) novice students' difficulties in forming and interacting in digital teams, 5) Zoom fatigue, 6) the need for increased administrative and technological support, 7) reduced body-language feedback, and 8) lack of visualization of injects. Solving these challenges was demanding; the elements we used in this case were 1) on-boarding, 2) tabletop exercises, 3) video-recorded lectures, 4) flipped classroom, 5) gaming-simulated exercises, 6) podcasts, and 7) shorter training sessions.

We would like to discuss how and whether the solutions matched the challenges for safety training with regard to the expected learning outcome for students who were to enter practical emergency preparedness and safety management.

16:30-17:30 Session TU4F: Probabilistic vulnerability estimation, lifetime assessment and climate change adaptation of existing and new infrastructure
Stochastic river flow forecasting using a Markov-switching autoregressive model
PRESENTER: Bassel Habeeb

ABSTRACT. Over time, the climate has changed considerably due to a wide variety of natural processes. During the last century, climate change has been strongly influenced by human economic activities, mainly through greenhouse gas emissions. Climate change has impacted infrastructure in different ways. In the particular case of bridges, it has added large variability to river flow and bridge scouring, accelerating the deterioration process and reducing their lifetime. Design and operation therefore require predictions that include the continuously increasing changes in river flow due to climate change. This paper presents a stochastic Markov-switching autoregressive model to improve river flow predictions. The proposed model was built on a historical database for the river Thames in the United Kingdom. The database was used to predict and estimate the expected shortfall, i.e. the average amount of river flow lost over the predicted period given that the loss exceeds the 99th and 95th percentiles. The results of the model were compared with actual data, achieving an estimated R^2 value of 85.45%. The forecasting accuracy for river flow is 88.7%. For severe river flow values, the stochastic model provides an R^2 value of 99.8%. These results indicate that the stochastic Markov-switching autoregressive model can be used with advantage to forecast the effects of climate change on river flow.
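
The building block of the model, a Markov-switching AR(1) process, can be simulated as follows (the regime parameters below are invented for illustration, not fitted to the Thames data):

```python
import random

random.seed(7)

# Two hypothetical flow regimes: 0 = normal, 1 = high flow
# x_t = c[s] + phi[s] * x_{t-1} + noise, with regime s following a Markov chain
c     = [1.0, 5.0]       # regime intercepts
phi   = [0.6, 0.8]       # regime AR(1) coefficients
P     = [[0.95, 0.05],   # regime transition probabilities
         [0.10, 0.90]]
sigma = [0.3, 1.0]       # regime noise standard deviations

s, x = 0, c[0] / (1 - phi[0])            # start in regime 0 at its mean
series, states = [], []
for _ in range(5000):
    s = 0 if random.random() < P[s][0] else 1   # Markov regime switch
    x = c[s] + phi[s] * x + random.gauss(0, sigma[s])
    series.append(x)
    states.append(s)

mean0 = sum(x for x, st in zip(series, states) if st == 0) / states.count(0)
mean1 = sum(x for x, st in zip(series, states) if st == 1) / states.count(1)
print(round(mean0, 1), round(mean1, 1))
```

The high-flow regime produces the clustered extremes that a single AR model would miss; fitting the switching model then amounts to estimating c, phi, sigma and P from data.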

Sensitivity and reliability analysis for reinforced concrete structures subjected to cyclic loading using a polynomial chaos

ABSTRACT. Fatigue is one of the main causes of failure of reinforced concrete structures subjected to cyclic loading. Several mechanical models based on damage theory have been developed to represent the behavior of such structures. The use of these models requires knowledge of the model parameters, which can be determined from experimental tests. However, there is little information about the most influential parameters for probabilistic lifetime assessment. In this paper, we propose a methodology based on polynomial chaos expansion (PCE) to propagate uncertainties in a damage mechanics model. The PCE is also used to perform a sensitivity analysis of the model's input parameters and to estimate the failure probability of a reinforced concrete component. The methodology is applied to a reinforced concrete beam subjected to cyclic loading. The results obtained were first compared with those of experimental tests to validate the proposed methodology; good agreement indicates that our approach is capable of propagating uncertainties. The sensitivity analysis allows us to identify the most influential parameters for lifetime assessment.

16:30-17:30 Session TU4G: Maritime and Offshore Technology
Location: Atrium 3
Empirical Analysis of Ship Anchor Drag Incidents for Cable Burial Risk Assessments
PRESENTER: Andrew Rawson

ABSTRACT. Subsea cables are critical infrastructure both for international telecommunications and for power transfer between continents or from offshore wind farms. The cost of repairing a telecommunications cable is between $1 million and $3 million, but the economic damage due to lost internet connections is perhaps incalculable (Veverka, 2013). It is therefore essential that any hazards that might threaten these cables are adequately risk assessed and that appropriate mitigation measures are put in place. Yet, an estimated 100 to 150 cases of cable damage are reported each year (ICPC, 2009). One key threat to cables is ship anchors, accounting for between 15% (ICPC, 2009) and 35% (Green and Brooks, 2010) of incidents. Anchors might be deployed in an emergency to arrest movement, or an anchored vessel might drag her anchor in adverse weather conditions. These anchors weigh several tonnes and in a soft seabed might penetrate to depths greater than 5 m (ICPC, 2009), causing significant damage to any subsea infrastructure. Effective planning and risk assessment of cable routes and cable burial/protection strategies can help to minimise these occurrences.

To quantify the risks of cable strikes, the Cable Burial Risk Assessment (CBRA) methodology was proposed by the Carbon Trust (Carbon Trust, 2015). It builds upon earlier work by Mole et al. (1997) on a simple Burial Protection Index (BPI) but takes a more probabilistic and holistic approach to cable risk management. In summary, the probability of anchor strike is a function of the time vessels are at risk, a scenario modifier and a causation probability. Whilst the traffic exposure can be calculated using historical movement data, and the scenario modifier, such as depth, can be estimated, the causation probability carries significant uncertainty. What is the likelihood that a vessel would drag its anchor in given conditions? Estimates have been given in the region of 1×10^-4 to 1×10^-6 (Doan et al. 2016) or 4.2×10^-5 (Allan, 2006). However, these are unsubstantiated, and the accuracy of the assessments could therefore be improved by empirically validating this causation probability. This paper sets out to achieve this.

Within this paper, we analyse a year of Automatic Identification System (AIS) data across the United States and extract all data for vessels at anchor, representing over 3 million hours of anchoring activity. Data mining is conducted on the movement characteristics of these vessels, analysing successive positions to determine whether there is a reverse trajectory that might indicate anchor dragging. Using these drag candidates, the contributory factors associated with each event and non-event, such as wind speed, wave conditions, vessel type and bathymetry, are analysed to determine which factors are most important. Finally, probabilistic causation probabilities are developed for different drag distances that can be used in future CBRA studies.

The results of this analysis contribute a greater appreciation of the causal factors behind anchor dragging incidents and a probabilistic framework to strengthen future CBRA studies. This in turn contributes to the safe and efficient routeing and protection of subsea cables. Several areas of further work are identified and explored, including the potential use of machine learning to create a decision support tool to monitor the relative risk of vessels at anchor near to offshore cables.
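
The position-screening step can be sketched as a distance-from-anchor test (the swing radius, fix counts and coordinates below are invented; the paper's analysis is far richer, incorporating weather and vessel attributes):

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def drag_candidate(track, swing_radius_m=150.0, min_fixes=3):
    """Flag a possible anchor drag: the vessel sits beyond its expected
    swing circle (centred on the first anchored fix) for several fixes."""
    anchor = track[0]
    outside = sum(1 for p in track[1:] if haversine_m(anchor, p) > swing_radius_m)
    return outside >= min_fixes

# Hypothetical AIS fixes: swinging at anchor vs. steadily drifting downwind
swinging = [(50.000, -1.000), (50.0005, -1.0005), (49.9995, -1.0004),
            (50.0004, -0.9996), (49.9996, -1.0006)]
dragging = [(50.000, -1.000), (50.002, -1.000), (50.004, -1.000),
            (50.006, -1.000), (50.008, -1.000)]
print(drag_candidate(swinging), drag_candidate(dragging))
```

A real screen would also scale the swing radius with water depth and scope, and check that the displacement is persistent and roughly downwind before confirming a candidate.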

References:
Allan, R. (2006). Non-Natural Marine Hazards Assessment. Available at:
Carbon Trust (2015). Cable Burial Risk Assessment Methodology. Available at:
Doan, H., Macnay, L., Savadogo, A. and Smith, K. (2016). Offshore Cable Burial Depth Using a Risk Based Approach. Journees Nationales de Geotechnique et de Geologie de L'Ingenieur.
Green, M. and Brooks, K. (2010). The Threat of Damage to Submarine Cables by the Anchors of Ships Underway.
ICPC (2009). Submarine Cables and the Oceans: Connecting the World. Available at:
Mole, P., Featherstone, J. and Winter, S. (1997). Cable Protection – Solutions through New Installation and Burial Approaches. Chelmsford: Cable and Wireless Marine.
Veverka, D. (2011). Under the Sea. Shipping and Marine: The Magazine for Maritime Management, Issue 8.

Stakeholders network analysis for safe LNG storage and bunkering at ports

ABSTRACT. Modern societal demands for environmental protection have led many industries to adopt more environmentally friendly fuels. Liquefied natural gas (LNG) has the potential to replace traditional marine fuels and is hence considered the fuel of the future in the shipping industry. Despite the benefits LNG offers, there is a significant risk when storing and bunkering it in port areas that should not be neglected. The main purpose of the current paper is to present the first results of the project entitled "TRiTON", funded by the Greek Ministry of Education, which aims at addressing safety issues of LNG at ports. A stakeholder network analysis is performed to investigate the processes of detection, analysis, response, and monitoring of events that can lead to accidents during storage, transport, and supply of LNG at ports. The national (Greek) and international regulatory framework for LNG safety has been analysed, so as to identify the relevant stakeholders and the relationships established between them (Aneziris et al., 2020). Among the most important regulations analysed are: a) the Greek Presidential Decree 64/2019, implementing the regulation for safe fueling of ships with LNG, based on the European Directive 2014/94 for the deployment of alternative fuel infrastructures; b) the European Directive 2012/18/EU on the control of major-accident hazards involving dangerous substances (Seveso III); and c) the Greek Ministerial Decision Γ1/20655/2897, adopting the European Agreement concerning the International Carriage of Dangerous Goods by Road (ADR). Three stakeholder networks have been created for the most widely used methods of LNG storage and bunkering, namely: a) fixed-tank storage and tank-to-ship bunkering, b) truck-to-ship bunkering, and c) ship-to-ship bunkering. Statistics and metrics of the networks have been calculated, such as density, betweenness centrality, closeness, clustering coefficient and modularity.
Finally, the most important stakeholders for LNG safety at ports have been identified with the help of the open-source software Gephi – The Open Graph Viz Platform (Bastian et al., 2009). These are the Ministry of Shipping and Island Policy, the Port Authority, and the qualified person in charge of bunkering.
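
Some of the reported network metrics can be computed directly (the stakeholder names and links below are invented for illustration only; the paper computes these metrics in Gephi on the real networks):

```python
# Toy undirected stakeholder network (hypothetical names and relationships)
edges = [("Ministry", "PortAuthority"), ("Ministry", "Operator"),
         ("PortAuthority", "Operator"), ("PortAuthority", "BunkeringPerson"),
         ("Operator", "BunkeringPerson"), ("Ministry", "FireService")]

nodes = sorted({v for e in edges for v in e})
degree = {v: sum(v in e for e in edges) for v in nodes}

n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))      # fraction of possible links present

# Normalised degree centrality: degree / (n - 1)
centrality = {v: d / (n - 1) for v, d in degree.items()}
top = max(centrality, key=centrality.get)
print(round(density, 2), top)
```

Degree centrality is only the simplest of the metrics listed in the abstract; betweenness, closeness and modularity require shortest-path and community computations that graph tools such as Gephi provide out of the box.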

References 1. O. Aneziris, I. Koromila and Z. Nivolianitou, A systematic literature review on LNG safety at ports, Safety Science, 124 (2020). 2. M. Bastian, S. Heymann and M. Jacomy. Gephi: an open source software for exploring and manipulating networks. International AAAI Conference on Weblogs and Social Media (2009)

PRESENTER: Thor Myklebust

ABSTRACT. Digitization may provide increased access to and more efficient use of real-time and historical data, internally as well as externally in an organization. However, when information from industrial control systems (ICS) becomes more available in office IT systems and in the "cloud", ICS may become more vulnerable and attractive targets for cyberattacks. We have investigated data safety in ICS in the Norwegian offshore sector when data is processed from the ICS to the office network. The work is mainly based on document review and nine interviews with selected oil companies, rig companies and service providers of operational data. The paper addresses strengths and threats related to data safety with emphasis on (1) data sources and data flow, (2) safety and security of data, (3) data cleaning and processing, (4) contextualization, (5) validation, and (6) quality assurance. We also discuss shortcomings in current standards for functional safety, such as IEC 61508 and IEC 61511, and in the IEC 62443 series of security standards. It is a major challenge for the industry that there are no good international standards and guidelines that define the relevant terminology across IT systems and ICS. Future work should address data safety challenges when applying artificial intelligence and machine learning in ICS.

16:30-17:30 Session TU4H: Aeronautics and Aerospace
Location: Cointreau
PRESENTER: Gianpiero Buzzo

ABSTRACT. CIRA (the Italian Aerospace Research Center) has been performing experimental flight test campaigns for three years by means of FLARE, a flying platform based on a TECNAM P92 Echo S. FLARE is equipped with an experimental set-up used for different experimental purposes. In order to satisfy a specific request for a reliability assessment by the Italian Civil Aviation Authority (ENAC), CIRA has started investigating the modelling techniques that best suit the FLARE configuration, which is made up of both hardware and software, of homemade equipment and of CotS (Commercial off-the-Shelf) items. To define such an approach, it was decided to follow an incremental process that iteratively includes all the system components, starting with one specific FLARE subsystem belonging to the On-board Data Communication System. Based on the current configuration of the selected subsystem, a database has been designed and filled with reliability data, in terms of MTBF (Mean Time Between Failures), MTTR (Mean Time To Repair), failure rate and repair rate, these data coming from literature research, from standards, mainly [1], and from suppliers' data when available. A first reliability evaluation has been derived by implementing a classic reliability technique, the Reliability Block Diagram (RBD), and a global reliability figure has been derived for the subsystem. In order to gain a deeper understanding of the potential failure paths leading to the loss of Data Communication System functionalities, a second reliability evaluation has been implemented by means of Monte Carlo simulation, using the same collected reliability data as input. This choice also makes it easy to derive minimal cut sets for identifying the most critical components, potential mitigations and potential recommendations for design changes.
This paper shows the results of the comparison between the reliability figures obtained for the Data Communication Subsystem of the FLARE facility, paving the way to enlarging the analyzed system to include the complete on-board experimental set-up.
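
The RBD evaluation reduces to series/parallel reliability algebra; a minimal sketch with invented reliability values (the topology and numbers are not the FLARE data):

```python
def series(*rs):
    """All blocks must work: reliabilities multiply."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    """At least one redundant block must work: unreliabilities multiply."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical data-link subsystem: a transmitter in series with two
# redundant radio modules, in series with a receiver (illustrative values)
r_system = series(0.99, parallel(0.95, 0.95), 0.98)
print(round(r_system, 4))
```

The complementary Monte Carlo approach samples each block up/down from its failure and repair rates and estimates the same figure empirically, which also yields the minimal cut sets mentioned in the abstract.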


ABSTRACT. Real-time health monitoring of flight control actuators usually involves the comparison of measured signals either with numerical models or with statistical data. As the external loads experienced by the system influence the operation of most actuators, such loads are a useful quantity to compare with the actuator output for on-board fault detection [1]. In common flight controls, the actuator load is not directly available as a measured signal, due to the reliability and complexity penalties often associated with the installation of dedicated sensors and transducers. In this work, we discuss the use of distributed sensing of the airframe strain to infer the aerodynamic loads acting on the flight control actuator. We address a specific sensing technology based on Fiber Bragg Gratings (FBGs), as it combines good accuracy with minimal invasiveness and low complexity [2]. Specifically, we combined a structural and an aerodynamic model to collect a database used to train data-driven surrogates that map from strain measurements to actuator load. Figure 1 displays the information flow of the proposed process. Preliminary results are promising and show a good correlation between the aerodynamic load and the structural strain measurements.


ABSTRACT. Optical sensors have recently gained interest due to the many advantages they offer over the traditional electrical sensors commonly used in aerospace applications. In particular, their total insensitivity to electromagnetic interference (EMI), the ease of multiplexing different signals on a single line, the excellent resilience to hostile environments, the very compact dimensions, and the considerable overall weight savings resulting from the reduction of signal cables make technological solutions based on optical fibers a compelling alternative to traditional sensing elements [1]. In this work, the authors consider optical sensors based on Fiber Bragg Gratings (FBGs), which reflect a very narrow band of wavelengths, called the Bragg wavelength, but are almost transparent to other signals. This behavior is obtained by realizing local variations of the refractive index of the FBG core [2]. The Bragg wavelength, nominally defined in the production phase by the grating etching process, can vary as a function of physical changes in the sensor itself or of environmental conditions (e.g., physical stresses applied to the grating, or variations of temperature or humidity) [3]. The correlation of the Bragg wavelength variation with the physical variations of the sensor is essential to guarantee satisfactory levels of accuracy and reliability. In particular, when using FBGs as mechanical strain sensors, it is crucial to estimate with proper accuracy the disturbance generated by environmental conditions and to conceive an effective compensation method. Hence, this work studies the effects of environmental temperature and humidity variations on measurements, examining possible non-linear, time-dependent phenomena arising from the bonding of the FBGs [4].
For this purpose, the authors developed a dedicated test bench to simultaneously detect the various physical measures (FBG deformation, temperature, humidity, Bragg wavelength variation), analyze their correlations, and formulate the compensation strategy outlined above.
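As background to the compensation problem, the sketch below applies the standard first-order FBG strain-temperature relation, Δλ/λ0 = (1 − p_e)·ε + (α + ξ)·ΔT, to remove the thermal contribution from a measured wavelength shift. The coefficient values are typical textbook figures for silica fiber, not the calibration of the authors' bench, and the relation ignores the humidity and bonding effects the paper actually investigates.

```python
# Hedged sketch of first-order FBG temperature compensation.
# Coefficients are typical silica-fiber values (illustrative only):
P_E = 0.22       # effective photo-elastic coefficient
ALPHA = 0.55e-6  # thermal expansion coefficient [1/K]
XI = 8.6e-6      # thermo-optic coefficient [1/K]

def compensated_strain(lambda0_nm, dlambda_nm, dT):
    """Strain inferred from a Bragg wavelength shift, with the thermal
    contribution (measured by a reference temperature sensor) removed."""
    rel_shift = dlambda_nm / lambda0_nm
    return (rel_shift - (ALPHA + XI) * dT) / (1.0 - P_E)
```

For example, a shift synthesized from 100 microstrain plus a 10 K temperature step is recovered as exactly 100 microstrain once ΔT is known.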

16:30-17:30 Session TU4I: Foundational Issues in Risk Assessment and Management
Location: Giffard
PRESENTER: Paolo Bragatto

ABSTRACT. Safety is an essential requirement to ensure sustainability and competitiveness for European industry. In Europe, there are many national programs to fund industrial safety research, sometimes funded by mandatory insurance against accidents at work or carried out by national institutes doing research by themselves or in collaboration with local universities. The topic of industrial safety has some overlap with occupational safety and security, and in some countries this research topic falls under these headings. Even though the issue of industrial safety is adequately considered in national funding programs, the coordination between the various countries is poor. Furthermore, EU research framework programs have never included “industrial safety” among the key priorities. The SAF€RA project (FP7 ERA-NET, Grant Agreement no. 291812) was created to strengthen the links between European researchers and, between 2012 and 2015, financed the creation of a mechanism to coordinate research investment among EU countries. Projects of major importance, for which transnational and multidisciplinary collaboration was essential, have been funded. After the end of the ERA-NET in 2015, a formal Partnership was created; it now gathers 16 organizations from 12 different European countries and manages funds for research on industrial safety and allied topics. Since 2013, six SAF€RA calls for projects have been launched, fully funded by the SAF€RA Partners. To select the topics, SAF€RA organizes symposia, inviting renowned experts from industry and academia. Proposals are evaluated by an independent panel, and the funding arrangements involve little bureaucracy. 22 projects have been funded to date, with 4 more projects at the starting line. The projects have involved 66 operating units from 17 different European countries, covering many of the topics of "industrial safety" and involving the most renowned European research teams, but also emerging ones. 
The purpose of the article is to analyze in detail the main scientific outputs obtained from the projects and to present a roadmap for future strategic research. The article uses a series of evaluation indicators, also assessing the effectiveness of the funds spent from a cost/benefit perspective. After eight years of hard work to promote transnational collaboration, the SAF€RA leadership wants to assess the results obtained and discuss them with the industrial safety research community, in order to decide the direction to be given to the consortium in the coming years.

A risk perspective on Common Operational Pictures: A case study of the Swedish Counties’ Coordination Office for the Covid-19 response
PRESENTER: Henrik Hassel

ABSTRACT. The Covid-19 crisis has led to widespread impacts on society, not only in terms of high death tolls and extreme pressure on health care, but also in terms of cascading effects for essentially all societal sectors. The cascading effects are caused both directly by the disease, such as the down-prioritization of preventive health care and disruptions in vital societal functions due to lower personnel availability, and indirectly by the measures taken to reduce the spread of the disease, such as the closing of borders and the shut-down of non-essential activities, leading to economic impacts for many sectors. A range of actors, on different societal levels, have become involved in the response to the pandemic. To take appropriate actions, these actors need a good understanding of what is going on, i.e., they need good situational awareness. One way to create such situational awareness, especially suitable in a multi-actor setting, is to develop Common Operational Pictures (COPs), where information is collected from actors with relevant, although partial, insights and then compiled into an aggregated common picture that can be shared among relevant actors. This paper presents a case study of the COPs compiled by the Swedish Counties’ Coordination Office (SCCO). The SCCO has had a role in the Swedish response since early spring 2020, as it bi-weekly collects regional COPs from all County Administrative Boards and compiles them into national operational pictures, with the main target organization being the Government Offices of Sweden. This paper argues that, in order to become useful decision support, COPs should not only contain information about “the present” but also forward-looking information (predictions, prognoses, etc.). One justification for this is that, since there is a significant time lag between the collection of information and the delivery of the COP, including information about "the future" can facilitate proactive decision-making. 
Adopting a forward-looking perspective means that knowledge from risk science can be applied. Risk science focuses on the systematic description and prediction of potential future events and consequences, as well as the associated uncertainties, yet limited work has been conducted on the nexus between COPs and risk science. Our research questions are: 1) To what extent, and how, is forward-looking information included in the SCCO's COPs? 2) How can risk science add value to the work with COPs? For the second research question, we primarily focus on the SCCO, but we also reflect on these issues from a more general perspective. We also discuss the pros and cons of adopting a risk perspective in COPs, since practical constraints in terms of time pressure, the need for simplicity, etc. may limit the suitability of an increased use of such a perspective.

Information- and Cyber-Security Practices as Inhibitors to Digital Safety

ABSTRACT. Every year sees an increased focus on cyber- and information-security (CIS) from governments, official organizations, private industry, and research and academia. As awareness of CIS improves in society, requirements on stakeholders become more numerous and more detailed. The natural response has been to develop CIS as a specialized field, resulting in digital safety and digital security separating into more distinct silos. This manifests through the establishment of separate roles for handling safety and security in organizations, often mandated through standards, rules and regulations, as well as through universities establishing security programs, with research following suit. Where critical infrastructures are involved, the difference between the practices becomes clear. Within safety, it is customary to share hazards and risks between projects and companies, and even across specific domains, through initiatives such as databases and registers [1]. Within CIS concerning critical systems and infrastructures, or when one is dealing with information of a classified nature, the way of working and the involvement of peers and coworkers are fundamentally different. Examples include ‘safety’ people having to leave the room when the same information is discussed under the guise of security, and restricted access to safety reports due to security concerns [2]. Even though approaches to combine safety and security in, e.g., the development lifecycle of systems exist [3], it is the experience of the authors that the topics are not sufficiently intermingled. The paper presents the authors' experiences from safety and security analysis and development activities over the last 20 years in a range of critical domains, with a focus on successes and challenges in addressing safety and security. The shared experiences indicate that CIS as a topical area has become more important, but also more distinct, in that it is addressed in isolation. 
The identified challenges are discussed, and a set of mitigations to prevent the (self-)alienation of security from safety is provided. The result is a list of best practices for navigating projects where CIS concerns inhibit safety. To fully integrate the safety way of working, i.e., information sharing and a culture of inclusion, a radical revision of current practices and ways of working within security is suggested.

Keywords: Safety, Information security, Cyber Security, Critical infrastructures, Critical systems development

References
1. OECD NEA, Computer-based Systems Important to Safety (COMPSIS) Project, 2005–2011, available:
2. Gran, B.A., Egeli, A., Bjerke, A. 2016. Addressing security in safety projects – experiences from the industry. Fast abstracts at the International Conference on Computer Safety, Reliability, and Security (SAFECOMP), Trondheim, Norway.
3. Raspotnig, C. and Opdahl, A. 2013. Comparing risk identification techniques for safety and security requirements. J. Syst. Softw. 86(4), 1124–1151. DOI:

16:30-17:30 Session TU4J: Seismic reliability assessment
Location: Botanique
PRESENTER: Mabel Orlacchio

ABSTRACT. The present paper deals with the analytical assessment of state-dependent seismic fragility curves for Italian building classes, which constitute one of the results of the ongoing research project RISE (Real-time earthquake rIsk reduction for a ReSilient Europe). State-dependent fragility curves are required to calculate seismic structural reliability when structural failure can be reached progressively, i.e., due to the cumulative effect of multiple earthquakes. The structures under consideration are taken from the outcomes of the SERA project (Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe) and refer to existing Italian buildings classified into different structural typologies defined in accordance with the GEM taxonomy [1]. The state-dependent fragility curves are evaluated via back-to-back incremental dynamic analysis (back-to-back IDA) [2] using equivalent single-degree-of-freedom systems. The analyses consider four damage states and are performed with the DYANAS software [3]. A structure is considered to enter a damage state when the transient maximum inelastic displacement exceeds a threshold defined for each structure within the SERA project. This study also addresses some issues that significantly affect the state-dependent fragility assessment, i.e., the choice of a suitable intensity measure and the identification of the optimal number of ground motion records for the execution of the back-to-back IDA. To address the first issue, several alternative ground-motion intensity measures are compared. The number of records to use for nonlinear dynamic analysis is, in turn, defined by means of a quantitative criterion based on the statistical inference concept of estimation uncertainty, balancing computational cost against the precision of the fragility assessment. Finally, the lognormal assumption for the state-dependent fragilities is also discussed.
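To illustrate the lognormal fragility model discussed above, the hedged sketch below fits a lognormal fragility curve to a set of intensity-measure (IM) values at which individual records cause the damage-state transition (the "IM-based" fitting approach). The data and function names are illustrative assumptions, not RISE/SERA results.

```python
# Minimal sketch: lognormal fragility fitting from per-record
# transition IMs. Data are synthetic and purely illustrative.
import math

def fit_lognormal_fragility(im_values):
    """Return (eta, beta): median and log-standard deviation of the
    IM values causing the damage-state transition."""
    logs = [math.log(x) for x in im_values]
    mu = sum(logs) / len(logs)
    beta = math.sqrt(sum((l - mu) ** 2 for l in logs) / (len(logs) - 1))
    return math.exp(mu), beta

def fragility(im, eta, beta):
    """P(damage-state transition | IM = im) under the lognormal model,
    i.e., the standard normal CDF of ln(im/eta)/beta."""
    z = (math.log(im) - math.log(eta)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Synthetic transition IMs (e.g., spectral acceleration in g):
eta, beta = fit_lognormal_fragility([0.4, 0.5, 0.5, 0.6, 0.8])
```

By construction, the fitted curve passes through 0.5 at the median IM, one simple check on any such fit.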


ABSTRACT. The Sapphire building in Istanbul, Turkey, is one of the tallest buildings in the country, with 55 stories above ground level. The structure has been monitored by the Department of Earthquake Engineering of the Kandilli Observatory and Earthquake Research Institute, Boğaziçi University, since 2012. The response of the structure to several earthquakes has been recorded by a 37-channel permanent accelerometer network installed on the basement levels (9th and 1st) and on the 9th, 14th, 25th, 36th, and 52nd floors. In this study, we determined the dynamic characteristics of the structure using the vibration data recorded by those instruments. The vibration data used in the analysis come from five different earthquakes that occurred between 2014 and 2020. The dynamic response of non-instrumented floors is estimated with the mode shape-based estimation (MSBE) technique. The structure is then modeled as a cantilever beam based on either Bernoulli-Euler or Timoshenko beam theory. The response of the structure to larger earthquakes is derived using the recursive filter methodology, which uses the corner frequency of the small earthquake and the magnitudes for scaling. The maximum inter-story drift ratio (MIDR) is calculated under various PGV levels.
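As a small worked example of the MIDR quantity used above (the maximum, over all stories, of the peak relative displacement between adjacent floors divided by the story height), the sketch below uses synthetic numbers unrelated to the Sapphire building's recorded response.

```python
# Illustrative MIDR computation from peak floor displacements.
# Displacements and story heights are synthetic, not measured data.

def max_interstory_drift_ratio(displacements, story_heights):
    """displacements: peak lateral displacement per floor, ground first;
    story_heights: height of each story between consecutive floors."""
    ratios = [abs(displacements[i + 1] - displacements[i]) / story_heights[i]
              for i in range(len(story_heights))]
    return max(ratios)

# Three stories of 3 m each; synthetic peak displacements in meters.
midr = max_interstory_drift_ratio([0.0, 0.010, 0.025, 0.036],
                                  [3.0, 3.0, 3.0])
print(midr)  # largest drift is 0.015 m over 3 m -> 0.005
```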


ABSTRACT. The analytical evaluation of seismic structural behavior under repeated earthquake shocks, which can potentially cause failure due to damage accumulation, often makes recourse to sequential dynamic analysis of a numerical model. One such dynamic analysis strategy is the so-called back-to-back incremental dynamic analysis (B2B-IDA), in which one accelerogram is scaled so as to bring the structure to a specific conventional damage state and is then followed by a second accelerogram, scaled in amplitude over a wide range of shaking intensity levels, forcing the structural response to span the entire range of possible damage states. The need to effectively capture record-to-record variability of response for seismic reliability analyses means that B2B-IDA is typically applied using a multitude of ground motions representing both the first damaging shock and the second shock in the sequence that affects the damaged structure. The present study uses a variety of such inelastic single-degree-of-freedom (SDOF) structures to explore a series of practical issues that arise in running B2B-IDA and post-processing the results. The investigation uses the DYANAS graphical user interface, previously developed with the contribution of the authors, as a tool that can also be used to streamline this type of analysis. The first issue addressed in this study is the number of records used to represent both the first and the subsequent seismic shock affecting the structure. Previous research has shown that the statistical inference concept of estimation uncertainty can be used to quantify the effect of the record sample size, used in single-event dynamic analysis, on the accuracy of the results obtained. The present article picks up on that methodology and seeks to extend it in the context of B2B-IDA. 
A second practical issue considered is the implementation of a hunt-and-fill algorithm to minimize the number of runs needed to efficiently trace a single B2B-IDA curve. Such an algorithm, combined with appropriate interpolation techniques, can also allow the rapid transformation of B2B-IDA curves from one intensity measure (IM) to another. Finally, this article briefly addresses updates to the DYANAS software that were explicitly implemented to facilitate the extraction of B2B-IDA results for the purpose of obtaining so-called state-dependent fragility functions, that is, models providing the conditional probability of a structure transitioning from one damage state to another, given shaking intensity.
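The hunt-and-fill idea can be sketched as follows: geometrically increase the IM level until a collapse-like flag is hit (hunt), then bisect the bracketing interval to sharpen the curve with few extra runs (fill). This is a generic illustration of the strategy, not DYANAS code; `run_analysis`, the step factor, and the IM values are hypothetical stand-ins for a nonlinear dynamic analysis.

```python
# Hedged sketch of a hunt-and-fill run-scheduling strategy for one
# IDA curve. run_analysis(im) is a hypothetical placeholder returning
# True when the record scaled to intensity `im` causes collapse.

def hunt_and_fill(run_analysis, im0=0.05, step=1.5, n_fill=3):
    ims, results = [], {}
    im = im0
    # Hunt phase: grow the IM geometrically until collapse is detected.
    while True:
        collapsed = run_analysis(im)
        ims.append(im)
        results[im] = collapsed
        if collapsed:
            break
        im *= step
    # Fill phase: bisect the interval bracketing collapse.
    lo = ims[-2] if len(ims) > 1 else 0.0
    hi = ims[-1]
    for _ in range(n_fill):
        mid = 0.5 * (lo + hi)
        collapsed = run_analysis(mid)
        results[mid] = collapsed
        if collapsed:
            hi = mid
        else:
            lo = mid
    return lo, hi, results

# Toy structural model: "collapse" above IM = 0.9 (illustrative).
lo, hi, res = hunt_and_fill(lambda im: im > 0.9)
```

The geometric hunt keeps the run count logarithmic in the collapse IM, and the fill phase refines the bracket, which is what makes the per-record cost of tracing many B2B-IDA curves manageable.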

16:30-17:30 Session TU4K: Accelerated Life Testing & Accelerated Degradation Testing
Location: Atrium 1
Accelerated test design of aero generator based on Text Mining
PRESENTER: Jinyong Yao

ABSTRACT. To solve the problem of information collection for aero generator accelerated tests, this paper proposes a text-mining-based fault analysis method for aero generator products. To improve the efficiency and accuracy of fault analysis, text mining technology is used to process the weak links and fault modes of complex mechanical and electrical products. First, a text information preprocessing method based on the Jieba word segmentation system is developed in Python. Then, to handle the unstructured information on faults and weak links in the fault analysis literature on mechanical and electrical products, TF-IDF and LDA models are used to extract fault information features, which fully describe the fault information and yield the feature weights of the text information. Next, a text mining method for fault information based on a support vector machine (SVM) is proposed. This method processes the fault information of mechanical and electrical products as a classification task and obtains information on weak links and fault modes, providing information support for accelerated testing. The proposed analysis method can greatly improve the comprehensiveness of fault analysis. Finally, a typical generator product is taken as an example to verify the method.
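As a toy illustration of the TF-IDF feature-weighting step in the pipeline described above, the pure-Python sketch below weights pre-tokenized fault descriptions. It is a minimal sketch under the assumption of already-segmented English tokens; the paper's actual pipeline uses Jieba segmentation of Chinese text, LDA topic features, and an SVM classifier, none of which are reproduced here.

```python
# Minimal TF-IDF sketch for tokenized fault descriptions.
# Tokens are illustrative; real input would come from Jieba segmentation.
import math

def tf_idf(docs):
    """Return, per document, a {term: tf-idf weight} dictionary."""
    n = len(docs)
    df = {}  # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            idf = math.log(n / df[term])
            w[term] = tf * idf
        weights.append(w)
    return weights

# Synthetic fault-description tokens (illustrative):
docs = [["bearing", "wear", "fault"],
        ["winding", "insulation", "fault"],
        ["bearing", "noise", "vibration"]]
w = tf_idf(docs)
```

Terms concentrated in one document ("wear") receive higher weight than terms spread across documents ("fault"), which is why TF-IDF vectors are a reasonable input for the downstream SVM classification.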

Design Optimization for the Step-Stress Accelerated Degradation Test under Tweedie Exponential Dispersion Process

ABSTRACT. The accelerated degradation test (ADT) is a popular tool for assessing the reliability characteristics of highly reliable products. Hence, designing an efficient ADT has been of great interest, and the problem has been studied under various well-known stochastic degradation processes, including the Wiener, gamma, and inverse Gaussian processes. In this work, the Tweedie exponential dispersion process is considered as a unified model for general degradation paths, including the aforementioned processes as special cases. Its flexibility can provide better fits to degradation data and thereby improve reliability analyses. For computational tractability, the saddle-point approximation method is applied to approximate its density. Based on this framework, the design optimization of the step-stress ADT is investigated. Under the constraint that the total experimental cost does not exceed a prespecified budget, the optimal design parameters, such as sample size, measurement frequency, and test termination time, are determined by minimizing the approximate variance of the estimated mean time to failure of a product/device under the normal operating condition.
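The constrained design search described above can be sketched generically: enumerate candidate designs (sample size, number of measurements, termination time), discard those exceeding the budget, and keep the one minimizing a variance objective. The cost model and the toy `avar` objective below are illustrative placeholders, not the paper's saddle-point-based formulas for the Tweedie process.

```python
# Hedged sketch of budget-constrained ADT design optimization.
# All cost coefficients and the variance objective are hypothetical.

def optimal_design(avar, budget, c_unit=100.0, c_meas=2.0):
    """Exhaustive search over (n units, m measurements/unit, tau) designs
    whose cost n*(c_unit + c_meas*m) stays within budget, minimizing
    the supplied approximate-variance function avar(n, m, tau)."""
    best = None
    for n in range(5, 31):
        for m in range(5, 51):
            for tau in (500.0, 1000.0, 2000.0):
                cost = n * (c_unit + c_meas * m)
                if cost > budget:
                    continue
                v = avar(n, m, tau)
                if best is None or v < best[0]:
                    best = (v, n, m, tau)
    return best

# Toy objective: variance shrinks with total measurement effort.
best = optimal_design(lambda n, m, tau: 1.0 / (n * m * tau), budget=3000.0)
```

With the real saddle-point variance of the estimated mean time to failure plugged in as `avar`, the same search structure yields the optimal step-stress design.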

A new Estimation Method for Automotive Multidimensional Metrics
PRESENTER: Abderrahim Krini

ABSTRACT. In the automotive industry, failure data is generally obtained during the warranty period. If these data include the value of a service life characteristic for each failure, statements can be made about the reliability and availability of the systems. For this purpose, estimation methods are used to fit empirical lifetime distributions to theoretical lifetime distributions. By means of the distribution characteristics, a prognosis of the expected reliability and availability is also possible beyond the observation time. Depending on the data basis to be investigated, which is complete in test bench and experimental trials but right-censored in the field, various estimation methods can be used. In the past, several methods have been applied, such as the estimators according to Eckel [1], Kaplan-Meier [2], Johnson [3], or VDA [4]. In general, a field failure is subject to several stresses, which can be described by the values of several lifetime characteristics. For this application case, which occurs in the automotive industry, the known estimation methods [1], [2], [3] cannot be used. In this paper, the necessity of multidimensional estimation methods is first established. Then, a new estimation method for multidimensional metrics is presented. In a further step, mathematical proof is given that the new method provides realistic results. Finally, two example data sets, from bench testing and from the field, are presented, and some recommendations for the use of the new method are given.
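For reference, the classical one-dimensional Kaplan-Meier estimator [2], which the paper cites as a baseline that breaks down for multidimensional lifetime characteristics, can be sketched as follows; the data are illustrative, with right-censored observations standing in for vehicles still running at the end of the warranty period.

```python
# One-dimensional Kaplan-Meier survival estimator (baseline sketch).
# Each sample is (lifetime, event): event=True for a failure,
# event=False for a right-censored observation. Data are synthetic.

def kaplan_meier(samples):
    """Return [(time, survival probability)] at each failure time."""
    samples = sorted(samples)
    n_at_risk = len(samples)
    surv = 1.0
    curve = []
    for t, event in samples:
        if event:
            # At each failure, multiply by the fraction surviving it.
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1  # censored units also leave the risk set
    return curve

data = [(1.0, True), (2.0, False), (3.0, True), (4.0, True), (5.0, False)]
curve = kaplan_meier(data)
```

The estimator handles one lifetime characteristic per failure; the paper's point is that field failures driven by several stresses need a multidimensional generalization of exactly this kind of construction.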