Plenary session: Reliability of Telecom Networks. Yan-Fu LI, Tsinghua University (liyanfu@tsinghua.edu.cn); Honqiang BAO, Huawei R&D-France
09:30 | Multidimensional Risk Assessment Through Impact Space Analysis PRESENTER: Walaa Bashary ABSTRACT. Hazards and threats, especially natural hazards and cyber security threats, are becoming an increasing concern for operators of critical infrastructures as well as companies operating, e.g., process plants. Therefore, many risk assessment methodologies have been proposed over the decades. Within the quantitative or semi-quantitative risk assessment approaches, a generally accepted way of quantifying the risk is describing it as a function of the probability of occurrence of an event and the consequence, i.e. the impact of this event on the system. The usually applied methods generally do not allow for multidimensional impact consideration: the impact on the system is represented by only one dimension, often visualized in a risk matrix that combines probability of occurrence and consequence. In this work, it is suggested to assess the impact of an event on various dimensions (e.g. health, environment, business continuity), where each dimension reflects a different value, to allow improved accounting of competing values. In an attempt to represent the interconnectivities of competing values within a complex system, a multidimensional approach to risk analysis is proposed. This is done by introducing and evaluating what in this work is called the "impact space". The resulting space allows the evaluation of all relevant dimensions, thus representing the impact of the studied event comprehensively, as well as the underlying competing values and their importance. In a first step, the scenario space is defined, where the realistic and conceivable scenarios are assessed. The scenario space includes those events that may lead to incidents and/or accidents. This is done by analyzing and assessing the corresponding system fragilities and vulnerabilities, i.e. the probability of events leading to incidents. This allows, besides hazard and threat probabilities, the generation of the multidimensional risk assessment combining the aforementioned probabilities with a multidimensional impact assessment. In a subsequent step, mitigation and protection measures can be implemented in order to enhance the safety and security of the system, and the impact space dimensions should then be reevaluated. Finally, a case study is presented in which the multidimensional risk assessment is performed on the basis of an impact space reduced to three selected dimensions. A set of mitigation and preventive measures is then suggested, and the impact space is evaluated before and after the implementation of these measures. In conclusion, the methodology presented in this paper aims to allow the decision-maker to assess the consequences of one measure (or a set of measures) on different aspects of the system without neglecting other important ones, thus allowing competition between values rather than treating risks separately. |
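A minimal sketch of the impact-space idea described in this abstract: risk is kept as a vector of probability-weighted impacts over several dimensions, and a mitigation measure is evaluated dimension by dimension. All dimension names, probabilities, and scaling factors below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a multidimensional risk ("impact space") comparison.
# Dimension names, probabilities, and numbers are illustrative assumptions.

scenarios = {
    "flood":        {"p": 1e-2, "impact": {"health": 2.0, "environment": 3.5, "business": 4.0}},
    "cyber_attack": {"p": 5e-3, "impact": {"health": 0.5, "environment": 0.2, "business": 4.5}},
}

def risk_vector(p, impact):
    """Risk per dimension: probability of occurrence times dimensional impact."""
    return {dim: p * v for dim, v in impact.items()}

# Hypothetical effect of a mitigation measure: scales each impact dimension.
mitigation = {"health": 0.8, "environment": 0.5, "business": 0.9}

for name, s in scenarios.items():
    before = risk_vector(s["p"], s["impact"])
    after = risk_vector(s["p"], {d: v * mitigation[d] for d, v in s["impact"].items()})
    # Risk reduction per dimension, so competing values stay visible.
    print(name, {d: round(before[d] - after[d], 5) for d in before})
```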
09:50 | A three-stage stochastic programming model for optimizing the limited risk-related resource scheduling considering secondary risk PRESENTER: Fei Zuo ABSTRACT. Risk response is an integral part of project risk management. Secondary risk can, however, be triggered, especially when responding to project risks characterized by significant uncertainty. In these situations, preventing and responding to the occurrence of secondary risks is important to reduce cost and delay in both project risk management and project time management. In addition, a reasonable scheduling of the limited risk-related resources needed by the risk response actions is fundamental to ensure that the selected actions can actually be implemented. In this context, the paper presents a stochastic optimization model that integrates risk response strategy selection and resource-constrained project scheduling, with consideration of secondary risk. A scenario-based three-stage stochastic optimization model is proposed to describe the relationships between primary and secondary risks under uncertainty. A flow network model is proposed for the resource-constrained project scheduling problem, with stochastic project duration caused by the uncertainty. In this way, the risk event and the uncertainty can both be considered, so that a practical risk response strategy and the corresponding scheduling can be obtained. The proposed model is applied to a case study for validating its applicability and investigating the relationships among the parameters. The results show that secondary risks have a considerable effect on the optimal risk response strategy and the corresponding project scheduling, and that the response to secondary risks depends, to a large extent, on the level of uncertainty in the project. |
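The following toy sketch illustrates the staged decision logic behind such a model: stage 1 selects a primary-risk response, stage 2 reveals whether a secondary risk is triggered, and stage 3 decides whether to respond to it. It deliberately omits the paper's resource-constrained scheduling; all actions, probabilities, and costs are illustrative assumptions.

```python
# Toy sketch of the three-stage decision logic. All actions, probabilities,
# and costs are illustrative assumptions; the paper's model additionally
# schedules the limited risk-related resources over the project network.

primary_actions = {          # action: (direct cost, prob. of triggering a secondary risk)
    "do_nothing": (0.0, 0.0),
    "cheap_fix":  (10.0, 0.30),
    "robust_fix": (25.0, 0.05),
}
p_primary = 0.5              # probability that the primary risk event occurs
primary_loss = 60.0          # loss if the primary risk occurs and is not mitigated
secondary_loss = 40.0
secondary_response_cost = 8.0

def expected_cost(action):
    cost, p_secondary = primary_actions[action]
    if action == "do_nothing":
        return p_primary * primary_loss
    # Stage 3: respond to a triggered secondary risk only if that is cheaper
    # than accepting the secondary loss.
    stage3 = min(secondary_response_cost, secondary_loss)
    return cost + p_secondary * stage3

best = min(primary_actions, key=expected_cost)
print(best, expected_cost(best))   # cheap_fix 12.4 with these toy numbers
```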
10:10 | Analyzing Hazards in Process Systems Using Multilevel Flow Modelling: Challenges and Opportunities PRESENTER: Ruixue Li ABSTRACT. Process safety is complex, since the production process is a unified system of mass, energy, and information in which the effects of failures are interconnected and transmitted, leading to chain reactions throughout the process. Multilevel Flow Modeling (MFM), as a functional modeling methodology, performs well in supporting situation awareness and fault diagnosis by modeling process systems and reasoning about hazard scenarios. Applying MFM to hazard identification and deviation analysis and developing a computer-aided HAZOP tool can potentially improve the efficiency of the conventional HAZOP study. However, MFM is not sufficient for computer-aided cause and consequence analysis, because individual deviations are examined independently in terms of their causes and effects. Owing to this limitation, risk can be underestimated through lack of consideration of the "duplicate effect" of deviations, resulting in an insufficient barrier configuration. By reviewing risk assessment methods and the development of MFM in process safety, this article discusses the opportunities and challenges encountered when combining hazard analysis with the "duplicate effect" and the representation and analysis of barriers in MFM. The authors aim to develop an approach that enhances hazard analysis and barrier representation through the extension of MFM, which will facilitate the identification and optimization of safeguards. |
10:30 | Failure Mode and Observability Analysis (FMOA): An FMEA-Based Method to Support Fault Detection and Diagnosis PRESENTER: Gilberto F. M. Souza ABSTRACT. Ensuring that physical assets are available and in an optimal state of health for operation is as essential as it is challenging for organizations. For this to happen, maintenance management not only needs to coordinate preventive maintenance activities but also to monitor and mitigate any degradation in engineering systems through a fault detection and diagnosis (FDD) process. Establishing an FDD process requires the integration of several activities to effectively support maintenance management in the organization. Thus, its setup is important and should not be neglected. However, due to a lack of supporting tools, organizations may struggle during this stage. In this context, this paper proposes the Failure Mode and Observability Analysis (FMOA) to support fault detection and diagnosis in asset management. The proposed method is a variation of the Failure Mode and Effects Analysis (FMEA) intended to support the setup of the FDD process in organizations. It comprises two main sections: identification of the potential failure modes, and correlation of these failure modes with monitored parameters. In this paper, the proposed method is demonstrated through a case study based on a Brazilian hydroelectric power plant. This plant has been undergoing several studies for asset management improvements. The results obtained show that the proposed method is able to systematically structure expert knowledge that can support an unsupervised, data-based fault detection and diagnosis process for engineering systems. Accordingly, this article is expected to contribute to physical asset management research and to maintenance practitioners, as the proposed method can support a condition-based maintenance strategy. |
09:30 | RISKS CONNECTED WITH METRO STATION RECONSTRUCTION PRESENTER: Lenka Střelbová ABSTRACT. The Praha metro belongs to the critical transport infrastructure. Therefore, great care is taken to ensure its safety at all stages of the life cycle. The submitted communication follows the reconstruction of a metro station which lies at the intersection of two metro lines. To protect the relevant element of transport infrastructure, a protection zone is established by legislation. The outer boundary of the protection zone is a vertical surface drawn at a distance of 31.5 m from the outer contour of the building structure. The metro station, including the protection zone, is threatened by many sources of risk originating from natural, technical and organizational sources, and from sources associated with the human factor. The operation of the monitored station showed that some sources of risk were not sufficiently addressed during the design; e.g. the flood in 2002 caused inundation of the station, causing major material damage. For the sake of completeness, it should be noted that this flood completely flooded a total of 18 stations, almost 20 km of tunnels and two metro trains. The cost of the restoration exceeded CZK 7 billion. The flood in question revealed not only the low robustness of the building, but also errors caused by the human factor, such as imperfect maintenance of the station closures (steel gates separating the connecting tunnels between the transfer stations, between the escalator tunnels and the platforms, and the station from the tunnels, which could not be closed), poorly made bushings around the cables in places where the cables pass through the pressure wall, etc. At that time, the water rose almost two meters higher than was calculated in the design and flood response plans. During the restoration of the monitored station, an effort was made to ensure that, if the level rose again, the water would not get close to the stations if possible, and that the hydrostatic pressure of its buoyancy from below could not act there. Twenty years ago, firefighters pumped two and a half billion liters of water out of the flooded metro; water flowed underground for two days, and it took three months to get it out. The investigation after the flood also revealed that nobody was responsible for the metro station closure, the launch of the protection system and its counter-flood control. Based on these lessons learned, it is necessary, during the reconstruction, to cope with all sources of risk, partly by prevention measures and partly by locating the technical fittings that enable a qualified response. In connection with the reconstruction itself, the following are monitored: construction-technological and design risks associated primarily with project documentation and construction; financial and technical risks associated with the actual technical devices and equipment; the quality of the materials used; the behavior of people; and the maintenance of the station itself. Based on current knowledge and lessons learned, the right principles of safety culture must be set that lead to risk management in favor of safety, both in the design and implementation of the actual reconstruction and for the subsequent operation.
Based on the experience with the current operation of the station and the analysis of newly created scenarios for risk sources determined by the All-Hazards approach, the possible losses and damages to public assets and to the assets of the monitored metro station and the metro as a whole under normal, abnormal and critical conditions are determined. On this basis, requirements for the terms of reference for the reconstruction of the metro station are gradually being created, so that the construction works have acceptable impacts on both the public assets and the metro operator. Additional protective barriers, systems and technical elements will be built into the station building to enable a quick and effective response to critical situations, and tasks will be defined with clearly assigned responsibilities from the point of view of station safety management. |
09:50 | RISK MANAGEMENT OF SELECTED ELEMENTS OF CRITICAL TRANSPORT INFRASTRUCTURE PRESENTER: Jan Prochazka ABSTRACT. The transport network is one of the most important infrastructures ensuring the basic functions of the State, and hence the basic needs of humans for their survival; therefore, it belongs to the critical infrastructure of each State. The article shows a method of risk management of selected elements of transport infrastructure in favor of integral safety, which ensures their safety and coexistence with the environment throughout the life cycle of the element. It contains the results of a systematic risk study for bridges, tunnels, railway stations, airports and centers for the management of communication and information systems in transport. It identifies sources of risk and classifies them into: natural disasters; shortcomings in the project; shortcomings in construction; deficiencies in operation; human errors; and deficiencies in traffic management. A critical analysis of the failures of the monitored elements of the transport infrastructure showed that some causes of failures are often repeated, such as traffic accidents, lack of maintenance, and low quality of repairs and upgrades. Their common root cause is a lack of safety culture among road users and a lack of training and motivation aimed at safe work and safe behavior. Ignorance of the conclusions from already investigated accidents and incidents also manifests itself significantly. Based on the concept that each monitored element is a socio-cyber-physical (technical) system, and on the principle of responsibility common in Europe (which in this case means that responsibility for the safety of each monitored element of the transport infrastructure lies, according to the phase of life, with the designer, the construction manager or the operator, as well as with the public administration, which has the duty of supervision in the public interest), risk decision-making tools are compiled for each element. The instruments in question take the form of a decision support system and consider the following aspects: how risks and their sources are considered; the level of safety achieved in a given design of the monitored element; the technical level of the measures introduced; material and energy performance; speed of implementation of measures; staff requirements; information requirements; demands on finances; liability claims; as well as the management requirements of all involved (i.e. both the management of the monitored element and the management of the surrounding territory). The article mainly shows the results of research on centers for the management of communication and information systems in transport, where automation is increasingly used and feedback is of fundamental importance: on the basis of information from controlled systems, control systems adjust the operation of the whole. Positive feedback reinforces the results of controlled processes, while negative feedback weakens them. Control systems have algorithms that give commands and trigger operations. The control system ensures that the specified physical quantities are maintained at predetermined values. In the process of regulation, the control system changes the conditions of the controlled system by acting on the actuating quantities so that the required state is achieved.
The safety management system (SMS) must always be equipped with measures to minimize damage in the event that security measures and safety systems fail or an unidentified danger occurs. Minimizing the damage can take the form of warning and cautionary signals, training, instructions and procedures for behavior in dangerous situations, or isolation of hazardous facilities from populated centers. Because of the automation, the SMS must have cyber security built into the design; the paper proposes such a configuration. |
10:10 | Combinatorial Optimization of Empirical Heuristics for Water Distribution Networks Restoration PRESENTER: Shunichi Tada ABSTRACT. Because water is an essential resource for numerous activities, a water distribution network (WDN) is one of the most important lifelines; therefore, preparations must be made for the restoration of WDNs during post-disaster periods. To gain knowledge for the restoration and utilize it for training, we are developing an agent-based simulation that can reproduce the restoration processes of WDNs and the behavior of agents in three subsystems of the city, "civil life", "industry" and "lifeline", during post-disaster periods. Since the simulation model has complex interdependencies within and among the three subsystems, the order of repairing damaged pipes and the assignment of the repair squads must account for those interdependencies. However, the number of combinations of restoration orders and task assignments is large; therefore, we need a method for optimizing the restoration plan. In previous research, we proposed two optimization methods for the restoration plan of WDNs: a genetic algorithm (GA)-based method and a heuristics-based method. The heuristics-based method uses seven empirical rules for prioritizing the damaged pipes and for assigning a repair squad to each pipe. The results showed that the optimization performance of the GA-based method was slightly better than that of the heuristics-based method. On the other hand, the GA-based method took a longer time for the optimization, and its results were hard to interpret. To utilize the simulation and the optimization in practice, low computational cost and high interpretability are preferable; in that sense, the heuristics-based method is more useful and practical than the GA-based method. However, the heuristics-based method also has problems. One problem is that its computational cost increases as the number of empirical rules increases and the rules themselves become more complex, since the best order in which to apply the rules was obtained by exhaustive search in the current method. Another problem is that the current heuristics-based method does not consider the situational change as the restoration progresses, nor the existence of situation-dependent rules, which leads to low interpretability of the results. In this research, we propose a combinatorial optimization of empirical rules that considers situation-dependent rules according to the restoration progress. For this optimization, we applied a genetic algorithm in which a chromosome represents an order of rules to be applied in different phases. Comparing with the two optimization methods from previous research, we found that our new method exhibits higher optimization performance than the previous heuristics-based method but slightly lower than the previous GA-based method. We also found that our new method can express more complex applications of empirical rules; thus, the interpretability was improved relative to the previous heuristics-based method. Regarding the computational cost, the calculation time of the new method was smaller than that of the previous GA-based method because of the reduction in the size of the problem space. |
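A compact sketch of the chromosome encoding described in this abstract, assuming one ordering of empirical rules per restoration phase. The rule names and the fitness function are placeholders standing in for the agent-based WDN restoration simulation.

```python
# Sketch of the phase-wise rule-order chromosome. Rule names and fitness are
# stubs; in the paper, fitness comes from the restoration simulation.
import random

RULES = ["largest_demand_first", "hospital_first", "main_pipe_first",
         "nearest_squad", "smallest_job_first", "industry_first", "school_first"]
N_PHASES = 3

def random_chromosome():
    # One permutation of the empirical rules for each restoration phase.
    return [random.sample(RULES, len(RULES)) for _ in range(N_PHASES)]

def fitness(chromosome):
    # Stub: would run the simulation and return e.g. -(total downtime).
    return -sum(order.index("hospital_first") for order in chromosome)

def crossover(a, b):
    # Phase-wise: inherit each phase's rule order from one parent or the other.
    return [list(random.choice(pair)) for pair in zip(a, b)]

def mutate(ch, p=0.2):
    for order in ch:
        if random.random() < p:          # swap two rules within a phase
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
    return ch

pop = [random_chromosome() for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(10)]
print(fitness(pop[0]))
```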
10:30 | Vulnerability analysis of interdependent energy infrastructures with centralized and decentralized operator models PRESENTER: Andrea Bellè ABSTRACT. Energy infrastructures (EIs) are large systems which provide essential energy commodities, such as electricity, gas, or heat, to people. As EIs are often interdependent, integrated analysis and optimization are needed. When performing analysis and optimization of interdependent EIs, the behaviour of independent operators should be taken into account. Independent operators might display a decentralized and competitive behaviour, when they interact through the prices of energy commodities in a market-based environment, or a centralized and collaborative behaviour, when they aim at maximizing their combined performance. In this paper, we investigate the impact of centralized and decentralized operator models on the vulnerability analysis of interdependent EIs. Using interdependent power and heat networks (IPHNs), we show that these two classes of models lead to different results in terms of cost and performance. These preliminary results represent the first step in defining a decision-making framework which accounts for the two different behaviours of independent operators: decentralized in normal conditions, and centralized in conditions of disruption. |
09:30 | Identification of features of rare risk events in oil refineries using Natural Language Processing PRESENTER: July Bias Macedo ABSTRACT. Accidents in the process industry can be prevented and their consequences mitigated by performing proper risk analyses to inform decision making. Quantitative risk analysis (QRA) is one of the main frameworks used for risk assessment, and the identification of all hazards to which a plant is exposed is a crucial task to ensure the comprehensiveness of the QRA; this task is carried out by experts examining a set of engineering documents during structured meetings. The hazards identified are stored as textual data in documents, which retain valuable information about the risks related to the analyzed system. Natural language processing (NLP) techniques have emerged as a way of extracting, organizing and classifying relevant information from text. However, a challenge arises when we are interested in addressing catastrophic accidents that are rare and for which, thus, only limited information is available. Some accidents are actually postulated as possible in principle, but there are no historical occurrence records. Developing techniques to characterize features of such rare events is a challenging task, yet quite useful for QRA, whose outcomes would guide the design of preventive measures and support the related decision making. In this paper, we compare two approaches to address the rare-event issue: data augmentation (DA) and zero-shot learning (ZSL). DA is applied to obtain a balanced and sufficiently large training set by replacing some words with synonyms, so as to preserve the content while generating new sentences. ZSL builds recognition models without accessing any sample of the unseen categories during training; this is achieved with the help of knowledge transferred from previously seen categories and auxiliary information, which may include textual descriptions, attributes, or vectors of word labels. The final aim of this work is to develop a model capable of characterizing relevant features of rare or unseen accidental scenarios in support of hazard and operability studies (HAZOP) and preliminary hazard analysis (PHA). |
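A minimal sketch of the synonym-replacement data augmentation described above; the synonym lexicon is a hypothetical stand-in for whatever domain thesaurus the authors actually use.

```python
# Minimal sketch of synonym-replacement data augmentation. The lexicon is
# illustrative; a real system might use a domain thesaurus or WordNet.
import random

SYNONYMS = {  # hypothetical, not from the paper
    "leak":    ["release", "escape"],
    "rupture": ["burst", "break"],
    "pipe":    ["line", "pipeline"],
}

def augment(sentence, n_variants=3, p_replace=0.5):
    """Generate new sentences that preserve content via synonym swaps."""
    words = sentence.split()
    variants = []
    for _ in range(n_variants):
        new = [random.choice(SYNONYMS[w]) if w in SYNONYMS and random.random() < p_replace
               else w for w in words]
        variants.append(" ".join(new))
    return variants

print(augment("sudden rupture of pipe causes toxic leak"))
```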
09:50 | Problem, Remedy and Item Identification from Maintenance Long Texts PRESENTER: Jordan Makins ABSTRACT. Historical maintenance work order (MWO) records contain unstructured data about loss of asset function (problem) and the maintenance actions required to restore or retain function (remedy) of equipment or its components (item). These MWO records also provide structured fields to record, using codes, information of interest to reliability engineers, such as the 'problem', 'cause', and 'remedy' for each MWO. However, these codes are often missing or recorded inconsistently by the operators, maintainers, and planners generating the MWOs. This leads to issues when they are later used by reliability engineers for data analysis, troubleshooting, and assessing maintenance strategy effectiveness. This project evaluates the suitability of Named Entity Recognition (NER) to infer structured codes for problem and remedy, and to identify the maintainable item, from the unstructured MWO texts in a consistent way. NER is part of a Technical Language Processing (TLP) pipeline for computer-driven manipulation and interpretation of human language from engineering texts. The TLP pipeline involves collaborative annotation of the 449 unstructured long texts in MWO records to map words to named entities related to semantic concepts in the context of maintenance. This data set is used to train an NER model, which is then used to label unseen records from 30,341 historical maintenance work orders from a water utility. Classification accuracy, as measured by F1-score, is 92.5% for item, 75% for problem and 87.5% for remedy. These results confirm the suitability of NER as part of a TLP pipeline to either identify or correct these structured fields automatically, and at scale, on historical work orders. The impact is to improve data quality and hence the productivity of reliability engineers. |
10:10 | An ensemble learning methodology for predicting medical micro-robots degradation classes PRESENTER: Paul Cardenas-Lizana ABSTRACT. The new generation of medical devices for surgical operations involves developing equipment on the scale of micrometers to millimeters in order to perform more precise microsurgical procedures. Such devices must pass several tests according to medical standards before they can be commercialized. Hence, it is necessary to model the micro-robot degradation in order to ensure the optimal performance limits during surgical acts. This work aims to predict a micro-robot degradation class using a machine-learning-based methodology and consists of classifying the degradation into three classes: healthy, degraded, and out of service. Firstly, the degradation data are collected using a four-bar compliant mechanism. This mechanism allows obtaining relevant attributes for the micro-robot degradation behavior. Secondly, data preprocessing and feature engineering are conducted to generate representative attributes that provide a better learning representation for the machine learning (ML) algorithm. Then, non-linear supervised learning algorithms are trained to construct the prediction. Random forest outperforms other algorithms in predicting the remaining useful life (RUL), while gradient boosting generates the optimal decision boundary for classification using the RUL and features generated by autoencoders in the presence of noise. Finally, a pipeline for the classification of the micro-robot degradation state is provided. This methodology ensures a procedure that evaluates whether or not the ML model can represent the underlying system in the presence of noise. |
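A rough sketch of the two-stage pipeline suggested by this abstract: a random forest predicts the RUL, and gradient boosting then classifies the degradation state from the features plus the predicted RUL. The synthetic data and class thresholds are assumptions; the paper uses autoencoder-generated features from the compliant-mechanism test bench.

```python
# Sketch of the two-stage pipeline: RF regresses the RUL, GB classifies the
# degradation state from features + predicted RUL. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                       # degradation-related features
rul = np.clip(100 - 20 * X[:, 0] + rng.normal(scale=5, size=300), 0, None)
state = np.digitize(rul, [20, 60])                  # 0: out of service, 1: degraded, 2: healthy

rul_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rul)
X_cls = np.column_stack([X, rul_model.predict(X)])  # append predicted RUL as a feature
cls_model = GradientBoostingClassifier(random_state=0).fit(X_cls, state)
print(cls_model.score(X_cls, state))
```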
10:30 | Forecast Accuracy Assessed by Now Risk ABSTRACT. In an engineering risk analysis, where variable data are being used to measure a product's failure time, a statistical model is often used to predict future failures. The Weibull distribution is frequently used as an appropriate model for this purpose. There has been considerable study of the estimation of Weibull parameters relative to the known values, but perhaps more important in this case is to assess how well we believe the model can predict the event of interest. We may evaluate whether the model predicts the number of failures we have observed. We must measure this correctly for the model and determine whether the model is adequate or in need of adjustment. If the model does not predict what we see happening right now (the Now Risk), it is likely that it will not accurately predict future failures either. This presentation explains and examines the Now Risk calculation and derives some of its important properties, with an emphasis on the Weibull distribution as the failure time model. The results of Monte Carlo simulations used to derive various properties of this statistic are presented. A real example is shown to demonstrate the calculation and use of the statistic. Best practices for use of the Now Risk calculation are also shared, with additional insight offered into which estimation methods are best to use for this type of analysis. Finally, using the Now Risk as a goodness-of-fit (GOF) method, its value for forecasting the accuracy of risk will be assessed. |
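The abstract does not give the Now Risk formula, but the expected-versus-observed comparison it describes can be sketched as follows: under a fitted Weibull model, a unit with current age t_i has failure probability F(t_i), so the expected number of failures to date is the sum of F(t_i) over units. The parameters and ages below are illustrative assumptions.

```python
# Sketch of an expected-vs-observed check in the spirit of the Now Risk:
# sum of fitted Weibull failure probabilities at each unit's current age.
import math

beta, eta = 1.8, 1000.0                     # fitted Weibull shape and scale (assumed)
ages = [200, 450, 700, 900, 1200, 300]      # current operating hours per unit (assumed)
observed_failures = 2

def weibull_cdf(t):
    return 1.0 - math.exp(-((t / eta) ** beta))

expected_now = sum(weibull_cdf(t) for t in ages)
print(f"expected failures to date: {expected_now:.2f}, observed: {observed_failures}")
# A large gap between the two suggests the model needs adjustment before it
# is trusted for forecasting future failures.
```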
09:30 | Fast reliability computation of stable multi-state coherent systems PRESENTER: Eduardo Saenz-De-Cabezon ABSTRACT. Let S be a multi-state coherent system with n components and let s = (s_1, ..., s_n) be a tuple of states of the components such that the system is performing at level j, i.e. s is a j-working state for S. S is said to be a stable system if the state s' = (s_1, ..., s_{i-1}, s_i + 1, s_{i+1}, ..., s_n) is also a j-working state for S, for any i between 1 and n. In this contribution we propose a fast method for the computation of the reliability of stable multi-state systems. The approach used is the algebraic reliability framework based on monomial ideals, and the main tools are involutive divisions. Some examples and a computer implementation of the method are presented. |
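A brute-force illustration of the stability property defined above (assuming the intended reading that raising any single component's state cannot lower the system performance level); the example structure function is arbitrary.

```python
# Enumeration check of stability: for every j-working state, raising any one
# component's state must keep the system performing at level j.
from itertools import product

n, max_state = 3, 2                      # 3 components, states 0..2 (illustrative)

def phi(s):                              # example multi-state structure function
    return min(max(s[0], s[1]), s[2])    # (1 parallel 2) in series with 3

def is_stable(phi, n, max_state):
    for s in product(range(max_state + 1), repeat=n):
        j = phi(s)
        for i in range(n):
            if s[i] < max_state:
                s_up = s[:i] + (s[i] + 1,) + s[i + 1:]
                if phi(s_up) < j:        # raising a component lowered the level
                    return False
    return True

print(is_stable(phi, n, max_state))      # True: min/max structures are monotone
```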
09:50 | Integrating component condition in long-term power system reliability analysis PRESENTER: Håkon Toftaker ABSTRACT. The electric power system is a critical infrastructure in which power transformers play a key role in linking together generation and end-use of electricity. Transformers are large, complex and expensive components, and the consequences of transformer breakdown can be significant and costly. Aging transformers have a higher probability of failure; a proper renewal strategy is therefore key to maintaining high reliability of the power system. On the other hand, increasing the time transformers are kept in operation is beneficial for both the economic and the environmental sustainability of the power system asset management strategy. The overall aim of this work is to account for the technical condition of power system components (such as transformers) in long-term power system reliability analyses, to better inform power system development and asset management decisions. Technical condition develops over time, due to deterioration but also due to maintenance or replacement, both preventive replacement and corrective replacement after wear-out failures. The work presented in this paper builds upon previous research integrating a transformer health model with an existing power system reliability analysis. In this modelling framework, condition is modelled by a health index, and the probability of failure is given by a lifetime distribution evaluated at the corresponding apparent age, as presented in [1]. The component can transition from a functional state to an outage state either by a condition-dependent failure or by a random failure. Alternatively, the component can be replaced, bringing the component condition back to "as good as new". It was shown that the component functional state follows a semi-Markov process, which under certain conditions can be approximated and formulated as a Markov model. This is necessary to integrate component condition information in the power system reliability analysis methodology, which is based on analytical minimal cut set techniques. Analytical power system reliability methods have benefits in terms of computational efficiency and analytical transparency. However, the employed analytical approach also has drawbacks that limit the analysis horizon to approximately one year into the future. The hypothesis is that extending the existing analytical modelling framework to longer time horizons could lead to unacceptable inaccuracies in the results. For informing asset management decisions, a long-term analysis horizon of five years or more would be preferable. Possible approaches for extending the analysis horizon include i) renewal process theory (an analytical approach), ii) (semi) Monte Carlo simulation approaches whereby component condition and failure time series are simulated to estimate failure frequency, and iii) full-fledged sequential Monte Carlo simulation approaches for power system reliability analysis integrating component condition information. This paper illustrates challenges and possible methods to account for component condition in long-term power system reliability analysis. The methods are applied to a realistic test system [2] and assessed with respect to accuracy, computational complexity and the utility value of the calculated results. The applicability is explored in a renewal planning problem, and the advantages of the different methods are shown.
It is illustrated how the computational efficiency of the analytical approach is advantageous in evaluating many alternative solutions to the planning problem. Furthermore, it is illustrated how simulation-based approaches provide higher accuracy and information about the probability distribution of reliability indices, whereas analytical approaches are limited to expected values. References: [1] J. Foros and M. Istad, 'Health Index, Risk and Remaining Lifetime Estimation of Power Transformers', IEEE Transactions on Power Delivery, vol. 35, no. 6, pp. 2612–2620, 2020, doi: 10.1109/TPWRD.2020.2972976. [2] I. B. Sperstad, E. H. Solvang, S. H. Jakobsen, and O. Gjerde, 'Data set for power system reliability analysis using a four-area test network', Data in Brief, vol. 33, p. 106495, Dec. 2020, doi: 10.1016/j.dib.2020.106495. |
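A minimal sketch of the condition-to-failure-rate chain described above: a health index is mapped to an apparent age, a lifetime distribution evaluated there gives a condition-dependent failure rate, and a two-state Markov approximation gives unavailability. The mapping and all parameter values are illustrative assumptions, not those of [1].

```python
# Sketch: health index -> apparent age -> Weibull hazard -> two-state Markov
# unavailability. All numbers and the HI-to-age mapping are assumptions.
import math

beta, eta = 4.0, 40.0            # Weibull lifetime in years (wear-out shape)
repair_rate = 365.0 / 30.0       # per year, i.e. ~30 days mean outage after failure

def apparent_age(health_index, max_age=60.0):
    # Hypothetical mapping: HI = 0 (as new) ... 1 (end of life).
    return health_index * max_age

def weibull_hazard(t):
    return (beta / eta) * (t / eta) ** (beta - 1)

hi = 0.6                                     # current transformer health index
lam = weibull_hazard(apparent_age(hi))       # condition-dependent failure rate
unavailability = lam / (lam + repair_rate)   # two-state Markov approximation
print(f"failure rate {lam:.4f}/yr, unavailability {unavailability:.2e}")
```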
10:10 | Experimental Analysis of Decision Diagrams Used to Represent Structure Function of Series-Parallel Multi-State Systems PRESENTER: Michal Mrena ABSTRACT. The structure function is a key part of the model of any system analyzed by reliability engineers. This function defines the dependence of the state of the system (functioning, partially functioning, failed, etc.) on the states of the components that the system is composed of. An important issue is the efficient representation of this function in the case of systems composed of many multi-state components. For this purpose, various approaches based on decision diagrams have recently found great applicability. In this paper, we focus on an experimental comparison of two of them. The first is based on the use of a single decision diagram to represent the structure function. The second uses a series of decision diagrams for this purpose. The experiments described in this paper indicate that a series of decision diagrams has much better properties than a single diagram in the case of series-parallel multi-state systems. |
10:30 | EXACT AND ASYMPTOTIC RESULTS FOR THE AVAILABILITY OF CONNECTED (2,2)-out-of-(m,n):F LATTICE SYSTEMS PRESENTER: Christian Tanguy ABSTRACT. There has been recent interest in the availability of connected (r,s)-out-of-(m,n):F lattice systems. Computing its exact value has been deemed a numerically complex task by Nashwan (2018) and Zhao et al. (2011). This calculation could be accomplished with less effort only in some special cases, as shown in Nakamura et al. (2018). Exact results were proposed at ESREL 2021 by Malinowski for (2,2)-out-of-(m,n):F lattice systems, using recursive procedures and finding the exact system availability for m = 2, 3, 4. In the present work, we propose an alternate derivation of these results that clearly demonstrates the recursive nature of the problem and lends itself to symbolic computation. The recursion relations, as well as the associated generating functions, have been obtained for m up to 10. As n increases, the general solution exhibits an essentially power-law behavior, making numerical calculations very quick (in O(1)) and accurate. From the expressions obtained for 2 <= m <= 10, we can propose improved upper and lower bounds on the true availability for arbitrary m and n. Furthermore, we have also deduced an analytical, asymptotic expression for the availability of (2,2)-out-of-(m,n):F lattice systems for large values of m and n. We have checked the accuracy of this formula, even when the availability of each element of the system is not very close to 1. |
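The recursive, column-by-column structure the authors exploit can be demonstrated with a small dynamic program: for i.i.d. element availability p, the exact system availability follows from a transfer over the 2^m failure patterns of a column, since a 2x2 failed block only ever spans two adjacent columns. This enumeration is a sketch feasible only for small m; it is what the paper's closed recursions and generating functions replace.

```python
# Column-by-column DP for the exact availability of a (2,2)-out-of-(m,n):F
# lattice with i.i.d. element availability p. The system fails iff some 2x2
# block of failed elements exists; that block spans two adjacent columns,
# so a transfer over column failure patterns is exact.

def availability(m, n, p):
    masks = range(1 << m)                        # bit r set = element in row r failed
    def prob(mask):                              # probability of one column pattern
        k = bin(mask).count("1")
        return (1 - p) ** k * p ** (m - k)
    def compatible(prev, cur):                   # no 2x2 failed block across the pair
        both = prev & cur                        # rows failed in both columns
        return (both & (both >> 1)) == 0         # no two adjacent such rows
    dist = {mask: prob(mask) for mask in masks}  # after the first column
    for _ in range(n - 1):
        new = {mask: 0.0 for mask in masks}
        for prev, pr in dist.items():
            for cur in masks:
                if compatible(prev, cur):
                    new[cur] += pr * prob(cur)
        dist = new
    return sum(dist.values())

print(availability(3, 10, 0.9))   # exact availability for m=3, n=10
```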
09:30 | Probabilistic Risk Analysis of Human Failure on Fluid Penetrant Inspection via CAPEMO Causal Model PRESENTER: André Andriolo ABSTRACT. In any industry branch, there are situations where the failure of a critical component can have dramatic consequences. Fluid Penetrant Inspection (FPI) relies so much on the cognitive, skill, and attitudinal aspects of human performance that the risks of process failure are very high. Previous studies focused on identifying risks in Fluid Penetrant Inspection, but none explicitly focused on Human Reliability Analysis. The purpose of this paper is to use CAPEMO (Causal Model for Probabilistic risk analysis in Manufacturing Operation) to conduct a probabilistic analysis of the chances of a human error during the inspection process. An in-depth literature review on Human Reliability Analysis was conducted as a methodological approach to identify risk factors. An online survey was submitted to specialists for risk factor validation. A Bayesian Network was utilized to assess the risk factors that contributed to the pivotal event. Goal Tree-Success Tree was used to define additional barriers for the Non-Destructive Testing process. As a result, this study confirmed that implementing systematic barriers to mitigate risks of human error significantly reduced the probability of such a risk. The actions proposed to mitigate the risks caused by human factors are connected to environment control, organizational factors, and skills and capacity. The conclusion is that the model proved to be adequate to significantly reduce the probability of an operator failure. As a contribution, the proposed method can be used by Non-Destructive Testing professionals, engineers, and decision-makers to identify the risk of human errors that can impact the results of the FPI inspection. |
09:50 | Human reliability study in manual clamping of turning workpieces PRESENTER: Max Engelmann ABSTRACT. In many cases, technical problems or failures are not the primary causes of workpiece ejection during a turning operation; often, clamping errors lead to these hazardous incidents. Therefore, one important question is whether the machine operator uses the clamping device correctly to reach the required clamping force. To answer this question, we conducted a between-subject study with 23 qualified machine users, with the available tool as the independent variable (conventional chuck key vs. electronic torque wrench). The task consisted (1) of checking the clamping system for possible errors before use and (2) of clamping a clamping-force measuring device and a workpiece, either with a chuck key or an electronic torque wrench. The results show that a conventional chuck key is only suitable to a limited extent for applying a defined clamping force, in comparison with an electronic torque wrench. Consequently, clamping safety, especially when high clamping forces are required, can be significantly increased by using an electronic torque wrench. Furthermore, the results show that the participants are rarely able to set the required clamping force with a conventional chuck key and, despite many years of professional experience, are only able to assess their own performance correctly to a limited extent. |
10:10 | Screening Method of Eye Movement Parameters for Unsafe Behavior of Drilling Operators PRESENTER: Chuangang Chen ABSTRACT. The drilling operation is characterized by strong process intersections, high physical labor intensity and high operational risk. According to statistics, among the various causes of drilling accidents, human unsafe behavior accounts for more than 80%. At present, eye-tracking technology has been widely used in fields such as medical treatment, education and driving, and statistical and analytical methods based on the qualitative selection of eye movement parameters have also been established. However, due to the multi-factor coupling characteristics of the unsafe behavior of drilling operators, it is difficult to characterize it with a small number of qualitatively selected eye movement parameters. Along these lines, to address the difficulty of real-time monitoring of drilling operators' unsafe behavior, this work introduces eye-tracking technology and adopts correlation, regularity and sensitivity analysis of eye movement parameters to screen them. Firstly, the subjects were divided into two groups, namely unsafe drilling operations and normal drilling operations. A well-control drilling simulation device was used to simulate the drilling operation, and the eye movement data of the participants were recorded. Secondly, Pearson correlation analysis was employed to preliminarily screen the eye movement parameters of the same type of unsafe drilling operator behavior. Thirdly, through comparative regularity analysis and correlation analysis of the eye movement parameters of normal and unsafe drilling operations, the applicability of the preliminarily screened parameters was systematically explored. Finally, the importance of the eye movement parameters was analyzed based on the Morris, Sobol and EFAS methods, and the parameters most suitable for characterizing the unsafe behavior of drilling operators were selected. Compared with traditional statistical analysis methods that qualitatively select from a small number of eye movement parameters, the proposed scheme can effectively complete the screening of the eye movement data. Moreover, real-time monitoring of unsafe behavior can be realized without affecting the drilling operators. Meanwhile, the feasibility of the method is demonstrated, since it can effectively support intelligent early warning of unsafe behavior of drilling operators. |
10:30 | Changing the perception of safety from a compliance to an enabler point of view PRESENTER: Lais Veloso Lara Castro ABSTRACT. Safety theories have progressed within the past century, moving from learning only from failures and measuring safety success as the absence of incidents to a more proactive mindset, where learning happens from successful adaptation and an understanding of normal work. Bearing this in mind, a proactive-fairness safety approach has been developed in order to create a healthy workplace environment, in which workers' voices are heard and safety is seen as an enabler. |
09:30 | Probabilistic calibration of design safety factors and reliability-based optimal maintenance of Flax-fibre reinforced polymers used for reinforced concrete beam strengthening PRESENTER: David Bigaud ABSTRACT. The main topic of this article is the probabilistic calibration of environmental reduction and safety coefficients for the design of reinforced concrete civil engineering structures repaired or strengthened with flax fibre reinforced polymers (FRP). In view of the poor feedback on the durability of flax fibre composite materials used in civil engineering, and the impossibility of conducting performance tests over several years, a two-factor accelerated test campaign was conducted, stimulating composite degradation by increasing both temperature and relative humidity. Flax-FRP (with a bio-based epoxy matrix) laminates and flax-FRP-strengthened concrete slabs were exposed over a period of three years to natural ageing (climate of Lyon, FR) and to various couplings of temperature (from 20°C to 60°C) and relative humidity (from 50% to 100%), according to an asymmetrical Hoke design of experiments. Series of tests (more than 320 in total) were carried out to monitor the evolution of the degradation of mechanical characteristics (tensile capacity, tensile modulus, shear strength, pull-off strength) directly associated with the possible failure modes of structural elements repaired with composites. A degradation model of these mechanical characteristics, considering the competition of two mechanisms (inducing non-monotonic degradation) and the influence of temperature and relative humidity through the Eyring model, is developed and finally extended to consider the variability of the degradation under different climatic zones. On this basis, a reliability-based approach is adopted to supply environmental reduction and safety coefficients calibrated for flax-FRP as found within four international design guides (Fib bulletin 40, ACI-440, AFGC and TR55) and to propose a reliability-based optimal maintenance policy. The necessity of adapting the coefficients and the optimal maintenance strategy to the climatic conditions in which the flax-FRP elements will be installed is then shown. |
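For orientation, a sketch of a two-stress (temperature, humidity) Eyring-type acceleration factor of the kind used to transfer such accelerated-aging results to service climates. The functional form shown is one common variant, and the parameter values are illustrative, not the campaign's fitted values.

```python
# Sketch of a two-stress Eyring-type degradation rate and the resulting
# acceleration factor. The form and all parameters are assumptions.
import math

KB = 8.617e-5            # Boltzmann constant, eV/K

def eyring_rate(T_kelvin, rh, Ea=0.6, b=0.02, A=1.0):
    """Degradation rate ~ A * exp(-Ea / (kB*T)) * exp(b * RH%)."""
    return A * math.exp(-Ea / (KB * T_kelvin)) * math.exp(b * rh)

# Acceleration of a 60 C / 100% RH test over a 20 C / 50% RH service climate:
af = eyring_rate(333.15, 100) / eyring_rate(293.15, 50)
print(f"acceleration factor ~ {af:.1f}")
```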
09:50 | Reliability Demonstration of the Entirety of Cells from an HV Battery Using Prior Knowledge of Degradation Simulations PRESENTER: Alexander Grundler ABSTRACT. Validation and reliability demonstration of new technologies is a challenge, as less historical data from predecessors can be relied upon. This intensifies the conflict of objectives in reliability test planning. The example of a cell of a high-voltage battery of an automobile is used to illustrate how the reliability of this component can be demonstrated. In this case, the cell has two failure modes, which is why the system reliability of the component must be verified. To decrease the testing effort and to estimate the failure behavior of the cell, a degradation simulation and tests on a predecessor are used. The degradation simulation maps the field load to the capacity loss, whereas the tests of the predecessor provide failures of one failure mode only. Using these two sources of information, prior knowledge is available for both failure modes, which is additionally exploited by means of Bayes' theorem. The prior knowledge is also taken into account when determining the necessary sample sizes of the endurance tests, which address both failure modes in the form of a success run (failure-free tests). Since the two failure modes are equally responsible for the failure of the cell, the results of the tests must be combined accordingly. To calculate the proper confidence level of the aggregate reliability, the confidence distributions are combined according to Boole, using either the method of moments or a Monte Carlo simulation. In order to make use of the degradation simulation, a two-step bootstrap approach is proposed, which makes use of the tests that were used for the parametrization of the degradation model. The general procedure for preparing existing prior knowledge and taking it into account via Bayes' theorem, so that it can be used to plan success run tests for system reliability demonstration, is shown. The results show a noticeable advantage over basic, conventional success-run planning with prior knowledge for a single failure mode. This is reflected not only in the reduction of the test volumes, but also in the holistic view of the system reliability with confidence, which enables proper planning of the success runs. Due to the combination of the failure mechanisms into a system reliability with confidence, the sample size in vehicle endurance testing can be significantly reduced. If fewer endurance vehicles are set up, costs are also reduced. Since the vehicles are driven not only in parallel but also one after the other, the predefined reliability target can also be demonstrated earlier thanks to the reduced fleet size. |
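A worked sketch of the underlying success-run arithmetic: with zero failures, demonstrating reliability R at confidence C classically requires n = ln(1-C)/ln(R) samples, and a prior entering through Bayes' theorem reduces n. The beta prior below is an assumed stand-in for the degradation-simulation and predecessor knowledge, not the paper's actual prior.

```python
# Success-run planning with and without prior knowledge. The beta(8, 1) prior
# is an illustrative assumption; n failure-free tests update it to
# beta(a0 + n, b0), from which the achieved confidence follows.
import math
from scipy.stats import beta as beta_dist

R_target, C_target = 0.9, 0.9
n_classic = math.ceil(math.log(1 - C_target) / math.log(R_target))   # 22 tests

a0, b0 = 8.0, 1.0                        # assumed beta prior on reliability
def confidence(n):                        # P(R > R_target | n failure-free tests)
    return beta_dist.sf(R_target, a0 + n, b0)

n = 0
while confidence(n) < C_target:
    n += 1
print(f"classic: {n_classic} tests, with prior: {n} tests")
```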
10:10 | Hyperspectral imaging of steel to assess corrosion severity in a remote inspection regime PRESENTER: Bahman Raeissi ABSTRACT. Modern societies depend on the integrity of large numbers of complex steel structures. Typical examples are maritime and offshore vessels, bridges, and electrical power transmission towers, which are all continuously exposed to varying weather conditions such as wind, rain, salty water, and even etching chemicals. In order to maintain the physical integrity of such objects, regular monitoring and inspection are necessary. Corrosion is one of the most destructive defects in steel structures, and various detection and measuring methods are currently used to assess the condition of the overall structure. The traditional, and still most common, method is manual inspection and rating of the sample by a trained specialist. Studies have, however, shown that this procedure is subject to a high degree of rating variability due to subjectivity among the inspectors. Most non-destructive assessment methods are based on estimating the amount of material (e.g. steel) lost to corrosion. Our study indicates that detection of breakdown or end-of-life of the protective paint normally used on such structures, or detection of the amount and type of corrosion, are good indicators for maintenance decisions. However, inspection of remote locations of steel structures in harsh environments is risky and can pose significant safety and logistical challenges. Inspectors are exposed to various risks, such as fall accidents, oxygen shortage and poisonous gases (e.g. inside ship tanks). Remote drone-based inspections are considered an attractive and efficient alternative to current practice within the inspection business, in particular when accessing tall structures would require climbing or other dangerous operations. Hyperspectral imaging is a relatively novel approach, in which a so-called hyperspectral camera creates a spatial map of pixels with spectral information, e.g. in the NIR (near infrared) part of the electromagnetic spectrum. Hyperspectral cameras can detect different chemical products and can also identify different corrosion materials. In this study we have tested how well different supervised and unsupervised hyperspectral analysis methods can distinguish between chemically different corrosion products and estimate the relative amount of corrosion on a sample. In locations with coating breakdown or coating degradation, the detection and assessment of corrosion can be challenging with traditional methods, as the corrosion signature may be combined with that of the coating or may not be clear to the naked eye. Considering the spatial resolution of the hyperspectral camera, this means that pixels may contain the signatures of more than one material. To increase the reliability and robustness of the corrosion severity assessment, we have therefore tested the discrimination capabilities of hyperspectral unmixing algorithms in combination with traditional statistical image analysis methods. The unmixing algorithm is capable of distinguishing different materials in each pixel and thus gives a more reliable estimate of the amount and type of corrosion product in each pixel. There are several types of corrosion products with different spectral signatures, and each of them has a different stability and aggressiveness. Detecting these products could help define how aggressive or stable the corrosion is likely to be.
Combining the information about the amount and type of corrosion product will improve the robustness of decisions about the corrosion severity of the structure. |
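A minimal sketch of the linear spectral unmixing step described above: each pixel spectrum is modelled as a nonnegative mixture of endmember spectra, and the recovered abundances indicate the amount and type of corrosion product per pixel. The endmember spectra here are synthetic placeholders, not measured signatures.

```python
# Linear spectral unmixing of one pixel via nonnegative least squares.
# Endmember spectra are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

bands = 50
wl = np.linspace(0, 1, bands)
endmembers = np.column_stack([           # columns: coating, corrosion product A, B
    np.exp(-((wl - 0.3) ** 2) / 0.01),
    np.exp(-((wl - 0.6) ** 2) / 0.02),
    np.exp(-((wl - 0.8) ** 2) / 0.02),
])

true_abund = np.array([0.5, 0.4, 0.1])   # ground-truth mixture for the test pixel
pixel = endmembers @ true_abund + np.random.default_rng(1).normal(0, 0.01, bands)

abund, _ = nnls(endmembers, pixel)       # nonnegative least-squares unmixing
print(abund / abund.sum())               # estimated material fractions in the pixel
```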
10:30 | Time to Failure Prediction with Dependent Censoring PRESENTER: Alise Danielle Midtfjord ABSTRACT. The field of reliability theory is continuously adopting more advanced statistical methods for reliability analysis, for example when predicting the remaining useful life of components. However, due to the consequences of system failure, reliability data can often be prone to a high percentage of censoring. It is also common that the censoring mechanism is dependent on the failure times, which makes many statistical tools less effective, as they often assume independent censoring and might provide biased predictions. To handle this problem, we propose a boosting model which allows dependent censoring and enables machine learning methodology to be used on censored reliability data. The model is based on the accelerated failure time model and takes advantage of the Clayton copula when modelling the dependence between the failure times and the censoring mechanism. On both simulated data and the motivating example, related to airplane landings, our proposed model provides excellent results, outperforming the classical methods. |
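A small sketch of how dependent censoring of the kind modelled above can be simulated with a Clayton copula: the copula couples the uniforms driving the failure and censoring times via the standard conditional-inverse formula. Margins, scale parameters, and the copula parameter are illustrative assumptions.

```python
# Simulating failure times with dependent censoring through a Clayton copula.
# All distributional choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                                   # Clayton dependence parameter (> 0)
n = 10_000

u = rng.uniform(size=n)                       # drives the failure time
w = rng.uniform(size=n)
# Conditional inverse of the Clayton copula: sample v given u.
v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)

T = 100 * (-np.log(1 - u)) ** (1 / 1.5)       # Weibull failure times
C = 120 * (-np.log(1 - v)) ** (1 / 1.5)       # *dependent* censoring times
observed, event = np.minimum(T, C), T <= C
print(f"censoring rate: {1 - event.mean():.2f}, "
      f"rank correlation of (T, C): {np.corrcoef(u, v)[0, 1]:.2f}")
```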
09:30 | Design of Impact Tests for Polycarbonate Panes and their Deterioration by Cooling Lubricants - Part 1: Models and Limitations of Measurement PRESENTER: Heinrich Moedden ABSTRACT. As is well known, the actual objective of the impact tests defined in the standards is to determine the impact resistance of various materials used for guards. Commonly used for this purpose are steel sheets and transparent plastics for the vision panels, the latter being of great importance for the safe use of the machine: the vision panels in machine tools serve two purposes; they allow the machining process to be observed and, in the event of an accident, they protect the operator from ejected tool fragments, for example. Polycarbonate is used as a material for such vision panels because of its excellent impact resistance. However, polycarbonate is subject to aging processes that lead to a reduction in its impact resistance. Cooling lubricants in particular accelerate these aging processes, as Uhlmann et al. [1] pointed out in a preliminary study "KSS-PC", but the effect of modified cooling lubricants on polycarbonate sheets in impact tests is not entirely clear due to the relatively short exposure time. Therefore, funding for a comprehensive follow-up project "KSS-PC-Plus" was obtained via the German Machine Tool Builders' Association (VDW) in order to substantiate the statements of the preliminary study within a larger time frame. The total scatter of the determined impact resistance adds up from numerous individual effects, which can lead to a considerable random measurement error, as described in a separate paper [2]. Only when a model for this measurement error is available can the impact resistance of the polycarbonate sheets under investigation be determined indirectly from the velocity measurements of the projectile with some degree of "accuracy". This is because the impact resistance of polycarbonate sheets, considered in isolation, itself scatters to a not inconsiderable extent, as is generally known for plastics. For this purpose, modeling with Gaussian bell curves is useful [3], [4]. With the methods of inferential statistics, a well-founded design of the envisaged experiments can then be carried out; for example, with regard to the question of the necessary sample size (factor 1) in terms of the standard error of the mean (SEM) estimator, in order to determine, with the limited research resources available, the most reliable possible results for the relevant cooling lubricants (factor 2) and their effect on the reduction of impact resistance. In addition, there is the question of how long the exposure time of the polycarbonate sheets in the cooling lubricants under investigation must be, at minimum, so that the expected reduction in impact resistance can be read from the test data clearly enough and no longer lies within the range of the scatter, which is explained in a separate ESREL 2022 paper [2]. There is also the question of the number of supporting points on the time axis (factor 3); e.g. two supporting points might be sufficient, because to a first approximation an Exponential distribution over time seems to be useful here, with which temporal diffusion processes can usually be described with only one prominent parameter (λ, lambda); see Figure 1 for the density function. This is also the state of the art in the product safety standards for machine tools [5], see Figure 2. Apparently the following two supporting points could be suitable: I. once at the point with minimum distance to the origin of the coordinate system (analytically definable via λ) and II. in the area of saturation, as far back in time as possible, as is feasible within the two-year project duration in the context of artificially accelerated aging. In order to clarify the above questions, the results of Uhlmann et al. [1] are to be analyzed by means of hypothesis testing. In this context, a Monte Carlo simulation can be used to estimate, with the current interim results from the preliminary study, how large the age-related decrease in impact resistance must at least be in order to be statistically significant (e.g. p-value < 0.05), given the considerable scatter of the complex impact test technique and the material imponderables of the test samples. It is expected that the complementary findings during the progressing research work will lead to an iterative adjustment of the tests. In this process, hypothesis testing based on Monte Carlo simulation will also be applied recursively to the intermediate results [6]. At the end of the investigations, the multiplication of factors 1, 2 and 3 will result in a considerable number of impact tests (factor 4). This paper attempts to provide some insight into a reasonable backward planning for the two years of research, with an estimation of factor 4. Keywords: safety of machinery, machine guard, polycarbonate, aging, hypothesis testing, Monte Carlo simulation, design of experiments. References: 1. Uhlmann, Eckhart; Haberbosch, Kai; Thom, Simon; Drieux, Sophie; Schwarze, Alex; Polte, Mitchel (2019): Investigation on the effect of novel cutting fluids with modified ingredients regarding the long-term resistance of polycarbonate used as machine guards in cutting operations (KSS PC). In: Proceedings of the 29th European Safety and Reliability Conference (29), pp. 2944–2952. 2. Uhlmann, Eckhart; Polte, Mitchel; Bergström, Nils; Mödden, Heinrich (2022): Analysis of the Effect of Cutting Fluids on the Impact Resistance of Polycarbonate Sheets by means of Hypothesis Tests, ESREL 2022. 3. Meister, Fabio; Mödden, Heinrich et al. (2017): Probabilities in Safety of Machinery – Hidden Random Effects for the Dimensioning of Fixed and Movable Guards, 15th International Probabilistic Workshop 2017, Dresden (Germany). 4. Landi, Luca; Mödden, Heinrich; Meister, Fabio et al. (2017): Probabilities in Safety of Machinery – Risk Reduction through Fixed and Moveable Guards by Standardized Impact Tests, Part 1: Application and Consideration of Random Effects, ESREL 2017. 5. ISO 16090-1:2016 (follows former EN 12417), Machine tools safety — Machining centres, Milling machines, Transfer machines — Part 1: Safety requirements, Berlin, Germany. 6. Guttag, J. V. (2016), Introduction to Computation and Programming Using Python, 2nd edition, MIT Press, ISBN 978-0-262-52962-4. |
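A Monte Carlo sketch of the power question raised in this abstract: given the scatter of the impact test, how large must the aging-related drop in mean impact resistance be before a two-sample test flags it at p < 0.05? All numbers are illustrative assumptions, not the project's data.

```python
# Monte Carlo power estimate for detecting an aging-related drop in mean
# impact resistance with a two-sample t-test. All numbers are assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
sigma, n_per_group, n_sim = 5.0, 15, 2000   # scatter, sample size, MC repetitions

def power(mean_drop):
    hits = 0
    for _ in range(n_sim):
        fresh = rng.normal(100.0, sigma, n_per_group)   # unexposed sheets
        aged = rng.normal(100.0 - mean_drop, sigma, n_per_group)
        if ttest_ind(fresh, aged).pvalue < 0.05:
            hits += 1
    return hits / n_sim

for drop in (1, 2, 3, 5):
    print(f"drop {drop}: power {power(drop):.2f}")
```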
09:50 | Design of Impact Tests for Polycarbonate Panes and their Deterioration by Cooling Lubricants - Part 2: Proposals for Aging Period and Sample Size ABSTRACT. As is well known, the actual objective of the impact tests defined in the standards is to determine the withstand capacity of various materials used for guards. Commonly used for this purpose are steel sheets and transparent plastics for the viewing windows, the latter being of great importance for the safe use of the machine: the viewing windows in machine tools serve two purposes: they allow the machining process to be observed and, in the event of an accident, they protect the operator from ejected tool fragments, for example. Polycarbonate is used as a material for such vision panels because of its excellent impact resistance. However, polycarbonate is subject to aging processes that lead to a reduction in its properties. Cooling lubricants in particular accelerate these aging processes, as Uhlmann et al. [1] pointed out in the preliminary study "KSS-PC", but the effect of modified cooling lubricants on polycarbonate sheets in impact tests is not entirely clear due to the relatively short exposure time. Therefore, funding for a comprehensive follow-up project "KSS-PC-2" was obtained via the German Machine Tool Builders' Association (VDW) in order to substantiate the statements of the preliminary study within a larger time frame of two years [2]; the project starts in August 2022, right after the ESREL conference in Dublin. The total scatter of the determined impact resistance is the sum of numerous individual effects, which can lead to a considerable random measurement error, as described in a separate paper [3]. Only when a model for this measurement error is available can the withstand capacity of the polycarbonate sheets under investigation be determined indirectly, with some "accuracy", from the velocity measurements of the projectile. This is because the impact resistance of polycarbonate sheets, considered in isolation, itself scatters to a not inconsiderable extent, as is generally known for plastics. The total scatter of the determined impact resistance is composed of the following individual effects: a) normative tolerance ranges for the impact velocity measurement, b) limited repeatability in the velocity measurement resulting from a very complex test technique, and c) significant deviations in the exact impact conditions between repeat tests. The overlap of these scatter effects causes an inherent measurement error of the whole measurement apparatus. For this purpose, a distribution function is to be determined (estimated) as a model, for the first time, using the means of descriptive statistics. The interaction of the scattering effects is assumed to be random and symmetrical. Because of the numerous random individual effects in b) and c), a (first) Gaussian bell curve can be assumed in the sense of the Central Limit Theorem. Thus, it is necessary to determine its characteristic values, the mean and the standard deviation. |
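The descriptive-statistics step named at the end of the abstract amounts to estimating the Gaussian parameters from repeated projectile velocity measurements; a minimal sketch with invented readings:

```python
import numpy as np

v = np.array([19.8, 20.3, 20.1, 19.6, 20.4, 20.0, 19.9, 20.2])  # m/s, hypothetical

mean = v.mean()
sd = v.std(ddof=1)          # sample standard deviation of the error model
sem = sd / np.sqrt(len(v))  # standard error of the mean, relevant for sample sizing

print(f"mean = {mean:.2f} m/s, s = {sd:.2f} m/s, SEM = {sem:.2f} m/s")
```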
10:10 | An innovative smart system for the safety of workplaces with mobile machines with remote command PRESENTER: Alessandra Ferraro ABSTRACT. The recent development of remotely controlled mobile machines for outdoor applications, where operators stand near them using wireless commands, has raised new risks, such as running over or hitting workers and collisions between machines, which must be carefully assessed and managed. This hazardous operative scenario is particularly critical in the agriculture and forestry sectors, which are characterized by peculiar hazards such as low visibility and adverse weather conditions. The current state of the art of mobile machines with remote control provides only partial technical solutions for risk mitigation. In this paper, we present the technical and operative features of a novel smart integrated system based on the paradigm of Industry 4.0, which is able to effectively manage safety in this particular sector. Once all possible obstacles present in the work ground are tagged with multiple passive UHF-RFID tags, the machines can detect them using an RFID reader and estimate their distance. Once a hazardous situation is detected, for example a distance that is too short to be safe, the operators are informed via wearable devices by the on-board communication device. The operators managing the mobile machine can then stop the hazardous manoeuvre. Backend software oversees and manages this smart system. A proof-of-concept is under development to demonstrate the effectiveness of the proposed smart system. |
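The abstract does not state how tag distance is estimated; a common approach for passive UHF-RFID ranging is an RSSI-based log-distance path-loss model, sketched below with assumed calibration parameters and an assumed alarm threshold:

```python
def estimate_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.2):
    """Invert the log-distance model RSSI(d) = RSSI(1 m) - 10*n*log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

SAFETY_DISTANCE_M = 5.0  # hypothetical alarm threshold

d = estimate_distance(-62.0)
if d < SAFETY_DISTANCE_M:
    print(f"ALERT: tagged obstacle at ~{d:.1f} m, notify operator wearable")
```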
10:30 | Survey on the way of practice for safety demonstration of DI&C in different industries PRESENTER: Xueli Gao ABSTRACT. This paper presents results and findings from seven interview meetings performed with the purpose of identifying good practices for how safety demonstration is performed in application sectors other than nuclear, from which the nuclear sector can benefit. A set of 13 questions within the topics of differences in safety demonstration approaches, review strategies, how evidence is organised and presented for the reviewer, etc., was used for the interviews. To ease the understanding of different concepts within safety demonstration in industries other than nuclear, some background information [1-9, etc.] applied in the questions or answers is introduced in the paper. Main observations include: a) Safety demonstration can refer to several things across the whole safety lifecycle. Most of the industries follow the lifecycle approach with defined, detailed templates for specific safety analysis reports, etc., which makes the safety demonstration more structured and systematic. b) Safety demonstration has stricter requirements for the critical safety functions; it is required to be more systematic and structured for these functions. c) Different safety demonstration approaches each have pros and cons, and it is a common understanding that a combination of approaches should be used, depending on different factors, e.g. requirements from the client and the complexity or novelty of a system. d) Regulators need to pay more attention to safety demonstration, e.g. to how safety can be demonstrated in a more structured way, whether structured argumentation tools need to be applied, and what the application domains are for different demonstration methods or tools. References 1. IEC 61508: “Functional safety of electrical/electronic/programmable electronic safety-related systems”, rev. 2.0, 2010. 2. IEC 61511: “Functional safety: Safety instrumented systems for the process industry sector”, 2020. 3. OLF Guideline 070: “Application of IEC 61508 and IEC 61511 in the Norwegian Petroleum Industry”, The Norwegian Oil Industry Association, rev. 03, October 2018. 4. Handbook for Acknowledgement of Compliance (AoC), Norwegian Shipowners’ Association, rev. 05, August 2015. 5. NORSOK standard Z-013: Risk and Emergency Preparedness Analysis, Norwegian Technology Centre, rev. 03, October 2010. 6. https://www.ptil.no/en/regulations/all-acts/?forskrift=634, accessed: 15.08.2020. 7. CSM RA: https://orr.gov.uk/rail/health-and-safety/health-and-safety-laws/european-railway-safety-legislation/csm-for-risk-evaluation-and-assessment, accessed: 15.08.2020. 8. EN 50126: Railway Applications - The Specification and Demonstration of Reliability, Availability, Maintainability and Safety (RAMS), 2017. 9. EN 50129: Railway applications - Communication, signalling and processing systems - Safety related electronic systems for signalling, 2018. |
09:30 | Discussing issues in simulation-based uncertainty quantification. The case of geohazard assessments ABSTRACT. Models are mainly used for understanding the performance of a system, predicting its output, and assessing relevant impacts. The models link the output to quantities and events on a more detailed system level. To describe the impacts, the uncertainties of these quantities and events need to be assessed. Uncertainty quantification helps determine how likely the responses of a system are when some quantities and events in the system are not known. Using models, a system's responses can be calculated analytically, numerically or by random sampling. Given the high-dimensional and spatial nature of geohazard events and the associated quantities, sampling methods are frequently used because they result in a less expensive and more tractable uncertainty quantification than analytical and numerical methods. In the sampling procedure, specified distributions of the input quantities are sampled, the respective outputs of the model are recorded, and the process is repeated as many times as required for the desired accuracy. Eventually, the distribution of the outputs can be used to calculate probability-based metrics, like an expectation or the probabilities of critical events. Focusing on uncertainty quantification in terms of probabilities, and considering the usual constraint of very limited data in geohazard assessments, in this paper we analyse the related issues and challenges in quantifying uncertainty using models. We conclude that, despite the availability of options and sophistications for quantifying uncertainty using models, these options and sophistications are hard to justify and therefore cannot be used in full. Doubts are also raised concerning the increased accuracy provided by these sophistications. Further, in practice, even if some of the sophistications can be implemented, they will only reflect some aspects of the uncertainty involved. All this calls for more thoughtful approaches based on a general framework for the quantification of uncertainty. We illustrate the points raised by discussing a geohazard assessment informed by a model-based uncertainty quantification. |
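The sampling procedure described in the abstract can be condensed into a few lines; here is a minimal sketch in which the 'model' and the input distributions are placeholders standing in for a real geohazard code:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

def model(cohesion, slope_angle):
    # Placeholder response function; a real geohazard model goes here
    return cohesion - 0.8 * slope_angle

cohesion = rng.lognormal(mean=3.0, sigma=0.25, size=N)  # assumed input distribution
slope = rng.normal(30.0, 2.0, size=N)                   # assumed input distribution

response = model(cohesion, slope)
p_crit = np.mean(response < 0.0)  # probability of the critical event
se = np.sqrt(p_crit * (1 - p_crit) / N)
print(f"P(critical) ~ {p_crit:.4f} +/- {se:.4f}")
```

The standard error printed on the last line is what drives the 'as many times as required for the desired accuracy' repetition mentioned above.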
09:50 | Resilience of healthcare systems in natural disasters - A case study of the Henan rainstorm PRESENTER: Yixin Zhao ABSTRACT. Resilience assurance of a healthcare system, including pre-disaster planning, emergency response and post-disaster recovery, plays an important role in saving lives and reducing severe injuries in sudden natural disasters. Taking the Henan rainstorm in China on July 20, 2021, as an example, this paper analyzes the adverse impacts of sudden natural disasters on the healthcare system and clarifies the concept and importance of healthcare system resilience. The paper also presents the challenges the healthcare system faces, such as those in preparation and emergency planning, and in emergency supplies to preserve healthcare capability. Strategies and recommendations to improve the resilience of the health system are then proposed. Finally, a resilience assurance framework for healthcare systems in natural disasters is developed. This paper serves as a call and a reminder that preventive measures and preparatory investment in resilience are vital for the entire healthcare system and for well-being, in view of the more extreme weather events expected in the future under climate change. |
10:10 | Extreme Discharge Uncertainty Estimates for the River Meuse Using a Hierarchical Non-Parametric Bayesian Network PRESENTER: Guus Rongen ABSTRACT. Statistics of extreme discharges along the Meuse are needed for the reliability analysis and design of flood defenses. These are often obtained through a series of models that generate long synthetic time series in which the extremes should be represented. In this work, extreme discharges are generated from measurements and from statistical models based on observed discharges and geographical characteristics. The statistical model is based on a Generalized Extreme Value distribution for each catchment and a Non-Parametric Bayesian Network (NPBN) that correlates the discharges. We used hierarchical graph configurations of the BN, in which latent variables group catchments based on location and catchment characteristics. The hierarchical configuration of the Bayesian network did not represent the dependence structure better than the 'conventional' direct graph structure, but a combination of direct and hierarchical configurations did. The model forms a flexible and refreshing alternative to conventional hydrological models. |
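As a pointer to the kind of marginal model used here, the following sketch fits a GEV distribution to invented annual-maximum discharges of a single catchment with scipy; the paper's coupling of such margins through a non-parametric Bayesian network is not shown:

```python
import numpy as np
from scipy.stats import genextreme

annual_maxima = np.array([1510., 1820., 1340., 2100., 1675., 1930., 1480.,
                          2240., 1760., 1590.])  # m^3/s, illustrative values only

shape, loc, scale = genextreme.fit(annual_maxima)

# Discharge with a 1/100 annual exceedance probability (100-year event)
q100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"estimated 100-year discharge: {q100:.0f} m^3/s")
```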
10:30 | The relevance of Good Practices to improve Disaster Risk Management in Multi-Hazard Risk Scenarios in the field of civil protection PRESENTER: Boris Petrenj ABSTRACT. Numerous disasters that have occurred over the last two years, especially over the course of the COVID-19 pandemic, have emphasized the growing importance of building resilient societies. Indeed, the pandemic has revealed many operational and political gaps in the single-hazard approach to Disaster Risk Management (DRM), which becomes significantly weakened or even useless when one or more additional disasters strike at the same time, that is, in a multi-hazard risk scenario. Multi-hazard events combine events of natural and/or anthropogenic origin, including those of biological origin (e.g. an infectious disease such as COVID-19), that overlap in time and space. The simultaneous presence of multiple hazards shows the need to revise existing DRM approaches and to focus more on what it means for DRM to deal with multiple hazards. Drawing on a literature review covering the last two years of crises and disasters, the paper aims to fill the gap in the existing literature on how to improve DRM in multi-hazard risk scenarios. With the purpose of contributing to the ongoing discussion on how to establish an efficient DRM system when disasters strike simultaneously or in a domino sequence, this paper discusses the “good practice” approach. Good Practices (GPs) are generally understood as methods or techniques that are applied to solve existing problems, producing effective results and bringing benefits to the users. More specifically, the paper focuses on the need to identify, collect and disseminate successful practices, stories and lessons learnt to make them readily available and usable to the communities and practitioners active in DRM. This would further increase the understanding of DRM solutions, in compliance with the United Nations’ Sendai Framework for Disaster Risk Reduction 2015-2030. Since DRM approaches under multiple hazards have to be adapted in all phases of DRM, the present paper covers the following interrelated topics: • Use of a common and clear terminology concerning multi-hazard risk management; • Key challenges of multi-hazard risk scenarios which require an evolution in current DRM approaches, with examples from the COVID-19 pandemic; • The need to identify and disseminate good practices (GPs) – their definition, importance, characteristics, implications on the ground, and challenges; • Different types of GPs and the methodological approach to finding them; • A few examples of GPs in scenarios combining pandemic and natural hazards; • A high-level overview of ongoing efforts to collect and systematise good practices related to different aspects of emergency and disaster management, both scientific and practical. |
PANEL: Risk perception & Risk hotspot. Organised by Lloyd’s Register Foundation Institute for the Public Understanding of Risk, National University of Singapore.
09:30 – 09:35 | Welcome and introduction
09:35 – 09:50 | Risk perception gaps: What they are, why they matter and our research approach. Dr Olivia Jensen, Lead Scientist (IPUR, NUS) & Dr Carolyn Lo, Research Fellow (IPUR, NUS)
09:50 – 10:05 | Risk blindspots and hotspots: Preliminary perspectives, snapshots and echoes. Dr Carolyn Lo, Research Fellow (IPUR, NUS) & Dr Olivia Jensen, Lead Scientist (IPUR, NUS)
10:05 – 10:20 | Global public risk perceptions: Insights from the 2021 World Risk Poll. Dr Sarah Cumbers, Director of Evidence and Insight (Lloyd’s Register Foundation)
10:20 – 10:50 | Open discussion
11:10 | Digitalisation and its implications for customs and border control agencies’ functions – emphasis on risk management ABSTRACT. Digitalisation has been described as the newest technological revolution, the most powerful technological trend transforming societies and business life, and it has various implications for customs and border control agencies and their main functions. Indeed, some European customs and border control agencies have gone through organisational restructuring due to digitalisation. Digitalisation has been defined in various ways, depending on the context. It can simply refer to the use of digital technologies, or the definition can also embrace the implications of digitalisation: “Digitalisation refers to the development and implementation of ICT systems and concomitant organizational change…” (Gebre-Mariam and Bygstad 2019). Definitions can also take into account the preconditions of digitalisation, the purposes for which it is used, and its effects. Digitalisation is often associated with positive implications, such as the ability of customs and border control agencies to better perform their control tasks and to enhance the smooth and flexible cross-border movement of goods and travellers. At best, digitalisation enables better management of borders, e.g. by exploiting electronic borders and common European databases, such as the Automated Fingerprint Identification System (AFIS), which helps to control the mobility of non-EU residents travelling without a visa. However, digitalisation, with its increasing reliance on ICT systems, brings with it cyber-security concerns with implications for societal safety and security. Therefore, identifying and managing the risks of increased digitalisation is important. The objective of this study is to shed light on digitalisation and its implications for customs and border control agencies from a risk management perspective. The data consist of a literature review on digitalisation in the customs and border control context, documents regarding digitalisation strategies, and interviews with customs officers in one or two European countries. The research questions are as follows: How have digitalisation-related benefits, disadvantages and risks been identified? What types of risk management approaches and principles have the customs and border control agencies used? What could be a holistic understanding of the risks and effects of digitalisation? The expected research results will provide a better understanding of digitalisation-related risks, as well as of the risk identification and risk management methods and principles currently used in the customs and border control context. The study will provide ideas on how to improve the identification and management of digitalisation-related risks in customs and border control agencies. Reference: Gebre-Mariam, M., Bygstad, B.: Digitalization mechanisms of health management information systems in developing countries. Inf. Organ. 29(1), 1–22 (2019) |
11:30 | To what extent is the ISPS Code relevant for mitigating current and future security threats along the Norwegian coastline? PRESENTER: Richard Utne ABSTRACT. Internationally, ships carry 90 percent of the world's goods. Virtually everything we relate to depends on the maritime activity that binds the continents together. Events that affect the flow of goods between the world's ports can therefore have major consequences for NATO's Article 3 on resilience and thus affect the geopolitical balance. Norway's coastline, the second longest in the world, makes risk regulation a vital element in building resilience against current and future security threats. Since history often shows that security measures are introduced reactively, we find it interesting to examine whether the International Ship and Port Facility Security (ISPS) Code is relevant for mitigating current and future security threats along the Norwegian coastline. |
11:50 | The role of the World Customs Organization in reaching the Sustainable Development Goals: What does Facebook tell us about it? ABSTRACT. With the aim of promoting socio-economic development and providing security for people, key Customs areas have been identified as strategic priorities to guide the work of the World Customs Organization (WCO). The WCO also endorses the 17 Sustainable Development Goals (SDGs), and it is emphasized that the Customs goals and activities contribute to the attainment of each of these 17 SDGs. Reaching these goals requires collaboration among different stakeholders, and a public communication strategy is fundamental for this purpose. To assist in such collaboration, public safety agencies and organizations are increasingly using social media, which has the potential to facilitate an arena for two-way communication. The purpose of this study is twofold. Firstly, the aim is to explore whether and how the WCO uses Facebook to communicate activities aimed at the realization of the 17 SDGs. Secondly, the purpose is to gain a wider understanding of how the WCO uses Facebook to communicate with various stakeholders and the general population. To investigate this, a total of 682 Facebook messages posted by the WCO over a one-year period (2021-2022) were identified and collected for further examination. Through a content analysis of the messages, several main themes were identified: supporting other countries, strengthening regional cooperation, capacity building, gender equality, challenges of digitalization, and security, among others. Several subthemes were identified within each theme. Furthermore, the themes were discussed in relation to the 17 Global Goals, with certain areas receiving more attention from the WCO than others. As of February 2022, the WCO has over 20,000 followers on Facebook in total, including both organizations and individuals, and this number is growing rapidly. By strategically communicating about its goals and activities on social media platforms, the WCO can better engage stakeholders and the public, enhancing collaboration and access to timely and relevant information. Likewise, social media is a good arena for two-way communication, where comments, discussions and reactions to messages may give the WCO valuable insights needed to reach the desired goals. |
12:10 | Reforming the customs officer education in Norway ABSTRACT. In 2021, the University of Stavanger became the first Scandinavian institution to provide a standardized bachelor's degree in Customs and Border Control, in cooperation with the customs department in Norway, with the joint aim of educating customs officers. This reform of the customs officer's education from an in-house education to an academic three-year degree has resulted in many new challenges for the Customs Department, both as a public organization and as a workplace. Among these are changes in the requirements for and recruitment of customs officials, and changes in the current workforce, including cultural readjustments and ways of socializing and “naturalising” new members of the cultural communities of the various workplaces. How is the Customs Department to get the most out of the new competences of recruits with academic backgrounds, while ensuring the transfer of tacit knowledge from experienced personnel to new personnel with new backgrounds? A further challenge is to ensure the recruitment of personnel with a bachelor's degree from the University of Stavanger to the Norwegian Customs, including its more remote locations in Norway. Understanding how the new bachelor's degree necessitates new forms of recruitment and organizational change will enable the Norwegian Customs to develop strategies to meet the challenges listed above. This paper approaches this subject by addressing some of these challenges. The bachelor's degree at the University of Stavanger includes two periods of practice for students, with somewhat limited interactions between students and travellers in the first period. This paper is based on conversations with students in their first period of practice, observations in the field, and interviews with customs officials involved in the students' first encounters with the role and day-to-day work of customs officials; from these we extrapolate some postulates that relate to the above challenges. The paper outlines an agenda for research and a research design based on the postulates. |
12:30 | Customs and Border Control: Challenges and Implications for Future Research PRESENTER: Arvind Upadhyay ABSTRACT. Customs and border control are an integral part of global trade. Cross-border trade is increasing for various reasons, and large numbers of vessels cross international borders all the time to satisfy customer demand. Global resources are limited, and we are on the verge of exceeding the planetary resource capacity; thus, to achieve a sustainable future, economies are constantly seeking emerging innovative models, approaches, and frameworks (Dutta et al. 2021). This research work focuses on understanding the challenges faced by customs and border control. A mix of systematic and integrative review is appropriate for this research, as it explores the published literature in the cognate area. First, we extract the critical challenges through a thematic literature review. After that, we focus on an integrative review of emerging research streams in the customs and border control area, such as the role of technology, counterfeit trade, the transition from a linear to a circular model, and sustainability, among others. Our research work identifies gaps in the literature. We finish with a synthesis of the existing relevant literature and suggest future directions for research. References: • Dutta, P., Talaulikar, S., Xavier, V., & Kapoor, S. (2021). Fostering reverse logistics in India by prominent barrier identification and strategy implementation to promote circular economy. Journal of Cleaner Production, 294, 126241. • Urciuoli, L., Hintsa, J., & Ahokas, J. (2013). Drivers and barriers affecting usage of e-customs - A global survey with customs administrations using multivariate analysis techniques. Government Information Quarterly, 30, 473-485. |
11:10 | Application of Collaborative Governance and Integrated Risk-Resilience Based Policies to Improve the Risk Management of Smart City Lighthouse Projects PRESENTER: Konstantina Karatzoudi ABSTRACT. Smart City Lighthouse projects represent a unique European innovation tool for deploying and replicating smart city and energy solutions on a large scale to serve the EU mission of creating one hundred climate-neutral cities by the year 2030. This cross-country and multi-disciplinary project setup embraces innovation, but it also leads to complexity. Risk management represents a key approach for handling this complexity and meeting the various types of risks that can occur in smart city lighthouse projects. A review of current risk management practices has been conducted covering all seventeen existing lighthouse projects. The review has revealed that the risk management in most lighthouse projects is in line with common standards as described in ISO 31000 and the Open PM2 Project Management Framework, highlighting identification, analysis, evaluation, and treatment of project risks. However, the occurrence of several high-profile cybersecurity- and privacy-related vulnerabilities has uncovered the need to expand the risk management beyond these standards. In the case of smart city lighthouse projects, proper risk management needs to consider the multi-stakeholder and interconnected nature of these projects and all the system interdependencies to determine which processes and functions to apply. In the paper, we investigate how risk management can be improved through collaborative governance, highlighting stakeholder participation and involvement to meet these challenges. In addition, the events that have occurred and the vulnerabilities identified also point to the need for implementing resilience-based policies. In the current work, we therefore also investigate how such policies can be better integrated with traditional risk management activities to improve the overall handling of risks and vulnerabilities. A specific smart city lighthouse project is used to illustrate the discussion. |
11:30 | Smart Technologies for integrated NaTech risk management in major hazard industrial plants PRESENTER: Alessandra Marino ABSTRACT. Recent events have outlined the relevance of the interactions between industrial and natural hazards (NaTech), particularly for what concerns seismic risk. EU regulation, namely Directive 2012/18/EU, explicitly requires risk analysis for NaTech events. The development of a risk assessment methodology for major hazard industrial plants allows the identification of the critical elements of a plant in seismic-prone areas. The implementation of smart technologies (sensors, actuators, innovative systems for seismic protection) at the critical elements allows a significant reduction of major hazards and related consequences. The “smart” application of NaTech management technologies, from early warning to active protection, makes it possible to upgrade the safety conditions of existing industrial plants as a retrofit solution, avoiding heavy and expensive structural interventions. Furthermore, taking into account that an earthquake affects the entire plant and its safety systems (such as the extinguishing water supply and power lines) at the same time, smart technologies allow simultaneous monitoring and control with respect to seismic events. Hence, it is evident that smart technologies can play a relevant role in NaTech management. Systems like early warning (EW) and active protection systems can be effectively used to reduce the NaTech risk and therefore to improve the resilience of a major hazard industrial plant. |
11:50 | Development of the Risks from the design to the implementation of the E-LAND solution PRESENTER: Coralie Esnoul ABSTRACT. The goal of the E-LAND project is to allow renewable energy actors to minimize the cost of their energy production and consumption. One such actor is the energy island: an isolated community that produces and consumes part or all of its energy. The E-LAND project is developing a toolbox, which computes an optimal schedule based on user data (e.g., consumption and production capacities, infrastructures) and external parameters (such as forecasts and market prices) to enable the smaller actors of the energy market to be more efficient in their energy management. The project, which started in 2019, has developed the solution and has reached the phase of implementing the toolbox components at the different pilot sites in the project. The solutions developed in the project introduce new protocols, new functionalities and new risks into already existing infrastructures and energy systems. In order to make the solution accepted by the energy islands, the project has assessed, documented and communicated the safety, security and privacy of the product and of the pilots, in order to empower the partners and pilot site owners to manage and address their own risks. The new functionalities of the toolbox introduce new, non-trivial risks to the pilot sites. For example, the data handling regime and data protocols change from locally stored data to non-local data, and a set of new processes and procedures are needed for the pilot sites. For them to have control of their own information assets, the gap in competences must be bridged and awareness raised. Therefore, existing risk assessments are not sufficient for the future users to take on risk ownership. This paper compares how risks were understood at the beginning of the project, how risks have developed through the project, with the difficulties of physical interaction at the pilot sites during the pandemic, and which risks have been sufficiently mitigated or closed. While details of the risk analysis process and results were described in the papers published at ESREL in 2020 and 2021, this paper focuses on the evaluation of the risk process thus far and on whether the risks foreseen for the pilot sites were accurate: how did the pilot sites experience the implementation, and what tools and processes were required for them to handle the solution? The paper presents a set of risks in the project and the measures undertaken to empower risk owners to handle new risks and take ownership of them. The development of the risks is presented as well as the measures taken. Their efficiency is discussed with regard to which risks were successfully managed and mitigated versus which risks actually occurred. As a step in this, risks are compared from the design phase to the implementation phase in order to evaluate the performance of the risk management strategy put in place, with an evaluation of the tools, the protocols, the mitigation actions and the communication plan throughout the project. |
12:10 | Safety risk analysis of train obstacle detection system based on Bayesian network PRESENTER: Weina Song ABSTRACT. Traditional risk analysis methods cannot use updated probabilities to update the system risk, which imposes many limitations in application. Based on the fault tree analysis method, this paper constructs a dynamic Bayesian network of the train intelligent obstacle detection system, draws on data on the reliability of each unit of the system, uses the GeNIe software to simulate the system and reason about it, outputs the posterior probability of each root node in case of system failure, and analyzes the changing trend of the system failure rate, including maintenance factors. The results show that the components that have the greatest impact on the system failure rate are the camera and the millimeter-wave radar. At the initial stage of system operation, the failure rate of the system increases rapidly, but over time the rate of increase gradually decreases and the failure rate finally tends to become stable. The dynamic Bayesian analysis method does not need to rebuild the network; it can update the risk of the system according to updated probabilities. It can help maintenance personnel analyze the weak links of the system and is suitable for the dynamic risk analysis of large-scale systems with complex structures and probability updates. |
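To make the inference step concrete, here is a toy two-component version of the obstacle-detection fault tree as a Bayesian network, written with the open-source pgmpy library rather than the GeNIe software used in the paper; the failure probabilities are invented:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Camera", "System"), ("Radar", "System")])

cpd_cam = TabularCPD("Camera", 2, [[0.98], [0.02]])  # state 0 = ok, 1 = failed
cpd_rad = TabularCPD("Radar", 2, [[0.99], [0.01]])
# OR gate of the fault tree: the system fails if either component fails
cpd_sys = TabularCPD("System", 2,
                     [[1.0, 0.0, 0.0, 0.0],   # P(System = ok | Camera, Radar)
                      [0.0, 1.0, 1.0, 1.0]],  # P(System = failed | Camera, Radar)
                     evidence=["Camera", "Radar"], evidence_card=[2, 2])
model.add_cpds(cpd_cam, cpd_rad, cpd_sys)

infer = VariableElimination(model)
print(infer.query(["Camera"], evidence={"System": 1}))  # posterior given failure
```

The posterior of each root node given the observed system failure is exactly the quantity the abstract reports for the camera and radar.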
11:10 | Risk Assessment in Ultrasonic Testing of Critical Parts via Bayesian Belief Networks and Analytic Hierarchy Process PRESENTER: Italo de Souza Oliveira ABSTRACT. This paper discusses a framework for identifying risks in the Ultrasonic Testing (UT) of critical parts, based on the Analytic Hierarchy Process (AHP) and Bayesian Belief Networks (BBN). Potential risk factors and typical ultrasonic testing scenarios have been investigated based on the most current literature on the subject and on a case study conducted in an aero-engine repair station. An affinity diagram was used to categorize the risk factors. The Bayesian network combines the risk factors that contribute to an inspection failure, and AHP prioritizes the impact of the risk categories. The combination of probability and impact identifies the most significant risk categories. As a result, the method can reveal the most significant risk factors in the ultrasonic testing of critical parts, and actions can be proposed to respond to the risks. The conclusion is that the model is adequate to significantly reduce the risk of hardware failure. As a contribution, the proposed method is an invaluable source of information for safety engineers and decision-makers in companies. It augments their knowledge and helps identify risks in the UT of critical hardware and implement actions to avoid critical part failures and improve safety in the inspection of these parts. |
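A minimal sketch of the AHP step: priority weights from a pairwise comparison matrix via its principal eigenvector, with Saaty's consistency ratio as a sanity check. The 3x3 matrix comparing hypothetical risk categories is invented for illustration:

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # pairwise judgments for three risk categories
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority vector (category weights)

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # Saaty's random index RI = 0.58 for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```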
11:30 | Prediction of Light Emitting Diodes Luminous Flux Degradation with Interval Regression Models PRESENTER: Roberto Rocchetta ABSTRACT. Uncertainty affects the service life of new LED systems because of randomness in manufacturing and assembly processes, variability in operations and environments, and poor understanding of failure modes and degradation patterns. Nevertheless, new LED lamps and luminaires require precise quantification of the expected service lifetime and reliability. In this regard, the estimation of the service life L70, corresponding to 70% depreciation of the initial light output, is a central problem. The industrial standard IES TM-28 gives a method to extrapolate the service life from an exponential model that best fits the data (in the least-squares sense). Parametric model assumptions and extrapolation methods are needed because modern LEDs are durable, and end-of-life events are (almost) never observed during the accelerated testing phase. However, deterministic regression models cannot characterize the uncertainty that unavoidably affects the degradation process. This work investigates innovative statistical models to predict LED luminous flux degradation and, within the scope of the AI-TWILIGHT (‘AI-Powered Digital TWin for LIGHTing infrastructures’) project, tries to overcome the limitations of traditional methods. We first propose an overview of statistical tools for longitudinal flux data modelling and LED lifetime prediction. Then, a new interval regression approach is introduced to bound the variability of flux degradation paths while quantifying the uncertainty in the LED service life distribution. We test the applicability of the proposed method on a data set of fast-degrading LEDs for which L70 observations are available for validation and verification of the proposed method. Probabilistic (generalization) bounds on the prediction errors are introduced and validated. We conclude the article with a discussion of future objectives and challenges, which involve exploring the link between bounds on the service life distribution and error bounds on the degradation process. |
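For orientation, the deterministic point estimate that the interval approach generalizes can be sketched as a log-linear least-squares fit of an exponential decay, extrapolated to L70. Data values are invented, and this is a TM-28-style illustration, not the paper's method:

```python
import numpy as np

t = np.array([0., 1000., 2000., 3000., 4000., 5000., 6000.])       # hours
phi = np.array([1.000, 0.985, 0.972, 0.958, 0.946, 0.931, 0.919])  # normalized flux

# Fit phi(t) = B * exp(-alpha * t) by least squares on log(phi)
slope, intercept = np.polyfit(t, np.log(phi), 1)
alpha, B = -slope, np.exp(intercept)

L70 = np.log(B / 0.70) / alpha  # time at which flux reaches 70% of initial output
print(f"alpha = {alpha:.3e} 1/h, B = {B:.4f}, L70 ~ {L70:,.0f} h")
```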
11:50 | Prediction of the Remaining Useful Life of MOSFETs Used in Automotive Inverters by an Ensemble of Neural Networks PRESENTER: Giovanni Floreale ABSTRACT. Failures of the switches of Electric Vehicle (EV) inverters can cause the unavailability of the vehicle powertrain. For this reason, the automotive industry is interested in methods for predicting the Remaining Useful Life (RUL) of switches such as MOSFETs. In this regard, the main challenges are: a) MOSFET degradation can be hidden by the inherent variability of the measured signals, due to the continuously changing operating conditions in automotive applications; b) data collected from in-field applications are scarce. In the present work, we develop a Simulink model for simulating the evolution of physical quantities correlated with MOSFET degradation, such as temperatures and electrical signals, during run-to-failure degradation trajectories. Then, a prognostic model based on an ensemble of Artificial Neural Networks (ANNs), which receive as input sliding windows of the measured signals, is developed for predicting the MOSFETs' RUL. The model is validated considering an H-bridge inverter made of four MOSFETs. |
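A condensed stand-in for the prognostic setup described above: sliding windows of a degradation-sensitive signal feed an ensemble of neural networks whose predictions are averaged. Synthetic data and scikit-learn MLPs are used here in place of the paper's Simulink trajectories and ANN ensemble:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
T, W = 500, 20  # trajectory length and sliding-window width
signal = np.linspace(0.0, 1.0, T) ** 1.5 + rng.normal(0, 0.02, T)  # fake degradation signal
rul = np.arange(T - 1, -1, -1, dtype=float)                        # true RUL in cycles

X = np.array([signal[i:i + W] for i in range(T - W)])  # sliding windows
y = rul[W - 1:T - 1]                                   # RUL at the end of each window

ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=s).fit(X, y) for s in range(5)]

preds = np.array([m.predict(X[-1:])[0] for m in ensemble])  # latest window
print(f"RUL estimate: {preds.mean():.0f} cycles (ensemble std {preds.std():.1f})")
```

Averaging over networks trained with different seeds also yields a spread that can serve as a rough confidence indicator, one motivation for using an ensemble rather than a single network.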
12:10 | A Simulation-based Bayesian approach for parameter estimation with model misspecification analysis using ALT data PRESENTER: Anis Ben Abdessalem ABSTRACT. Accelerated life testing (ALT) is a standard approach to gather information on the failure times of highly reliable devices. In ALT, devices are exposed to higher levels of stress (e.g. higher temperature, voltage, pressure) to produce failures more quickly and, hence, reduce the cost and length of tests. The data collected at these levels are then analysed and extrapolated to use-level stress conditions. However, before extrapolation, one needs to select an appropriate distribution that better reflects the variability of the times to failure, together with an acceleration law. In practice, several statistical distributions could often be used, chosen from either statistical or physical considerations. Simpler models are always preferred, as they offer many advantages to practitioners. Hence, the primary aim of this work is to investigate the effects of model misspecification, specifically when the three-parameter Weibull distribution is incorrectly specified as one of the following non-nested models: the Birnbaum-Saunders or the lognormal distribution, both commonly used when failure is caused by cyclic loading (fatigue). To estimate the parameters of the proposed ALT models, an efficient variant of the approximate Bayesian computation algorithm, called ABC-NS, is used. |
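To make the simulation-based idea tangible, here is plain ABC rejection (the paper uses the more efficient ABC-NS variant) for a two-parameter Weibull model at a single stress level; the observed data, priors, summary statistics and tolerance are all invented:

```python
import numpy as np

rng = np.random.default_rng(7)
observed = rng.weibull(2.0, 30) * 1500.0  # stand-in for observed failure times (h)

def summary(x):
    return np.array([np.mean(x), np.std(x)])

s_obs, eps, accepted = summary(observed), 80.0, []
for _ in range(50_000):
    shape = rng.uniform(0.5, 5.0)       # prior on the Weibull shape
    scale = rng.uniform(500.0, 3000.0)  # prior on the Weibull scale
    sim = rng.weibull(shape, observed.size) * scale
    if np.linalg.norm(summary(sim) - s_obs) < eps:
        accepted.append((shape, scale))

post = np.array(accepted)
if len(post):
    print(f"{len(post)} accepted; shape ~ {post[:, 0].mean():.2f}, "
          f"scale ~ {post[:, 1].mean():.0f} h")
```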
12:30 | Condition-based opportunistic maintenance of cascaded hydropower stations PRESENTER: Wanwan Zhang ABSTRACT. The purpose of this paper is to build a new condition-based opportunistic maintenance (CBOM) model. It combines short-term hydro scheduling (STHS) and generator maintenance scheduling (GMS) through the failure properties. One generator in a cascaded hydro system is used as a research example; the CBOM model schedules 9 maintenance activities in one year for this generator. Sensitivity analysis reveals that the model offers sufficient flexibility to modify scheduling plans based on maintenance requirements. Among all the parameters, the accident penalty cost and the maintenance duration have no effect on the maintenance results, while the upper and lower limits of the failure probability influence the number of maintenance activities. Compared with age-based maintenance (ABM), the CBOM strategy obtains more profit and cancels unnecessary maintenance activities by trading off operation against maintenance. |
11:10 | Designing of the Medical Data Management System with Reliable Microservice Architecture PRESENTER: Jozef Kostolny ABSTRACT. The information system for storing and analysing medical data is designed to support easy expansion with new elements encapsulating comprehensive application logic. As these individual elements act as separate modules, this creates space for the use of technologies independent of those in other components. The technologies used can therefore be specific to the selected application logic. The individual modules communicate by exchanging messages through an API. This design follows the microservices architecture, which, together with the scalability of the system, makes it possible to create a reliable system that provides its services even under higher load and larger numbers of users. |
11:30 | Assessing Risk Awareness with Hospital Information Systems PRESENTER: Margarida Martins ABSTRACT. Hospital information systems continue to transform healthcare practices at a disruptive rate. However, such transformational changes within healthcare practices also introduce new technical risks. Therefore, all stakeholders within a healthcare setting must be aware of the potential risks hospital information systems pose to patient care safety and quality. This paper adopts a focus group study and Delphi method approach to examine (i) the range of risks identified by thirty-two experts from different Portuguese hospitals and (ii) the level of awareness across healthcare practitioners. The contribution of the research is threefold. First, we present a literature summary on risks associated with hospital information systems. Second, we present findings on a typology of twenty-three risks and evaluate the perception of healthcare professionals on their potential impacts. Third, we discuss strategies to improve risk awareness with hospital information systems within the hospital environment, with broader implications for other healthcare settings, considering the different perspectives of healthcare workers. |
11:50 | Modelling Variations in Newborn Life Support Procedure Using Colored Petri Nets PRESENTER: Alfian Tan ABSTRACT. Variations in clinical practice are common in daily medical activities. Such variations exist to accommodate different factors, such as patient condition, but in addition some unwarranted variations occur that can cause unwanted consequences of the procedure. Examples of such consequences include a longer completion time for a clinical task, failure to deliver an effective treatment to the patient, and unnecessary healthcare cost (Carter, 2016). The causes of different variations can come from the patient's condition and preference, the knowledge and skills of clinical staff, as well as the lack of clear technical guidance for a clinical protocol. A careful consideration of variations and their effects on outcomes should lead to improvements in healthcare delivery. A modelling approach that could capture such factors and their effects would help to achieve this goal. In addition, automatic action recognition techniques to monitor and analyse these variations could help prevent future adverse events in clinical processes as well as support data collection for model development. Smith et al. (2019) demonstrate a combination of object and action recognition techniques in the newborn resuscitation procedure that can be further expanded to identify variations in this clinical activity. In this paper, a Newborn Life Support (NLS) procedure is modelled and analysed using a Coloured Petri Nets (CPN) approach and the simulation technique. This procedure is chosen not only because it is prone to error, but also because it has potential for further study, as it involves intensive clinical teamwork which requires both adequate technical and non-technical skills of the team members. In terms of the technical aspects of the CPN, colours in this approach are used to represent model parameters, such as the gestational age of the baby, the condition of the baby during the procedure, the number of actions performed by the team, and other technical modelling aspects, in order to control the flow of tokens in the diagram. Probabilistic aspects of the model include the duration of every resuscitation task in the procedure, the choice of actions by clinical staff, and the condition of the baby after receiving the treatment. The outputs of the model consist of a reliability measure of the procedure, such as the percentage of babies in an unsatisfactory condition at the end of the protocol and the number of babies who need full resuscitation, as well as an efficiency measure, such as the duration of the procedure until a successful outcome is achieved. Model parameters are based on literature and on data from an in-field study (2016-2018) carried out at Nottingham University Hospitals NHS Trust, Nottingham, UK. This in-field data was collected simultaneously with the research conducted by Henry et al. (2021). The modelling approach in our paper is demonstrated using a number of scenarios based on possible NLS variations, such as differences in the maximum number of trials of the standard ventilation procedure, the probability of technical error in an unsuccessful inflation effort, and the proportion of successful intubation tasks. |
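The CPN itself is not reproducible in a few lines, but the flavour of its simulation outputs can be suggested by a toy Monte Carlo stand-in: a single repeated-inflation step with random durations, a per-attempt success probability, and a scenario parameter for the maximum number of trials. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
RUNS, MAX_INFLATION_TRIALS = 10_000, 5  # the trial cap is a modelled variation

durations, escalations = [], 0
for _ in range(RUNS):
    t, success = 0.0, False
    for _ in range(MAX_INFLATION_TRIALS):
        t += rng.lognormal(np.log(30.0), 0.3)  # attempt duration in seconds
        if rng.random() < 0.6:                 # assumed per-attempt success prob.
            success = True
            break
    if success:
        durations.append(t)
    else:
        escalations += 1                       # proceed to full resuscitation

print(f"escalation rate: {escalations / RUNS:.1%}, "
      f"median time to success: {np.median(durations):.0f} s")
```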
12:10 | Quantitative assessment of the benefit-risk ratio in the design of a medical device ABSTRACT. Scientific innovation is considered boundless in every direction one aims to explore, with technical knowledge and the availability of resources as the sole limitations. If a scientific idea brings benefits to the community and meets an existing demand, then this idea becomes a candidate for further development. At this point, the matter becomes a bit more complicated. Any scientific innovation and its technical realization bring negative side-effects together with the claimed benefits. For example, the economy that is based on the old technology will certainly suffer from the technical shift, and new risks can be introduced, for instance by connecting industrial controls to the internet (cyber security). This is why the estimation of benefits and risks, and of their benefit-risk ratio, is required for every application domain, such as transportation, communication or energy production. In this respect, medicine is not an exception. The validation of a new method in medicine and its implementation by means of a medical device proceed through well-defined phases, the most important of which are summarized as follows. The first phase consists of monitoring the response of the human body, which may vary from one patient to another, for example as a function of age and gender. This is the clinical phase, the goal of which is to collect data and infer statistical evidence over a significant range of case studies, before validating the method. Once the effectiveness is proven and the method is validated, the side effects of the method itself, and the risks introduced by the medical device through which it is implemented, must be analyzed and evaluated. This is the phase where benefits and risks are compared. Included within the benefits is the expectation of increasing the chances of healing from a disease, of recovering sooner from an injury, and eventually, of enjoying a higher quality of life. Within the risks are all failures of the medical device with respect to the intended use and misuse, and these concern the safety of the patient who is undergoing the medical treatment. These risks need to be understood, analyzed and evaluated against the acceptable benefit-risk ratio. For medical devices in Europe, this matter is regulated by the Medical Device Regulation MDR 2017/745. The manufacturer of a new medical device must comply with the MDR and with the standard ISO 14971 for the risk management process. Both are binding for the effective and safe use of the medical device throughout its entire life cycle. The state of the art requires the manufacturer to make every effort (as far as possible) to reduce the risks below the established acceptable levels. Any decrease of the benefit-risk ratio, for example caused by a further development of the medical device, shall be reviewed before it is accepted or eventually rejected. This is because patient safety is the non-negotiable value, often quoted as “safety first”, which governs the validation and approval of a new method, whether pharmaceutical or administered by medical devices, as well as its subsequent development. Economic aspects come into play only if further risk reduction is neither technically nor economically practical in comparison to the achievable benefit-risk ratio.
This paper addresses the analysis and evaluation of the benefit-risk ratio for a complex medical device, the MedAustron Particle Therapy Accelerator (MAPTA) in Wiener Neustadt, Lower Austria. MAPTA treats cancers and similar diseases of the human body by delivering accelerated light ion beams (protons and carbon ions). The particle therapy accelerator started operation in December 2016. Since then, it has undergone several changes in various directions, such as expanding the scope of the treatable diseases, improving the effectiveness of the existing techniques, optimizing the treatment times and increasing the patient throughput. This paper describes the benefit-risk ratio analysis for a few case studies taken from the experience with MAPTA, and it shows how scientific innovation in medicine meets the limitations imposed by norms and standards on the medical industry. |
12:30 | Systemic Risk of Undesirable Contagion within System Time Horizon: Work in Progress ABSTRACT. Numerous systemic failures of various internetworked infrastructures demonstrate that the benefits of interconnectivity are associated with various risks, including the risk of undesirable contagion. The goal of the system designer/operator is to balance the economic benefits and the systemic risks associated with increases in system interconnectivity and in the utilization of system resources. This paper proposes an approach to mapping the designer/operator risk tolerance level, within the system time horizon, to the size of the corresponding safety margin with respect to the upper bounds on system resource utilization. The proposed approach is based on the following key observations: (a) contagion, being a collective phenomenon, is in effect a phase transition, and (b) discontinuous phase transitions are typically associated with the existence of metastable, i.e., persistent, regimes with unacceptably high aggregate loss. Our approach, which lies within the framework of the Landau theory of phase transitions, accounts for the system's proximity to the boundary of the contagion-free region, the continuous or discontinuous nature of contagion emergence on this boundary, and the time horizon of interest. |
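For readers unfamiliar with the cited framework, a textbook Landau expansion makes the mechanism concrete; the order parameter, coefficients and threshold below are schematic, since the abstract does not specify the paper's actual functional form:

```latex
% Schematic Landau free energy in an order parameter m (e.g. contagion fraction):
F(m) = \tfrac{1}{2}\, a(\theta)\, m^{2} - \tfrac{1}{3}\, b\, m^{3} + \tfrac{1}{4}\, c\, m^{4},
\qquad b, c > 0 .
% The cubic term makes the transition discontinuous (first order). A nonzero
% local minimum of F, i.e. a metastable high-loss regime, exists whenever
% 0 < a(\theta) < b^{2}/(4c), where a(\theta) shrinks as resource utilization
% \theta approaches the boundary of the contagion-free region.
```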
Panelists:
Andreas Bye, IFE, Institute for Energy Technology, Norway
Scott MacKinnon, Chalmers University of Technology, Sweden
Jeff Julius, Jensen Hughes, USA
Mary Presley, Electric Power Research Institute, EPRI, USA
Marilia Ramos, University of California, Los Angeles, USA
Traditional human factors engineering (HFE) and human reliability analysis (HRA) methods were largely developed for analog systems, which leads to several questions. These questions are being addressed across several domains such as nuclear, maritime, and automotive. Panelists will address the following questions:
- Overview of your domain - what are the challenges from regulators and end-users that question the applicability of traditional human factors engineering (HFE) and human reliability analysis (HRA) methods, when these traditional methods are applied to digitalization and automation?
- From the human performance perspective, do we understand the impact of digitization and the digital environment (consisting of digital instrumentation, controls, automation, procedures) on crew and system reliability, and are current methods adequate to reflect that?
- What activities have been taken to address these challenges?
11:10 | Comparison of Hose and Arm Leak Frequencies ABSTRACT. Transferring fuels between ship and shore is a weak link in a carefully managed industry. As traditional marine fuel oil is replaced with cleaner but more hazardous alternatives, such as LNG, hydrogen and ammonia, it is increasingly important to prevent leaks during transfer. One way of minimising transfer leaks might be to replace flexible hoses with articulated loading arms. An assessment of the benefits of such investment would need to estimate the difference in the frequency of leaks between hoses and arms. The importance of this issue has been recognised since the first risk assessments of marine terminals over 45 years ago, yet the available information is very uncertain. This paper compares hoses and arms, and quantifies the uncertainties in their relative leak frequency. Hoses and arms each have advantages and disadvantages, and so they are not necessarily simple alternatives. The paper points out that leak frequencies from applications where arms predominate may not give valid estimates of the benefits of introducing them in applications where hoses predominate. The paper also highlights the importance of the scope of the leak frequency data. Some sources have a narrow scope, considering the hose or arm as components distinct from the surrounding equipment. Other sources have a wide scope, considering the whole transfer operation. The relative leak frequency of hoses and arms may be very sensitive to this scope definition. The paper then reviews twelve sources that offer relative leak frequencies for hoses and arms, which might be used in a comparative risk assessment. It traces their original sources, revealing the extent to which they are based on actual leak data or judgement. It evaluates their quality, including the underlying data, its validity for comparative assessment, and the scope definition. It expresses the results as a probability distribution, representing the uncertainty in the relative leak frequency of hoses and arms. This varies over three orders of magnitude. There is high confidence that arms have lower rupture frequencies than hoses, but the available studies do not agree whether smaller leaks are more likely on hoses or arms. Until better data sources are available, this “wisdom of the crowd” estimate provides the best available understanding of the relative leak frequency of hoses and arms. It shows that uncertainties are likely to be critical in any assessment of the cost-effectiveness of replacing hoses with arms. |
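The aggregation described in the abstract can be mimicked as follows: pool the hose-to-arm leak-frequency ratios reported by different sources into a lognormal uncertainty distribution. The twelve ratios below are invented placeholders spanning three orders of magnitude, not the values from the reviewed studies:

```python
import numpy as np

ratios = np.array([0.5, 1.2, 3.0, 5.0, 8.0, 10.0, 20.0, 40.0,
                   60.0, 100.0, 250.0, 500.0])  # hypothetical hose/arm ratios

log_r = np.log(ratios)
mu, sigma = log_r.mean(), log_r.std(ddof=1)  # lognormal fit in log space

p5, p50, p95 = np.exp(mu + sigma * np.array([-1.645, 0.0, 1.645]))
print(f"median ratio ~ {p50:.0f}, 90% interval ~ [{p5:.1f}, {p95:.0f}]")
```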
11:30 | Safety and reliability as part of sustainability in ocean-based industries PRESENTER: Trond Stillaug Johansen ABSTRACT. In the broadest sense, sustainability refers to the ability of something to maintain or sustain itself over time, without degradation or reduced performance. In practice, sustainability is regarded as covering three dimensions: environment, society, and economy. To some extent, it can be argued that these dimensions correspond to the consequence elements that are traditionally considered in risk assessments: personnel risk, environmental risk and financial risk. Negative consequences for environment, society, and economy are often associated with failures and accidents, where risk assessments can be an important tool to understand and subsequently mitigate the consequences. Safety and reliability can thus be regarded as essential elements in achieving sustainability, at least with respect to unplanned consequences. In this paper, a review is made of sustainability reports from a number of companies representing aquaculture, offshore energy production and shipping. Emphasis is on whether and how safety and reliability play a part in the way sustainability is described and approached. The main findings are that the companies in the chosen industries focus more on safety than on reliability in connection with sustainability reporting, and that offshore energy production is the only sector that includes reliability in its sustainability reports. |
11:50 | Operations Error Analysis of the Use of Electronic Chart Display and Information System PRESENTER: Igor Kozine ABSTRACT. This study constitutes a basis for risk analysis of automated navigation aids on board ships. The complementary set of methods, which together form a complete operational tool, combines post-accident and predictive analyses. The post-accident analysis we use is the Accident Anatomy Analysis, while for predictive hazard identification we use the Action Error Analysis. Cognitive modelling, which we also carry out, allows an even deeper level of analysis detail than action error analysis and allows information transfer between different accident types. The basis for the post-accident analysis and cognitive modelling is accident reports, while the action error analysis is based on procedures and observations. Aiming at an approach applicable to any type of automation and the humans interacting with it, we exemplify the method on one system and analyse groundings involving the Electronic Chart Display and Information System. Among the many results and conclusions, one general finding is that the focus on “operator error” is rather misleading, and that we should instead consider “operations error” probabilities. |
12:10 | What affects the risk of recreational craft users? – A literature review PRESENTER: Christoph A. Thieme ABSTRACT. Many incidents and accidents involving recreational craft occur every year. They result in fatalities, major injuries, and severe material damage. However, there is no complete overview of how many incidents and accidents occur or of the factors leading to them. The Norwegian Maritime Authority (NMA), together with key stakeholders, identified the need for more comprehensive collection and analysis of incidents, accidents and associated risk influencing factors (RIFs). To address this, an integrated data platform for recreational craft accidents, including a risk module to analyse and predict areas with high accident rates, is under development. This paper summarizes the preliminary findings of a literature study on the RIFs associated with accidents involving recreational craft at sea and their outcomes. In total, 59 articles published between 2001 and 2021 were reviewed, of which 35 described relevant RIFs. These articles cover a range of statistics and accident reports on recreational craft. The most often mentioned RIFs relate to the wearing of personal flotation devices, craft type and length, and weather conditions. The most detailed RIFs described in the literature relate to the recreational craft users involved in accidents. The results of this literature study will give input to the development of the data platform on recreational craft accidents in Norway. |
12:30 | Availability Analysis of a Cargo Vessel as an Integrated Subset of Systems PRESENTER: Thomas Markopoulos ABSTRACT. Maritime cargo vessels, especially tankers, bulk carriers and LNG carriers, play an important role in the global economy as a representative segment of international trade and the maritime transportation industry. They present significant technical complexity, since they consist of different systems and auxiliary subsystems onboard. Additionally, they present operational complexity due to operational requirements, their cost structure, quality standards and the environmental directives imposed by international organizations. Evaluating system availability contributes to the optimal employment of the available resources, providing useful inferences concerning the operational use of a cargo vessel. Since the level of availability is closely related to the operational cost of a cargo ship, it significantly affects the decision-making of maritime industry stakeholders. This paper attempts to study and evaluate the availability of a ship as an integrated subset of systems, considering the specific features of each one of them and the potential interactions with the other subsystems onboard. Due to the complexity of the whole system, different scenarios of operational and failure modes are tested, identifying potential weaknesses of the system so that the imposed operational and environmental requirements can be met. |
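As an illustration of the kind of calculation involved, here is a minimal sketch (not the paper's model), assuming steady-state availability A = MTBF / (MTBF + MTTR) for each subsystem, series logic for the vessel's transport function, and purely illustrative MTBF/MTTR figures.

```python
# Minimal availability sketch; all MTBF/MTTR figures are illustrative.
def availability(mtbf_h: float, mttr_h: float) -> float:
    # steady-state availability of a repairable item
    return mtbf_h / (mtbf_h + mttr_h)

subsystems = {                    # hypothetical (MTBF, MTTR) in hours
    "main_engine":  (4000.0, 48.0),
    "power_plant":  (6000.0, 24.0),
    "cargo_system": (8000.0, 36.0),
    "navigation":   (10000.0, 12.0),
}

a_vessel = 1.0
for name, (mtbf, mttr) in subsystems.items():
    a = availability(mtbf, mttr)
    a_vessel *= a                 # series logic: every subsystem must be up
    print(f"{name:12s} A = {a:.4f}")
print(f"vessel (series) A = {a_vessel:.4f}")

# A redundant pair (1-out-of-2), e.g. two independent generators:
a_gen = availability(6000.0, 24.0)
print(f"1-out-of-2 generators A = {1 - (1 - a_gen) ** 2:.6f}")
```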
11:10 | Proportionate Assurance of Smart Devices used in the UK Nuclear Industry PRESENTER: Peter Bishop ABSTRACT. The nuclear power industry makes extensive use of Commercial Off The Shelf (COTS) computer-based smart devices. In many countries, the safety justification of such components includes a demonstration of “production excellence”, which consists of showing that the device was developed according to adequate hardware and software development standards such as IEC 61508. At present, all smart devices must fulfil the same assessment criteria, with no consideration of complexity. This can lead to disproportionately more time, effort and cost being spent on assessments than is needed to provide the required safety justification. This paper proposes a new strategy that calibrates the rigour of smart device assessments for the nuclear industry according to the “simplicity” of the device. We present an approach for categorising smart device simplicity and relate this to the rigour of assessment. For a device to be simple, it needs to be both behaviourally and structurally simple. We propose (mostly quantitative) measures for structural and behavioural simplicity. A device that breaches these measures, or that meets any “no-go” criterion, would be considered complex. We then propose alternative assessment criteria for the justification of smart devices that fulfil our simplicity measures. |
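A hypothetical sketch of such a screening is given below; the measures, thresholds and no-go criteria are invented placeholders to illustrate the two-sided (structural and behavioural) test, not the values proposed in the paper.

```python
# Hypothetical simplicity screening; thresholds and criteria are placeholders.
from dataclasses import dataclass

@dataclass
class Device:
    loc: int                 # structural: source lines of code
    inputs: int              # behavioural: number of input signals
    outputs: int             # behavioural: number of output signals
    configurable_modes: int  # behavioural: user-selectable operating modes
    has_os_or_comms: bool    # "no-go": operating system or network stack

def assessment_category(d: Device) -> str:
    if d.has_os_or_comms:
        return "complex"     # a no-go criterion immediately ends the screening
    structurally_simple = d.loc <= 5000
    behaviourally_simple = (d.inputs <= 8 and d.outputs <= 4
                            and d.configurable_modes <= 2)
    # A device must be BOTH structurally and behaviourally simple.
    return "simple" if structurally_simple and behaviourally_simple else "complex"

print(assessment_category(Device(1200, 4, 2, 1, False)))  # -> simple
print(assessment_category(Device(1200, 4, 2, 1, True)))   # -> complex
```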
11:30 | Current Status and Strategy for the Development of the Korean PSA Standard ABSTRACT. Probabilistic Safety Assessment (PSA) for Korean nuclear power plants (NPPs) started after the TMI accident. Since then, each Korean NPP has had a plant-specific PSA model to assess its risk. To fulfil the objectives of the PSA, it is essential to maintain an appropriate quality of the PSA models, and the PSA standard is the most basic element for ensuring that quality. In early 2010, Korean industry tried to develop a Korean PSA standard based on the ASME PRA standards of the U.S.A. Those standards were not used in actual work, since the Korean regulatory body did not approve them; however, this was not a major issue at the time, as PSA was not a legal requirement. After the Fukushima accident, PSA became a legal requirement in Korea in 2014, when it became an element of the Periodic Safety Review (PSR). In addition, a safety goal was introduced in Korea in 2016, and the Korean regulatory body asked the Korean utility to perform PSAs to support the development of the Accident Management Plan (AMP). After the legalization of PSA, its quality became an important issue in the Korean PSA community. The Korean regulatory body asked the utility to ensure that the quality of the PSA model meets Capability Category II of the ASME/ANS PRA Standard overall. However, there are many cases in which some requirements of the ASME/ANS PRA standards and/or some U.S. practices regarding PSA quality cannot be applied to Korean PSAs, due to differences in regulatory requirements and technical bases between Korea and the U.S.A. From the regulatory requirements point of view, there are issues related to the scope of the PSA. The Korean safety goal introduced in 2016 is based on the 0.1% rule of the U.S.A., but the Korean regulatory body introduced an additional rule related to Cesium-137 (Cs-137): the frequency of accidents that result in a release of more than 100 TBq of Cs-137 must be less than 1.0E-6/year. Thus, in Korea, a full-scope Level 2 PSA is required for the PSR and the AMP. In addition, the scope of the PSA for the licensing of a new NPP is extended to Level 3 PSA. This will be a big challenge, since there is no Level 3 PSA code in the world that has been used for licensing work. From the technical basis point of view, there are various issues. The ASME/ANS PRA Standard was developed for operating light water reactors, but some Korean PSAs are performed during the construction period of NPPs, when some data required for the PSA are unavailable; in this case, there is no way to meet some requirements of the ASME/ANS PRA standards. In addition, there are PSAs for the CANDU NPPs Wolsong 2/3/4 in Korea, and the definition of CDF in Korean CANDU PSA is fundamentally different from that of the ASME/ANS PRA standards. In some cases, there are not enough reliability data in Korea for special events such as common cause failure (CCF). There are also problems related to peer review. In Korea, there is only one utility operating NPPs, Korea Hydro & Nuclear Power Co., Ltd. (KHNP), and one engineering company that performs PSA for KHNP. There are not enough PSA experts in Korea who are independent of KHNP's PSA, so it is not easy to assemble a peer review team. To cope with this situation, the Korean PSA community is trying to develop a Korea-specific PSA standard.
The Korean Nuclear Society has organized a special committee to resolve these issues, and Korean industry has also carried out a project to develop a strategy and roadmap for the development of the new Korean PSA standard. This paper summarizes the current status of and strategy for the development of the Korean PSA Standard. |
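For illustration, a minimal sketch of checking the Cs-137 goal mentioned above: the release-category frequencies and source terms are hypothetical placeholders; only the 100 TBq threshold and the 1.0E-6/year limit come from the abstract.

```python
# Sketch of the Korean Cs-137 safety-goal check: the summed frequency of
# release categories exceeding 100 TBq of Cs-137 must stay below 1.0E-6/year.
# Release-category data below are hypothetical placeholders.
GOAL_FREQ = 1.0e-6      # /year
GOAL_RELEASE = 100.0    # TBq of Cs-137

# (release category, frequency /year, Cs-137 release in TBq) -- illustrative
release_categories = [
    ("RC-1", 3.0e-7, 450.0),
    ("RC-2", 2.0e-7, 150.0),
    ("RC-3", 5.0e-6,  20.0),   # below 100 TBq: excluded from the goal
]

exceeding = sum(f for _, f, rel in release_categories if rel > GOAL_RELEASE)
print(f"frequency of >100 TBq releases: {exceeding:.1e}/yr "
      f"({'meets' if exceeding < GOAL_FREQ else 'violates'} the goal)")
```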
11:50 | Standardizing uncertainty: A document analysis searching for the role of standardization in transforming uncertainty-based risk concepts PRESENTER: Marius G. Vigen ABSTRACT. The constituents of risk have received much attention in risk management circles over the last decades, as there has been a shift from probability-based perspectives towards a focus on events, uncertainties, and consequences. In 2015, the Petroleum Safety Authority Norway altered the risk definition underlying its regulation, now emphasizing uncertainty as a core component of risk. Based on a thematic document analysis of relevant Norwegian standards, we have identified a shift towards subjective risk and knowledge. The word probability has been removed from the newer risk definitions, and the concept of probability itself seems to have a more subjective and less mathematical meaning in these standards compared to the older probability-based definition. The risk concept also seems to address harm that can be enhanced by flaws in the risk assessment itself. We discuss how the risk standards may be both enabling and constraining for the industry's risk understanding and work, with regard to implicit coordination and possible gaps between academics and skilled workers. All in all, the new risk definition in the standards seems more nuanced, but further research is needed to see how it is implemented in practice. |
12:10 | A review of global efforts towards establishing safety directives for intelligent systems PRESENTER: Carmen Mei-Ling Frischknecht-Gruber ABSTRACT. Intelligent systems have entered our lives in a wide variety of domains, ranging from smart homes to stock management, autonomous systems and the military. If we take a closer look at the various industries that apply intelligent technologies, there is hardly any area that does not consider the possibilities and applications of machine learning. When it comes to safety-critical applications, there is an urgent need to critically examine how a model generates an answer and whether this answer can be trusted, with regard not only to interpretability but also to prediction precision. Furthermore, human values must be incorporated into the practical development of AI systems to ensure safe and secure application and use. Despite these needs, due to the nature of deep learning, we are confronted with a black box in the prediction model, which needs to be addressed using interpretability and explainable AI approaches to minimise possible bias and at the same time increase transparency, fairness, justice and inclusion. In order to enhance trust in intelligent systems, accountability, responsibility and robustness must be ensured as well. AI policies and standards need to be put in place to enforce this in practice. We are facing a global challenge here: standards must be set not only at the national but also at the international level, and a common understanding of how to deal with AI at the ethical and legal levels must be found. Since intelligent systems are powerful but also critical technologies whose development is advancing at an enormous pace, action must be taken quickly. Many states and international organisations are working on the development of international standards. A common governance framework would help to strengthen trust in artificial intelligence technology and could be organised by existing international standards bodies. Two key actors in this process are China and the United States. Additionally, the EU has far-reaching plans to consolidate standards for AI. In April 2021, the EU Commission presented a proposal for a regulatory framework on AI, which is intended to provide ethical guidelines for trustworthy AI and a new legal framework. This paper provides an overview of current efforts being made at the international level by governments and global organisations. Further, we discuss current and upcoming challenges and risks posed by intelligent systems. Ethical guidelines and legal frameworks are considered. In particular, the proposed classifications of risk levels and possible mitigation strategies are examined and compared. The latest state of technical feasibility and possible certification to ensure safe, transparent and robust AI systems is examined. In future work, concrete certification approaches and possible technical implementations for safe AI systems that meet the proposed governance frameworks will be investigated. |
11:10 | The importance of the Safety Management System in the prevention of NATECH risks on the Italian territory PRESENTER: Romualdo Marrazzo ABSTRACT. The Seveso III Directive 2012/18/EU, implemented in Italy by the legislative decree “D.Lgs. 105/2015” issued in 2015, obliges the site operator, in identifying the hazards and assessing the major risks of the establishment, to take NATECH risks into account, paying attention to the entire spectrum of natural hazards that may affect the site. The results of the NATECH risk assessment must be considered in the location, design, construction, and operation of the industrial establishment, as well as in the implementation of mitigation measures and emergency planning. The operator should develop appropriate measures to address natural hazards, so as to maintain control of the plants vital to safety and their safe operation. In this sense, the Safety Management System for the Prevention of Major Accidents (SMS-PMA), and its integration with the operational management of the establishment, plays an important role in ensuring the correct implementation of prevention and protection measures against major accidents originating from NATECH events, with specific procedures for extreme weather conditions such as heavy rainfall, lightning, strong winds and extreme temperatures. Starting from the main outcomes of the analysis of some industrial accidents that recently occurred at Italian “Seveso” establishments, where natural hazards were identified as a significant and triggering cause, a specific focus is presented on the main types of plants, infrastructures, and industrial equipment vulnerable to extreme weather conditions. These lessons learned are also useful examples of how organizations could manage these problems, through specific procedures, good practices and methods used to assess industry's response to NATECH issues. Finally, the article describes an in-depth analysis of the NATECH risk of lightning for industrial plants and equipment, starting from the Italian technical regulation, with details of the main dangers caused by lightning to be considered in the risk assessment for the identification of safety-critical elements, as well as the main protection measures for electrical and electronic equipment. |
11:30 | Flood risk identification and analysis for pressure equipment PRESENTER: Antonino Muratore ABSTRACT. In Europe, economic losses due to floods have steadily increased in recent years. Floods are natural phenomena that cannot be prevented, but increasing human settlements and economic assets in floodplains and the reduction of natural water retention by land use, together with climate change, contribute to increasing the likelihood of adverse impacts of flood events. In light of the recent hydrogeological instability phenomena throughout the European area, with particular reference to the exceptional events that occurred on Italian territory in the Lombardy and Sicily regions, we conduct an in-depth study of the aspects related to the management of flood risk in workplaces with pressure equipment. In the presence of pressure equipment, flood risk can lead to the release of dangerous substances and concomitant events such as explosions, toxic dispersions, and surface pollution of water bodies and aquifers. The severity of these accidents is amplified by the possible simultaneous outage of auxiliary mitigation systems designed to contain events or to make the systems safe (fire-fighting systems, evacuation routes, etc.), as well as by the simultaneous failure of several items of equipment. The purpose of the work is to identify and analyze the pressure equipment units in industrial plants that are critical with respect to flood risk, as required for a subsequent risk weighting (i.e., risk evaluation), in order to facilitate the employer's decision-making on the prevention and/or protection measures to be implemented, with the related intervention priorities (risk treatment). |
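As one possible reading of the risk weighting step, a minimal sketch follows, assuming a simple semi-quantitative likelihood times consequence screening over illustrative equipment units (not the paper's method or data).

```python
# Semi-quantitative flood-risk screening sketch; units and 1-5 class scores
# are illustrative placeholders.
units = [                       # (unit, flood likelihood 1-5, consequence 1-5)
    ("LPG storage sphere", 4, 5),
    ("steam boiler",       2, 4),
    ("air receiver",       3, 2),
    ("ammonia condenser",  4, 4),
]

ranked = sorted(units, key=lambda u: u[1] * u[2], reverse=True)
for name, lik, con in ranked:
    score = lik * con           # simple risk score drives intervention priority
    band = "high" if score >= 15 else "medium" if score >= 8 else "low"
    print(f"{name:20s} risk score {score:2d} -> {band} priority")
```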
11:50 | A multidimensional assessment framework for NATECH events related to hydrological disasters in natural gas pipelines PRESENTER: Francisco Filipe Cunha Lima Viana ABSTRACT. Climate change has triggered industrial accidents known as Natech (Natural Hazard Triggering Technological Disasters), drawing the attention of risk management practitioners. In this context, the increasing frequency of intense rain events has posed numerous dangerous scenarios for industrial systems. In natural gas pipelines, impacts depend on the intensity of the event and on the vulnerability and resilience of the system. An immediate impact caused by the action of the water course may cause landslides or floods that affect the pipeline's structure and cause ruptures, holes or deformations. In such cases, it is difficult to estimate an exact failure mode and the interactions caused by a gas leak. However, the possible consequences of such an event pose multiple critical conditions concerning society, the environment and companies. Thus, this work presents a framework for multidimensional risk assessment to support decision-making for natural gas pipelines exposed to hydrological catastrophes. With this approach, it is possible to assess multiple risk factors according to the inherent characteristics of the pipeline and to guide a proper prioritization process for risk mitigation. |
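A minimal sketch of one way such a multidimensional index could be computed, assuming an additive multi-attribute aggregation with hypothetical dimensions, weights and section scores (the paper's actual framework may differ):

```python
# Multidimensional risk index sketch; dimensions, weights and scores are
# hypothetical placeholders, not the framework's calibrated values.
weights = {"human": 0.5, "environmental": 0.3, "financial": 0.2}

sections = {                    # per-dimension scores normalised to [0, 1]
    "S1": {"human": 0.8, "environmental": 0.4, "financial": 0.6},
    "S2": {"human": 0.3, "environmental": 0.9, "financial": 0.5},
    "S3": {"human": 0.5, "environmental": 0.2, "financial": 0.9},
}

def risk_index(scores: dict) -> float:
    # weighted sum over dimensions; higher means more critical
    return sum(weights[d] * s for d, s in scores.items())

for name, scores in sorted(sections.items(), key=lambda kv: -risk_index(kv[1])):
    print(f"section {name}: multidimensional risk index {risk_index(scores):.2f}")
```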
12:10 | Risk management in natural gas pipelines: a hybrid approach for optimum portfolio selection PRESENTER: Ramon Swell Gomes Rodrigues Casado ABSTRACT. Keeping operations safe in many organizations, including natural gas transmission companies, is a challenging issue. In this regard, a crucial task for the managers of these organizations is to deal adequately with the dilemma of investing in improvements by selecting and combining the sections of a pipeline that are the most critical in terms of risk, without exceeding organizational resources. In other words, it is not enough to rank the sections and fund them in descending order of priority until resources are exhausted; organizations need to balance the criticality that a combination of sections offers against their respective constraints. Therefore, this study seeks to contribute to the discussion on selecting a portfolio of strategic sections while tackling multidimensional risks in natural gas pipelines, in a context of mitigating losses. This paper puts forward an estimate of the optimal combination of the most critical sections in response to risks, and a set of optimal solutions based on a consolidated portfolio selection model for natural gas pipelines. A numerical application is used to validate the approach to portfolio selection. The results confirm that this approach ensures an optimal portfolio of sections is chosen vis-à-vis the risks in a natural gas pipeline. Furthermore, it also provides better control for decision-making on managing risks in a pipeline. |
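To illustrate why ranking alone can be suboptimal, here is a minimal 0/1-knapsack sketch with hypothetical costs and risk reductions (the paper's hybrid model is richer than this):

```python
# Portfolio sketch: pick the subset of sections maximising total risk
# reduction within the budget. All figures are hypothetical placeholders.
from itertools import combinations

budget = 100.0
sections = [            # (section, intervention cost, risk reduction)
    ("S1", 70.0, 10.0),
    ("S2", 60.0, 9.0),
    ("S3", 40.0, 8.0),
]

best, best_gain = (), 0.0
for r in range(1, len(sections) + 1):
    for combo in combinations(sections, r):
        cost = sum(c for _, c, _ in combo)
        gain = sum(g for _, _, g in combo)
        if cost <= budget and gain > best_gain:
            best, best_gain = combo, gain

print("optimal portfolio:", [s for s, _, _ in best], "->", best_gain)
# Funding in descending priority order selects only S1 (gain 10.0),
# while the optimum funds S2 + S3 (gain 17.0) within the same budget.
```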
12:30 | A Methodological Framework for the Resilience Analysis of Road Transport Networks Exposed to Freezing Rain PRESENTER: Behrooz Ashrafi ABSTRACT. The road transport network is one of the major critical infrastructures whose resilience must be maintained, since its disruption would not only have direct economic consequences but could also lead to domino effects on other critical infrastructures. This study proposes a methodological framework for analysing the resilience of road transport networks exposed to natural hazards, in particular to freezing rain: a precipitation event in which supercooled droplets of rain freeze upon contact with any surface, creating a glaze of ice and contributing, in particular, to icy road conditions. A numerical example is presented that considers a road transportation network exposed to freezing rain, whose probability of occurrence is estimated using historical meteorological data; the network disruption and recovery are evaluated for the resilience analysis of the selected road transport infrastructure. |
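A minimal sketch of the resilience quantification, assuming network performance is tracked as a normalised curve Q(t) through disruption and recovery and resilience is its time-averaged value over the analysis horizon; the performance profile and the historical freezing-rain frequency are illustrative placeholders.

```python
# Resilience sketch: Q(t) is normalised network performance (1.0 = nominal).
def resilience(q, dt):
    # trapezoidal integral of Q(t), normalised by the horizon length
    area = sum((q[i] + q[i + 1]) / 2.0 * dt for i in range(len(q) - 1))
    return area / (dt * (len(q) - 1))

dt = 1.0  # time step in hours
# Illustrative profile: freezing rain hits at t = 2 h, full recovery by t = 9 h.
q = [1.0, 1.0, 0.4, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0]

freezing_rain_days = 12           # hypothetical count from historical records
p_day = freezing_rain_days / 365  # empirical daily occurrence frequency

print(f"resilience over the horizon: {resilience(q, dt):.3f}")
print(f"empirical daily freezing-rain frequency: {p_day:.3f}")
```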
Plenary Session: Decision science and risk (Simon Wilson, Trinity College Dublin; Emanuele Borgonovo, Bocconi University; Matteo Pozzi, Carnegie Mellon University)
Dedicated to the memory of Prof. Nozer Singpurwalla