ABSTRACT. Digital twins are revolutionizing industries by providing real-time computing, monitoring, and predictive analytics capabilities. However, their success hinges on overcoming significant data and resource management challenges. This keynote will explore four key issues critical to the advancement and scalability of digital twins. First, we will discuss the complexities of real-time data processing within the modern computing continuum, emphasizing the need for seamless integration and efficient resource allocation across distributed systems. Second, we will explore the use of Large Language Models (LLMs) for dynamic verification of the resilience of digital twins, highlighting their potential to enhance adaptability and real-time decision-making. Third, we will examine end-to-end monitoring strategies to ensure data integrity, transparency, and reliability, enabling trust in automated decision processes. Finally, we will address the integration of emerging computational technologies, such as quantum accelerators (e.g., Quantum Brilliance) and neuromorphic chips (like Intel Loihi and BrainChip Akida), at the edge network to accelerate data processing and improve the responsiveness of digital twins. This talk will provide insights into how these advancements can be leveraged to develop robust, scalable, and intelligent digital twin ecosystems, driving innovation and efficiency in real-world applications.
Failure and defect detection of safety critical 3D printed goods
ABSTRACT. The increasing adoption of 3D printing in safety-critical applications, such as the aerospace, automotive, and medical industries, demands stringent quality assurance to prevent failures that could lead to catastrophic consequences. In this work, we propose a novel model-based failure and defect detection system for 3D-printed components using a combination of camera-based monitoring and eddy current sensors. By supervising the printing process in real time, our approach enables early-stage defect detection, allowing for immediate intervention and the termination of faulty prints. This not only reduces production costs by avoiding unnecessary material waste and machine time but also enhances the reliability of printed components by preventing latent defects from compromising structural integrity. Defects that remain undetected could introduce material weaknesses, leading to cracks or complete failure in safety-critical applications. Furthermore, the collected data provides valuable documentation for process validation and quality control. We employ artificial intelligence to analyze sensor data and classify defects, ensuring accurate and automated decision-making. Our results demonstrate that AI-driven multi-sensor monitoring significantly improves defect detection accuracy compared to traditional post-production inspection methods. This approach represents a crucial step toward making additive manufacturing more reliable for safety-critical industries and provides strong guidance for future model-based safety applications.
Model-Based Safety Assessment for Flight Control Systems: Methodology and Case Study
ABSTRACT. Technological advances have increased the complexity of avionics systems, requiring methods to efficiently and accurately derive both quantitative and qualitative safety assessments for certification. To address this challenge, Model-Based Safety Assessment (MBSA) techniques have been developed over the years. In December 2023, the new version of ARP4761A integrated the MBSA formalism into the recommended practices for safety processes, as an alternative to classical safety assessment techniques (e.g. Fault Tree Analysis). The purpose of this article is to present an efficient methodology to support the various safety analyses required by the certification authority. Accordingly, the article's main objectives are to cover probability of occurrence, DAL allocation, independence principles elicitation, and requirements traceability. The example reported is a comprehensive MBSA process for an industrial rotorcraft flight control system: the article follows the architecture description, explains the safety model creation, and comments on the derived results. In the final part of the article, lessons learned from the implementation of MBSA technology in an industrial environment are reported.
Multi-approach based Safety Analysis of a Wastewater Treatment System
ABSTRACT. Wastewater treatment systems are critical for protecting ecosystems and public health, yet the prediction of untreated effluent discharges from wastewater treatment plants (WWTPs) and their impact on plant performance has been largely underexplored. In this work, we investigate the safety and reliability of the wastewater treatment system of one of the largest agglomerations in Paris, France. We are interested in predicting system failures that lead to untreated effluent discharges into the Seine River. The system comprises two WWTPs connected by a bypass channel designed to mitigate untreated discharges from the plant with lower capacity. Despite flow management efforts, both plants frequently face overloading. This is particularly true during heavy rainfall, which increases the risk of discharges. To address this issue, we combine fault tree analysis and machine learning techniques to evaluate vulnerabilities and predict discharges. The approach leverages real-world data and demonstrates strong predictive capabilities despite the challenges of a small dataset. The findings support decision-making efforts to mitigate untreated wastewater discharges and enhance system reliability.
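For independent basic events, the fault-tree side of such an analysis reduces to simple gate probability formulas. The sketch below is only an illustration of that calculation, not the authors' actual model: the event names and probabilities are hypothetical.

```python
from math import prod

def p_and(probs):
    """AND gate: all independent basic events must occur."""
    return prod(probs)

def p_or(probs):
    """OR gate: at least one independent basic event occurs."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical top event: an untreated discharge occurs if either plant
# is overloaded while the bypass channel is unavailable (illustrative numbers).
p_overload_a = 0.10   # plant A overloaded during a storm event
p_overload_b = 0.15   # plant B overloaded during a storm event
p_bypass_down = 0.05  # bypass channel unavailable

p_discharge = p_or([
    p_and([p_overload_a, p_bypass_down]),
    p_and([p_overload_b, p_bypass_down]),
])
print(f"P(untreated discharge) = {p_discharge:.4f}")
```

In practice the machine-learning component would replace the static storm-event probabilities with data-driven predictions conditioned on rainfall.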
Application of a MBSA approach on a representative subsystem of EGNOS (European Geostationary Navigation Overlay Service)
ABSTRACT. RAMS (Reliability, Availability, Maintainability, and Safety) are crucial disciplines in the space industry. As in other fields (e.g. air traffic management, aeronautics, rail, etc.), space systems and systems-of-systems are becoming increasingly complex, leading to increasingly complex safety assessments and a growing difficulty in ensuring the completeness and integrity of analyses. Faced with this increasing complexity, a more sophisticated safety approach is required, which has led to the need to experiment with Model-Based Safety Analysis (MBSA) at Thales Alenia Space.
With the aim of evaluating the interest and added value of an MBSA approach, a benchmark covering the static aspects of two commercially available MBSA tools, CECILIA Workshop (Satodev) and System Analyst (Thales), was performed on a representative subsystem of EGNOS (European Geostationary Navigation Overlay Service), the European SBAS (Satellite-Based Augmentation System).
This article briefly introduces the scope and objectives of the activities performed by Thales Alenia Space. Then, it focuses on the methodology used: the case study, the two MBSA tools, the subsystem modeling principles for both tools, and the evaluation strategy. Furthermore, it presents the results: (i) a comparison of CECILIA Workshop outputs versus System Analyst outputs, and (ii) a synthesis of the tool evaluation matrix. Finally, we conclude with the next steps identified to ultimately implement an MBSA approach on SBAS-type projects.
Safety Analysis Methods in Aerospace: A Case-Based Comparison of FTA and MBSA
ABSTRACT. Safety-critical areas, such as aerospace, require in-depth and rigorous analysis of systems under failure. In accordance with industry standards, complex assessments are created to describe how failures can lead to specific functional failures and to verify compliance with specific certification targets. This article reports on a comparison between two independent methods on which the assessments are based. The first is the well-known Fault Tree Analysis (FTA), the de facto industrial standard; the second is the analysis of the system Failure Propagation Model (FPM) included within the newer paradigm of Model-Based Safety Assessment (MBSA). The objective of this work is to evaluate key parameters to highlight the characteristics of both techniques while integrating them into an industrial process for civil aviation development, in particular during the Preliminary System Safety Assessment (PSSA). A benchmark is provided by analysing a realistic rotorcraft flight control system on which both methods are developed.
MBCA: A Model-Based Approach for Cybersecurity Analysis of Cyber-Physical Systems
ABSTRACT. Cyber-physical systems are increasing in complexity and interconnectivity. Performing cybersecurity risk analysis and identifying all possible attack scenarios manually have become challenging and time-consuming. This is similar to the problem tackled in Reliability, Availability, Maintainability, and Safety (RAMS) studies of complex systems, where Model-Based Safety Analysis (MBSA) has been proposed as a solution.
As a continuation of the research conducted in the PhD thesis [1], we introduce in this paper the Model-Based Cybersecurity Analysis (MBCA) methodology. This novel methodology is inspired by MBSA and provides a practical and interactive approach to representing and automatically computing cyberattack paths (attack sequences) via a model. To illustrate this technique, we model the cybersecurity attributes of a drone and part of its ground control infrastructure using the software SimfiaNeo, which supports the MBSA methodology. The cybersecurity attributes selected for our modelling approach correspond to security measures, vulnerabilities, attackers' actions, and feared safety situations.
We apply MBCA using SimfiaNeo to compute the different attack paths of the considered system architecture. Each sequence consists of the attack source, the target, and the attack path that leads to the feared situation; these elements constitute a threat scenario. We also illustrate the integration of the MBCA approach in Security Risk Assessment (SRA) methodologies and throughout a product development cycle.
Reference
1. T. Serru: Model-Based Security Assessment of Cyber-Physical Systems: Analyzing the Impact of Cyberattacks on Safety Using AltaRica (2023)
Cybersecurity Threat Detection through Business Process Log Analysis
ABSTRACT. Cybersecurity management and orchestration are critical concerns in modern digital environments. Detecting anomalies effectively can mitigate risks and prevent breaches. This paper explores the application of methods and techniques from business process log analysis to detect cybersecurity threats starting from system-level logs generated while using organizational information systems. Until now, cybersecurity threat detection has predominantly relied on identifying anomalies at the technical level. However, an organization's business and operational levels contain rich information relevant to uncovering cybersecurity issues that cannot be detected through technical analysis alone. Business process log analysis provides a data-driven approach to comprehending the actual behavior of systems, enabling the identification of deviations from normal process execution that may indicate potential security threats. We propose a framework integrating process discovery and conformance checking to identify anomalous behavior patterns from system-level logs. A key aspect of our approach is its adaptability to user-defined policies and requirements, which guide the anomaly detection process. In this way, we guarantee that identified anomalies are relevant and actionable within the given context of an organization.
The framework has been applied to real-world scenarios, and we demonstrate its effectiveness in identifying irregular activities.
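A minimal sketch of the deviation-detection idea is to learn the directly-follows relations of normal traces and flag any trace containing an unseen transition. This stands in for the discovered process model and conformance check of the framework; the activity names and event log below are hypothetical.

```python
def learn_transitions(normal_traces):
    """Collect directly-follows activity pairs observed in normal process executions."""
    allowed = set()
    for trace in normal_traces:
        allowed.update(zip(trace, trace[1:]))
    return allowed

def is_anomalous(trace, allowed):
    """A trace deviates if any consecutive activity pair was never seen in normal behaviour."""
    return any(pair not in allowed for pair in zip(trace, trace[1:]))

# Hypothetical event log: one activity sequence per case, e.g. from an ERP system.
normal = [
    ["login", "open_record", "edit", "save", "logout"],
    ["login", "open_record", "save", "logout"],
]
model = learn_transitions(normal)

suspicious = ["login", "open_record", "export_all", "logout"]
print(is_anomalous(suspicious, model))  # unseen transition -> flagged
```

A real deployment would additionally filter the flagged traces against user-defined policies so that only context-relevant anomalies are reported.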
Interpretable and Trustworthy Attack Diagnosis for UAVs Using SafeML
ABSTRACT. Unmanned Aerial Vehicles (UAVs) are increasingly employed in critical applications such as public safety, logistics, and infrastructure monitoring. As their autonomy grows through Machine Learning (ML) model integration, new challenges emerge related to security, reliability, and model interpretability. Cyberattacks such as GPS spoofing and jamming can compromise UAV navigation systems, while ML models often operate as opaque black boxes, limiting operator trust in high-stakes environments. This paper proposes a diagnostic framework based on the SafeML technique to enhance the trustworthiness of ML-driven UAVs. SafeML applies statistical monitoring using the Empirical Cumulative Distribution Function (ECDF) and Wasserstein Distance to detect Out-Of-Distribution (OOD) data and quantify prediction reliability at runtime. The study evaluates multiple ML models, including Random Forest (RF), LightGBM, and XGBoost, on a UAV dataset featuring real-world GPS spoofing and jamming scenarios. Experimental results show that the best models achieve accuracies above 98%, with SafeML effectively identifying low-confidence predictions that correlate with classification errors.
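The statistical core of this monitoring can be sketched in a few lines: compare the ECDF of a runtime feature batch against the training distribution via the 1-Wasserstein distance (the area between the two ECDFs) and flag the batch when the distance exceeds a calibrated threshold. This is a self-contained illustration, not the SafeML implementation; the feature values and threshold are hypothetical.

```python
def ecdf(sample):
    """Empirical cumulative distribution function of a 1-D sample."""
    xs = sorted(sample)
    n = len(xs)
    return lambda x: sum(1 for v in xs if v <= x) / n

def wasserstein_1d(a, b):
    """1-Wasserstein distance between two empirical 1-D distributions,
    computed as the area between their ECDFs."""
    pts = sorted(set(a) | set(b))
    fa, fb = ecdf(a), ecdf(b)
    return sum(abs(fa(x0) - fb(x0)) * (x1 - x0)
               for x0, x1 in zip(pts, pts[1:]))

def flag_ood(train_feature, runtime_batch, threshold):
    """Flag a runtime batch as out-of-distribution if its distance
    from the training data exceeds the calibrated threshold."""
    return wasserstein_1d(train_feature, runtime_batch) > threshold

# Hypothetical GPS-derived feature: in-distribution vs. spoofed values.
train = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
spoofed = [3.0, 3.2, 2.9, 3.1]
print(flag_ood(train, spoofed, threshold=0.5))  # spoofed batch -> flagged
```

In SafeML the distance is further mapped to a confidence level for the classifier's prediction rather than used as a bare binary flag.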
Incorporating failure of Machine Learning in probabilistic safety assessment and runtime safety assurance
ABSTRACT. Machine Learning (ML) models are increasingly integrated into safety-critical systems, such as autonomous vehicle platooning, to enable real-time decision-making. However, ML-based components are inherently imperfect, introducing a new class of failure: reasoning failures, often caused by distributional shifts between operational and training data. Traditional safety assessment methods rely on design artifacts or code to analyse potential failures, but these are not applicable to ML-based components, which learn behaviour from data.
Recently, SafeML was proposed as a technique to dynamically detect distributional shifts and assign confidence levels to the reasoning of ML-based components. Building on this work, this paper introduces a probabilistic safety assessment framework that incorporates ML failures into a broader causal safety analysis using Bayesian Networks (BNs). SafeML is used to detect and probabilistically represent potential ML failures, enabling dynamic safety evaluation and adaptation of intelligent systems using this framework.
We demonstrate this approach in an automotive platooning system incorporating traffic sign recognition. The findings highlight the benefits of explicitly modelling ML failures in safety-critical applications, enhancing system robustness under uncertainty.
Safer Skin Lesion Classification with Global Class Activation Probability Map Evaluation and SafeML
ABSTRACT. Recent advancements in skin lesion classification models have significantly improved accuracy, with some models even surpassing dermatologists' diagnostic performance. However, in medical practice, distrust in AI models remains a challenge. Beyond high accuracy, trustworthy explainable diagnoses are essential. Existing explainability methods have reliability issues: LIME-based methods suffer from inconsistency, while CAM-based methods fail to consider all classes. To address these limitations, we propose Global Class Activation Probabilistic Map Evaluation, a method which analyses the activation probability maps of all classes probabilistically and at the pixel level. By visualizing the diagnostic process in a unified manner, it helps reduce the risk of misdiagnosis. Additionally, applying SafeML enhances the detection of false diagnoses and issues warnings to doctors and patients as needed, improving diagnostic reliability and ultimately patient safety. We evaluated our method using the ISIC datasets with MobileNetV2 and Vision Transformers.
CODIF: Counterfactual data-augmentations for estimating perception influencing factors
ABSTRACT. Deep neural networks (DNNs) have become the state of the art for object detection tasks in autonomous driving systems (ADS). These models often perform safety-critical tasks, such as pedestrian detection and collision avoidance. Therefore, these models must demonstrate an elevated level of dependability within the operational design domain (ODD). Safety analysis requires a causal perspective to understand the effects of perception influencing factors within an ODD. However, the ODD contains complex causal relations that introduce several sources of confounding bias. This makes it difficult to estimate the causal effect of influencing factors within the ODD using associational metrics. Our framework eliminates confounding bias by taking a counterfactual data-augmentation (CDA) approach to estimate the causal effect of perception influencing factors. Our running example of an influencing factor is "half-occlusions" (visibility range of 40%–60%). Our framework describes a process of identifying relevant half-occlusion characteristics and assigning appropriate augmentations. Finally, a comparative analysis is presented between our causal metric and the associational metric, which is based on conditional probability.
The Information Meta Model for Machine Learning IM3L: A Structured Approach to ML Integration in Engineering Systems
ABSTRACT. Machine learning (ML) has become an essential technology in the development of modern software-intensive systems, particularly in safety-critical domains such as autonomous driving. However, despite the maturity of model-driven and software engineering practices in these domains, the integration of ML components often remains unsystematic and poorly aligned with established engineering workflows.
To address this challenge, this paper proposes the Information Meta Model for Machine Learning (IM3L), a conceptual modeling language that supports the structured design of ML components in complex system contexts. IM3L enables engineers to systematically capture and reason about key characteristics of ML-based functionality---including data structure and semantics, class and feature relationships, learning method, and relevant quality metrics---in a way that aligns with established model-driven engineering (MDE) practices. This approach fosters interdisciplinary alignment and establishes a robust foundation for traceability, comparability, and quality assurance within existing MDE practices.
To illustrate the practical application of the proposed approach, the paper presents a representative example utilizing the German Traffic Sign Recognition Benchmark (GTSRB) dataset within a prototypical object detection scenario. The example demonstrates how IM3L can be used to systematically document and structure the critical properties and underlying assumptions of an ML-based system. This facilitates a well-grounded understanding of the system's intended functionality and its integration within the broader system context prior to implementation.
RAGuard: A Novel Approach for in-context Safe Retrieval Augmented Generation for LLMs
ABSTRACT. Accuracy and safety are paramount in Offshore Wind (OSW) maintenance, yet conventional Large Language Models (LLMs) often fail when faced with highly specialised or unexpected scenarios. We introduce RAGuard, an enhanced Retrieval-Augmented Generation (RAG) framework that explicitly integrates safety-critical documents alongside technical manuals. By issuing parallel queries to two indices and allocating separate retrieval budgets to knowledge and safety, we guarantee both technical depth and safety coverage. We further develop a SafetyClamp extension, which fetches a larger candidate pool and "hard-clamps" exact slot guarantees for safety documents. We evaluate across sparse (BM25), dense (Dense Passage Retrieval), and hybrid retrieval paradigms, measuring Technical Recall@K and Safety Recall@K. Both proposed extensions of RAG show an increase in Safety Recall@K from almost 0% in standard RAG to more than 50% in RAGuard, while maintaining Technical Recall above 60%. These results demonstrate that RAGuard and SafetyClamp have the potential to establish a new standard for integrating safety assurance into LLM-powered decision support in critical maintenance contexts.
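The "separate retrieval budgets" idea can be sketched as a merge of two ranked hit lists with a reserved number of top-K slots for safety documents. The function and document names below are illustrative assumptions, not the RAGuard API.

```python
def merge_with_safety_slots(knowledge_hits, safety_hits, k, safety_slots):
    """Reserve safety_slots of the final top-k context for safety documents
    and fill the remainder with knowledge documents; backfill from the
    safety list if the knowledge index returns too few hits."""
    safety = safety_hits[:safety_slots]
    knowledge = knowledge_hits[:k - len(safety)]
    merged = safety + knowledge
    # Backfill with leftover safety hits if the knowledge index under-delivered.
    for doc in safety_hits[safety_slots:]:
        if len(merged) >= k:
            break
        merged.append(doc)
    return merged[:k]

# Hypothetical ranked retrieval results (best first) from the two indices.
knowledge = ["turbine_manual_p12", "gearbox_spec", "blade_repair_guide"]
safety = ["lockout_tagout_procedure", "work_at_height_policy"]
context = merge_with_safety_slots(knowledge, safety, k=4, safety_slots=2)
print(context)
```

Because the safety slots are filled before any knowledge ranking is consulted, Safety Recall@K stays nonzero even when the knowledge index dominates the similarity scores, which mirrors the "hard-clamping" behaviour described above.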