MIMAR2025: 13TH IMA INTERNATIONAL CONFERENCE ON MODELLING IN INDUSTRIAL MAINTENANCE AND RELIABILITY
PROGRAM FOR TUESDAY, JULY 8TH

08:30-09:00 Session R: Reception

Reception & registration

Location: Hall
09:00-09:10 Session W: Welcome

Welcome & Introduction

Location: ROOM A
09:10-09:55 Session K1: Keynote "From theory to practice: the process of optimisation of preventive maintenance policies" - Prof. Shaomin Wu

Abstract: Numerous preventive maintenance policies have been published in the reliability and maintenance literature. Nevertheless, few examples of the application of preventive maintenance policies have been reported. The reasons are various, chiefly that it is notoriously difficult to collect failure data. As a result, many of the maintenance policies developed are divorced from the ground truth and are therefore inapplicable. This talk discusses some possible methods to shed light on such problems. We tackle the challenge from the perspective of estimating system reliability based on sparse data and integrating various uncertainties into the optimisation of maintenance policies. These uncertainties stem from parameter estimation on small samples, model specification, and cost information.

Location: ROOM A
10:00-11:00 Session 1A: Special Session: Reinforcement learning for predictive maintenance optimization
Location: ROOM A
10:00
Deep Reinforcement Learning for Dynamic Imperfect Maintenance of Deteriorating Systems Subject to Shocks

ABSTRACT. In this work, we consider a single-unit system subject to degradation and random shocks. The degradation of the system is modelled using a general degradation path model in which the virtual age of the system is used instead of its calendar age. Random shocks are assumed to be non-fatal and to affect the age of the system by a random amount. We propose a new maintenance strategy considering “do nothing”, “imperfect repair” and “replace” as alternative actions on a deteriorating system. Unlike most existing works on maintenance with imperfect repair actions, we propose a dynamic improvement factor that changes according to the state of the system at maintenance time. A dynamic, state-dependent improvement factor model leads to lower maintenance cost and higher system reliability, since the health state of the system over the entire maintenance horizon is utilised in real time. A Markov decision process (MDP) with a continuous state space is formulated to model the maintenance problem, and a Deep Reinforcement Learning (DRL) algorithm is used to optimize the maintenance policy, where the decision maker is trained by a Deep Q-network (DQN). Numerical and illustrative examples are given to demonstrate the performance and effectiveness of the proposed method.
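A minimal, hypothetical sketch of the ingredients described above: a virtual-age environment with shocks and a state-dependent improvement factor, a small Q-network, and one DQN temporal-difference update in PyTorch. All dynamics, costs and network sizes are illustrative assumptions, not the authors' implementation.

import numpy as np
import torch
import torch.nn as nn

ACTIONS = ["do_nothing", "imperfect_repair", "replace"]
C_REPAIR, C_REPLACE, C_FAILURE = 5.0, 20.0, 100.0
FAIL_LEVEL, SHOCK_RATE, SHOCK_AGE = 10.0, 0.3, 0.5

def step(state, action, rng):
    """One decision epoch of the deteriorating system (illustrative dynamics)."""
    age, _ = state
    cost = 0.0
    if action == 2:                       # replace: back to as-good-as-new
        age, cost = 0.0, C_REPLACE
    elif action == 1:                     # imperfect repair with a state-dependent improvement factor
        rho = 0.5 * min(age / 10.0, 1.0)  # improvement grows with accumulated virtual age
        age, cost = (1.0 - rho) * age, C_REPAIR
    age += 1.0                            # one period of operation
    if rng.random() < SHOCK_RATE:         # non-fatal shock adds a random amount of virtual age
        age += rng.exponential(SHOCK_AGE)
    level = 0.8 * age + rng.normal(0.0, 0.2)   # general degradation path evaluated at the virtual age
    if level >= FAIL_LEVEL:               # failure triggers corrective replacement
        cost += C_FAILURE
        age, level = 0.0, 0.0
    return (age, level), -cost            # reward = negative cost

q_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
target_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(batch, gamma=0.95):
    """Single DQN temporal-difference update on a batch of (state, action, reward, next state)."""
    s, a, r, s2 = (torch.tensor(np.array(x), dtype=torch.float32) for x in batch)
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()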

10:20
Optimal maintenance decision-making for continuously degrading systems with imperfect repairs using Markov decision processes

ABSTRACT. In many practical systems, repair is often a more economical option than direct replacement. However, repairs usually have imperfect effects, as they only stochastically reduce degradation without fully restoring the system. Their effectiveness varies due to factors such as repair quality, component wear, and operational conditions. Considering this stochastic behavior, we develop a condition-based maintenance policy for a continuously degrading system that incorporates imperfect repairs alongside preventive and corrective replacements, reflecting realistic maintenance practices. The system degradation is modeled using an inverse Gaussian process with a general shape function, while the stochastic nature of repair effectiveness is captured by a beta distribution. Instead of relying on a parametric policy representation, we formulate the problem as a Markov decision process to directly learn a state-to-maintenance-action mapping under the long-run average cost criterion. The policy is optimized using dynamic programming with function approximation to efficiently handle the mixed discrete-continuous state space. To evaluate its performance, we conduct a comparative study against alternative maintenance policies, demonstrating the potential cost benefits of adaptive decision-making. Numerical experiments show that a near-optimal policy can be learned while effectively capturing degradation dynamics and repair uncertainties.
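The following toy simulation illustrates, under assumed parameter values, the two stochastic ingredients named in the abstract: inverse Gaussian degradation increments and beta-distributed imperfect-repair effectiveness. The simple control-limit rule stands in for the MDP policy learned in the paper.

import numpy as np

rng = np.random.default_rng(0)
mu, eta = 0.5, 4.0                # IG process: mean rate and scale parameter (assumed)
shape_fn = lambda t: t            # general shape function Lambda(t); linear here
pm_threshold, fail_level = 6.0, 10.0

def ig_increment(dt):
    d_lambda = shape_fn(dt)                              # increment of the shape function
    return rng.wald(mu * d_lambda, eta * d_lambda**2)    # IG(mean, scale) degradation increment

x, t, history = 0.0, 0.0, []
while x < fail_level and t < 50:
    x += ig_increment(1.0); t += 1.0
    if x >= pm_threshold:                                # imperfect repair at the control limit
        b = rng.beta(2.0, 5.0)                           # stochastic repair effectiveness in (0, 1)
        x *= (1.0 - b)                                   # repair removes a random fraction of damage
    history.append((t, x))
print(f"simulated horizon: {t:.0f} periods, final degradation level {x:.2f}")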

10:40
Deep Reinforcement Learning for Dynamic Imperfect Maintenance of Deteriorating Systems Subject to Shocks

ABSTRACT. We consider a single-unit system subject to degradation and random shocks. The system's degradation is modeled using a general degradation path with virtual age, while random shocks affect the system's age. We propose a new maintenance strategy that includes three actions: "do nothing", "imperfect repair," and "replace" with a dynamic improvement factor adjusted based on the system's state at the time of maintenance. A Markov decision process with a continuous state space is formulated to model the maintenance problem. To optimize the maintenance policy, we employ a deep reinforcement learning algorithm, where the decision-maker is trained using a Deep Q-Network.

10:00-11:00 Session 1B: Degradation modelling and reliability assessment
Location: ROOM B
10:00
Reliability Assessment of Multi-state System with Dependent Degradation Process Based on Partially Sampling Monte-Carlo Algorithm

ABSTRACT. With the growing application of complex systems in the industrial sector, multi-state systems (MSSs) have become a widely adopted framework for modelling degrading systems, offering a flexible approach to representing various states of degradation. However, these models face significant challenges in accounting for dependencies between components, complicating their implementation in real-world scenarios. Existing degradation models and their reliability evaluation methods are either time-inefficient or rely on strong assumptions, such as the Markov property, which can lead to inaccurate reliability results. To address these challenges, we propose a novel semi-Markov model that incorporates degradation dependencies and transform it into a multidimensional Markov chain. To overcome the curse of dimensionality, we propose a Partially Sampling Monte Carlo (PSMC) algorithm that efficiently manages large-scale Markov chains for reliability assessment. This approach significantly reduces computation time by sampling only partially in each iteration. Compared to existing Monte Carlo methods, our algorithm demonstrates significantly greater efficiency when handling high-dimensional Markov chains, yielding more accurate reliability results within the same computational time. Additionally, we model the relevant parameters and validate their statistical properties. The effectiveness and efficiency of the proposed method are further illustrated through three simulation-based experiments and a real-world case study of computing networks hosting a large language model.
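For orientation, a plain Monte Carlo baseline of the kind the proposed PSMC algorithm improves upon is sketched below; the three-component structure, exponential lifetimes and mission time are invented for illustration and ignore the degradation dependencies treated in the talk.

import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.05, 0.08, 0.03])       # failure rate of each of 3 components (exponential)
mission = 10.0

def system_works(up):
    return up[0] and (up[1] or up[2])      # component 1 in series with a two-component parallel pair

def one_run():
    lifetimes = rng.exponential(1.0 / rates)
    return system_works(lifetimes > mission)

n = 20_000
reliability = np.mean([one_run() for _ in range(n)])
print(f"estimated mission reliability: {reliability:.4f}")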

10:20
Model-Based Load-Dependent Degradation Modeling For PEM Fuel Cells: A Multi-Health Index Approach Toward Energy Management in Multi-Stack Systems

ABSTRACT. Multi-stack proton exchange membrane fuel cells (PEMFCs) present a promising solution for high-power applications and carbon-free energy. Despite significant advancements in PEMFC technology, durability and cost remain key challenges for large-scale commercialization. One potential approach to mitigating degradation is the optimization of operating parameters, particularly power demand, which is a major influencing factor. While power is generally dictated by the application's requirements, in a multi-stack configuration, it can be dynamically allocated among the stacks. Determining the optimal load distribution requires a reliable degradation model, which remains a challenge due to the system's complexity and the multi-faceted nature of degradation mechanisms. This paper proposes a load-dependent degradation model incorporating two key health indices that contribute to fuel cell performance loss: electrochemical surface area (ECSA) degradation and internal resistance increase. The ECSA is linked to power demand through a platinum dissolution model, while resistance evolution is represented as a load-dependent stochastic process. These indices are then integrated into a fuel cell potential model, enabling a more accurate assessment of degradation dynamics. The proposed model provides a foundation for optimizing load allocation strategies, ultimately enhancing PEMFC lifespan and performance in multi-stack architectures.

10:40
Uncertainty Quantification as a Complementary Latent Health Indicator for Remaining Useful Life Prediction on Turbofan Engines

ABSTRACT. Health Indicators (HIs) are essential for predicting system failures in predictive maintenance. While methods like RaPP (Reconstruction along Projected Pathways) improve on traditional HI approaches by leveraging autoencoder latent spaces, their performance can be hindered by both aleatoric and epistemic uncertainties. In this paper, we propose a novel framework that integrates uncertainty quantification into autoencoder-based latent spaces, enhancing RaPP-generated HIs. We demonstrate that separating aleatoric from epistemic uncertainty and cross-combining HI information is the driver of accuracy improvements in Remaining Useful Life (RUL) prediction. Our method employs both standard and variational autoencoders to construct these HIs, which are then used to train a machine learning model for RUL prediction. Benchmarked on the NASA C-MAPSS turbofan dataset, our approach outperforms traditional HI-based methods and end-to-end RUL prediction models and is competitive with RUL estimation methods. These results underscore the importance of uncertainty quantification in health assessment and showcase its significant impact on predictive performance when incorporated into the HI construction process.

11:20-12:40 Session 2A: Predictive maintenance
Location: ROOM A
11:20
An optimization model for periodic maintenance of industrial equipment with uninterrupted operation

ABSTRACT. Industrial processes in the food and chemical sectors often rely on equipment that runs continuously (24 hours a day), where operational safety depends on effective preventive maintenance to avoid unexpected shutdowns and the consequent losses. Optimization methods play a key role in maintenance decision-making, enabling the development of shutdown plans for various maintenance tasks while ensuring the required level of service for uninterrupted industrial operations. This study focuses on a food manufacturing facility with dozens of machines that process agricultural raw materials into intermediate products for other industries and final products for consumers. These machines operate year-round and require periodic maintenance to prevent breakdowns. Maintenance tasks vary in complexity, ranging from brief procedures lasting a few minutes to extensive operations that take several days. The company faces a significant challenge: planning machine maintenance efficiently without disrupting industrial processes or significantly reducing capacity. To address this, a novel mixed-integer programming model was developed and computationally implemented. It is solved using Gurobi in a few minutes for the smallest instances. The results are promising and provide feasible maintenance plans that can be implemented in practice by the company. Future research includes testing larger problem instances and developing MIP-based heuristics to accelerate solution times and enhance scalability.
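As a rough illustration of the modelling approach (not the authors' formulation), the hypothetical gurobipy snippet below assigns each maintenance task to one week under a weekly downtime capacity; task data, horizon and costs are invented.

import gurobipy as gp
from gurobipy import GRB

tasks = ["T1", "T2", "T3", "T4"]
weeks = range(1, 9)
duration = {"T1": 2, "T2": 8, "T3": 1, "T4": 4}          # hours of downtime per task (assumed)
capacity = 8                                              # available downtime hours per week
cost = {(i, t): duration[i] * (1 + 0.1 * t) for i in tasks for t in weeks}

m = gp.Model("periodic_maintenance")
x = m.addVars(tasks, weeks, vtype=GRB.BINARY, name="x")   # x[i,t] = 1 if task i is done in week t
m.addConstrs((x.sum(i, "*") == 1 for i in tasks), name="assign_once")
m.addConstrs((gp.quicksum(duration[i] * x[i, t] for i in tasks) <= capacity for t in weeks),
             name="weekly_capacity")
m.setObjective(gp.quicksum(cost[i, t] * x[i, t] for i in tasks for t in weeks), GRB.MINIMIZE)
m.optimize()
for i in tasks:
    for t in weeks:
        if x[i, t].X > 0.5:
            print(f"{i} scheduled in week {t}")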

11:40
Risk Assessment of Condition-Based Maintenance Contracts

ABSTRACT. We assess the financial risk incurred by maintenance service providers responsible for delivering preventive and corrective maintenance under a fixed upfront fee agreement. To minimize maintenance costs, preventive maintenance is optimized based on operational condition data. These data, however, are only revealed during the contract and help explain machine heterogeneity in failure behaviour. We propose a method to jointly estimate the failure distribution and optimize the preventive maintenance policy. We account for machine heterogeneity via Bayesian updating of the parameters that govern the failure distribution. The posterior distributions of the failure parameters allow us to build a maintenance cost distribution that reflects parameter uncertainty at each moment in the contract. This is used to estimate the contract's maintenance costs and its risk. We show that including operational data revealed during the contract improves the maintenance risk assessment compared to a one-size-fits-all approach.
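A minimal sketch of the Bayesian-updating idea, assuming a Weibull failure model with known shape: a grid posterior over the scale parameter is refreshed from failures and censored runtimes observed during the contract and then propagated into an expected-cost estimate. All numbers and the cost expression are illustrative assumptions.

import numpy as np
from scipy.stats import weibull_min

beta = 2.0                                    # assumed known shape parameter
etas = np.linspace(0.5, 5.0, 200)             # grid over the unknown scale parameter
prior = np.ones_like(etas) / etas.size

def update(prior, failure_times, censor_times):
    """Posterior over the scale given observed failures and right-censored runtimes."""
    loglik = np.zeros_like(etas)
    for tf in failure_times:
        loglik += weibull_min.logpdf(tf, beta, scale=etas)
    for tc in censor_times:
        loglik += weibull_min.logsf(tc, beta, scale=etas)
    post = prior * np.exp(loglik - loglik.max())
    return post / post.sum()

posterior = update(prior, failure_times=[1.2, 2.1], censor_times=[3.0, 3.0])
# Propagate parameter uncertainty: expected corrective cost over a remaining horizon H
H, c_fail = 3.0, 10.0
expected_cost = np.sum(posterior * c_fail * weibull_min.cdf(H, beta, scale=etas))
print(f"expected corrective cost over the horizon: {expected_cost:.2f}")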

12:00
Towards Automating xGSPN-mBSPN Model Generation for Scalable Fault Diagnostics in Dynamic Systems

ABSTRACT. Efficient fault diagnosis is fundamental to industrial reliability and diagnostic applications, particularly for ensuring system safety and efficient performance. While traditional model-based fault detection approaches have proven effective, they face scalability challenges due to the manual effort required in model construction. This paper introduces an automated framework, integrating extended Generalized Stochastic Petri Nets (xGSPNs) for system modelling with modified Bayesian Stochastic Petri Nets (mBSPNs) for fault diagnosis. A set of novel algorithms is proposed to enable the automatic generation of the xGSPN model from system specifications and derivation of the mBSPN diagnostic module from the xGSPN representation, including the automated construction of input Conditional Probability Tables (iCPTs), required for diagnostic reasoning. These automation processes reduce manual effort, improve model accuracy, and enhance adaptability for large-scale and time-varying systems. The effectiveness of the proposed approach is validated using a water tank level control system, demonstrating its capability in detecting and diagnosing single and multiple faults. The findings contribute to advancing hybrid fault detection and diagnostic methodologies, making them more practical for industrial reliability and fault diagnostics applications.

12:20
Predicting the remaining life of lithium-ion batteries: a frugal data-based approach

ABSTRACT. Predictive maintenance aims to anticipate the end of life of components, thereby optimising human intervention and the use of parts, but it requires constant monitoring. Calculating the Remaining Useful Life (RUL), which estimates the time a system has left to operate satisfactorily, is a crucial stage in predictive maintenance. Classically, there are three approaches for calculating RUL: model-based, data-based, and hybrid methods. Data-based methods, whether statistical or based on neural networks, often require very large quantities of data. In this paper, a frugal approach in terms of the data required is proposed to calculate the RUL of lithium-ion batteries. The method uses a polynomial to approximate capacity over the cycles, with coefficients obtained through least-squares optimization under linear constraints. With an average prediction horizon of 30% remaining lifetime, this method is attractive in terms of computational complexity and the quantity of data required.
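A hedged sketch of such a frugal approach: fit a low-order polynomial to observed capacity under a linear non-increasing constraint and read off the cycle at which the fit crosses an assumed end-of-life threshold. Data, polynomial degree and threshold are synthetic and need not match the paper's exact formulation.

import numpy as np
from scipy.optimize import minimize

cycles = np.arange(0, 60)                         # observed cycles so far
noise = np.random.default_rng(1).normal(0, 0.003, cycles.size)
capacity = 1.0 - 0.002 * cycles - 0.00004 * cycles**2 + noise
eol = 0.8                                         # end-of-life capacity threshold (80 %)
deg = 2
A = np.vander(cycles, deg + 1)                    # design matrix for the polynomial

def sse(coef):                                    # least-squares objective
    return np.sum((A @ coef - capacity) ** 2)

# Linear constraint: the fitted capacity must be non-increasing over the observed cycles
D = np.diff(A, axis=0)
cons = {"type": "ineq", "fun": lambda c: -(D @ c)}
coef = minimize(sse, x0=np.zeros(deg + 1), constraints=[cons], method="SLSQP").x

future = np.arange(cycles[-1], 2000)
pred = np.vander(future, deg + 1) @ coef          # extrapolate the fitted capacity curve
hit = future[pred <= eol]
print("predicted EOL cycle:", hit[0] if hit.size else "beyond horizon")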

11:20-12:40 Session 2B: Special Session: Advancements in Prognostics and Health Management
Location: ROOM B
11:20
Real-Time Monitoring of Nozzle Clogging in Cold Spray Process Using Airborne Acoustic Emission and Data-Driven Prognostics

ABSTRACT. The cold spray process is an emerging solid-state deposition technique that accelerates metallic powder particles to supersonic speeds through a converging-diverging nozzle using a carrier gas. Upon impact with a substrate, the particles form a dense, adherent deposition, making cold spray highly suitable for coating and repair applications. Its key advantages include low-temperature operation, minimal oxidation, and reduced thermal degradation of the substrate, making it particularly attractive for aerospace, automotive, and other industries.

A common problem during cold spray is nozzle clogging. It can occur due to the slow buildup of powder inside the nozzle, which restricts the gas-particle flow and thereby affects the quality of the deposit. Detection of clogging is therefore important for quality assurance. Furthermore, the ability to predict the occurrence of clogging in advance could enable corrective action before part quality is affected. In previous work, the authors showed the potential of airborne acoustic emission (AAE) as a real-time, non-intrusive monitoring technique, providing valuable insights into the process without requiring a direct line of sight with the spray plume. Preliminary experiments showed that the acoustic waves generated during the process contained valuable information related to particle velocity, nozzle positioning with respect to the object, and nozzle clogging. In this work, the analysis of the AAE signals was further developed for the detection and prognostics of nozzle clogging.

Run-to-failure experiments were performed to analyse nozzle clogging progression, reveal its stochasticity and extract relevant features to characterize the different clogging stages: healthy condition, clogging initiation, clogging buildup, and the end of life. A health indicator (HI) was derived from AAE signal features to quantify varying levels of nozzle clogging. This HI was not only used to quantify clogging progression but was also directly linked to the quality of the deposited coating. As clogging began, porosity increased gradually in the deposit, deteriorating its quality.

Data-driven models developed during this work leverage the HI for prognostics, estimating the nozzle’s remaining useful life (RUL) before complete clogging occurs. Combining AE-based monitoring with predictive algorithms is expected to minimize unplanned downtime, reduce material waste by reducing rejected parts, and improve the overall efficiency of cold spray applications. The proposed methodology is validated through further experiments with varying process parameters, assessing its effectiveness in early clogging detection and RUL estimation.

This work highlights the potential of condition-based maintenance in cold spray applications. Early detection of clogging and linking the HI to deposit quality ensure consistent coating performance and optimised operational efficiency.

11:40
Data-driven Prognostics under uncertainty: A comparative study on the state-of-the-art HMMs

ABSTRACT. As systems become more complex, the task of making them safe and reliable without wasting materials and resources proves challenging. To tackle this challenge, the field of Prognostics and Health Management (PHM) is emerging, providing novel modelling techniques to predict the future damage state of these systems and optimize maintenance strategies. A paradigm shift has occurred in the PHM field in recent years, where analytical modelling has been complemented (or entirely replaced) by data-driven modelling. This transition is driven by the growing capabilities of data-driven models and the increasing complexity of systems, which often render analytical methods either inaccurate or computationally prohibitive. Owing to the ever-increasing popularity of ANNs, novel and high-performing models have been devised and applied in PHM. However, an aspect often overlooked when applying ANNs to prognostic tasks is that they are, by nature, deterministic. By contrast, predicting any future value is inherently stochastic. Therefore, any predicted variable needs to be modelled as a random variable to quantify the uncertainty arising from both the process and the prediction itself. For that reason, stochastic models are gaining traction in the PHM field. Hidden Markov models (HMMs) are among the most popular stochastic models for predictive tasks, since they have a rich mathematical formulation for modelling the system's hidden (not directly observable) degradation process while properly accounting for the associated uncertainty. A plethora of modifications to the vanilla HMM have been devised that relax the Markovian assumption (Hidden Semi-Markov Model), use adaptation mechanisms to handle outlying cases in time (AHSMM), or use similarity-based learning to enhance performance (SL-HSMM). Although these extensions enhance predictive capability, they also introduce additional computational costs. This study evaluates the performance and computational efficiency of these advanced HMM variants using real-world experimental data from carbon-fiber-reinforced polymer composite specimens subjected to tension-tension fatigue loading. Acoustic emission data are utilized to predict the remaining useful life (RUL) of the specimens. Multiple HMM-based models are compared in terms of both accuracy and uncertainty quantification. A novel equation for calculating the RUL is also presented and compared with the literature-standard one. This work therefore provides a comprehensive assessment of state-of-the-art Hidden Markov Models for PHM applications.
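As a point of reference, the snippet below fits the vanilla Gaussian HMM with hmmlearn to synthetic acoustic-emission-like features; the HSMM, AHSMM and SL-HSMM variants compared in the talk are not off-the-shelf models and are not shown.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Two run-to-failure sequences of 1-D features with increasing level (synthetic placeholders)
seq1 = np.cumsum(rng.exponential(0.05, size=120)).reshape(-1, 1)
seq2 = np.cumsum(rng.exponential(0.05, size=90)).reshape(-1, 1)
X = np.vstack([seq1, seq2])
lengths = [len(seq1), len(seq2)]

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
model.fit(X, lengths)                         # EM over both run-to-failure sequences
states = model.predict(seq1)                  # most likely hidden degradation-state path (Viterbi)
print("decoded states of run 1:", np.unique(states, return_counts=True))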

12:00
Optimal maintenance planning for offshore wind farms considering time-varying costs and limited manpower

ABSTRACT. With wind energy taking up a bigger share of worldwide electricity production each year and the goal of a fully sustainable global energy landscape by 2050, finding least-cost maintenance programs for wind turbine components becomes increasingly important. In this thesis we analyse the problem of determining which maintenance activities should be conducted at times other than originally planned when available manpower is limited. We consider the period-dependent age replacement policy (p-ARP), block replacement policy (p-BRP) and modified block replacement policy (p-MBRP) to construct least-cost maintenance policies for a single component under time-varying costs. The first two form the groundwork for three algorithms that we propose for the case of multiple components where only a limited number can be maintained simultaneously. First, we propose the Dynamic Maintenance Delay (DMD) heuristic, which deals with delaying preventive maintenance activities. Next, we include bringing maintenance forward by introducing the Dynamic Maintenance Reschedule (DMR) and Static Maintenance Reschedule (SMR) heuristics. We evaluate the performance of these heuristics by means of simulation for eighty identical components. Moreover, we compare the outcomes with the optimal policy in the case of two components.
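The classical stationary age replacement policy that underlies the period-dependent p-ARP can be sketched as follows; the Weibull parameters and costs are assumed, and the paper's time-varying costs and manpower constraints are omitted.

import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import quad

beta, eta = 2.5, 10.0          # Weibull lifetime of the component (assumed)
c_p, c_f = 1.0, 10.0           # preventive vs. corrective replacement cost

def cost_rate(T):
    """Long-run cost rate g(T) = [c_p R(T) + c_f F(T)] / E[cycle length under replacement at T]."""
    R = lambda t: weibull_min.sf(t, beta, scale=eta)
    expected_cycle_length, _ = quad(R, 0, T)
    return (c_p * R(T) + c_f * (1 - R(T))) / expected_cycle_length

ages = np.linspace(1, 20, 200)
rates = [cost_rate(T) for T in ages]
T_star = ages[int(np.argmin(rates))]
print(f"optimal replacement age ~ {T_star:.2f}, cost rate ~ {min(rates):.3f}")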

12:20
Redefining Prognostic Essentials: Focus on Reliability, Robustness and Feasibility

ABSTRACT. Prognostics play a pivotal role in predictive and prescriptive maintenance by forecasting the future health and performance of assets based on their current condition and operational context. While traditional research has focused on enhancing RUL prediction accuracy, this work argues that the feasibility, robustness, and reliability characteristics are equally vital for addressing the demands of modern maintenance strategies. To effectively support maintenance decision-making, prognostic methodologies must meet three critical criteria: feasibility, robustness, and reliability.

Feasibility refers to the ability of a prognostic methodology to function effectively with limited degraded data, as obtaining extensive degradation datasets can be prohibitively expensive. Robustness ensures that prognostics maintain reliable performance across a wide range of operational conditions, including those not encountered during training. Reliability is vital due to the inherent uncertainties in prognostics, arising from factors such as manufacturing variations, unpredictable future loading conditions, and environmental influences.

Motivated by these requirements, this work introduces a novel adaptive similarity-based prognostic methodology inspired by Markov models. The proposed approach will be evaluated and validated against state-of-the-art methods using both simulated and real-world data from the aerospace sector.

14:00-16:00 Session 3A: Special Session: Data-driven decision-making models for predictive maintenance
Location: ROOM A
14:00
Optimizing a maintenance strategy by combining age-based maintenance and an imperfect prognostic fault detection model

ABSTRACT. Age-based maintenance is a popular maintenance strategy where components are always maintained at a certain age or upon failure. There is, however, an increasing interest in predictive maintenance instead. In predictive maintenance, the measurements of the sensors installed around a component are used to develop prognostic models. Unfortunately, these prognostic models are often not perfect, but make errors in their predictions/classifications. By solely planning maintenance based on the outcomes of an imperfect prognostic model, the number of failures, maintenance tasks and maintenance costs might actually increase, compared to age-based maintenance.

However, even an imperfect prognostic model might still provide useful information on the potential failures of the components. In this presentation, we therefore combine an age-based maintenance strategy with the results of an imperfect prognostic fault detection model. This fault detection model aims to raise an alarm when a component becomes unhealthy, but before the component fails. We assume that this model is imperfect, with a known false positive and false negative rate.

With Bayes' theorem, we derive a formula for the probability of a false positive, based on both the false positive rate and the age of the component. We subsequently use classical renewal reward theory to optimize the age at which to preventively replace components, given that we also replace components based on the alarms of the imperfect fault detection model. Lastly, we analyse whether it is beneficial to ignore alarms when the component has just been installed, since the probability of a false positive is high in this phase.

We analyse this approach with a small case study, where we assume a Weibull lifetime distribution for a component. We show how even models with many false positives and false negatives can still contribute to lowering the maintenance costs, if combined with age-based maintenance. We also show how the optimal parameters of this optimization approach indicate whether it is optimal to use a corrective maintenance only, age-based maintenance only, predictive maintenance (based on the alarms) only or a combination of age-based and predictive maintenance.
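A hedged sketch of the Bayes step described above, under an assumed Weibull lifetime and detector error rates; the exact formula and the renewal-reward optimisation in the presentation may differ.

import numpy as np
from scipy.stats import weibull_min

beta, eta = 2.0, 10.0          # Weibull lifetime of the component (assumed)
fpr, fnr = 0.10, 0.05          # false positive / false negative rate of the fault detector

def prob_false_positive(t):
    healthy = weibull_min.sf(t, beta, scale=eta)        # P(component still healthy at age t)
    unhealthy = 1.0 - healthy
    p_alarm = fpr * healthy + (1.0 - fnr) * unhealthy   # total probability of an alarm at age t
    return fpr * healthy / p_alarm                      # Bayes: P(healthy | alarm at age t)

for t in (1, 5, 10):
    print(f"age {t:>2}: P(false positive | alarm) = {prob_false_positive(t):.2f}")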

14:20
On the Value of Predictive Spare Parts Printing as the Second Supply Mode

ABSTRACT. We investigate the use of additive manufacturing (AM) for predictive on-site spare parts printing, leveraging real-time sensor data to optimize decision-making in spare parts provisioning. AM enables the production of spare parts with an extremely short lead time, offering a faster supply solution than regular replenishments from the original equipment manufacturer. The former is, however, more expensive than the latter, which leads to a trade-off. We examine this trade-off within the context of spare parts provisioning for a partially observable degrading system. Specifically, we consider a system equipped with embedded sensors that provide classification data on the condition of a critical component. We then propose a Bayesian procedure to infer from this data and the confusion matrices of the classifiers whether these critical components are nearing failure. Using this Bayesian procedure, a decision maker responsible for the uptime of the system needs to sequentially decide whether to predictively print spare parts, replenish spare parts from the regular supplier, or do nothing. To analyze this sequential decision problem, we formulate a partially observable Markov decision process. Through an extensive numerical study, we explore the optimal conditions under which predictive printing or traditional replenishment is preferred, and we analyze the impact of input parameters on these decisions. Furthermore, we derive sensor performance guidelines based on the performance measures obtained from the confusion matrix, providing practical insights into the effective integration of sensor data in spare parts provisioning. The findings of this study contribute to the understanding of AM’s strategic value in spare parts management, particularly in environments where real-time condition monitoring is available.
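The belief update at the heart of such a partially observable formulation can be illustrated as follows; the confusion matrix, prior and signal sequence are invented for the example.

import numpy as np

# Confusion matrix P(signal | true state); rows = true state {healthy, near-failure}
confusion = np.array([[0.9, 0.1],    # healthy      -> P(normal), P(warning)
                      [0.2, 0.8]])   # near-failure -> P(normal), P(warning)

def update_belief(belief_near_failure, signal):
    """Bayes update of the belief that the component is near failure; signal: 0 = normal, 1 = warning."""
    prior = np.array([1.0 - belief_near_failure, belief_near_failure])
    posterior = prior * confusion[:, signal]
    posterior /= posterior.sum()
    return posterior[1]

b = 0.05                                  # initial belief of being near failure
for signal in [0, 1, 1]:                  # one normal reading, then two warnings
    b = update_belief(b, signal)
    print(f"signal={signal}  ->  P(near failure) = {b:.3f}")
# A threshold on b could then trigger 'print on site' vs. 'order a regular replenishment'.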

14:40
A Probabilistic and Machine Learning Approach to Predictive Maintenance of Railway Tracks

ABSTRACT. Railway track infrastructure is subject to continuous degradation, with Rolling Contact Fatigue (RCF) being a major cause of defects such as squats and cracks. Effective maintenance and frequent inspections are essential to ensure operational safety and cost efficiency. This study introduces a hybrid probabilistic and machine learning approach to model and predict railway track defects. The methodology integrates Negative Binomial and Poisson distributions for defect count modelling, a Random Forest Regressor for predictive accuracy, and Hidden Markov Models (HMM) for degradation state transitions. Real-world railway track data, which include service operations, environmental, and maintenance parameters, are used to validate the models. The Random Forest model outperforms traditional methods (MAE: 0.53, R²: 0.86), while the HMM analysis reveals critical deterioration patterns, with a 76% chance of tracks skipping warning stages to reach a severe condition within one inspection cycle, and an 83% probability of persistent severe degradation. These findings demonstrate the weaknesses of conventional maintenance and support a shift to a predictive strategy that prioritises early intervention in high-risk areas, optimises inspection schedules, and hence improves both safety and cost-efficiency in rail infrastructure management.

15:00
A data-driven robust approach to a problem of optimal replacement in maintenance

ABSTRACT. Maintenance strategies are pivotal in ensuring the reliability and performance of critical components within industrial machines and systems. However, accurately determining the optimal replacement time for such components under stress and deterioration remains a complex task due to inherent uncertainties and variability in operating conditions. In this paper, we propose a comprehensive approach based on Robust Markov Decision Processes (RMDP) to optimize component replacement decisions in machines with one critical component while addressing uncertainty in a structured manner. RMDP offers a robust framework for decision-making under uncertainty, allowing for the modeling of component degradation and variability in operating conditions. Our methodology uses data-driven ambiguity sets, including likelihood-based and Kullback-Leibler (KL)-based ambiguity sets, to capture and quantify uncertainty in the degradation process. We show the mathematical relationship between the KL-based and Likelihood-based ambiguity sets and provide statistical guarantees for the optimal cost. Through computational experiments, we demonstrate the effectiveness of our RMDP approach in identifying the optimal replacement time that minimizes the total maintenance cost while exhibiting greater stability compared to traditional methods.

15:20
Data-Driven Maintenance Optimization for a Unit with a Bivariate Deterioration Process

ABSTRACT. We consider a single-unit system with two condition indicators, i.e., the system deteriorates according to a bivariate deterioration process. We assume that the deterioration process, including its parametric form, is unknown. Instead, we assume that a condition-based maintenance policy has to be specified only based on limited condition data. This makes the approach fully data-driven.

More specifically, we assume that condition data of K runs-to-failure is available. For each run-to-failure, the two conditions are measured periodically, until the system is failed. We use logistic regression to estimate the failure probability for each system state (x1,x2), and based on this we determine in which system states to carry out preventive maintenance. We compare the resulting policy to the oracle policy under the assumption that the exact deterioration process is known, and analyze how the performance of our approach depends on the amount of data that we have.
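A small illustration of this data-driven step, with synthetic stand-ins for the runs-to-failure: logistic regression maps the bivariate state to a failure probability, and a probability limit flags states for preventive maintenance. The data, labels and threshold are assumptions for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))                    # observed condition states (x1, x2)
y = (0.4 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 1, 500) > 6).astype(int)  # 1 = failed shortly after

clf = LogisticRegression().fit(X, y)
candidate_states = np.array([[2.0, 3.0], [6.0, 5.0], [8.0, 9.0]])
p_fail = clf.predict_proba(candidate_states)[:, 1]
do_pm = p_fail > 0.3                                     # preventive-maintenance rule
for s, p, a in zip(candidate_states, p_fail, do_pm):
    print(f"state {s}: P(failure) = {p:.2f} -> {'PM' if a else 'continue'}")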

15:40
From prognostics-based predictive maintenance to end-to-end predictive maintenance for complex systems

ABSTRACT. Large datasets are currently available on the health condition of complex systems. Using machine learning, recent studies have leveraged such data to generate remaining useful life (RUL) prognostics. Here, the focus of the machine learning regressors is on achieving prognostics of high accuracy. Once obtained, these prognostics are usually integrated, in a second stage, into maintenance planning optimisation models. However, aiming for highly accurate prognostics in the first stage does not guarantee that maintenance costs are also minimized in the second, maintenance planning stage. We analyze the benefit of an integrated, end-to-end framework for the predictive maintenance problem, in which we directly estimate maintenance costs from sensor data instead of relying on prognostics, and compare it with the common approach of planning maintenance based on prognostics. The results show that end-to-end predictive maintenance planning yields lower maintenance costs and, overall, lower training computation time.

14:00-16:00 Session 3B: Reliability and maintenance engineering
Location: ROOM B
14:00
Evaluation of Mission Reliability for a System under the Presence of Spare Parts

ABSTRACT. The main objective of this study is to evaluate mission reliability for a system in the presence of spare parts. That is achieved by using a Markov stochastic process and survival signature methodology. The survival signature [1] provides a methodology to evaluate the reliability of systems with multiple types of components.

This study presents an analysis of a system with multiple types of components, supposing that the distribution function of the failure time of components is an Erlang distribution. During the mission, some spare parts are available for some component types to replace failed components immediately [2]. However, when all available spare parts have been used, further failing components cannot be replaced or repaired.

The study introduces the method for a system with just one component type, after which the application to a system with multiple component types is presented. This method can be used as input to make decisions about the number of spare parts available to meet some mission reliability requirements.

References: [1] Coolen, F. P., and Coolen-Maturi, T. (2012). Generalizing the signature to systems with multiple types of components. In Complex systems and dependability (pp.115-130). Springer Berlin Heidelberg. [2] Van Houtum, G. J., and Kranenburg, B. (2015). Spare parts inventory control under system availability constraints (Vol. 227). Springer.
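For a single component type, the survival signature calculation reduces to the binomial mixture below; the sketch assumes a k-out-of-m structure with i.i.d. Erlang lifetimes and does not model the spare-part replacements that are the focus of the talk.

import numpy as np
from math import comb
from scipy.stats import erlang

m, k = 5, 3                         # 3-out-of-5 system of identical components (assumed)
a, rate = 2, 0.5                    # Erlang(shape a, rate) component lifetimes

def survival(t):
    R = erlang.sf(t, a, scale=1.0 / rate)           # component survival function
    phi = lambda l: 1.0 if l >= k else 0.0          # survival signature of a k-out-of-m structure
    # P(T_S > t) = sum_l Phi(l) * C(m, l) * R(t)^l * (1 - R(t))^(m - l)
    return sum(phi(l) * comb(m, l) * R**l * (1 - R)**(m - l) for l in range(m + 1))

for t in (1.0, 4.0, 8.0):
    print(f"mission reliability at t = {t}: {survival(t):.3f}")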

14:20
ReLife: a free and open-source Python library for data-driven decision-making in asset management based on reliability theory

ABSTRACT. In the face of aging infrastructures and the challenges posed by the energy transition, asset managers are tasked with making critical investment decisions influenced by the uncertain lifespans of industrial assets. ReLife is a free and open-source Python library that serves as a comprehensive toolkit aimed at empowering asset managers to make risk-informed decisions. The library provides diverse statistical methods ranging from survival analysis and age replacement policies to advanced reliability models, such as condition-based maintenance under a Gamma deterioration process or optimal replacement of assets subject to minimal repairs driven by non-homogeneous Poisson process dynamics. It also includes formal tools such as renewal-process sampling and a solver for the renewal equation for computing the renewal function and the expected total costs over time, deals with delayed processes, and handles Lebesgue-Stieltjes integration. The library ultimately seeks to assist asset managers in selecting maintenance policies that minimize the expected equivalent annual costs, thereby facilitating decisions that reduce socio-economic impacts. As a free and open-source project, ReLife aims to serve as a platform for collaboration among reliability professionals, researchers, and practitioners.

14:40
Wireless real-time wear liner monitoring for predictive maintenance and life-cycle cost optimization in mineral processing transfer chutes

ABSTRACT. Unscheduled downtime caused by transfer chute failure in mineral processing operations often results in major production loss, a significant increase in maintenance cost, and potential safety incidents, given the hazardous environments these chutes may be located in and the human intervention required for repair. This paper presents an innovative method for wirelessly monitoring liner wear condition in mineral processing transfer chutes in real time. The objective is to optimise the transfer chute's lifespan and reduce maintenance cost through an effective predictive maintenance strategy that improves the reliability of wear-liner thickness measurement and minimises safety risks while enhancing plant availability. The suggested method relies on the integration of wireless sensors that measure chute liner thickness and report on wear rate and crack characterization, particularly for Bisalloy, Al2O3 and ZTA (Zirconia Toughened Alumina) tiles. These sensors send the collected data to a centralised platform in real time for chute condition analysis, thus facilitating early detection of wear and anomalies. A simulation based on a stress-strength interference model was conducted to interpret the impact of transferred material particles on the chute liner surfaces, identifying wear patterns and predicting liner degradation over time. The results show that the implementation of this wireless real-time liner condition monitoring has the potential to significantly improve the efficiency of maintenance practices and extend the life of mineral processing transfer chutes.
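A toy version of the stress-strength interference calculation mentioned above, assuming normally distributed stress and residual strength with invented parameters.

from math import sqrt
from scipy.stats import norm

mu_strength, sd_strength = 120.0, 10.0   # residual liner strength (arbitrary units, assumed)
mu_stress, sd_stress = 90.0, 15.0        # stress induced by transferred material (assumed)

# R = P(strength > stress) = Phi((mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2))
reliability = norm.cdf((mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2))
print(f"instantaneous liner reliability: {reliability:.3f}")
# Feeding a wear-dependent mu_strength(t) from the sensors into this formula would yield a
# degradation-aware reliability curve that can trigger liner replacement.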

15:00
Evaluating Maintenance Strategies for Locomotive Wheelsets Using Petri Net-Based Modelling Approach

ABSTRACT. Effective locomotive fleet management is crucial for ensuring the safe, reliable and cost-efficient operation of railway systems. One of the primary maintenance tasks in railway transportation is the condition monitoring and management of rolling stock wheelsets, which directly influence operational safety, ride comfort, and maintenance costs. This paper proposes a Petri net-based approach that models the state transitions of wheelset flange and tread degradation in order to evaluate different maintenance strategies for locomotive wheelsets. The methodology aims to enhance decision-making regarding maintenance intervals, minimize locomotive downtime, and reduce maintenance costs, while maintaining high safety and reliability standards. A key feature of the proposed approach is the ability to simulate different maintenance strategies for the locomotive wheelsets and evaluate their impact on fleet performance metrics, such as availability, reliability, and life-cycle costs. By varying the inspection and maintenance policies within the simulation, the optimal strategy that balances safety, cost, and operational efficiency can be identified. Another important outcome is that the proposed model allows the safety risks to be evaluated when the locomotive wheelset condition falls outside the limits established by the maintenance standard and policy. Additionally, the model can be used as a decision-support tool that helps to evaluate the trade-offs between different maintenance policies and select the most cost-effective strategy.

15:20
Reliability Prediction for Combined Hardware-Software Systems Using Survival Signature

ABSTRACT. The increasing integration of hardware and software in modern safety-critical systems has increased the need for accurate reliability prediction models to prevent catastrophic failures. Several reliability approaches have been developed in the literature to evaluate the performance of combined hardware-software systems based on different assumptions using various statistical and probabilistic methods [1]. However, there remains a need to develop models that can be applied to a wide range of such systems.

This research presents a unified framework for predicting the reliability of a combined hardware-software system using the survival signature concept [2]. A key advantage of this approach is its applicability to large systems and networks, as it can handle multiple types of components without requiring the assumption that their failures are independent and identically distributed. This makes it particularly well-suited for real-world systems that integrate hardware and software components. The methodology enables system reliability evaluation by examining how the system’s structure and component characteristics affect overall system performance.

The survival signature methodology is implemented in this research to predict the reliability of a system consisting of n subsystems, where each subsystem comprises one hardware component and one software module, and the system requires at least k functioning subsystems to function. The analysis integrates system failure diagnosis to determine whether failures originate from hardware or software components, using diagnostic equations derived from the survival signature approach. This enables the evaluation of failure propagation and the identification of the component types with the greatest impact on reliability. Initial results demonstrate the effectiveness of this unified approach in predicting system reliability by incorporating both hardware and software failures, providing a foundation for improved system design and maintenance strategies.

References [1] Sourav Sinha, Neeraj Kumar Goyal, and Rajib Mall. Survey of combined hardware-software reliability prediction approaches from architectural and system failure viewpoint. International Journal of System Assurance Engineering and Management, 10:453–474, 2019. [2] Frank P. A. Coolen and Tahani Coolen-Maturi. Generalizing the signature to systems with multiple types of components. In Complex Systems and Dependability, pages 115–130. Springer, 2012.

16:20-18:00 Session 4A: Special Session: Reliability, maintenance and resilience modelling of distribution systems
Location: ROOM A
16:20
Decision Support Tool for Optimizing Performance and Recruitment in Football

ABSTRACT. Modern football increasingly relies on data analysis and technological advancements, particularly artificial intelligence (AI) and machine learning (ML), to optimize performance, strategies, and player management. Recent research highlights various applications of these technologies, ranging from match result prediction to injury prevention, player performance evaluation, and collective strategy optimization.

Match result prediction is a key area where researchers leverage historical data. Rodrigues and Pinto utilized models like random forests to analyze data from past matches, achieving an accuracy of 25.26% [1]. Anfilets, using deep multilayer neural networks, improved this accuracy to 61.14% [2]. Other works, such as those by Bomao, have applied several ML models to predict football match outcomes and scores. In-depth data analysis reveals that home teams have a significantly higher win rate compared to away teams [3]. These studies demonstrate progress but also highlight the limitations of these models in addressing football's unpredictability, which stems from factors such as player psychology, referee decisions, and environmental conditions.

At the same time, injury prevention has become a major concern for clubs. Majumdar et al. developed models using physiological data to assess the risk of fatigue and injuries, enabling better training management [4]. FatigueNet, a model based on GPS data, integrates deep learning to predict player fatigue, thereby reducing injury risks while maintaining optimal performance [5]. These approaches not only help teams safeguard player health but also ensure that players perform at their peak during crucial matches, emphasizing the role of technology in modern sports science.

Research also includes performance evaluation and estimating players' market values. Mustafa and Sakir proposed a method using machine learning to predict players' market value by incorporating objective data such as match performance [6]. Zanganeh et al. applied transfer learning techniques to improve these predictions, demonstrating increased reliability in evaluating player value [7]. These advancements allow clubs to make more informed decisions during player transfers, maximizing return on investments.

Finally, other studies focus on optimizing collective strategies. Andriyanov compared player clustering models based on Gaussian mixtures and neural networks, revealing superior performance for Gaussian approaches [8]. Enhanced tracking systems now provide real-time data on player movements, which, when coupled with advanced models, offer coaches valuable insights to refine team strategies and adapt during matches.

These studies highlight the potential of AI and ML to revolutionize football by providing powerful tools to analyze, predict, and enhance performance. In this regard, our work aims to address the challenges faced by coaches in player selection by developing a decision support tool for coaches, analysts, and recruiters. This tool will analyze both individual and collective performances, optimize tactical strategies, strengthen a team's weak areas, and facilitate the identification of ideal player profiles for recruitment. This approach enhances match efficiency, optimizes competitive performance, and significantly improves decision-making precision.

References: [1] Rodrigues, F., & Pinto, Â. (2022). Prediction of football match results with machine learning. Procedia Computer Science, 204, 463-470. [2] Anfilets, S., Bezobrazov, S., Golovko, V., Sachenko, A., Komar, M., Dolny, R., ... & Osolinskyi, O. (2020). Deep multilayer neural network for predicting the winner of football matches. International Journal of Computing, 19(1), 70-77. [3] Pan, B. (2025). Explore machine learning's prediction of football games. In ITM Web of Conferences (Vol. 70, p. 04005). EDP Sciences. [4] Majumdar, A., Bakirov, R., Hodges, D., Scott, S., & Rees, T. (2022). Machine learning for understanding and predicting injuries in football. Sports Medicine-Open, 8(1), 73. [5] Kim, J., Kim, H., Lee, J., Lee, J., Yoon, J., & Ko, S. K. (2022). A deep learning approach for fatigue prediction in sports using GPS data and rate of perceived exertion. IEEE Access, 10, 103056-103064. [6] Al-Asadi, M. A., & Tasdemir, S. (2022). Predict the value of football players using FIFA video game data and machine learning techniques. IEEE Access, 10, 22631-22645. [7] Zanganeh, A., Jampour, M., & Layeghi, K. (2022). IAUFD: A 100k images dataset for automatic football image/video analysis. IET Image Processing, 16(12), 3133-3142. [8] Andriyanov, N. (2020). Comparative analysis of football statistics data clustering algorithms based on deep learning and Gaussian mixture model. In CEUR Workshop Proceedings (Vol. 2667, pp. 71-74).

16:40
Optimal manufacturing-remanufacturing-transport planning in a low-carbon supply chain: Incorporating a carbon tax strategy

ABSTRACT. This study aims to develop a Mixed-Integer Linear Programming (MILP) model to optimize a manufacturing-remanufacturing-transport supply chain. The model considers raw material procurement, transportation costs, inventory capacities, and vehicle constraints. While most studies overlook raw material procurement and transportation strategies, inventory management is affected by the quantities of new and remanufactured products, ordered raw materials, and finite stock capacities in upstream and downstream buffers. This study addresses a realistic scenario where new and remanufactured items are treated distinctly, leading to differentiated manufacturing, transportation, and warehousing plans. The proposed integrated model considers raw material orders, transportation, production, and returns of used products. The model optimizes raw material procurement, manufacturing, remanufacturing, transportation, and returned product handling under a carbon tax strategy, using the carbon footprint as a sustainability metric. The profit-maximizing mathematical model, implemented in CPLEX, is validated with numerical results, demonstrating its effectiveness in optimizing MRSC systems.

17:00
Joint maintenance, mission abort and repairpersons assignment optimization problem in systems under random operating environment

ABSTRACT. Despite the increasing number of studies dealing with mission abort policies (MAPs), very few references have considered the impact of maintenance on mission-abort decisions. This paper presents a novel model to jointly optimize selective maintenance scheduling and mission-abort decisions for mission-critical systems. The system is assumed to operate in an uncertain environment, which impacts its performance. Given that maintenance resources are limited, this paper develops a new integrated optimization approach in which the classical selective maintenance problem (SMP) is extended to include mission-abort policies. In this approach, selective maintenance and mission-abort decisions are integrated into a single optimization model. A mixed-integer non-linear programming model is formulated to minimize expected total maintenance, mission failure, and penalty costs. A solution method is developed, and numerical experiments are then conducted to validate the proposed model and highlight its added value.

17:20
Optimizing budget allocation for multi-mission selective maintenance planning

ABSTRACT. Mission-critical systems in sectors such as aerospace, defence, transportation, petrochemistry, and power generation require high reliability to prevent failures causing major economic losses, environmental damages, and safety risks. For such systems, solving the selective maintenance problem (SMP) yields optimal maintenance planning decisions during scheduled breaks. Its extension, the multi-mission SMP (MMSMP), focuses on optimizing component maintenance, maintenance levels, and repairperson assignments over multiple consecutive missions interspersed with maintenance breaks. While recent advances integrate predictive, resource-constrained, and fleet-wide strategies, they rely on the unrealistic assumption of fixed budgets, ignoring the reality of fluctuating and tight financial constraints faced by planners. This study investigates how different maintenance budget allocations across missions affect system performance. Using a two-phase decomposition model and binary integer programming, it explores various budget distribution strategies: uniform, linearly increasing, and inverted-V. The goal is to determine how allocating resources differently across missions can enhance asset reliability within fixed budget limits. The findings aim to guide maintenance planners in making budget decisions to improve overall system reliability while balancing resource constraints.

17:40
Reliable Dual-Channel Supply Chain: Integrating Leasing, Remanufacturing, Maintenance and Pricing Policies

ABSTRACT. The rapid growth of e-commerce has significantly changed consumer purchasing behavior and preferences, pushing companies to adapt their supply models. Traditional single-channel supply chains have become inadequate, prompting businesses to integrate online platforms alongside their physical stores to survive in a competitive environment. This shift has led to the widespread adoption of dual-channel supply systems, allowing companies to expand their customer base and enhance market reach. Alongside direct selling, leasing has emerged as a viable alternative, offering customers flexible acquisition options. However, the management of leasing, remanufacturing and maintenance policies requires a well-defined strategy to ensure the operational efficiency, reliability and sustainability of the different activities of the supply chain.

This study focuses on optimizing operational strategies in dual-channel supply chains, particularly regarding pricing, remanufacturing, selling, leasing, maintenance and warranty. By simultaneously considering these activities and services, the studied system can describe numerous real cases of manufacturing and remanufacturing businesses in a dual-channel supply chain. This work aims to provide insights and support for managers and decision-makers by examining how integrating reverse logistics and services such as repair and warranty into these systems can improve efficiency, sustainability, and reliability.

The research examines a dual-channel supply chain comprising a Manufacturer, an Online platform and a Retailer. Both distribution channels sell new products and propose after-sales services such as repair and warranty. In addition, the Retailer leases products for a fixed duration and ensures their proper functioning by carrying out maintenance actions during the leasing period. In this study, the Retailer performs periodic imperfect preventive maintenance directly on the leased products, and corrective maintenance characterized by minimal repair in case of failure. At the end of the leasing contract, the products are returned for refurbishment, which restores them to an as-good-as-new state so that they can be resold or leased again. Maintenance services are thus integrated to enhance customer satisfaction and extend product lifespans. The Manufacturer is responsible for manufacturing new products while also managing the refurbishment of leased products and the remanufacturing of products returned at the end of their life cycle. In each period, new products are obtained by directly transforming raw materials and by integrating remanufactured subcomponents recovered from collected products, which reduces manufacturing costs.

The main focus of this work is to develop pricing policies over a multi-period planning horizon for the selling and leasing processes, taking into account customer preferences, market sensitivities and demands, and the costs of the proposed services. The optimization of the remanufacturing and maintenance strategies ensures sound management of the different costs, enabling competitive pricing that attracts more customers while maintaining profitability, sustainability and reliability. The leasing prices are also optimized for each period based on market conditions and maintenance costs, ensuring that leasing remains an attractive option compared to direct purchase.

16:20-18:00 Session 4B: Fault detection and diagnosis
Location: ROOM B
16:20
From knowledge graphs to probabilistic models for system-level diagnostics

ABSTRACT. The increasing complexity of high-tech systems poses a significant challenge on service organizations tasked with the timely identification of the root causes of unexpected downtimes. While data-driven methods are effective for diagnosing frequently occurring issues, or those affecting a large number of systems, rare issues suffer from data scarcity, necessitating alternative approaches. This paper presents a two-step model-based methodology that leverages system architecture information to support diagnosing of systems for which little data is available. Firstly, system design and observability information, including diagnostic tests, is captured in a knowledge graph. Secondly, the knowledge graph is queried and transformed into a probabilistic graphical model. This is fully automated using transformation rules based on the ontology underlying the knowledge graph. The probabilistic graphical model then infers the most likely causes of failure using measurement data, and guides service engineers by suggesting cost-effective diagnostic actions. This paper outlines the proposed methodology, demonstrates its application on a small example system, and reports early-stage validation findings from high-tech system cases.
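A tiny, hypothetical example of the kind of probabilistic graphical model such a knowledge graph could be compiled into, written with pgmpy; the structure, probabilities and variable names are invented, whereas the paper derives them automatically via transformation rules.

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("PowerSupplyFault", "TestVoltage"), ("SensorFault", "TestVoltage")])
cpd_ps = TabularCPD("PowerSupplyFault", 2, [[0.98], [0.02]])   # rows: P(no fault), P(fault)
cpd_sf = TabularCPD("SensorFault", 2, [[0.95], [0.05]])
cpd_test = TabularCPD(
    "TestVoltage", 2,
    # columns: (PS ok, S ok) (PS ok, S bad) (PS bad, S ok) (PS bad, S bad); rows: pass, fail
    [[0.99, 0.30, 0.10, 0.05],
     [0.01, 0.70, 0.90, 0.95]],
    evidence=["PowerSupplyFault", "SensorFault"], evidence_card=[2, 2])
model.add_cpds(cpd_ps, cpd_sf, cpd_test)
assert model.check_model()

infer = VariableElimination(model)
posterior = infer.query(["PowerSupplyFault", "SensorFault"], evidence={"TestVoltage": 1})
print(posterior)   # most likely root cause(s) given a failed voltage test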

16:40
Comparative Performance of Machine Learning Architectures for Fault Detection and Diagnosis in Chemical Processes

ABSTRACT. Fault Detection and Diagnosis (FDD) is crucial for both maintenance and control in chemical industries, where early fault detection can prevent costly failures and optimize operations. This is particularly critical in pilot units like those at IFP Energies Nouvelles (IFPEN), which operate under short-term experimental conditions with frequently varying operational parameters. This study conducts an extensive benchmarking analysis using the Tennessee Eastman Process (TEP), a widely used simulated chemical process dataset featuring numerous continuous operations with different sensors and faults, to evaluate multiple approaches for fault detection and diagnosis. Several methods were implemented and compared, including Multi-Scale PCA (MS-PCA), AutoEncoder, Ensemble Learning, and LSTM models for fault detection, alongside Random Forest, XGBoost, and BLSTM (Bidirectional LSTM) for fault diagnosis. Using the TEP dataset, our results demonstrate that Ensemble Learning achieves detection rates ranging from 80% to 100% across various fault scenarios for the fault detection task. For the fault diagnosis task, BLSTM achieved a diagnosis accuracy of 98.76%. The study reveals that ensemble-based approaches consistently outperform individual models in handling the complex, multivariate nature of chemical process data, due to their robustness in combining multiple perspectives, comprehensive data capture, and localized detection capabilities. Furthermore, the superior performance of the BLSTM is due to its ability to capture both past and future temporal dependencies in the sequential data, which is particularly important in chemical processes where fault patterns may manifest with complex temporal relationships. This research contributes to the field of Fault Detection and Diagnosis by providing empirical evidence for the effectiveness of ensemble methods and Bidirectional LSTMs in addressing industrial FDD applications on chemical processes.
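As an illustration of the BLSTM architecture used for the diagnosis task, the following Keras sketch stacks two bidirectional LSTM layers over sliding windows of sensor readings; the window length, layer sizes and class count are placeholders, not the configuration benchmarked in the study.

import numpy as np
from tensorflow.keras import layers, models

WINDOW, N_FEATURES, N_CLASSES = 50, 52, 21   # placeholder dimensions, not the study's setup

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # forward and backward temporal context
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),                  # one output per fault class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for windowed TEP sensor measurements and fault labels.
X = np.random.randn(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)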

17:00
Federated Multi-source Domain Adaptation via Barycenter for Intelligent Fault Diagnosis of Machine Groups

ABSTRACT. Machine groups play a pivotal role in modern industries such as wind farms and flexible production factories. These systems enable collaborative production through multiple interconnected machine nodes, promoting enhanced efficiency and quality in swarm production. Ensuring the reliable operation of these nodes is vital, making intelligent fault diagnosis a cornerstone for maintaining their health and functionality. The rapid development of deep transfer learning has further revolutionized fault diagnosis by enabling the transfer of knowledge from well-studied machine nodes (source domains) to less-explored nodes (target domains). This eliminates the need for extensive retraining on the target domain and underscores the importance of improving model generalization in fault diagnosis for machine groups. However, machine group nodes often exhibit unique characteristics and varying degradation patterns, creating significant challenges. In many cases, multiple source domains are available, but none individually offers sufficient, highly transferable knowledge to the target domain. Multi-source transfer learning has emerged as a promising solution, focusing on extracting domain-invariant features from multiple sources and applying them to the target domain. Techniques such as maximum mean discrepancy (MMD) and generative adversarial networks (GANs) have been employed to address domain discrepancies, enhancing the generalization of diagnosis models across diverse domains. Despite their effectiveness, existing multi-source transfer methods often assume centralized domain data for distribution adaptation. This assumption is impractical in real-world engineering scenarios due to two primary challenges. First, data privacy concerns hinder data centralization, as monitoring data often contains sensitive information, including production quality metrics and manufacturing processes. This is particularly relevant for machine nodes belonging to different organizations, where sharing sensitive data is restricted by competitive and privacy considerations. Second, the high costs associated with data centralization, including the need for high-bandwidth industrial internet and powerful data centers to process large volumes of data, conflict with the cost-efficiency goals of most enterprises. Federated learning offers a promising alternative, enabling global model training across multiple clients without requiring centralized data. Instead, lightweight, encrypted data, such as model parameters, is aggregated by a central server and shared with clients for further training. This architecture preserves data privacy while reducing transmission and computational costs. Federated learning approaches have recently been explored in multi-source transfer learning for fault diagnosis. For instance, adversarial networks have been employed to align domain distributions, while intermediate distributions or public datasets have been used to bridge domain gaps. However, existing federated methods face two significant limitations. First, most methods focus on marginal distribution alignment while neglecting conditional distributions in decentralized settings. This oversight is particularly problematic in scenarios involving distinct but related machines, where conditional distribution discrepancies remain unresolved. Second, the intermediate distributions used in prior works are often fixed or unrelated to the given domains, making them unsuitable for dynamic domain adaptation during federated training.
To overcome these challenges, we propose a dynamic barycenter bridging network (DBBN) for federated multi-source transfer fault diagnosis in machine groups. DBBN introduces a federated learning framework that utilizes a central server to collect lightweight distribution parameters (e.g., means and covariances) from multiple source domains instead of raw data. Using these parameters, a dynamic distribution barycenter is calculated through fixed-point iteration and serves as an intermediate distribution to bridge diverse machine nodes. This barycenter is broadcast to all domain nodes, enabling targeted alignment of both marginal and conditional distributions during collaborative training. Unlike the static intermediates used in previous methods, DBBN dynamically adjusts the barycenter throughout the training process, ensuring effective adaptation to domain shifts. The contributions of this work are twofold. First, we propose a federated multi-source transfer learning framework, called DBBN, to facilitate lightweight and secure diagnosis knowledge transfer in machine groups. DBBN achieves directional alignment of multiple domain distributions by transmitting extremely lightweight data, such as distribution means and covariances, rather than high-dimensional features or raw data. Second, we introduce the Wasserstein barycenter as an intermediate distribution to bridge gaps between domains. By dynamically adjusting the barycenter during training, DBBN ensures its proximity to all domains, improving model generalization and transferability. Experimental results on a multi-source transfer fault diagnosis case using machine-used bearings demonstrate DBBN’s ability to improve adaptation accuracy and achieve superior fault diagnosis performance.
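Assuming each source domain summarizes its features as a Gaussian, the barycenter step can be sketched as the standard fixed-point iteration for the 2-Wasserstein barycenter of the transmitted (mean, covariance) pairs; this is an illustration of that step only, with made-up inputs, and not the DBBN implementation.

import numpy as np
from scipy.linalg import sqrtm

def gaussian_wasserstein_barycenter(means, covs, weights, n_iter=50):
    # Weighted 2-Wasserstein barycenter of Gaussian distributions via fixed-point iteration.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    bary_mean = sum(w * m for w, m in zip(weights, means))   # barycenter mean is the weighted average
    S = np.mean(covs, axis=0)                                # initial guess for the barycenter covariance
    for _ in range(n_iter):
        root = np.real(sqrtm(S))
        inv_root = np.linalg.inv(root)
        T = sum(w * np.real(sqrtm(root @ C @ root)) for w, C in zip(weights, covs))
        S = inv_root @ T @ T @ inv_root                      # fixed-point update of the covariance
    return bary_mean, S

# Two hypothetical source-domain feature distributions (means and covariances only).
means = [np.array([0.0, 0.0]), np.array([1.0, 2.0])]
covs = [np.eye(2), np.array([[2.0, 0.3], [0.3, 1.0]])]
print(gaussian_wasserstein_barycenter(means, covs, weights=[0.5, 0.5]))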

17:20
Low-Power Runtime Monitoring for Hardware Based on Time-Sensitive Behavioral Contracts

ABSTRACT. Embedded systems are vital to modern life and their reliable operation is crucial. While validation and verification during design are essential, they cannot fully address unexpected issues arising during operation, often due to environmental factors. To enhance system reliability, runtime monitoring is widely used to detect and address misbehavior in real time and increase trustworthiness. Most runtime monitoring solutions, however, focus solely on functional or extra-functional aspects. This work introduces an assertion-based runtime monitoring approach using Time-Sensitive Behavioral Contracts (TSBCs), implemented as hardware components within a versatile VHDL library. The library provides low-level building blocks and ready-to-use parameterizable monitor entities that simplify integration. To ensure minimal impact on the monitored system, the approach was evaluated for resource usage, performance, and power efficiency. In a motor inrush current monitoring scenario, the hardware solution reduced power consumption by 0.4% and achieved a frequency over 800 times higher than its software counterpart and a comparable hardware solution.
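To convey what a time-sensitive behavioral contract expresses in the inrush-current scenario, the sketch below models one such contract in plain Python; it is a conceptual software analogue of the monitors the paper implements as VHDL hardware components, and the threshold, deadline and trace values are assumptions.

class InrushContractMonitor:
    # Contract: if the current exceeds `limit`, it must drop back below the limit
    # within `deadline` samples; otherwise the monitor flags a violation.
    def __init__(self, limit: float, deadline: int):
        self.limit = limit
        self.deadline = deadline
        self.cycles_over = 0

    def step(self, current: float) -> bool:
        # Process one sample; return False when the contract is violated.
        if current > self.limit:
            self.cycles_over += 1
            if self.cycles_over > self.deadline:
                return False          # inrush lasted too long: misbehavior detected
        else:
            self.cycles_over = 0      # back below the limit: reset the timer
        return True

# Feed a simulated motor start-up trace to the monitor (assumed values).
trace = [0.5, 8.0, 7.5, 6.0, 2.0, 1.0, 0.8]
monitor = InrushContractMonitor(limit=5.0, deadline=4)
print(all(monitor.step(sample) for sample in trace))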

17:40
Dependability and Reliability Analysis Model for Aging Aircraft Avionics Systems with Intermittent Fault Phenomenon

ABSTRACT. Understanding and solving problems arising from rogue units and 'no failure found' (NFF) cases in aviation maintenance is key to sustaining operational reliability. These challenges have driven the development of proven reliability monitoring strategies and comprehensive diagnostic approaches, which are essential for discovering the causes of reliability degradation in aviation line replaceable units (LRUs). Addressing the wicked problem of rogue units necessitates nuanced strategies tailored to each unit's specific characteristics, even though most units with high NFF rates share similar underlying factors. A further layer of complexity in avionic systems arises because fault isolation is challenging and requires knowledge of system dynamics. Bayesian Belief Networks (BBNs), expert methods, and system configurations are employed for the assessment. Data from the Fokker Services ERP system Access database provides the inputs and outputs of the diagnostic structure, and the qualitative relationships are transformed into conditional probabilities for the BBNs. Two networks, BBN1 and BBN2, built from the ERP system database, are used to test reliability. Applying the Bayesian Belief Network methodology to the rogue units and NFF incidents of Fokker aircraft leads to better system dependability and diagnostic capabilities in aviation maintenance, while proactive monitoring, targeted interventions, and interdisciplinary collaboration help sustain operational efficiency.
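As a simple illustration of transforming qualitative relationships into conditional probabilities for the BBNs, the sketch below maps expert judgement terms to probability values; the scale and the example conditions are assumptions made for illustration, not data from the Fokker Services ERP system.

# Hypothetical mapping from qualitative expert judgements to probabilities.
QUALITATIVE_SCALE = {
    "very unlikely": 0.05,
    "unlikely": 0.20,
    "likely": 0.70,
    "very likely": 0.90,
}

def cpt_row(judgement: str) -> list:
    # Return [P(NFF event), P(no NFF event)] for one expert judgement.
    p = QUALITATIVE_SCALE[judgement]
    return [p, 1.0 - p]

# Example: an LRU's assessed NFF propensity under two operating conditions (assumed).
for condition, judgement in {"after hard landing": "likely",
                             "normal operation": "unlikely"}.items():
    print(condition, cpt_row(judgement))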