| 10:30 | Control Co-Design Strategy for the Optimisation of Heterogeneous Autonomous Agent Formations |
Technical Session IV, which includes presentations of six papers.
| 12:00 | Driver Alert System: CNN-Powered Identification of Driving-Related Eating Distraction PRESENTER: Rafia Kiran ABSTRACT. Traffic safety is a severe problem around the world. Driver distraction has been a persistent contributor to road accidents, with numerous studies focusing on behaviors such as phone use, drowsiness, and in-vehicle interactions. However, less attention has been given to distraction caused by eating while driving, a behavior that splits attention, slows reaction time, and raises the chance of an accident. To address this gap, the Driver Alert System (DAS) is specifically designed to detect eating while driving. DAS uses a self-built dataset, “NRW25,” together with a mobile application for real-time image processing to detect eating behavior. Upon detection, it immediately triggers an alarm to alert the driver, sends a notification email to the concerned authority, and logs the incident to a centralized database. These records are then made accessible through a web interface for monitoring and review. DAS uses deep learning-based classification models, specifically Convolutional Neural Networks (CNNs), to help authorities monitor drivers' performance, increase driver awareness, and ultimately lower the number of accidents caused by in-car distractions such as eating. In evaluation, the system maintained records of driver behavior and issued timely alerts, demonstrating the effectiveness of the suggested approach in recognizing driver actions. Throughout the training, testing, and real-time classification phases, the system performed efficiently: DAS identifies when a driver is eating with an overall accuracy of 98% and demonstrated 97% accuracy in real-time detection. It can be used by government organizations such as the National Highways & Motorway Police and the Punjab Safe Cities Authority to enforce regulations and monitor public transportation. Companies such as TCS, Leopards Courier, and Daewoo Express can also utilize DAS to track driver behavior on long routes, and service providers such as Uber, Bykea, and InDrive can integrate DAS for real-time monitoring and passenger safety. |
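The post-detection pipeline the DAS abstract describes (per-frame CNN output, then alarm, notification, and logging) hinges on not alarming on a single noisy frame. A minimal debouncing sketch in Python; the window size and thresholds are hypothetical choices, not values from the paper:

```python
from collections import deque

class EatingAlertDebouncer:
    """Illustrative debouncer: raise an alarm only when the classifier
    flags 'eating' in most of the recent frames, so a single-frame
    false positive cannot trigger the alarm. Thresholds are made up."""

    def __init__(self, window=10, min_hits=7, threshold=0.5):
        self.window = deque(maxlen=window)   # sliding window of recent verdicts
        self.min_hits = min_hits
        self.threshold = threshold

    def update(self, eating_prob):
        # Record 1 if this frame is classified as eating, else 0
        self.window.append(1 if eating_prob >= self.threshold else 0)
        return sum(self.window) >= self.min_hits

deb = EatingAlertDebouncer()
probs = [0.9] * 6 + [0.2] + [0.95] * 3   # 9 of the last 10 frames look like eating
alarms = [deb.update(p) for p in probs]
```

Only once the debounced alarm fires would the email notification and database logging steps run.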
| 12:15 | Deep Learning based Underwater Image Detection and Processing for Oceanography PRESENTER: Anila Saghir ABSTRACT. In the modern era, researchers have become increasingly interested in underwater observations and the exploration of marine resources. This interest has led to a surge in underwater image processing and analysis. However, unlike imaging in a typical environment, underwater image acquisition faces multiple distortions, including unstable illumination, random noise, poor contrast, and severe optical degradation due to the aquatic medium. These optical degradations are caused by diffusion, scattering, and absorption of light in water, and they greatly impact the precision and effectiveness of underwater object detection in the acquired images. Enhancing these images can substantially improve their quality for both human recognition and object detection algorithms. This paper provides a unified deep learning and image enhancement framework for degraded underwater image enhancement and object detection using colour correction and convolutional neural networks. The proposed model restores the visual quality of the acquired underwater images and improves detection accuracy compared with conventional methods. Results with the enhanced images and an overall improvement in object detection precision validate the accuracy of the proposed model. |
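The colour-correction step mentioned in the abstract can be illustrated with the classic gray-world assumption, a common baseline for underwater enhancement (the paper's exact correction method may differ):

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world white balance: scale each channel so its mean matches
    the global mean, counteracting the blue-green colour cast typical
    of underwater images. img: float array (H, W, 3) in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel mean
    gain = means.mean() / np.maximum(means, 1e-8)    # per-channel gain
    return np.clip(img * gain, 0.0, 1.0)

# A bluish-green underwater-like image: red channel strongly attenuated
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (32, 32, 3))
img[..., 0] *= 0.2                                   # suppress red, as water does
out = gray_world_correct(img)
```

After correction the three channel means are nearly equal, which is exactly the gray-world premise.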
| 12:30 | Performance Comparison of Machine Learning Techniques for Cholesterol Prediction PRESENTER: Asif Nawaz ABSTRACT. Hypercholesterolemia is a major risk factor for cardiovascular disease, yet many cases remain undetected because modern laboratory equipment for proper screening tests is unavailable. Almost 29 million U.S. adults (≈14% of the population) suffer from cholesterol issues, and this indicator demands an alternative screening method. To address this issue, we build an ML model that can be implemented in screening devices to predict high cholesterol levels and reduce the risk of sudden cardiac events and other diseases, using non-invasive NHANES survey data. The input information for each patient includes demographics (age, sex, race/ethnicity), lifestyle factors (dietary intake, physical activity, smoking), anthropometric measures (BMI, waist circumference, blood pressure), and self-reported health history (e.g., diabetes, hypertension). The outcome is defined by clinical thresholds (e.g., total cholesterol ≥240 mg/dL or LDL ≥160 mg/dL). Preprocessing involves imputing missing values, encoding categorical variables, and scaling numeric features. We train five classifiers (logistic regression, random forest, XGBoost, SVM, k-NN) using stratified 5-fold cross-validation with hyperparameter tuning. Model performance is assessed using accuracy, sensitivity, specificity, and area under the ROC curve (AUC). The results show that the XGBoost model achieves the highest discrimination (AUC ~0.80) with high sensitivity, outperforming logistic regression (AUC ~0.68) and the other models. The main predictors included age, BMI, blood pressure, and diet quality, consistent with known risk factors. These results show that survey data can meaningfully stratify cholesterol risk. Finally, we consider the public-health implications: a non-invasive, ML-based screening tool could flag high-risk individuals for follow-up testing, improving preventive care. |
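The evaluation protocol this abstract describes (stratified 5-fold cross-validation with per-fold AUC) can be sketched with scikit-learn; the data here is synthetic, standing in for NHANES, and only two of the five classifiers are shown:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for NHANES-style features (not the real survey data)
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=42)

models = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "rf": RandomForestClassifier(n_estimators=200, random_state=42),
}

aucs = {}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, model in models.items():
    fold_aucs = []
    for train_idx, test_idx in cv.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        p = model.predict_proba(X[test_idx])[:, 1]   # P(high cholesterol)
        fold_aucs.append(roc_auc_score(y[test_idx], p))
    aucs[name] = float(np.mean(fold_aucs))           # mean AUC across folds
```

Stratification keeps the minority (high-cholesterol) class proportion stable in every fold, which matters for the imbalanced outcome described above.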
| 12:45 | A Decentralized AI–Blockchain Framework for Adaptive Energy Efficiency and Power Management in Smart Grids PRESENTER: Manzar Ahmed ABSTRACT. The rapid evolution of smart grids demands decentralized, intelligent, and secure frameworks to enhance energy efficiency and ensure reliable power management. This paper presents a Decentralized AI–Blockchain Framework for Adaptive Energy Efficiency and Power Management in Smart Grids, integrating artificial intelligence (AI) for predictive control and blockchain technology for transparent, tamper-proof energy transactions. The proposed system leverages edge-based AI agents to forecast load demand, optimize power distribution, and autonomously manage distributed energy resources (DERs) such as solar PV, wind, and storage units. Meanwhile, blockchain-based consensus mechanisms facilitate peer-to-peer (P2P) energy trading, dynamic pricing, and secure data exchange without reliance on a central authority. Simulation results demonstrate that the framework reduces transmission losses by up to 18%, improves energy utilization by 23%, and enhances grid reliability under variable generation conditions. Furthermore, adaptive reinforcement learning enables real-time decision-making to balance supply and demand efficiently. This research work focuses on a scalable, resilient, and data-driven solution to future-proof smart grids, particularly suited for regions pursuing sustainable, decentralized energy transformation. |
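The tamper-proof transaction idea can be illustrated with a toy hash-chained ledger built from the standard library. This shows only the tamper-evidence mechanism; the paper's actual consensus protocol, pricing, and trading logic are not modeled here:

```python
import hashlib
import json

def block_hash(body):
    # Deterministic SHA-256 over the block body
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_trade(chain, seller, buyer, kwh, price):
    """Append a P2P energy trade, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"seller": seller, "buyer": buyer, "kwh": kwh,
             "price": price, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)
    return chain

def verify(chain):
    # Tampering with any trade breaks its hash and every later link
    for i, b in enumerate(chain):
        body = {k: v for k, v in b.items() if k != "hash"}
        if b["hash"] != block_hash(body):
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_trade(ledger, "solar_pv_7", "house_12", 3.5, 0.12)   # hypothetical DER ids
append_trade(ledger, "wind_2", "house_9", 1.8, 0.10)
ok_before = verify(ledger)
ledger[0]["kwh"] = 99.0          # tamper with the first trade
ok_after = verify(ledger)
```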
| 13:00 | VGG16-Based Brain Tumor Detection with Explainable AI PRESENTER: Zarish Majid Shaikh ABSTRACT. Brain tumor detection from medical imaging has become increasingly critical for early diagnosis and treatment planning in healthcare. Current diagnostic methods often require extensive manual analysis by specialists, leading to potential delays and increased workload. This study investigates the application of deep learning techniques, specifically transfer learning with VGG16 architecture, for automated brain tumor classification from MRI images. The research aims to develop a binary classification system to distinguish between tumor-positive and tumor-negative brain MRI scans. The methodology employed a dataset sourced from Kaggle containing labeled brain MRI images categorized as "Yes" (with tumor) and "No" (without tumor). Images were preprocessed through resizing to 224×224 pixels, normalization, and label encoding. The dataset was split into training (67%) and testing (33%) subsets with balanced distribution. A pre-trained VGG16 model served as the feature extraction backbone, with frozen layers to retain ImageNet weights. A custom fully connected head comprising dense layers with dropout regularization was added for binary classification. The model was trained using Adam optimizer and categorical cross-entropy loss function over five epochs. The proposed system achieved remarkable performance with 98% training accuracy and 96% validation accuracy, demonstrating minimal overfitting and strong generalization capability. Additionally, Grad-CAM visualization techniques were implemented to provide explainable AI insights, revealing the model's focus areas during classification decisions. These results indicate significant potential for automated brain tumor detection in clinical settings, offering healthcare professionals a reliable tool for rapid preliminary screening. 
The high accuracy and interpretability features suggest practical applicability for reducing diagnostic time and supporting medical decision-making processes in neuroimaging applications. |
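Grad-CAM, used above for explainability, reduces to a simple computation once the convolutional feature maps and class-score gradients are in hand: channel weights from global-average-pooled gradients, a weighted sum over channels, then ReLU. A numpy sketch on mock arrays (a real pipeline would obtain these tensors from the trained VGG16 via backpropagation):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM from precomputed values.
    feature_maps, gradients: arrays of shape (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k: one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalise to [0, 1] for overlay
    return cam

rng = np.random.default_rng(1)
fmaps = rng.uniform(0.0, 1.0, (8, 7, 7))   # mock conv features (C=8)
grads = rng.normal(0.0, 1.0, (8, 7, 7))    # mock gradients of the tumor-class score
heatmap = grad_cam(fmaps, grads)
```

The resulting heatmap is upsampled onto the MRI slice to show which regions drove the classification.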
| 13:15 | A Statistical Analysis of CNN Architectures with Grad-CAM Visualization on MRI Brain Scans PRESENTER: Sadia Fatima ABSTRACT. Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions worldwide, making early detection vital for effective intervention. Early and accurate detection, particularly during the Mild Cognitive Impairment (MCI) stage, can significantly improve patient outcomes by enabling timely intervention. Recent advancements in deep learning and neuroimaging offer promising avenues for improving diagnostic accuracy. In this study, three convolutional neural network (CNN) architectures, GoogLeNet, AlexNet, and ResNet-50, were rigorously evaluated for binary classification of AD using brain MRI scans. A comprehensive framework combining transfer learning, statistical significance testing, and explainable AI was used to ensure both model reliability and interpretability in distinguishing Cognitively Normal (CN) subjects from those with Mild Cognitive Impairment (MCI), an early stage of AD. The results show that AlexNet achieves the highest accuracy (93.33%), outperforming GoogLeNet (91.2%) and ResNet-50 (89.33%). Statistical testing confirms AlexNet’s superiority over ResNet-50 (t = 3.42, p = 0.002) and GoogLeNet (t = 2.18, p = 0.041), while the latter two did not differ significantly (t = –1.95, p = 0.063). To enhance interpretability, Gradient-weighted Class Activation Mapping (Grad-CAM) was applied. Interestingly, AlexNet consistently highlighted clinically relevant regions, such as the ventricular areas and cortical structures associated with AD pathology, providing neuroanatomically meaningful explanations for its superior performance. The findings of the study challenge the assumption that deeper architectures guarantee better performance. The relatively simpler AlexNet (60M parameters) not only outperformed the deeper ResNet-50 (25.6M parameters) but also demonstrated lower overfitting (6.2% vs 9.2% train-validation gap) and better interpretability. This work highlights that carefully optimized transfer learning with simpler models can deliver both high accuracy and explainability in medical imaging, offering a promising pathway toward clinically trustworthy decision support systems. |
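Significance testing of the kind reported above (t and p values comparing two models) is typically a paired t-test over per-fold scores. A sketch with illustrative accuracies, not the paper's actual fold results:

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold accuracies for two CNNs (made-up numbers for
# illustration; the paper's folds and values will differ)
alexnet  = np.array([0.935, 0.928, 0.941, 0.930, 0.933])
resnet50 = np.array([0.896, 0.890, 0.902, 0.888, 0.894])

# Paired test: both models were evaluated on the same folds
t_stat, p_value = stats.ttest_rel(alexnet, resnet50)
significant = bool(p_value < 0.05)
```

Pairing across folds removes fold-to-fold variance from the comparison, which is why a paired test is the usual choice when models share cross-validation splits.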
Technical Session V, which includes presentations of six papers.
| 12:00 | Comparative Analysis of Dynamic Firewall Management and Intrusion Detection Systems in Software-Defined Networks PRESENTER: Nadia Mustaqim Ansari ABSTRACT. Software-Defined Networking (SDN) introduces programmability, scalability, and centralised control, but its architecture also exposes critical vulnerabilities such as denial-of-service, spoofing, and flow table overloading attacks. Traditional static firewalls and signature-based intrusion detection systems (IDS) are insufficient to address these evolving threats. This paper presents a comparative analysis of two advanced approaches: a Finite State Machine (FSM)-based dynamic firewall and an Artificial Intelligence (AI)-driven IDS. The dynamic firewall was implemented in Mininet using POX and Ryu controllers to provide adaptive rule enforcement, while the IDS employed machine learning and deep learning models including Random Forest, Decision Trees, Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) trained on benchmark datasets such as NSL-KDD, CICIDS2017, and UNSW-NB15. Results demonstrate that dynamic firewalls excel in lightweight, real-time prevention with minimal latency. In contrast, IDS frameworks deliver superior detection accuracy against both known and zero-day attacks but at the expense of higher computational overhead and false positives. The findings emphasise the complementary strengths of both approaches and propose a hybrid security framework that combines proactive filtering with intelligent anomaly detection to secure next-generation SDN infrastructures. |
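The FSM-based dynamic firewall compared above can be pictured as per-host states driven by observed packet rates. The states and thresholds below are hypothetical, and a real deployment would install flow rules from a POX/Ryu controller rather than run in plain Python:

```python
# Per-host states: ALLOW -> WATCH -> BLOCK, driven by packet rate
ALLOW, WATCH, BLOCK = "ALLOW", "WATCH", "BLOCK"

class FsmFirewall:
    """Toy FSM firewall: escalate a host whose packet rate crosses
    (hypothetical) thresholds; BLOCK is sticky until an operator reset."""

    def __init__(self, watch_at=100, block_at=500):
        self.watch_at, self.block_at = watch_at, block_at
        self.state = {}

    def observe(self, host, pkts_per_sec):
        s = self.state.get(host, ALLOW)
        if s == BLOCK:
            pass                                 # stay blocked until reset
        elif pkts_per_sec >= self.block_at:
            s = BLOCK                            # flood: drop all traffic
        elif pkts_per_sec >= self.watch_at:
            s = WATCH                            # suspicious: rate-limit / log
        else:
            s = ALLOW
        self.state[host] = s
        return s

fw = FsmFirewall()
s1 = fw.observe("10.0.0.5", 50)     # normal traffic
s2 = fw.observe("10.0.0.5", 250)    # suspicious burst
s3 = fw.observe("10.0.0.5", 900)    # flood -> blocked
s4 = fw.observe("10.0.0.5", 10)     # stays blocked despite calming down
```

This per-flow state logic is the "lightweight, real-time prevention" side of the comparison; the ML-based IDS side trades that simplicity for detection accuracy.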
| 12:15 | An Affordable and Power-Constrained Underwater Modem for AUV Swarm in ISR Applications PRESENTER: Muhammad Ibrar Ul Haque ABSTRACT. During intelligence, surveillance, and reconnaissance (ISR) operations, underwater communication is essential for coordinating autonomous underwater vehicle (AUV) swarms. Commercially available modems for underwater communication projects with limited funding are costly and power-hungry. This study proposes a low-cost, low-power underwater modem fit for short-range communication in ISR applications. Low-priced hardware platforms and commercial off-the-shelf (COTS) parts are used to reduce cost. Frequency shift keying (FSK) modulation is chosen because it is simple and reliable. The modem has a modular architecture, so it may be used with many different types of AUVs and can also be upgraded as needed. Preliminary testing was done in controlled conditions in an indoor water tank at low data rates to validate the functionality of the modem. The modem's low power consumption makes it fit for long-duration ISR operations in time-critical situations. Applications including harbor monitoring and environmental sensing can be supported by the proposed modem. This study helps bridge the gap between premium underwater communication systems and actual ISR deployments, making AUV swarm adoption more realistic in the marine environment. |
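The FSK scheme the modem relies on is easy to sketch end-to-end: each bit maps to one of two tones, and the receiver correlates each symbol against both tone templates. The sample rate, baud rate, and tone frequencies below are illustrative choices, not the modem's actual parameters:

```python
import numpy as np

FS, BAUD = 48_000, 100                 # sample rate and symbol rate (illustrative)
F0, F1 = 2_000, 4_000                  # "space"/"mark" tones (hypothetical)
N = FS // BAUD                         # samples per bit (integer tone cycles)

def fsk_modulate(bits):
    t = np.arange(N) / FS
    tones = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[b] for b in bits])

def fsk_demodulate(signal):
    t = np.arange(N) / FS
    refs = [np.sin(2 * np.pi * f * t) for f in (F0, F1)]
    out = []
    for k in range(len(signal) // N):
        chunk = signal[k * N:(k + 1) * N]
        # Pick the tone whose correlation with this symbol is stronger
        out.append(int(abs(chunk @ refs[1]) > abs(chunk @ refs[0])))
    return out

bits = [1, 0, 1, 1, 0, 0, 1]
rng = np.random.default_rng(3)
noisy = fsk_modulate(bits) + rng.normal(0.0, 0.3, N * len(bits))  # channel noise
decoded = fsk_demodulate(noisy)
```

Because each symbol spans an integer number of cycles of both tones, the two references are orthogonal over a symbol, which is what makes the simple correlation detector reliable.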
| 12:30 | An AI-Based Intrusion Detection System to Enhance Security in Software-Defined Networks PRESENTER: Rizwan Iqbal ABSTRACT. Software-defined networking has emerged as a transformative technology, offering centralized network management and dynamic control of network resources. However, the open nature and programmability of SDN expose it to various security vulnerabilities, with Distributed Denial of Service (DDoS) attacks being a significant threat. This paper presents an Intrusion Detection System (IDS) to enhance network security using machine learning algorithms. A model was trained using real-world network traffic data collected from an SDN environment, which was then integrated into an IDS. The trained model was evaluated through extensive testing, demonstrating high accuracy in detecting DDoS attacks. As the complexity of attacks on SDN systems continues to increase, it is essential to explore innovative detection mechanisms to address evolving threats in future research. |
| 12:45 | Adaptive Kalman Filtering with Machine Learning–Based Nonlinear Compensation for Underwater Vehicle State Estimation ABSTRACT. Reliable operation of autonomous underwater vehicles (AUVs) depends on accurate state estimation. The major challenges come from sensor noise, unpredictable underwater environments, and complex vehicle dynamics. Traditional state-space models assume linear dynamics to work with classical estimation methods; Kalman Filter (KF) variants and least squares rely on this assumption. These methods struggle when the system is highly nonlinear or changes over time, which limits their estimation accuracy for underwater vehicles operating in complex environments. This work overcomes these limitations by proposing a hybrid estimation framework that extends classical Kalman filtering with nonlinear compensation and adaptive uncertainty modeling. It uses a confidence-gated Radial Basis Function Neural Network (RBFNN) to capture unmodeled nonlinear dynamics and applies bounded corrections to the predicted state of the Extended Kalman Filter (EKF). It adapts process noise in response to changing residual patterns. The RBFNN is retrained online in a controlled manner, using residual thresholds and scheduled updates to keep the state estimates reliable. Tests on a public AUV dataset showed a 31% reduction in position RMSE and improved trajectory tracking over the standard EKF. |
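The framework's core idea, a Kalman filter whose prediction can be nudged by a bounded learned correction, can be sketched in one dimension. The `correction` hook below stands in for the confidence-gated RBFNN, and all noise parameters are illustrative, not the paper's:

```python
import numpy as np

def kf_1d(measurements, q=0.01, r=0.25, correction=None, max_corr=0.1):
    """Minimal 1-D Kalman filter with an optional bounded correction hook
    (a sketch of the 'bounded corrections to the predicted state' idea,
    not the paper's EKF+RBFNN model)."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        # Predict (random-walk model), optionally nudged by a learned term
        if correction is not None:
            x = x + np.clip(correction(x), -max_corr, max_corr)  # bounded
        p = p + q                      # process noise inflates uncertainty
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 2.0 * np.pi, 400)
truth = np.sin(t)                               # slowly varying true state
meas = truth + rng.normal(0.0, 0.5, t.size)     # noisy sensor readings
est = kf_1d(meas)
rmse_raw = float(np.sqrt(np.mean((meas - truth) ** 2)))
rmse_kf = float(np.sqrt(np.mean((est - truth) ** 2)))
```

Clipping the correction is what keeps a mistrained compensator from destabilizing the filter, which is the point of the confidence gating described above.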
| 13:00 | Gravity Centrality: A Physics-Inspired Measure Connecting Network Influence and Human Behavior PRESENTER: Mahnoor Shahid ABSTRACT. In social networks, influence determines how information, trends, and ideas proliferate. This study presents a gravity-based centrality model that uses a physics-inspired methodology to quantify node importance. To investigate its theoretical behaviour, we test it on small-world, random, and scale-free synthetic networks. Next, we apply it to a practical data set on neuropsychosomatic body language to look into the relationships between students' body language and mental health, including cognitive and somatic markers such as coping mechanisms, stress levels, and environmental influences. The findings demonstrate that gravity centrality provides a novel viewpoint that connects theory to practical implementation and offers insights for social influence studies, network science, and marketing. |
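A common formulation of gravity centrality treats node degree as "mass" and shortest-path length as distance, summing m_i * m_j / d^2 over reachable pairs. The sketch below assumes that formulation, which may differ in detail from the paper's model:

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS shortest-path lengths from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def gravity_centrality(adj):
    """Gravity centrality: node 'masses' (degrees here) attract each other
    with strength m_i * m_j / d_ij^2, summed over all reachable pairs."""
    deg = {u: len(vs) for u, vs in adj.items()}
    score = {}
    for u in adj:
        dist = shortest_paths(adj, u)
        score[u] = sum(deg[u] * deg[v] / d ** 2
                       for v, d in dist.items() if v != u)
    return score

# A star graph: the hub should dominate every leaf
star = {"hub": ["a", "b", "c", "d"],
        "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"]}
scores = gravity_centrality(star)
top = max(scores, key=scores.get)
```

The inverse-square distance term is what distinguishes this measure from plain degree: a high-degree node far from everything else is penalized.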
| 13:15 | A Comparative Analysis of Modern CMOEAs for Pathfinding Problems on Real Encoding PRESENTER: Benish Fayyaz ABSTRACT. Pathfinding problems are very important in robotics, gaming, self-driving cars, and logistics. These problems are usually multi-objective with complicated constraints; thus, Constrained Multi-Objective Evolutionary Algorithms (CMOEAs) are well suited to solving them. This paper contains a thorough comparative evaluation of four state-of-the-art CMOEAs (DRLOS-EMCMO, SCEA, MCCMO, and SFADE) on pathfinding problems with real encoding. We test these algorithms on eight performance criteria, including convergence, solution quality, robustness, diversity, computational efficiency, scalability, hyperparameter sensitivity, and Pareto optimality. Experimental results show that the DRLOS-EMCMO and MCCMO algorithms outperform the other algorithms, and DRLOS-EMCMO provides the best overall performance on the MaOPP_real benchmark problem. |
Technical Session IV, which includes presentations of 10 papers.
| 14:30 | Critical Node Identification Using Random Forest Based Weight Learning for Multi-Criteria Decision Making PRESENTER: Abdul Moiz ABSTRACT. Identifying important nodes within networks has many applications in fields of study including biological networks, wireless sensor networks, optical communication systems, and network reliability optimization. In this research, a hybrid method was proposed that utilizes a Random Forest (RF) based algorithm and the VIKOR method to identify the most important nodes within a network, using the Escherichia coli protein-protein interaction network as a model of a complex network. A key issue in network optimization is the need to weight the various centrality metrics. Rather than using existing weighting methods from the literature, the RF-VIKOR method learns the optimal weights through supervised machine learning based upon ground-truth data, reflecting the non-linear relationships among the features in networks. Results were obtained through comparative studies with entropy-based weighting (ET-VIKOR) on the 2,727 nodes with 254 experimentally confirmed important nodes. Network topology analysis indicated heterogeneous variance patterns across all centrality metrics, with betweenness centrality having the greatest variance (coefficient of variation = 3.38). The results also indicated that the RF-VIKOR methodology improved over ET-VIKOR in identifying important nodes, identifying 120 of the top 600 ranked nodes as true important nodes, while ET-VIKOR identified 117. Therefore, the enhanced pattern-recognition ability of the RF-VIKOR methodology is beneficial for identifying nodes with complex multi-metric attributes. |
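Once the weights are learned, the VIKOR stage ranks alternatives by group utility (S), individual regret (R), and the compromise index Q. A numpy sketch with made-up centrality values and weights, all criteria treated as benefit-type:

```python
import numpy as np

def vikor(decision, weights, v=0.5):
    """VIKOR ranking: S (group utility), R (individual regret), and
    Q (compromise index). Lower Q means a better alternative.
    decision: alternatives x criteria, all benefit-type."""
    best = decision.max(axis=0)
    worst = decision.min(axis=0)
    norm = (best - decision) / np.where(best == worst, 1, best - worst)
    s = (weights * norm).sum(axis=1)              # weighted sum of regrets
    r = (weights * norm).max(axis=1)              # worst single-criterion regret
    q = (v * (s - s.min()) / max(s.max() - s.min(), 1e-12)
         + (1 - v) * (r - r.min()) / max(r.max() - r.min(), 1e-12))
    return q

# 4 nodes x 3 centrality metrics (degree, betweenness, closeness), invented
nodes = np.array([[0.9, 0.8, 0.7],    # strong on every metric
                  [0.2, 0.1, 0.3],
                  [0.5, 0.9, 0.4],
                  [0.1, 0.2, 0.2]])
weights = np.array([0.5, 0.3, 0.2])   # e.g. produced by the RF weight-learning stage
q = vikor(nodes, weights)
ranking = list(np.argsort(q))         # best (lowest Q) first
```

In the RF-VIKOR pipeline, the Random Forest replaces the hand-set or entropy-derived `weights` with ones learned from ground-truth important nodes.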
| 14:45 | Computational Modeling of Pulsatile Non-Newtonian Blood Flow in a Stenosed Bifurcated Femoral Artery PRESENTER: Shahnoor Khan ABSTRACT. Cardiovascular disorders, particularly those linked with artery stenosis, continue to be a prominent source of morbidity and mortality around the world. Stenosis changes blood flow dynamics, resulting in aberrant velocity distributions, pressure drops, and altered wall shear stress (WSS), which have a substantial impact on vascular physiology and disease. A transient transport flow model was employed in a bifurcated femoral artery under varied stenosis circumstances to study flow phenomena, with the non-Newtonian blood flow regime within the three-dimensional space domain set to pulsatile. The nonlinear governing equations were solved using the Finite Element Method with adequate initial and boundary conditions. Hemodynamic parameters, including wall shear stress, were calculated numerically for different stenosis severities (25%, 50%, and 75%) and lengths (0.5 mm, 1 mm, and 1.5 mm). The approximate velocity and pressure profiles for 25%, 50%, and 75% stenosis were calculated at various locations, where the maximum velocities were identified. The unsteady response of stress caused by stenosis-induced shear on the outer wall was described. As pulsatile flow time increased due to stenosis, wall shear stress steadily decreased. Modeling studies of blood flow through arteries could aid in diagnosing and treating arterial disorders, as well as expanding understanding of vascular physiology, leading to new medical treatments that improve patient outcomes. |
| 15:00 | Confidence-Aware Feature Recovery Using Fuzzy Denoising Autoencoders in Biomedical Datasets PRESENTER: Moazzama Mateen ABSTRACT. A constant struggle in constructing reliable software effort prediction models is the widespread occurrence of missing values in historical project datasets, often due to human error, inconsistent reporting, or incomplete documentation. These deficiencies undermine the robustness and predictive validity of the models. Comparable challenges arise in neuropsychiatric research, where data incompleteness and heterogeneity significantly reduce analytical depth and the reliability of derived insights. This paper introduces the Fuzzy Denoising Autoencoder (FDAE), an innovative approach that integrates denoising reconstruction with fuzzy membership refinement, designed to recover missing features while quantifying their reliability. The model synergistically captures latent feature relationships through autoencoder representations and refines missing entries using fuzzy memberships derived from cross-dataset similarity. Departing from conventional approaches that rely solely on numerical error metrics, the proposed framework incorporates a fuzzy confidence index, derived from membership weights, to quantify the reliability of each imputation. To ensure rigorous evaluation, statistical measures including mean, standard deviation, and 95% confidence interval are used to assess the stability of imputations. Experimental results on neuropsychiatric datasets demonstrate that FDAE achieves near-perfect recovery for mental illness prevalence features and strong but variable recovery for schizophrenia study attributes, providing both accuracy and reliability in handling incomplete data. |
| 15:15 | Smart Donor Segmentation and Predictive Analytics for Fundraising Optimisation in Nonprofits ABSTRACT. In the nonprofit sector, effective supporter segmentation is critical for optimising fundraising strategies and enhancing engagement. This study introduces a comprehensive data-driven framework that integrates unsupervised machine learning, temporal modeling, and cluster profiling to identify meaningful supporter patterns using transactional donation data. Five clustering algorithms, K-Means++, K-Medoids, K-Modes, DBSCAN, and MeanShift, were systematically applied to high-dimensional supporter data sets. Through rigorous preprocessing, including RFM feature engineering and dimensionality reduction via Principal Component Analysis (PCA), the models achieved robust performance. The K-medoids algorithm yielded the highest silhouette score of 0.69, producing well-separated and interpretable clusters. Temporal analysis and seasonal decomposition revealed state-wise heterogeneity in donation behavior, with New South Wales (NSW) demonstrating significant volatility and peak donations exceeding $310,000. Gender-based trends indicate a higher and growing contribution from female supporters over time. Forecasting using ARIMA models provides actionable monthly predictions with tight confidence intervals, enabling proactive campaign planning. The proposed framework bridges static segmentation with dynamic forecasting, enabling nonprofit organisations to tailor strategies by region, demographics, and seasonality. This multi-tiered approach enhances decision-making, fosters personalised engagement, and provides a scalable blueprint for modern fundraising analytics. |
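The RFM feature engineering plus clustering pipeline described above can be sketched with scikit-learn. KMeans stands in here for the K-Medoids variant the study found best (K-Medoids is not in core scikit-learn), and the toy transactions are invented:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Toy transactional donations (made-up rows standing in for the real data)
tx = pd.DataFrame({
    "donor": ["a", "a", "b", "c", "c", "c", "d"],
    "days_ago": [5, 40, 300, 10, 20, 15, 400],
    "amount": [50, 20, 5, 500, 450, 480, 10],
})

# RFM feature engineering: Recency, Frequency, Monetary value per donor
rfm = tx.groupby("donor").agg(
    recency=("days_ago", "min"),       # days since most recent gift
    frequency=("days_ago", "count"),   # number of gifts
    monetary=("amount", "sum"),        # total amount given
)

# Scale features, cluster, and score separation with the silhouette
X = StandardScaler().fit_transform(rfm)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
score = silhouette_score(X, labels)
```

On the real data, the same pipeline would be run per algorithm (and after PCA) so the silhouette scores of K-Means++, K-Medoids, and the rest can be compared on equal footing.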
| 15:30 | Fuzzy Social Network Analysis through Community Detection Techniques PRESENTER: Faisal Ahmed ABSTRACT. Detecting communities in complex networks, such as social groups or biological systems, is a challenging task, especially when communities overlap or are not clearly defined. This research focuses on using fuzzy computational methods and ideas from physics to tackle this problem. Fuzzy logic is used to handle uncertainty and overlapping structures, while mathematical models from physics, such as energy minimization and entropy techniques, help analyze how communities form and evolve in dynamic networks. The study also uses optimization methods and spectral analysis to test and improve the accuracy of these techniques. By combining fuzzy mathematics and physics, this research aims to develop a flexible and efficient system for community detection. This work also highlights the importance of combining tools from different fields to solve real-world problems, making it easier to understand complex systems in the areas of social networks, biological studies, and data science. |
| 15:45 | Optimizing Channel Estimation in Free Space Optical LEO-OFDM Communication Systems using Clustering-Based Refinement PRESENTER: Saad Rustum ABSTRACT. This paper proposes an effective channel estimation framework for free-space optical (FSO) low-earth orbit (LEO) communication systems, combining the least squares (LS) technique with hierarchical and Gaussian mixture model (GMM) clustering. Simulations across 4-QAM, 16-QAM, and 64-QAM modulation schemes over a 10 to 40 dB signal-to-noise ratio (SNR) range show a significant link-budget improvement of 3 dB in FSO channels and 5 dB in Doppler-affected FSO-based LEO channels. The error vector magnitude (EVM) results demonstrate consistent performance enhancement across all modulation orders, indicating that hierarchical clustering optimizes multipath structures while GMM’s probabilistic approach boosts noise resilience across SNR levels. Both methods lower pilot overhead using cluster-derived weights, proving adaptable to the challenges of both FSO and LEO channels. This framework offers a robust solution for high-capacity optical communications, with future work focusing on real-time use, adaptive modulation, and support for even higher-order constellations. |
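The least-squares baseline that the clustering stages refine estimates the channel at known pilot subcarriers as H_LS = Y/X, then interpolates across the band. A numpy sketch with an illustrative 3-tap channel (subcarrier count, pilot spacing, and noise level are made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64                                        # subcarriers (illustrative)
pilot_idx = np.r_[np.arange(0, N, 8), N - 1]  # comb pilots plus the band edge

h = rng.normal(size=3) + 1j * rng.normal(size=3)   # 3-tap multipath channel
h /= np.linalg.norm(h)                             # unit-energy taps
H_true = np.fft.fft(h, N)                          # true frequency response

X = np.ones(N, dtype=complex)                      # known BPSK pilot symbols
noise = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = H_true * X + noise                             # received frequency-domain symbols

# LS estimate at the pilots, then linear interpolation across the band
H_ls = Y[pilot_idx] / X[pilot_idx]
H_hat = (np.interp(np.arange(N), pilot_idx, H_ls.real)
         + 1j * np.interp(np.arange(N), pilot_idx, H_ls.imag))
mse = float(np.mean(np.abs(H_hat - H_true) ** 2))
```

The clustering-based refinement in the paper then reweights or groups these LS estimates to suppress noise and Doppler effects beyond what plain interpolation achieves.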
| 16:00 | A Comprehensive Machine Learning Framework for Predicting Diabetes, Heart and Kidney Diseases With Cross Dataset Validation and Risk Assessment PRESENTER: Ayesha Muntasha ABSTRACT. Diabetes, a complex metabolic disorder, often coexists with kidney and heart complications, forming an interconnected web of chronic diseases. This study introduces a multi-disease predictive framework that leverages supervised machine learning to identify diabetes risk while capturing its comorbid patterns with renal and cardiovascular conditions. A unified analytical pipeline encompassing data preprocessing, feature selection, stratified k-fold cross-validation, and performance benchmarking was developed to ensure consistency and generalizability across datasets. Multiple traditional and ensemble classifiers, including Logistic Regression, Support Vector Machines (SVM), XGBoost, Random Forest, Gradient Boosting, k-Nearest Neighbors, and Naïve Bayes, were systematically evaluated under identical conditions. Results reveal that ensemble-based models consistently yield superior predictive stability and cross-dataset adaptability. Furthermore, risk-factor correlation analysis highlights glucose, BMI, and blood pressure as shared predictors across diseases, reinforcing their diagnostic significance. By integrating predictive modeling with comorbidity insights, this work advances an interpretable and integrative approach to early diabetes detection and the prevention of chronic disease progression. |
| 16:15 | A Machine Learning Approach for Risk Stratification of Cardiovascular Disease in Type 2 Diabetic Cohort PRESENTER: Syed Ibad Hasnain ABSTRACT. Cardiovascular disease (CVD) is an important complication of Type 2 diabetes, resulting in high morbidity and mortality. Early diagnosis of this condition is vital for appropriate treatment, but traditional marker-based diagnostics tend to lack sensitivity in the early phases. This study uses different machine learning approaches to enhance the early diagnosis of CVD among Type 2 diabetic patients. Data for a sample of 703 patients were collected through a cross-sectional study carried out over five months, from May 2024 to September 2024, at a tertiary care hospital in Karachi, Pakistan, using stratified random sampling. The features include sex, age, BMI, HbA1c, creatinine, cholesterol, triglycerides, and history of stroke and myocardial infarction. Patients were grouped on the basis of their disease status. Descriptive analysis showed clear patterns of poor health indicators in the CVD group. Machine learning models including Logistic Regression, Support Vector Machine, and Random Forest were implemented and evaluated. These models showed promising results, especially in identifying patients at risk of CVD. In conclusion, Random Forest demonstrated higher accuracy than the other models, validating the feature importance analysis. Some limitations remain to be overcome, such as interpretability and class imbalance; however, these models provide a substantial advancement over conventional diagnostic methods. |
| 16:30 | Flame Sight: A YOLOv8 Deep Learning Framework for Forest Fire Detection PRESENTER: Ghafaria ABSTRACT. Forest fires remain a significant environmental, social, and economic hazard across the world. Quick and precise identification is an essential measure in reducing environmental harm and minimizing the potential harm to human life. Traditional fire detectors such as satellite-based monitoring and heat sensors are notorious for their lag time and high false-alarm rate (FAR). This paper introduces a YOLOv8 deep learning model for detecting forest fires, trained on a four-class dataset of fire, smoke, smog, and sunlight to increase its practical use. The data went through preprocessing, including photometric and geometric augmentation, to recreate varied lighting and atmospheric conditions. The model was trained over 50 epochs, reaching a mean average precision (mAP@0.5) of 0.84 and mAP@0.5:0.95 of 0.60, with average precision and recall of 0.81 and 0.89, respectively. Notably, the model achieved a recall of 0.99 in smoke detection, indicating high sensitivity in detecting fire at the early smoking stage while minimizing false alarms due to smog and sunlight. These findings make YOLOv8 a trustworthy model for detecting wildfires almost immediately; future work can consider next-generation YOLO models and temporal modeling. |
| 16:45 | Semantic Segmentation of Wheelset Laser Image Defects Using DeepLabV3: Achieving Near-Perfect Performance via Transfer Learning and Task Reformulation PRESENTER: Zohaib Ahmad ABSTRACT. Wheelset laser stripe image detection and segmentation is an important part of railway safety inspection systems. Current deep learning methods on the publicly available WLI-Set dataset perform moderately well (76–92% mIoU) in 3-class semantic segmentation, where background pixels form the majority of the loss gradient although they are not included in the mIoU. In this paper, we introduce a refined 2-class semantic segmentation model based on DeepLabV3 with a ResNet-50 backbone, which achieves 99.9% mean Intersection-over-Union (mIoU) on the WLI-Set validation dataset, 23.5% higher than the best-performing baseline (U-Net, 92.2%). Surprisingly, our model attains 99.7% mIoU after a single epoch of training (6 minutes), indicating very high sample efficiency from transfer learning with ImageNet pre-trained weights. To exclude data leakage as an explanation for this high performance, we programmatically verify that there is no overlap between the training and validation sets. Our most important innovation is to treat background pixels as an ignore index (255) rather than a segmentation class, removing class imbalance at the optimization level. Our findings are validated extensively and reproducibly through per-sample metrics (600 images), confusion matrix analysis, and qualitative inspection. This work sets a new standard for wheelset defect segmentation and offers insights applicable to wayside inspection systems as well as other industrial railway monitoring tasks. |
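The ignore-index reformulation described in the abstract above is easy to illustrate: background pixels are labelled with a sentinel value (255) and simply excluded from the loss, so they contribute no gradient. The NumPy sketch below mirrors the semantics of PyTorch's `CrossEntropyLoss(ignore_index=255)`; the toy logits and labels are invented for illustration, not taken from the paper:

```python
import numpy as np

IGNORE_INDEX = 255  # sentinel label for background pixels

def masked_cross_entropy(logits, labels, ignore_index=IGNORE_INDEX):
    """Per-pixel cross-entropy that skips ignore_index pixels,
    mirroring the semantics of PyTorch's CrossEntropyLoss(ignore_index=255)."""
    valid = labels != ignore_index                    # pixels that count
    z = logits - logits.max(axis=-1, keepdims=True)   # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    ll = log_probs[valid, labels[valid]]              # true-class log-probs, valid pixels only
    return float(-ll.mean())                          # background adds no loss/gradient

# toy 2x2 "image" with 2 defect classes; pixel (1, 0) is background (255)
logits = np.array([[[2.0, 0.0], [0.0, 2.0]],
                   [[2.0, 0.0], [0.0, 2.0]]])
labels = np.array([[0, 1],
                   [255, 1]])
loss = masked_cross_entropy(logits, labels)
```

Because the masked pixels never enter the mean, changing the logits at the background position leaves the loss unchanged, which is exactly how the class-imbalance problem is removed at the optimization level.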
Technical Session IV, which includes the presentation of 10 papers.
| 14:30 | Optimized News Data Fusion For Improved Financial Markets Prediction PRESENTER: Komal Batool ABSTRACT. The prediction of financial markets is a prominent and challenging area of research, offering valuable insights to market participants for informed decision-making. However, accurately forecasting financial market behavior is inherently complex due to its stochastic nature and the influence of various controllable and uncontrollable factors. This study focuses on predicting the daily closing prices of the S&P 500 and NASDAQ indices using four machine learning models: Support Vector Regression (SVR), Random Forest (RF), Linear Regression (LR), and K-Nearest Neighbors (KNN). To examine the sensitivity of the market to different types of information, three distinct datasets are utilized. The first comprises historical price data alongside macroeconomic indicators. The second consists of sentiment features derived from web-based news articles. The third is a hybrid dataset that integrates the previous two. Experimental results demonstrate that the Random Forest model consistently outperforms the other models in forecasting both indices. Additionally, the hybrid dataset proves to be the most effective, as models trained on it achieve the lowest Root Mean Square Error (RMSE) and the highest R², indicating superior predictive accuracy. |
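The model comparison in the abstract above rests on two standard regression metrics. As a quick, self-contained illustration of how RMSE and R² rank competing forecasts, here is a minimal sketch; the price series and the two sets of predictions are invented, not the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error: lower is better."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: closer to 1 is better."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# invented daily closing prices and two competing forecasts
y        = np.array([4500.0, 4520.0, 4490.0, 4510.0, 4530.0])
pred_hyb = np.array([4498.0, 4518.0, 4493.0, 4508.0, 4528.0])  # "hybrid-data" model
pred_prc = np.array([4490.0, 4530.0, 4480.0, 4520.0, 4515.0])  # "price-only" model
```

On this toy data the hybrid-style forecast has both the lower RMSE and the higher R², the same ordering the study uses to declare the hybrid dataset most effective.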
| 14:45 | Resonance Stability and Bandwidth Variation in a Fabricated X-Band Microstrip Resonator ABSTRACT. This paper presents the fabrication and experimental characterization of a microstrip resonator designed for X-band applications. The structure was previously simulated, exhibiting a narrow bandwidth of 154 MHz at a resonance frequency of 10.64 GHz. In this work, the same design was fabricated on a single-sided copper-coated FR4 substrate and tested using a Vector Network Analyzer in transmission mode. The measured results show excellent agreement with the simulated data in terms of absorption and resonance frequency, while revealing a significant increase in bandwidth. Various factors contributing to this bandwidth broadening are analyzed and discussed in detail, emphasizing the influence of practical fabrication parameters on the performance of the microstrip resonator. |
| 15:00 | Enhancing Public Health Decision-Making through Patient-Level Healthcare Data Imputation and Machine Learning PRESENTER: Sadia Aziz ABSTRACT. Diabetes mellitus is a prevalent chronic condition with significant effects on public health. Early detection is essential for improving outcomes and reducing complications. This study compares the performance of multiple machine learning classifiers combined with data imputation methods on two healthcare disease datasets: the Pima Indians Diabetes dataset and the Chronic Kidney Disease (CKD) dataset from the UCI repository. After handling missing values with a variety of imputation strategies, seven classifiers were trained: Random Forest, Support Vector Machine, Decision Tree, K-Nearest Neighbors, Naive Bayes, XGBoost, and Logistic Regression. The results clearly show that Logistic Regression, Support Vector Machine, and Random Forest models generate accurate predictions with a high degree of reliability (90–100%). Our study highlights the value of integrating machine learning with data imputation for better early disease identification. |
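As a minimal illustration of the imputation step described in the abstract above, the sketch below performs column-wise mean imputation, one of the simplest strategies (the abstract does not specify which strategies the study used, and the toy feature values are hypothetical):

```python
import numpy as np

def impute_mean(X):
    """Replace NaNs in each column with that column's observed mean --
    a sketch of the simplest strategy (cf. scikit-learn's
    SimpleImputer(strategy="mean"))."""
    X = np.asarray(X, dtype=float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        missing = np.isnan(col)
        if missing.any():
            col[missing] = col[~missing].mean()  # fill from observed values only
    return X

# toy feature matrix (glucose, BMI) with missing entries as NaN
X = np.array([[148.0,  33.6],
              [np.nan,  26.6],
              [183.0,  np.nan],
              [ 89.0,  28.1]])
X_filled = impute_mean(X)  # NaNs replaced by 140.0 and ~29.43
```

Mean imputation preserves the column average but shrinks variance; methods such as KNN or iterative imputation are common alternatives when that bias matters.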
| 15:15 | From Data-Driven Models to Knowledge-Guided Neural Networks: A Comparative Study on Modeling of a Micro Gas Turbine PRESENTER: Saddam Khan ABSTRACT. Accurate power prediction of micro gas turbines (MGTs) is essential for reliable and efficient operation in dynamic energy systems. Machine learning (ML) models can approximate static relations but fail to capture temporal dependencies during transient operations. Deep learning (DL) models, with their sequence learning capability, provide improved accuracy but often ignore physical consistency. This paper presents a comparative study of ML algorithms (Support Vector Regression (SVR), Random Forest (RF), Extreme Gradient Boosting Machine (XGBM), and Light Gradient Boosting Machine (LGBM)), DL architectures (Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Temporal Convolutional Network (TCN), and Patch Time-Series Transformer (PatchTST)), and a Knowledge-Guided (KG) GRU model that integrates a physics-based multi-state constraint. The dataset, obtained from a 3 kW micro gas turbine (MGT) laboratory testbed, contains eight experimental runs representing both step and ramp operating modes. Performance is evaluated using Root Mean Square Error (RMSE) and the R² score. The results show that DL models outperform ML models by up to a 50% reduction in RMSE. The proposed KG-GRU achieves the lowest RMSE (118) and the highest R² score (0.981) while reducing ramp-rate violations. These results demonstrate that combining domain knowledge with data-driven networks yields more accurate and physically consistent turbine power prediction. |
| 15:30 | Ensemble Machine Learning for High-Fidelity Techno-Economic Forecasting of Grid-Scale Hybrid Renewable Energy PRESENTER: Syed Taimoor Ali ABSTRACT. This research develops the first AI-based framework for predictive techno-economic modeling and optimization of hybrid solar-wind renewable energy systems for the resource-rich Sindh region of Pakistan. For this study, we created a large synthetic dataset of 5,000 observations spanning meteorological, operational, and economic dimensions, and built and evaluated two ensemble machine learning models: Random Forest (RF) and Extreme Gradient Boosting (XGBoost). These models were trained to forecast critical performance indicators such as total energy generation, Levelized Cost of Energy (LCOE), and CO2 emission intensity. Dataset features included Global Horizontal Irradiance (GHI), wind speed, temperature, humidity, and capital cost data obtained partially from NASA POWER analogs. Both models achieved exceptional predictive accuracy, with the XGBoost regressor yielding coefficients of determination (R²) of 0.9978 for energy prediction and 0.9930 for LCOE prediction. |
| 15:45 | Optimizing Kalman Filter Performance for Data Loss Using Gradient Descent Technique PRESENTER: Haris Khan ABSTRACT. State estimation is crucial in control and communication systems, including applications such as target tracking. The Kalman Filter is widely used for such tasks, but its performance deteriorates significantly when measurements are missing. To address this, Open Loop Kalman Filtering (OLKF) and Compensated Closed Loop Kalman Filtering (CCLKF) have been proposed. In CCLKF, Auto-Regressive (AR) models are commonly used for compensation, which improves estimation accuracy but requires inversion of large matrices, leading to high computational cost. In this work, we propose a novel CCLKF approach that uses gradient descent to compute the Linear Prediction Coefficients (LPCs) instead of the traditional normal-equation method. The gradient descent–based approach avoids matrix inversion, significantly reducing computational complexity while maintaining accurate error compensation. The performance of the proposed method is demonstrated through a mass-spring system case study, showing comparable error reduction with lower computational cost. |
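The core idea in the abstract above, replacing the normal-equation solution for the Linear Prediction Coefficients with gradient descent, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it fits AR coefficients by iterating on the mean-squared prediction error, so no matrix inversion is required. The learning rate, iteration count, and the AR(2) test signal are all assumptions:

```python
import numpy as np

def fit_lpc_gd(x, order, lr=0.05, iters=4000):
    """Estimate Linear Prediction Coefficients a such that
    x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order],
    by gradient descent on the mean-squared prediction error.
    Unlike the normal-equation solution, no matrix inversion is needed."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # regressor matrix: column k holds x delayed by k+1 samples
    X = np.stack([x[order - k - 1 : N - k - 1] for k in range(order)], axis=1)
    y = x[order:]
    a = np.zeros(order)
    for _ in range(iters):
        err = X @ a - y
        a -= lr * (2.0 / len(y)) * (X.T @ err)  # MSE gradient step
    return a

# synthetic AR(2) test signal with known coefficients [1.2, -0.5] (hypothetical)
rng = np.random.default_rng(0)
x = np.zeros(500)
for n in range(2, 500):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2] + rng.standard_normal()
a_hat = fit_lpc_gd(x, order=2)  # recovers roughly [1.2, -0.5]
```

Each iteration costs only matrix-vector products, O(N·p) for N samples and order p, versus the O(p³) inverse (plus O(N·p²) setup) of the normal equations, which is the complexity saving the abstract refers to.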
| 16:00 | A Digital Vaccination Tracking System for Improving Childhood Immunization Rates and Data Accuracy in Pakistan PRESENTER: Hania Khanum ABSTRACT. A Digital Vaccination Tracking System for children helps improve immunisation rates and data accuracy, ensures timely vaccinations, and improves public health outcomes. It is motivated by low childhood immunisation rates and rising polio cases in Pakistan, together with the absence of vaccination records. The application digitally tracks a child's complete vaccination schedule. Its main focus is polio, but the system also covers all essential childhood vaccinations. A key aim is to eliminate the need for health workers to go door to door: with the application, health workers can go directly to households where children are due or overdue for vaccines. With accurate child and household data, vaccination campaigns can become more efficient, targeted, and reliable. |
| 16:15 | Centrality-Driven Secure Remote Monitoring and Fault Detection in Three-Tank Cyber-Physical Systems PRESENTER: Ubaida Fatima ABSTRACT. This paper presents a centrality-based approach for secure remote monitoring and fault detection in a three-tank cyber-physical system. The process is modeled as a directed network linking sensors, actuators, and controllers, and classical metrics along with the Global Clustering Coefficient-dependent degree centrality (GCCDC) measure are used to identify structurally important nodes. Results show that PLC2, PLC3, and P3 hold dominant influence in system communication and flow paths. These insights support targeted security measures and focused monitoring at nodes where faults or attacks are most likely to appear. The study demonstrates how network analysis can strengthen the reliability and protection of industrial process systems. |
| 16:30 | Deep Learning-Based Surface Defect Detection for Railway Track Safety PRESENTER: Roshni Mustafa ABSTRACT. Defects on the surface of railway tracks pose risks to safety and require quick and reliable identification and classification beyond what traditional human inspections can provide. This study presents an instance segmentation framework based on YOLOv12 for the detection and classification of crack, flaking, shelling, spalling, and joint defects. The process included the preprocessing, annotation, and augmentation of a size-restricted, high-resolution dataset collected at the NCRA MUET site, addressing class imbalance across the defect classes. YOLOv12 performed well overall, achieving an mAP@0.5 of 0.976 with stable precision–recall behavior across all defect classes except joint defects. The presented system provides an accurate, real-time, and scalable solution for automated inspection of railway track defects. |