ECAI-2025: 17TH EDITION OF INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE
PROGRAM FOR FRIDAY, JUNE 27TH

09:30-11:00 Session 9: ECAI KEYNOTE LECTURES II
Location: Aula
09:30
Spin Wave Computing: From Spin Waves Interference to Phase Rotation and Beyond
10:10
Wireless Sensor Network for rural area monitoring and precision agriculture
11:00-11:30 Coffee Break
11:30-14:00 Session 10A: International Workshop on Technology and Materials Engineering
Location: Room 1
11:30
Elaboration of ZnO nanoparticles using egg white and zinc sulphate

ABSTRACT. The biogenic synthesis of zinc oxide refers to the process by which living organisms are used in its preparation. Zinc oxide is one of the most researched oxides because it is used in a wide range of fields, such as skin treatment, the ceramic industry, the food industry, and antibacterial applications. In this research, zinc oxide was prepared with the help of a biological precursor: ovalbumin from free-range chicken eggs. Zinc sulfate of different molar concentrations was also used, and the resulting powders were compared. The samples were subjected to compositional characterization through ATR-FTIR analysis and to molecular absorption characterization using UV-VIS spectroscopy.

11:45
The influence of precursors in the development of zinc oxide nanopowder using ovalbumin from chicken eggs raised in poultry farms

ABSTRACT. Zinc oxide nanopowders can be obtained by different synthesis methods, such as chemical, physical, or mechanical synthesis. This research develops the production of zinc oxide by green synthesis, whose novelty lies in the use of natural precursors in the production of oxides. Thus, the research consists of the biogenic elaboration of zinc oxide using ovalbumin from chicken eggs raised in poultry farms as a precursor. The zinc oxide powders are characterized structurally and morphologically by SEM-EDS and XRD.

12:00
NiO nanostructures synthesis and characterization for functional antibacterial applications
PRESENTER: Daniela Istrate

ABSTRACT. Nickel oxide (NiO) nanostructures exhibit remarkable physicochemical and antimicrobial properties, making them highly relevant for biomedical and environmental applications. This study presents a comparative analysis of various synthesis methods for NiO nanoparticles, including chemical reduction, sol-gel, hydrothermal, and solvothermal techniques. Two types of NiO nanoparticles, synthesized via hydrolytic routes with and without urea, were characterized by attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDS). The NiO nanostructures were further tested for their antibacterial performance against Escherichia coli and Enterococcus faecalis reference strains. Results revealed a significant reduction in bacterial colonies, particularly after 3 hours of contact, confirming the antimicrobial potential of the NiO nanostructures.

12:15
Metallic Oxide Pigments Analysis Applied in Painting Techniques

ABSTRACT. This study explores the application of metallic oxide pigments in watercolor painting, focusing on the dispersion behavior and aesthetic performance of pigments such as nickel oxide (NiO), zinc oxide (ZnO), and cobalt oxide (Co₂O₃). Ten different pigment formulations were prepared using varying ratios of demineralized water, surfactant (Tween 80), and binder (Arabic gum), and were applied on watercolor paper to evaluate their visual and technical characteristics. The results highlight the critical role of gum arabic in achieving uniform, well-adhered layers and the importance of maintaining a balanced ratio between pigment, liquid, and dispersing agents. Samples containing both surfactant and binder exhibited improved homogeneity, color intensity, and stability. In contrast, samples lacking binder showed weak dispersion and poor fixation. The study identifies optimal formulations and offers practical recommendations for future research and artistic applications involving metal oxide-based watercolors.

12:30
Development of a Methodology for Evaluating the Stress and Strain Fields in the Elastic Domain through Finite Element Modeling of a Non-Standard Mechanical Loading Test
PRESENTER: Denisa Toma

ABSTRACT. The thin tubes used for cladding fuel elements in Generation IV liquid lead-cooled reactors (LFRs) are currently in the preliminary research stage; the nature of the material, the thermomechanical stress conditions, and the material parameters are still unknown. However, efforts are being made to predict their behavior under appropriate mechanical stresses, most often using the finite element method (FEM). Since Generation IV materials are not yet available, studies currently consider the materials and structural components of existing reactors; the exercise is useful for establishing a test matrix, test conditions, methodologies for evaluating material properties, and the behavior of future tubes for Generation IV molten lead-cooled fuel elements. In this regard, modeling non-standardized mechanical tests performed on thin-walled tubes used in fuel elements with the ANSYS code is a solution for studying the behavior of the material. This modeling of non-standardized mechanical tests is a first for scientific studies in Romania and is necessary in preparing the analyses of the fuel element claddings of Generation IV reactors, once they become available at the Pitești Nuclear Research Institute. In the future, the fuel elements that will be used in the ALFRED demonstrator (Advanced Lead Fast Reactor European Demonstrator) will be considered. This paper presents the stages of modeling, with the ANSYS code, the behavior of thin tubes in non-standardized Ring Tensile Test (RTT) type tests, with Zircaloy-4 (Zy-4), the alloy used in the CANDU (CANadian Deuterium Uranium) fuel element, as the material. The results of the finite element simulations of RTT-type samples with Zy-4 material properties are also presented with respect to the stress and strain fields.

12:45
Study on the Ultrasonic Characterization Method of Liquid Heavy Metals
PRESENTER: Denisa Toma

ABSTRACT. Due to their specific thermo-physical and chemical properties, heavy metals such as lead (Pb) and its alloy, the Pb-Bi (lead-bismuth) eutectic, have been chosen as the main coolants for Generation IV nuclear reactors of the LFR (Lead-Cooled Fast Reactor) series. For operational safety reasons, it is necessary to monitor the capacity of structural materials to withstand the intended operating conditions, the configuration of the fuel core, and the flow of liquid metal in the reactor circuit throughout the entire operating period. Given the opacity of the liquid metal environment at high temperatures, ultrasonic waves are the only viable physical method for obtaining internal information during reactor operation. The practical implementation of ultrasound technology under such extreme conditions is not yet fully validated and requires extensive experimental investigations. For this purpose, dedicated test facilities, specialized equipment, and measurement methods adapted to the liquid metal environment are required. In this regard, within the Institute for Nuclear Research (ICN) Pitesti, activities are being carried out to develop the experimental infrastructure for ultrasonic measurements aimed at determining the acoustic parameters of liquid lead. The paper presents general aspects of ultrasonic technology, the equipment used, the signal processing means, and preliminary results for the verification of the test system for ultrasonic measurements in the molten lead environment.

13:00
Valorization of Biogenic Waste Shells: Mytilus edulis and Rapana venosa for Controlled Elaboration of Calcium Carbonate

ABSTRACT. This study investigates the elaboration and characterization of calcium carbonate (CaCO₃) from two biogenic sources collected from the Black Sea coast: Mytilus edulis and Rapana venosa shells. The shells were cleaned, dried, ground and thermally treated at 900°C for 2 hours, and analyzed using XRF, SEM, EDS and FTIR techniques. XRF results indicated a high CaCO₃ content in both samples, with CaCO₃ purity of 98.2% for Mytilus edulis and 97.4% for Rapana venosa and lower levels of MgCO₃ and metal oxides. SEM analysis revealed that CaCO₃ derived from Mytilus edulis exhibited a rough and aggregated morphology, while Rapana venosa resulted in more ordered, granular structures, suggesting higher crystallinity. EDS spectra confirmed a Ca:O:C ratio closer to theoretical values for Rapana venosa, which also exhibited a lower carbon content (3.6%) compared to Mytilus edulis (6.3%), indicating more efficient thermal decomposition. The study proposes an optimized process for producing fine, morphologically controlled CaCO₃ from marine waste shells, offering a comparative approach for sustainable raw material valorization with potential applications in environmental, pharmaceutical and industrial domains.

11:30-14:00 Session 10B: Artificial Intelligence and expert systems II
Location: Room 2
11:30
Creating an Artificial Evolutionary Intelligence Using a Graphic Engine

ABSTRACT. Every living thing, animal or plant, develops depending on its environment and its interaction with it. This article describes the development of an evolutionary system for simulating scenarios, whose main objective is to determine whether an individual is adaptable to its ecosystem. In general, such simulations have two parts, an evolutionary part and a genetic part, but for a better understanding we chose to focus only on the evolutionary part. This yields an artificial intelligence that, based on the surrounding ecosystem, evolves to achieve a final goal. Such a system is important because it can be added as a distinctive feature in the computerized simulation of living beings or real ecosystems.

11:45
Comparative Study of Deep Learning Models for Traffic Forecasting in V2V Communication in VANETs
PRESENTER: Meryem Hanine

ABSTRACT. Vehicular Ad Hoc Networks (VANETs) are one of the most significant enablers of intelligent transportation systems, facilitating vehicle-to-vehicle (V2V) communication. VANETs play a vital role in road safety improvement, traffic optimization, and overall transportation efficiency. In this paper, we exhaustively compare various deep learning algorithms applied to V2V communication for traffic prediction. We focus on evaluating the performance, precision, and computational complexity of such algorithms for different traffic scenarios, e.g., urban congestion, highway driving, and mixed driving patterns. Specifically, we investigate deep learning-based models such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRUs), which perform significantly well in discovering temporal patterns and dependencies within traffic data. The study identifies the strengths and weaknesses of each model with regard to its applicability in real-time traffic forecasting tasks. By presenting findings about the usability of such algorithms, the paper aims to contribute to building more reliable and effective traffic control systems for VANETs.

12:00
Fuzzy Logic based Collision Avoidance for Autonomous Surface Vehicle

ABSTRACT. Traditional control methods often struggle to handle uncertainty, particularly for Autonomous Surface Vehicles (ASVs) operating in dynamic environments. To address this problem, our fuzzy control system dynamically adjusts both steering and speed in response to real-time measurements of obstacle distance and relative bearing angle, thereby significantly enhancing the system's adaptability and responsiveness to changing environmental conditions. A Case-Based Reasoning (CBR) fuzzy inference system is implemented to receive sonar-based range and relative angle data of obstacles, enabling robust obstacle identification and distance assessment. Experimental results demonstrate that the proposed fuzzy control system effectively manages uncertainty, proving suitable for complex scenarios such as autonomous navigation and industrial automation. The system ensures stable ASV operation and reliable obstacle avoidance. Furthermore, the fuzzy controller demonstrates advanced navigational intelligence and improved equipment protection, validating its applicability in maritime and fluvial operations, including environmental monitoring and offshore inspection, where reliable collision avoidance is essential.

12:15
Handling imbalanced data: the SMOTE technique
PRESENTER: Calin Sandu

ABSTRACT. In machine learning projects, the quality and structure of data play a critical role in determining model performance. One common challenge in real-world datasets is class imbalance, where one class significantly outnumbers the others. This imbalance can lead to biased models that perform well on the majority class but poorly on the minority class, resulting in misleading accuracy and limited generalization. A widely adopted solution to this problem is SMOTE (Synthetic Minority Over-sampling Technique), which generates synthetic samples for the minority class to help balance the dataset. This paper explores how SMOTE works, its advantages over traditional oversampling methods, and its impact on improving model performance in imbalanced classification tasks. A practical, step-by-step implementation is also presented to illustrate how SMOTE can be applied to a real-world imbalanced dataset, making this paper a useful guide for practitioners and researchers seeking to understand and use the technique effectively.
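
As a companion to the description above, here is a minimal sketch of applying SMOTE with imbalanced-learn on a synthetic imbalanced dataset; the dataset, class ratio, and classifier are illustrative assumptions rather than the paper's setup.

```python
# Illustrative sketch (not the paper's exact setup): oversampling a synthetic
# imbalanced dataset with SMOTE before training a classifier.
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# 1. Create an imbalanced binary dataset (95% majority / 5% minority).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=42)
print("before SMOTE:", Counter(y_train))

# 2. Generate synthetic minority samples on the training split only,
#    so the test set keeps the original class distribution.
X_res, y_res = SMOTE(k_neighbors=5, random_state=42).fit_resample(X_train, y_train)
print("after SMOTE: ", Counter(y_res))

# 3. Train and evaluate on the untouched test split.
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```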

12:30
Development of an Artificial Intelligence-Enhanced Arduino-Based Photovoltaic Tracking System for Optimized Energy Efficiency

ABSTRACT. This article presents the development of an automated photovoltaic tracking system using Arduino to optimize solar energy capture. The system adjusts panel orientation based on light intensity, using LDR sensors, SG90 servomotors, and PID algorithms for precise and dynamic positioning. The design relies on a reliable hardware and software setup with a sensor shield and simplified wiring, and features a redundant control system with manual intervention, ensuring continuous operation in the event of faults. The system quickly adapts to atmospheric changes, thereby maximizing efficiency. Future expansion through fuzzy logic-based artificial intelligence is proposed to enhance performance under complex weather conditions, increasing long-term durability and energy output.

12:45
Digital Decoder from 16-Bit Binary Code to 7-Segment Quad LED Display

ABSTRACT. All digital systems handle information using bits grouped into words, and most of these words have the meaning of numbers. The users of digital systems can easily follow four bits at a time with the naked eye using four LEDs, but this becomes much harder with a larger number of bits. This is where 4-bit-to-7-segment display converters become useful, as each group of four bits ("tetrad") is converted to a much easier-to-read hexadecimal digit. For longer words, each tetrad needs one such device. This paper presents a digital decoder for 16-bit binary words to four hexadecimal digits, using an EEPROM, a quad-multiplexed 7-segment LED display, and two general-purpose digital integrated circuits.
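
To make the lookup idea concrete, the sketch below builds a hypothetical EEPROM table mapping each 4-bit tetrad to a common-cathode gfedcba 7-segment pattern; the segment encoding and output file name are assumptions for illustration, not the paper's actual memory map.

```python
# Illustrative sketch: build a 16-entry lookup table that maps a 4-bit tetrad
# (one hex digit) to a common-cathode gfedcba 7-segment pattern, then dump it
# as a binary image that could be burned into an EEPROM. The encoding and the
# output file name are assumptions, not the paper's actual memory map.
SEGMENTS = [  # bit order: 0b0gfedcba
    0x3F, 0x06, 0x5B, 0x4F,  # 0 1 2 3
    0x66, 0x6D, 0x7D, 0x07,  # 4 5 6 7
    0x7F, 0x6F, 0x77, 0x7C,  # 8 9 A b
    0x39, 0x5E, 0x79, 0x71,  # C d E F
]

def word_to_patterns(word16: int) -> list[int]:
    """Split a 16-bit word into four tetrads and return one pattern per digit."""
    return [SEGMENTS[(word16 >> shift) & 0xF] for shift in (12, 8, 4, 0)]

if __name__ == "__main__":
    # Example: 0xBEEF -> patterns for 'b', 'E', 'E', 'F'.
    print([hex(p) for p in word_to_patterns(0xBEEF)])
    # Dump the 16-byte table as an EEPROM image (hypothetical file name).
    with open("hex_to_7seg.bin", "wb") as f:
        f.write(bytes(SEGMENTS))
```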

13:00
Secure Embedding of Sensitive Identity Data in Surveillance Videos using Steganography

ABSTRACT. As more and more surveillance systems integrate biometric recognition and real-time tracking, the secure handling of sensitive identity metadata has become essential for ensuring privacy, regulatory compliance, and data integrity. Conventional approaches rely on external logs or visible overlays to associate metadata with video, but these methods come with significant challenges, such as tampering, loss of synchronization, and privacy violations. Advances in steganography have enabled robust techniques for concealing information within multimedia, particularly in video, which offers high capacity, temporal redundancy, and resilience to detection. Transform-domain methods like Discrete Cosine Transform (DCT) embedding are suitable for video streams due to their compatibility with compression standards and resistance to distortion. In this paper, we propose a real-time system for securely embedding facial identity metadata directly into surveillance video frames using a DCT-based steganographic method. The architecture integrates face detection and tracking with parallelized embedding and lossless encoding, enabling invisible and codec-resilient metadata insertion. This approach ensures that sensitive information remains tightly bound to the video stream while maintaining imperceptibility and playback compatibility.
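
A heavily simplified sketch of the transform-domain idea: hiding one metadata bit in a mid-frequency DCT coefficient of an 8x8 block via parity (quantization-index) modulation. The coefficient position and step size are assumptions; the paper's synchronization, tracking, and codec-resilience logic are not reproduced.

```python
# Illustrative DCT-domain embedding of a single bit into one 8x8 block.
# Coefficient position (4, 3) and step size are arbitrary assumptions.
import numpy as np
import cv2

STEP = 24.0          # quantization step for the chosen coefficient
POS = (4, 3)         # a mid-frequency coefficient (row, col)

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Hide one bit in an 8x8 uint8 block via quantization-index modulation."""
    coeffs = cv2.dct(block.astype(np.float32))
    q = np.round(coeffs[POS] / STEP)
    if int(q) % 2 != bit:          # force coefficient parity to encode the bit
        q += 1
    coeffs[POS] = q * STEP
    return np.clip(np.round(cv2.idct(coeffs)), 0, 255).astype(np.uint8)

def extract_bit(block: np.ndarray) -> int:
    coeffs = cv2.dct(block.astype(np.float32))
    return int(np.round(coeffs[POS] / STEP)) % 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8), dtype=np.uint8)
    stego = embed_bit(block, 1)
    print("recovered bit:", extract_bit(stego))
```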

13:15
An approach to implementing a 3D Stealth Game in Unreal Engine
PRESENTER: Rotaru Adrian

ABSTRACT. In this paper, we present the design and implementation of a video game developed in Unreal Engine. The game contains a variety of challenges to stimulate the user's imagination. The action takes place in a technological, dark, and cold world. The player can control the avatar, which has access to a wide range of advanced combat and movement abilities. The main character also fights against artificial intelligence implemented as enemy troops populating the world. The user must acquire knowledge throughout the game about the geometric complexity of each level to solve the mission efficiently. Various weapons can be found and equipped in the user's arsenal. The implemented weapons have the functionality of shooting bullets, simulating recoil. At the end of each level, after overcoming all the challenges, the player can interact with the door at the end of the room to teleport to the next level. As levels progress, the game's difficulty and the geometric complexity of the space become increasingly harder to navigate.

11:30-14:00 Session 10C: Communications & IWSSS
Location: Room 3
11:30
Evaluation of Service Assurance in MPLS Networks for Medical Image Transfer

ABSTRACT. This paper presents an updated performance analysis of MPLS (Multiprotocol Label Switching) networks used for the transmission of medical images. A new configuration for the MPLS model is proposed and evaluated using the OpenSimMPLS simulation platform, with results compared against a previous model. The focus is on guaranteeing a level of service (GoS) through improved packet transmission rates, reduced delay and jitter, and optimized network behaviour under medical imaging traffic. The results demonstrate that the new configuration offers higher performance and reliability in handling medical image traffic.

11:45
An Analysis of Genetic Algorithms in Cryptography

ABSTRACT. Pseudo-random number generators are essential in various fields of computer science, such as cryptography, numerical simulations, gaming, and algorithm testing. This article explores the use of genetic algorithms—heuristic methods based on the principles of natural evolution—to optimize pseudo-random number generators. The main approaches presented include optimizing generator parameters, identifying complex structures to improve unpredictability, and periodically assessing the uniformity of the generated numbers through specific fitness functions. The results indicate a significant improvement in the quality of generated numbers, thus providing efficient solutions to challenges in cybersecurity and numerical simulations.
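
To illustrate the general approach (not the paper's specific method), here is a toy genetic algorithm that tunes the multiplier and increment of a linear congruential generator, using a chi-square uniformity statistic as the fitness function.

```python
# Toy genetic algorithm that searches LCG parameters (a, c) for modulus 2**16,
# scoring candidates by a chi-square uniformity test on their output.
# This is an illustration of the general approach, not the paper's algorithm.
import random

M = 2 ** 16          # LCG modulus
N_SAMPLES = 4096     # numbers generated per fitness evaluation
BINS = 16

def lcg_stream(a: int, c: int, seed: int = 1, n: int = N_SAMPLES):
    x = seed
    for _ in range(n):
        x = (a * x + c) % M
        yield x

def fitness(individual: tuple[int, int]) -> float:
    """Lower chi-square => closer to uniform => better (return negative)."""
    a, c = individual
    counts = [0] * BINS
    for x in lcg_stream(a, c):
        counts[x * BINS // M] += 1
    expected = N_SAMPLES / BINS
    chi2 = sum((obs - expected) ** 2 / expected for obs in counts)
    return -chi2

def evolve(pop_size=40, generations=60, mut_rate=0.3):
    pop = [(random.randrange(1, M, 2), random.randrange(M)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            (a1, c1), (a2, c2) = random.sample(survivors, 2)
            a, c = a1, c2                      # one-point crossover on the genes
            if random.random() < mut_rate:     # mutate by flipping a random bit
                a ^= 1 << random.randrange(16)
            if random.random() < mut_rate:
                c ^= 1 << random.randrange(16)
            children.append((a | 1, c))        # keep the multiplier odd
        pop = survivors + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best (a, c):", best, "chi-square:", -fitness(best))
```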

12:00
An Overview of Pseudorandom Number Generators With Bit Rotations

ABSTRACT. This article provides a detailed analysis of pseudorandom number generators (PRNGs) that employ bit rotation operations. It examines various generators such as RANROT, PCG, Xoroshiro128+, and RomuTrio, highlighting their principles, operational specifics, and performance characteristics. The advantages of bit rotations, including enhanced computational efficiency and superior statistical distribution, are discussed alongside potential implementation drawbacks. The comparative insights offered underline the significance of bit rotations in the advancement of efficient and statistically robust PRNGs suitable for simulation and cryptographic applications.
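
As a concrete illustration of the rotation primitive these generators share, the toy 64-bit rotate-xor generator below shows how a left rotation is typically combined with XORs and a multiplication; it is deliberately generic and is not an implementation of RANROT, PCG, Xoroshiro128+, or RomuTrio.

```python
# Toy rotate-xor generator illustrating how bit rotations are used in PRNGs.
# Deliberately generic: NOT an implementation of RANROT, PCG, Xoroshiro128+,
# or RomuTrio, whose exact constants and state layouts differ.
MASK64 = (1 << 64) - 1

def rotl64(x: int, k: int) -> int:
    """Rotate a 64-bit word left by k bits."""
    return ((x << k) | (x >> (64 - k))) & MASK64

class RotateXorPRNG:
    def __init__(self, seed: int):
        self.state = (seed | 1) & MASK64   # avoid the all-zero state

    def next(self) -> int:
        s = self.state
        s ^= rotl64(s, 13)                 # mix with rotated copies of itself
        s ^= rotl64(s, 41)
        s = (s * 0x9E3779B97F4A7C15) & MASK64   # multiply by a 64-bit odd constant
        self.state = s
        return rotl64(s, 27)               # output a rotated view of the state

if __name__ == "__main__":
    rng = RotateXorPRNG(seed=2025)
    print([hex(rng.next()) for _ in range(4)])
```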

12:15
Evaluation of IoT Network Security Against Botnet Attacks Through Simulation in NetSim

ABSTRACT. This study investigates the performance degradation of IoT networks under distributed denial-of-service (DDoS) attacks, using the NetSim simulator to model botnet-based threats. Four scenarios were simulated: one without attackers and three with 1–3 malicious nodes launching UDP flooding. The results show that the throughput dropped by up to 67% and the latency increased by over 350% as the number of attackers rose. These findings highlight the vulnerability of IoT networks in the absence of protection mechanisms and underscore the importance of security-by-design principles. Future research will integrate detection and mitigation algorithms into the simulated environment.

12:30
Coffee maker prototype based on automotive sensory and communication systems

ABSTRACT. This paper presents a prototype coffee maker system that can be integrated inside a vehicle. The prototype is based on the existing monitoring and control system within a vehicle, which receives information from electronic control units (ECUs). The transmission of information between ECUs is based on the various communication protocols present in the automotive industry (CAN, LIN, FlexRay, and Ethernet). The actual implementation of the program underlying the operation of the prototype is described in this paper based on the functions within the ECUs. In order to validate the proposed system, simulations of the various types of signals existing on the communication buses were carried out. These signals describe the state of the ignition, the percentage of battery charge, the presence of the driver in the passenger compartment, the status of the vehicle, and the proposed prototype system.

12:45
Configuring KNN-based Receiver for Machine Learning-assisted Secure Random Communication System under Gaussian Environment
PRESENTER: Areeb Ahmed

ABSTRACT. Physical Layer Security (PLS) is considered the boundary wall of modern communication systems. In this study, we investigated the possibility of ensuring PLS through unconventional random communication systems by incorporating a supervised machine learning algorithm. The proposed machine learning-assisted secure random communication system (ML-RCS) exploits skewed alpha-stable (α-stable) noise signals as random carriers for the secure transmission of binary digits. At the authorized reception side, we utilize a KNN-based receiver, which has been pretrained on a private α-stable noise dataset (Private-αSND) consisting of privately chosen combinations of the parameters required to encrypt and decrypt the transmitted α-stable noise signals. Along with the secure key-based dataset, a static key (the pulse length) is also needed for exact decryption of the hidden binary digits. We evaluated the approach using bit error rate (BER) plots for all parameter combinations present in Private-αSND by performing Monte Carlo simulations. With this approach, we practically evaluated the behavior of ML-RCS under an additive white Gaussian noise (AWGN) channel. The results show that ML-RCS is resistant to eavesdropping under all AWGN channel environments, which makes it a prospective candidate for establishing secure communication by unconventional means.
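
A heavily simplified sketch of the reception idea: α-stable noise pulses whose skewness encodes the bit are classified by a KNN model trained on simple per-pulse statistics. The parameters, features, and pulse length are illustrative assumptions, not the Private-αSND configuration used in the paper.

```python
# Simplified sketch of a KNN-based receiver for a noise-carrier scheme:
# bit 0 and bit 1 are carried by alpha-stable noise pulses with different
# skewness (beta). Parameters and features are illustrative assumptions.
import numpy as np
from scipy.stats import levy_stable
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

ALPHA = 1.5
BETAS = {0: -0.8, 1: +0.8}       # skewness encodes the bit (assumed mapping)
PULSE_LEN = 256                  # samples per transmitted bit (the "static key")

def make_pulse(bit: int, noise_sigma: float, rng: np.random.Generator) -> np.ndarray:
    carrier = levy_stable.rvs(ALPHA, BETAS[bit], size=PULSE_LEN, random_state=rng)
    return carrier + rng.normal(0.0, noise_sigma, PULSE_LEN)   # AWGN channel

def features(pulse: np.ndarray) -> np.ndarray:
    # Robust per-pulse statistics (alpha-stable noise has heavy tails).
    return np.array([np.median(pulse), np.mean(np.sign(pulse)),
                     np.percentile(pulse, 75) + np.percentile(pulse, 25)])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    bits = rng.integers(0, 2, 2000)
    X = np.vstack([features(make_pulse(b, noise_sigma=1.0, rng=rng)) for b in bits])
    X_tr, X_te, y_tr, y_te = train_test_split(X, bits, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=7).fit(X_tr, y_tr)
    print("BER:", np.mean(knn.predict(X_te) != y_te))
```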

13:00
The Method of Cybersecurity Audit in Public Administration and Business

ABSTRACT. The current geopolitical situation in the world and the development of information and communication technologies (ICT) have caused significant changes in the functioning of public administration and business in the European Union. Currently, the digital economy is driven by modern information technologies, which offer new tools for effective operation. ICT technologies also affect the everyday life of citizens. The KRI introduced in 2012, the General Data Protection Regulation (GDPR), and NIS2 in 2023 take into account the development of technology and globalization, sensibly regulating aspects related to information and IT (Information Technology) security. Unfortunately, the geopolitical situation in the world, and in particular the situation in Europe, causes new cyber threats to appear. In this situation, existing regulations as well as existing tools and methods in the area of IT security do not always keep up with these dynamic changes. The aim of this study is to present an original method of conducting a cybersecurity audit in selected public administration institutions and businesses. The first part of the article presents the current situation regarding IT security and legal regulations in Poland. The next part presents an analysis of the literature on this topic. The last part of the work focuses on presenting the cybersecurity audit method and analyzing selected case studies. The study includes recommendations resulting from the activities carried out. The research, analysis, and tests conducted have shown that the best results are obtained when using advanced tools for penetration testing and vulnerability testing. Further research can be aimed at verifying the developed method in other conditions, especially in the area of the technical tools used.

13:15
Commitment Schemes for Multi-Party Computation
PRESENTER: Ioan Ionescu

ABSTRACT. The paper presents an analysis of Commitment Schemes (CSs) used in Multi-Party Computation (MPC) protocols. While the individual properties of CSs and the guarantees offered by MPC have been widely studied in isolation, their interrelation in concrete protocols and applications remains mostly underexplored. This paper presents the relation between the two, with an emphasis on (security) properties and their impact on the upper layer MPC. In particular, we investigate how different types of CSs contribute to various MPC constructions and their relation to real-life applications of MPC. The paper can also serve as a tutorial for understanding the cryptographic interplay between CS and MPC, making it accessible to both researchers and practitioners. Our findings emphasize the importance of carefully selecting CS to meet the adversarial and functional requirements of MPC, thereby aiming for more robust and privacy-preserving cryptographic applications.
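
For readers new to the primitive, here is a minimal sketch of a simple hash-based commitment (hiding via a random nonce, binding via the collision resistance of SHA-256); it illustrates the commitment scheme alone, not any specific MPC protocol discussed in the paper.

```python
# Minimal hash-based commitment scheme: commit = SHA-256(nonce || message).
# Hiding comes from the random nonce, binding from collision resistance.
# This illustrates the primitive only, not a full MPC protocol.
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening). The opening is the nonce."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + message).digest()
    return digest, nonce

def verify(commitment: bytes, message: bytes, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + message).digest() == commitment

if __name__ == "__main__":
    # Party A commits to a bid without revealing it...
    c, opening = commit(b"bid=42")
    # ...and later opens the commitment so other parties can check it.
    print(verify(c, b"bid=42", opening))   # True
    print(verify(c, b"bid=99", opening))   # False
```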

11:30-14:00 Session 10D: Software, databases, and computer applications
Location: Room 4
11:30
Optimizing the Travel Planning Process based on Personalization and Efficiency using Artificial Intelligence and Political Sciences
PRESENTER: Ion Bostan

ABSTRACT. Travel planning is a complex process that requires consideration of many factors such as cost, comfort level, user preferences, the geopolitical situation, and available locations. Artificial Intelligence (AI) and political sciences offer significant opportunities to transform this process, reducing the time required for planning and improving the user experience through personalized and efficient recommendations. This paper explores the use of an AI-based model for optimizing the selection of hotels, activities, and restaurants, taking into account user preferences and constraints, as well as the interdisciplinary combination with political sciences, which contributes substantially to the planning process. The proposed solution uses modern technologies, such as TensorFlow.js, Node.js, and React, to develop a scalable and user-friendly application. The model, trained on real data, analyzes and ranks the available options, providing users with the most appropriate recommendations. The results highlight the satisfactory performance of the model, with high accuracy in identifying the optimal locations, even on complex datasets. This study highlights the potential of AI, combined with the geopolitical situation in the area, to revolutionize the tourism industry, contributing to a simpler and more efficient planning experience for users.

11:45
Innovations in Energy Sustainability: An Ecological Approach to Refrigeration

ABSTRACT. This paper addresses the pressing ecological and technological challenges associated with conventional refrigeration systems by proposing an integrated, sustainable approach centered on magnetic refrigeration. The study identifies key issues such as high energy consumption, the use of harmful refrigerants, and inefficient end-of-life practices. The research aims to develop an innovative solution that combines eco-design principles, recyclable materials, and circular economy strategies to enhance sustainability across the product life cycle. The originality of this work lies in the experimental implementation and comparative evaluation of a magnetic refrigeration system versus a traditional compression-based refrigerator. Results demonstrate significant improvements in energy efficiency (up to 40%), noise reduction, and material recyclability. The study concludes that magnetic refrigeration represents a viable and environmentally responsible alternative for future cooling technologies, with broad applicability in residential and commercial appliances.

12:00
Efficient Database Management System for Organizing Activity using Android Technology
PRESENTER: Andreea Popa

ABSTRACT. In the context of rapid technological evolution and the continuous rise in popularity of mobile devices, Android applications have become indispensable tools for users of all types. This research aims to demonstrate the benefits of integrating database management systems with mobile platforms to create an efficient and organized work environment. This paper proposes a practical solution for data management on mobile devices while exploring the advantages and challenges associated with using the latest Android development technologies. By developing this application, we aim not only to solve practical problems but also to contribute to the specialized literature by providing a detailed and well documented case study. The results of this project are intended to be useful both for IT professionals and for organizations seeking efficient data management solutions.

12:15
Domain modeling using template metaprogramming

ABSTRACT. Template metaprogramming (TMP) offers a powerful computation and code generation mechanism. This paper explores its application to domain modeling, a critical aspect of software development. By leveraging TMP, we propose a novel approach to constructing intricate domain models using known techniques. To begin, we compare the built-in memory Abstract Syntax Tree (AST) at compile time with the memory footprint version of the same algorithm (BTree) at runtime. This analysis is carried out using the visualization tool from Microsoft Visual C++ alongside a dedicated memory tracking library (MemTracker). Following this initial optimization that saves 40 bytes of allocated memory, we implement further runtime optimizations that concentrate on trivalent logic through the use of specialized templates and recursion via variadic templates. Our methodology relies on standard C++ techniques, such as recursive templates, Pair structures, and metatypes, for modeling simple data structures like binary trees. It encapsulates domain concepts, relationships, and invariants within template metaprograms, enhancing type safety, performance, and code clarity. The approach relies heavily on popular and personal GitHub repositories, showcasing how TMP can streamline domain modeling while improving code quality and maintainability.

12:30
Communication Protocols in Embedded Systems for Automotive Applications: Comparative Analysis and Implementation through Virtual Instruments

ABSTRACT. This paper presents a comparative analysis of key communication protocols used in embedded systems within the automotive industry, with a focus on both wired and wireless technologies. The study explores the architecture, data rate, reliability, and application domains of protocols such as CAN (Controller Area Network), LIN (Local Interconnect Network), FlexRay, MOST (Media Oriented Systems Transport), Modbus (TCP/IP (Transmission Control Protocol/Internet Protocol)), Ethernet, I2C (Inter-Integrated Circuit), UART (Universal Asynchronous Receiver-Transmitter), Bluetooth, and Wi-Fi. These protocols are evaluated based on their suitability for various vehicle subsystems including powertrain, infotainment, diagnostics, and sensor networks. In addition to the theoretical review, the paper includes two practical implementations developed using virtual instrumentation tools: a Modbus TCP/IP-based monitoring system in LabVIEW and an I2C/UART-based sensor interface in MATLAB. These demonstrators illustrate real-time data acquisition and visualization techniques relevant to embedded automotive communication. The integration of both classic and modern protocols provides insight into the current and emerging trends in vehicle networking.

12:45
MBISort Algorithm: A Novel Hybrid Sorting Approach for Efficient Data Processing

ABSTRACT. A novel and efficient hybrid sorting algorithm, termed the Merge-Block-Insertion sort (MBISort) algorithm, is proposed. MBISort combines the principles of insertion sort, block sort, and merge sort into an in-place procedure that exhibits markedly improved average-case performance compared to standalone block sort and adaptive merge sort. Comparative analyses on structured datasets, such as sorted and partially sorted arrays, demonstrate that MBISort achieves faster execution times over a broad range of input sizes (from 100 to 1,000,000 elements). On average, performance improvements of 20% over adaptive merge sort and 41% over block sort are observed, highlighting its robust efficiency across diverse data types and distributions. The algorithm also performs exceptionally well for large datasets with high degrees of order, a result of the dynamic integration of insertion sort with an adaptive merging strategy. Additionally, a tunable threshold parameter allows MBISort to adapt to varying data distributions and optimize performance.
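
Since the exact MBISort procedure is not reproduced here, the sketch below only shows the hybrid pattern it builds on: merge sort that switches to insertion sort below a tunable threshold and skips merging when the halves are already ordered. It is a generic illustration, not the authors' algorithm.

```python
# Generic hybrid sort: merge sort that falls back to insertion sort for small
# or nearly sorted runs, with a tunable threshold. This illustrates the hybrid
# principle only; it is NOT the authors' MBISort procedure.
THRESHOLD = 32   # tunable cut-off below which insertion sort is used

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place; very fast on short or nearly sorted slices."""
    for i in range(lo + 1, hi):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def sorted_merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def hybrid_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if hi - lo <= THRESHOLD:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_sort(a, lo, mid)
    hybrid_sort(a, mid, hi)
    if a[mid - 1] <= a[mid]:      # adaptive shortcut: halves already in order
        return
    a[lo:hi] = sorted_merge(a[lo:mid], a[mid:hi])

if __name__ == "__main__":
    import random
    data = random.sample(range(100000), 10000)
    hybrid_sort(data)
    print(data == sorted(data))   # True
```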

11:30-14:00 Session 10E: E-Session V
Location: E-session 1
11:30
Evaluation of Machine Learning Algorithms for Predicting Cybersecurity Incidents

ABSTRACT. This research examines the utilization of machine learning to proactively forecast cybersecurity problems, hence reducing the likelihood of substantial system interruptions. Utilizing the public GUIDE dataset, we intend to create a machine learning model that can analyze historical event data and discern trends that signal emerging hazards. The model will be developed to enhance current security measures by delivering early alerts of possible problems, facilitating prompt intervention and averting system failures. This research's conclusions aim to improve cybersecurity measures and mitigate the financial and operational consequences of intrusions.

11:45
Multi-Stream Head Pose Estimation Algorithm Based on Enhanced Feature Extraction
PRESENTER: Zihan Liu

ABSTRACT. To address the poor real-time performance and low detection accuracy of existing head pose estimation algorithms in complex scenes, a multi-stream head pose estimation algorithm based on enhanced feature extraction (FEEM-Net) is proposed. First, a three-branch parallel structure is designed with different activation functions and pooling methods to enhance the diversity of feature expression at the same level through multi-channel feature extraction. Second, the CBAM convolutional attention module is introduced after the pooling layer to focus on the head region features, using the channel and spatial attention mechanisms to effectively suppress background interference. Finally, a bottleneck residual module based on asymmetric convolution is proposed to enhance the modeling of multi-scale and multi-directional information and to improve information flow through residual connections. Experimental results show that the proposed algorithm reduces the MAE to 4.63 and 4.06 on the AFLW2000 and BIWI datasets, respectively.
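
The CBAM module referenced above is a published attention block (Woo et al., 2018); a compact PyTorch rendition is sketched below so the channel and spatial attention steps are explicit. The reduction ratio and kernel size are common defaults assumed here, not values taken from the FEEM-Net paper.

```python
# Compact PyTorch sketch of a standard CBAM block (channel then spatial
# attention). The reduction ratio and 7x7 kernel are common defaults assumed
# here; they are not taken from the FEEM-Net paper.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                     # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: MLP over average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: 7x7 conv over channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(CBAM(64)(feats).shape)   # torch.Size([2, 64, 32, 32])
```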

12:00
KS-LSTM: Improved Obesity Weight Prediction Accuracy based on Kalman Smoothing LSTM
PRESENTER: Andri Pranolo

ABSTRACT. Weight prediction is one of the important aspects in health risk modeling and the development of artificial intelligence-based health monitoring systems. Long Short-Term Memory (LSTM) models are known to be effective in processing sequential data but are often susceptible to high data fluctuations and noise. This study proposes a Kalman Smoothed LSTM (KS-LSTM) approach by applying Kalman Smoothing to the input data before training the LSTM model to reduce irrelevant variability and improve prediction accuracy. Experiments were conducted on a body weight dataset with evaluation based on RMSE, MAPE, and R² metrics. Results show that KS-LSTM performs better on the MAPE metric (0.9430 vs. 1.3576), indicating an edge in proportional accuracy. Although the LSTM showed a slight edge on RMSE and R², the overall results support the use of Kalman Smoothing as a preprocessing step to improve the stability and reliability of the weight prediction model.
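
To make the preprocessing step concrete, here is a minimal scalar Kalman filter with Rauch-Tung-Striebel smoothing that could be applied to a weight series before it is windowed for the LSTM; the random-walk model and the noise variances are illustrative assumptions, not the paper's settings.

```python
# Minimal scalar Kalman filter + RTS smoother used as a preprocessing step on a
# 1-D weight series. The random-walk model and the noise variances q and r are
# illustrative assumptions, not the KS-LSTM paper's settings.
import numpy as np

def kalman_smooth(y: np.ndarray, q: float = 1e-3, r: float = 0.5) -> np.ndarray:
    n = len(y)
    m_f = np.zeros(n); p_f = np.zeros(n)          # filtered mean / variance
    m_p = np.zeros(n); p_p = np.zeros(n)          # predicted mean / variance
    m_f[0], p_f[0] = y[0], 1.0
    for t in range(1, n):                         # forward (filter) pass
        m_p[t], p_p[t] = m_f[t - 1], p_f[t - 1] + q
        k = p_p[t] / (p_p[t] + r)                 # Kalman gain
        m_f[t] = m_p[t] + k * (y[t] - m_p[t])
        p_f[t] = (1.0 - k) * p_p[t]
    m_s = m_f.copy(); p_s = p_f.copy()            # backward (RTS smoothing) pass
    for t in range(n - 2, -1, -1):
        c = p_f[t] / p_p[t + 1]
        m_s[t] = m_f[t] + c * (m_s[t + 1] - m_p[t + 1])
        p_s[t] = p_f[t] + c ** 2 * (p_s[t + 1] - p_p[t + 1])
    return m_s

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_weight = 80 + np.cumsum(rng.normal(0, 0.05, 200))    # slow drift
    observed = true_weight + rng.normal(0, 0.7, 200)          # noisy scale readings
    smoothed = kalman_smooth(observed)
    print("raw RMSE:     ", np.sqrt(np.mean((observed - true_weight) ** 2)))
    print("smoothed RMSE:", np.sqrt(np.mean((smoothed - true_weight) ** 2)))
```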

12:15
Enhancing LLM-Based Text Compression with Context-Aware Frequency Adaptation
PRESENTER: Rajith Murali

ABSTRACT. This paper introduces a novel approach to text compression that integrates a large language model with a context-aware frequency adaptation mechanism. Traditional large language model-based compression methods rely solely on model predictions to encode text efficiently. The approach extends this method by constructing context-specific frequency tables for each processing window, which are then used to bias the model’s logit distribution before encoding token ranks via arithmetic coding. The method is evaluated on standard compression benchmarks, enwik8 and enwik9 from the Hutter Prize, and on a custom dataset compiled from Project Gutenberg. Using the Llama-3.2-1B model as the base model, alongside the zlib compression library for final encoding, the frequency-scaled approach consistently outperforms both baseline large language model-based compression techniques and traditional compressors. The method achieves compression ratios of 6.4385 compared to 5.8271 for the unscaled large language model approach and 4.0217 for the Lempel–Ziv–Markov chain algorithm on enwik8, 6.7866 versus 6.0632 and 4.6867 on enwik9, and 5.1762 versus 4.8124 and 3.7802 on the Gutenberg dataset. These findings underscore the potential of augmenting large language models with local statistical adaptations to enhance lossless text compression
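
The core biasing step can be pictured roughly as follows: a sliding-window frequency table nudges the model's logits toward locally frequent tokens before token ranks are passed to the entropy coder. The log-count mixing rule and its weight are assumptions made for illustration; the paper's exact adaptation and its arithmetic-coding backend are not reproduced.

```python
# Illustrative sketch of context-aware frequency adaptation: bias a language
# model's logits with counts from a sliding context window before ranking
# tokens for an entropy coder. The log-count mixing rule and the weight LAMBDA
# are assumptions; the paper's exact scaling and its arithmetic coder are not
# reproduced here.
from collections import Counter
import numpy as np

LAMBDA = 0.5        # strength of the local-frequency bias (assumed)
WINDOW = 512        # size of the sliding context window (assumed)

def biased_rank(logits: np.ndarray, window_tokens: list[int], true_token: int) -> int:
    """Return the rank of the true token under frequency-adapted logits.

    Smaller ranks compress better because the entropy coder assigns them
    shorter codes.
    """
    counts = Counter(window_tokens[-WINDOW:])
    bias = np.zeros_like(logits)
    for tok, cnt in counts.items():
        bias[tok] = np.log1p(cnt)            # locally frequent tokens get a boost
    adapted = logits + LAMBDA * bias
    order = np.argsort(-adapted)             # tokens sorted by adapted score
    return int(np.where(order == true_token)[0][0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = 1000
    fake_logits = rng.normal(size=vocab)
    recent = rng.integers(0, vocab, 600).tolist() + [7, 7, 7]   # token 7 is locally hot
    print("rank without bias:", int(np.where(np.argsort(-fake_logits) == 7)[0][0]))
    print("rank with bias:   ", biased_rank(fake_logits, recent, true_token=7))
```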

12:30
Social Media Sentiments Analysis on the July Revolution in Bangladesh: A Hybrid Transformer Based Machine Learning Approach
PRESENTER: Md Sabbir Hossen

ABSTRACT. The July Revolution in Bangladesh marked a significant student-led mass uprising, uniting people across the nation to demand justice, accountability, and systemic reform. Social media platforms played a pivotal role in amplifying public sentiment and shaping discourse during this historic mass uprising. In this study, we present a hybrid transformer-based sentiment analysis framework to decode public opinion expressed in social media comments during and after the revolution. We used a brand-new dataset of 4,200 Bangla comments collected from social media. The framework employs advanced transformer-based feature extraction techniques, including BanglaBERT, mBERT, XLM-RoBERTa, and the proposed hybrid XMB-BERT, to capture nuanced patterns in textual data. Principal Component Analysis (PCA) was utilized for dimensionality reduction to enhance computational efficiency. We explored eleven traditional and advanced machine learning classifiers for identifying sentiments. The proposed hybrid XMB-BERT with the voting classifier achieved an exceptional accuracy of 83.7% and outperformed other model-classifier combinations. This study underscores the potential of machine learning techniques to analyze social sentiment in low-resource languages like Bangla.

12:45
Human-Machine Proximity and Warning System for Industrial Safety

ABSTRACT. In the early 21st century, the adoption and use of robots and machines in the industrial field has increased significantly due to rapidly developing technology. As a result, critical safety issues have arisen, especially in areas where people and machines work near each other. This research aims to develop an artificial intelligence model that detects people and estimates their distances to machines using computer vision as a solution to this problem. The system, obtained by combining the YOLOv8 deep learning model used for object detection with distance calculation algorithms, ensures safety by continuously examining human-machine interactions. This artificial intelligence-supported detection system can be used in industrial environments such as factories and warehouses to prevent accidents and ensure safety. The system receives as input a video captured by monocular cameras integrated into the environment where people and machines are located. It then processes each frame for human and machine detection and places the detected people and machines in bounding boxes. The distance between the machines and humans is then estimated using the bounding box coordinates. The system provides feedback based on the obtained estimate, allowing immediate intervention in case a person gets too close to a machine. During the development of the approach, different object detection models and distance measurement methods were tried. For the object detection component, different versions of YOLO from YOLOv8 to YOLO12 were trained on two different datasets, and YOLOv8l produced the best results with an mAP50 (mean average precision) value of 0.948. For distance measurement, methods based on depth images, a top-down view, and a meter-per-pixel ratio were tried, and the meter-per-pixel ratio method generated the best results; it is therefore used in the system along with YOLOv8l.
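
The meter-per-pixel step can be sketched in a few lines: given detector bounding boxes for a person and a machine and a calibration factor (metres represented by one pixel at the working distance), the planar distance between box centres is estimated and compared with a safety threshold. The calibration value and threshold below are assumptions.

```python
# Sketch of the meter-per-pixel distance check described above. The detector
# output format (x1, y1, x2, y2), the calibration factor and the safety
# threshold are illustrative assumptions.
import math

METERS_PER_PIXEL = 0.012   # calibration at the working distance (assumed)
SAFETY_THRESHOLD_M = 1.5   # minimum allowed human-machine separation (assumed)

def box_center(box):
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def estimated_distance_m(person_box, machine_box) -> float:
    (px, py), (mx, my) = box_center(person_box), box_center(machine_box)
    pixel_dist = math.hypot(px - mx, py - my)
    return pixel_dist * METERS_PER_PIXEL

def check_frame(person_boxes, machine_boxes):
    """Return a warning for every person closer than the safety threshold."""
    warnings = []
    for p in person_boxes:
        for m in machine_boxes:
            d = estimated_distance_m(p, m)
            if d < SAFETY_THRESHOLD_M:
                warnings.append((p, m, round(d, 2)))
    return warnings

if __name__ == "__main__":
    people = [(100, 200, 160, 380)]          # boxes from the detector, in pixels
    machines = [(180, 220, 320, 400)]
    print(check_frame(people, machines))     # -> [(person, machine, distance_m)]
```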

13:00
A Systematic Review of Machine Learning and Deep Learning Techniques for Exoplanet Detection

ABSTRACT. The search for exoplanets has rapidly advanced with the surge of astronomical data from missions such as Kepler, the Transiting Exoplanet Survey Satellite (TESS), and the James Webb Space Telescope (JWST). In this context, Artificial Intelligence (AI) techniques, particularly Machine Learning (ML) and Deep Learning (DL), have become essential tools for detecting planetary candidates in vast and noisy datasets. This systematic review examines studies across four major databases (IEEE, Springer, Web of Science, and Semantic Scholar), focusing on the evolution, application, and comparative performance of ML and DL approaches in exoplanet detection. The review reveals that DL methods outperform ML in accuracy and scalability, while ML techniques remain valuable for interpretability and real-time applications. Emerging hybrid models combine strengths from both paradigms to improve robustness, adaptability, and scientific utility. This paper presents a detailed taxonomy of AI techniques and comparative evaluations, and outlines challenges such as interpretability, data quality, and computational efficiency. The paper critiques current limitations in exoplanet detection and suggests future research directions, recommending the integration of machine learning and deep learning models.

13:15
Design and Implementation of Reconfigurable Approximate Adder in Real Time for Image Watermarking
PRESENTER: Alidena Bhargavi

ABSTRACT. Millions of transistors are embedded on a single chip using VLSI; because of this, the spacing between transistors has decreased greatly, which can lead to chip or system faults. To avoid inaccurate results, fault-tolerant systems are required. An adder is a fundamental circuit for performing computations in digital Very Large Scale Integration (VLSI) design. The design metrics of an adder can be improved using approximation, and such an approximate adder, named the Reconfigurable Approximate Adder (RAA), is proposed here. Approximate adders are widely used in error-resilient applications such as multimedia, machine learning, and signal processing, where absolute precision is not critical. In this paper, we propose the design of a new approximate adder that can handle both accurate and approximate carry. The proposed reconfigurable approximate adder circuit produces the carry output without error when PG = 0 and with two errors when PG = 1, offering a versatile solution for low-power, high-efficiency computing systems. The design metrics of the resulting reconfigurable approximate adder are tested for image watermarking using MATLAB and Cadence Virtuoso software.

13:30
Design and Implementation of Energy-Efficient Approximate Adder Supporting Image Processing Applications

ABSTRACT. Approximate computing is a fast and energy-efficient way to handle tasks that can tolerate some errors. This study investigates the application of approximate computing to digital image blending, a method commonly used in visual effects and image processing. We designed an approximate adder that is suitable for both FPGA- and ASIC-based implementations, utilized it to blend two images, and compared the results with those of an accurate adder. The quality of blended images produced by the approximate adder is comparable to that of the accurate adder, as demonstrated by our results. The image quality was assessed using the structural similarity index and the peak signal-to-noise ratio. The approximate adder thus built proved superior for image blending, taking into account the blended image's quality and the design metrics. We used the 90 nm CMOS Cadence design tool to perform ASIC implementation of the accurate adder and the designed approximate adder. The blended image was generated in MATLAB using the designed approximate adder function to alter the process. The findings indicate that the suggested approximate adder is suitable for image blending, producing blended images of quality equivalent to those of conventional full adders.
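
To illustrate the error-tolerance argument in software, the sketch below blends two 8-bit images with a generic approximate adder that truncates the low-order bits and reports the PSNR against an exact blend; the truncation width is an assumption, and this is not the paper's adder circuit.

```python
# Software illustration of image blending with a generic approximate adder that
# ignores the k least significant bits of each operand (lower-part truncation).
# The truncation width is an assumption; this is not the paper's adder circuit.
import numpy as np

K = 3   # number of truncated low-order bits (assumed)

def approx_add(a: np.ndarray, b: np.ndarray, k: int = K) -> np.ndarray:
    mask = ~((1 << k) - 1) & 0xFF
    return (a.astype(np.uint16) & mask) + (b.astype(np.uint16) & mask)

def blend(img1: np.ndarray, img2: np.ndarray, approx: bool) -> np.ndarray:
    s = approx_add(img1, img2) if approx else img1.astype(np.uint16) + img2.astype(np.uint16)
    return (s // 2).astype(np.uint8)         # 50/50 blend

def psnr(ref: np.ndarray, test: np.ndarray) -> float:
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img_a = rng.integers(0, 256, (256, 256), dtype=np.uint8)
    img_b = rng.integers(0, 256, (256, 256), dtype=np.uint8)
    exact = blend(img_a, img_b, approx=False)
    approximate = blend(img_a, img_b, approx=True)
    print(f"PSNR of approximate blend vs exact blend: {psnr(exact, approximate):.2f} dB")
```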

13:45
Modeling Key Parameters in Wastewater Treatment Plants
PRESENTER: Gheorghe Popescu

ABSTRACT. Wastewater treatment is a critical process for protecting water resources and ensuring environmental sustainability. The modeling of key parameters such as dissolved oxygen (DO) and nitrogen (in various forms such as ammonia, nitrite, and nitrate) plays a fundamental role in understanding and optimizing the performance of wastewater treatment plants (WWTPs). This paper presents a detailed overview of modeling strategies for oxygen and nitrogen compounds, with a focus on predictive accuracy and process control. A comprehensive literature review highlights current advancements, while recent research findings are analyzed to identify gaps and limitations in existing models. Based on these insights, a novel modeling approach is proposed to improve accuracy and adaptability. Simulation results are compared with previous studies through tables and graphical representations. The paper concludes with recommendations for future model refinement and implementation in real-time monitoring systems.
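
As a pointer to the kind of model discussed, here is a minimal dissolved-oxygen mass balance (aeration transfer minus oxygen uptake) integrated with explicit Euler steps; the coefficients are illustrative textbook-style values, not calibrated plant parameters from the paper.

```python
# Minimal dissolved-oxygen (DO) mass balance for an aerated tank:
#   dDO/dt = kLa * (DO_sat - DO) - OUR
# integrated with explicit Euler steps. Coefficient values are illustrative
# textbook-style numbers, not calibrated parameters from the paper.
import numpy as np

DO_SAT = 9.1      # saturation DO at ~20 degC [mg/L]
KLA = 4.5         # oxygen transfer coefficient [1/h]
DT = 0.01         # time step [h]
HOURS = 24.0

def oxygen_uptake_rate(t_h: float) -> float:
    """Oxygen uptake rate [mg/L/h] with a crude diurnal load variation."""
    return 20.0 + 8.0 * np.sin(2 * np.pi * t_h / 24.0)

def simulate(do0: float = 2.0):
    steps = int(HOURS / DT)
    t = np.arange(steps) * DT
    do = np.empty(steps)
    do[0] = do0
    for i in range(1, steps):
        d_do = KLA * (DO_SAT - do[i - 1]) - oxygen_uptake_rate(t[i - 1])
        do[i] = max(do[i - 1] + DT * d_do, 0.0)
    return t, do

if __name__ == "__main__":
    t, do = simulate()
    print(f"DO range over 24 h: {do.min():.2f}-{do.max():.2f} mg/L")
```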

14:00
Design and simulation of boost converter for Smart Grid Using DSP-Based Model Predictive Control
PRESENTER: Youness Hakam

ABSTRACT. This study details the design and simulation of a boost converter for smart grid applications utilizing DSP-based Model Predictive Control (MPC). The suggested method improves voltage regulation, promotes system efficiency, and guarantees rapid dynamic response under fluctuating load conditions. MPC is executed on a Digital Signal Processor (DSP) to enhance switching decisions in real time, hence reducing steady-state errors and overshoot. The paper provides a comprehensive review of the converter's performance, emphasizing its superiority compared to traditional control approaches. The simulation results illustrate the efficacy of the suggested control technique in ensuring stable operation and optimizing energy conversion for smart grid integration.
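
To show the flavour of the control loop, the sketch below runs a brute-force one-step model predictive controller on an averaged boost-converter model, tracking an inductor-current reference derived from the power balance plus a small output-voltage term; the component values, weights, and current-reference choice are illustrative assumptions, not the paper's DSP implementation.

```python
# Brute-force one-step MPC on an averaged boost-converter model:
#   L di/dt = Vin - (1-d)*v,    C dv/dt = (1-d)*i - v/R
# To sidestep the boost converter's non-minimum-phase output, the cost mainly
# tracks an inductor-current reference from the power balance
# I_ref = V_ref^2 / (R * Vin), plus a small output-voltage term. All component
# values and weights are illustrative assumptions, not the paper's DSP design.
import numpy as np

VIN, L, C, R = 12.0, 220e-6, 470e-6, 20.0
V_REF = 24.0
I_REF = V_REF ** 2 / (R * VIN)            # = 2.4 A from input/output power balance
TS = 20e-6                                # control period [s]
DUTY_GRID = np.linspace(0.05, 0.85, 33)   # candidate duty cycles

def step_model(i_l, v_c, d):
    """One Euler step of the averaged (continuous-conduction) boost model."""
    i_next = i_l + TS * (VIN - (1.0 - d) * v_c) / L
    v_next = v_c + TS * ((1.0 - d) * i_l - v_c / R) / C
    return i_next, v_next

def mpc_duty(i_l, v_c):
    """Enumerate the duty-cycle grid and return the candidate with lowest cost."""
    costs = []
    for d in DUTY_GRID:
        i_n, v_n = step_model(i_l, v_c, d)
        costs.append(10.0 * (i_n - I_REF) ** 2 + 0.1 * (v_n - V_REF) ** 2)
    return DUTY_GRID[int(np.argmin(costs))]

if __name__ == "__main__":
    i_l, v_c = 0.0, VIN
    for _ in range(10000):                # 200 ms of simulated operation
        d = mpc_duty(i_l, v_c)
        i_l, v_c = step_model(i_l, v_c, d)
    print(f"duty={d:.2f}, inductor current={i_l:.2f} A, output voltage={v_c:.2f} V")
```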

11:30-14:15 Session 10F: E-Session VI
Location: E-session 2
11:30
Intelligent Synthesis of Antenna Array Radiation Patterns using Woodward-Lawson Algorithm
PRESENTER: Ridha Ghayoula

ABSTRACT. The Woodward-Lawson method combines theoretical frequency sampling with practical parameter adjustments to synthesize antenna arrays that generate customized radiation patterns. In this paper, we provide a synthesis and analysis of radiation patterns for linear antenna arrays using this method, detailing the mathematical framework involved. We validated our approach through simulations of two antenna arrays—one with 10 elements and the other with 20 elements—comparing the results to theoretical predictions. Our evaluation focused on key performance metrics such as directivity, sidelobe levels, and overall efficiency. The findings in this paper demonstrate that the Woodward-Lawson method effectively models and assesses antenna performance, confirming its utility in achieving desired radiation characteristics and providing a valuable reference for future antenna array research.
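
For readers who want to reproduce the kind of pattern evaluation reported, the sketch below computes the array factor of a linear array for a given excitation vector and reports the peak direction and sidelobe level; the element count, spacing, and taper are illustrative assumptions, and the full Woodward-Lawson sampling synthesis is not reproduced.

```python
# Minimal evaluation of a linear-array factor
#   AF(theta) = sum_n a_n * exp(j * k * n * d * cos(theta))
# for a given excitation vector, reporting the peak direction and sidelobe
# level. Element count, spacing, and taper are illustrative assumptions; the
# Woodward-Lawson sampling synthesis itself is not reproduced here.
import numpy as np

N = 10                      # number of elements
D = 0.5                     # element spacing in wavelengths
K = 2 * np.pi               # wavenumber for unit wavelength

def array_factor(excitations: np.ndarray, theta: np.ndarray) -> np.ndarray:
    n = np.arange(len(excitations))
    phase = np.exp(1j * K * D * np.outer(np.cos(theta), n))   # (angles, elements)
    return phase @ excitations

if __name__ == "__main__":
    theta = np.linspace(0.0, np.pi, 3601)
    taper = np.hamming(N)                      # assumed amplitude taper
    af_db = 20 * np.log10(np.abs(array_factor(taper, theta)) + 1e-12)
    af_db -= af_db.max()                       # normalize to 0 dB peak
    peak_deg = np.degrees(theta[np.argmax(af_db)])
    # Crude sidelobe level: highest lobe more than 30 deg away from the peak
    # (assumes the tapered main lobe is narrower than +/- 30 deg).
    mask = np.abs(np.degrees(theta) - peak_deg) > 30.0
    print(f"peak at {peak_deg:.1f} deg, sidelobe level {af_db[mask].max():.1f} dB")
```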

11:45
Improving the Contextual Understanding of LLMs through Multi-Teacher Knowledge Distillation and RAG

ABSTRACT. Large-scale language models (LLMs) excel in various NLP tasks but face challenges in resource-constrained environments due to high computational and memory demands. To address these limitations, we propose an innovative architecture that combines Multi-Teacher Knowledge Distillation (MTKD) and Retrieval-Augmented Generation (RAG) to enhance the performance of smaller, efficient models without compromising accuracy. By utilizing multiple teacher models, we transfer diverse knowledge to a student model, ensuring its ability to manage complex tasks. Additionally, RAG enhances accuracy by dynamically retrieving relevant context during inference. Experimental results demonstrate that this approach outperforms traditional distillation methods, delivering precise, context-aware responses while maintaining computational efficiency. The model is specifically optimized for deployment on consumer-grade GPUs, making it suitable for real-world, resource-constrained applications.
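
A compact PyTorch sketch of the multi-teacher distillation loss is given below: the student's temperature-softened distribution is pulled toward the average of several teachers' distributions, blended with the usual cross-entropy term. Equal teacher weighting and the temperature are simplifying assumptions, and the RAG retrieval step is omitted.

```python
# Compact sketch of a multi-teacher knowledge-distillation loss: KL divergence
# between the student's temperature-softened distribution and the average of
# the teachers' distributions, blended with ordinary cross-entropy on labels.
# Equal teacher weighting and the temperature are assumptions; the RAG
# retrieval step is omitted.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits: torch.Tensor,
                          teacher_logits: list[torch.Tensor],
                          labels: torch.Tensor,
                          temperature: float = 2.0,
                          alpha: float = 0.7) -> torch.Tensor:
    # Average the teachers' softened distributions (equal weights assumed).
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits]).mean(dim=0)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    ce = F.cross_entropy(student_logits, labels)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return alpha * (temperature ** 2) * kd + (1.0 - alpha) * ce

if __name__ == "__main__":
    batch, vocab = 4, 100
    student = torch.randn(batch, vocab, requires_grad=True)
    teachers = [torch.randn(batch, vocab) for _ in range(3)]
    labels = torch.randint(0, vocab, (batch,))
    loss = multi_teacher_kd_loss(student, teachers, labels)
    loss.backward()
    print(float(loss))
```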

12:00
Next-Gen Microfluidics: Reinforcement Learning Driven Droplet Size Precision System
PRESENTER: Sameer Dubey

ABSTRACT. This study introduces a novel microfluidic device designed for precise droplet size control using reinforcement learning (RL) algorithms in water-in-oil microfluidic setups. The device employs a pressure by-pass valve to regulate the flow rate of the continuous fluid, ensuring droplet generation of the desired size. The system is equipped with an onboard computer for RL code execution and image processing, a Raspberry Pi 5 for the stepper motor interface, mini syringe pumps for versatile flow rates, a digital microscope lens for droplet observation, and an integrated power supply. The device operates iteratively, running RL algorithms to achieve a user-defined target droplet size. After a set number of iterations, the device prompts the user for permission to increase the iteration count or alter the dispersed fluid viscosity. The effectiveness and versatility of the device in achieving precise droplet size control through reinforcement learning algorithms are evaluated experimentally. The findings from these experiments will contribute to the advancement of microfluidic technologies for various applications, including drug delivery, diagnostics, and chemical analysis.

12:15
Design and Development of Corn Disease Detection with UAV-based Deep Learning Technology
PRESENTER: Vrian Jay Ylaya

ABSTRACT. Corn diseases significantly reduce agricultural productivity, posing a critical threat to food security, particularly in tropical countries like the Philippines, where corn is a staple crop. Traditional disease detection methods rely heavily on manual inspection and are time-consuming, labor-intensive, and prone to inaccuracies. To address these challenges, this research developed an innovative corn disease detection system integrating Unmanned Aerial Vehicle (UAV) technology with deep learning models specifically optimized for tropical agricultural conditions. The researchers compared four pre-trained deep learning architectures—EfficientNetB0, ResNet50, InceptionV3, and VGG16—to determine the most effective model for identifying leaf spot and sunscald diseases in corn plants from UAV-acquired imagery. EfficientNetB0 emerged as the superior model, achieving an impressive accuracy rate of 90.89%. Field implementation of the developed system involved systematic testing across five rows of corn crops, with multiple repetitions conducted for each row to ensure reliability. Results demonstrated consistent and precise detection rates with minimal variation in identifying healthy leaves (84.66%), leaf spots (2.86%), and sunscald (12.48%). These findings validate the robustness and accuracy of the EfficientNetB0-based UAV system as a practical solution for early disease monitoring in corn fields. The proposed technology offers farmers an accessible and cost-effective tool that significantly enhances crop management decisions through timely disease identification and intervention. This UAV-based deep learning approach represents a substantial advancement toward sustainable precision agriculture practices in the Philippines, with the potential for broader adoption that could positively impact agricultural productivity and food security globally.

12:30
Feasibility of Impulse Radar-Based Through-Wall Imaging for Human Detection in Search and Rescue: A Study on Accuracy, Material Penetration, and Deep Learning Integration
PRESENTER: Vrian Jay Ylaya

ABSTRACT. This study investigates the feasibility of a portable impulse radar-based through-wall imaging system integrated with deep learning algorithms to enhance urban search and rescue (SAR) operations. The system addresses critical limitations in traditional detection methods during disaster scenarios by employing ultra-wideband impulse radar technology coupled with specialized antennas and PicoR 5.0 software. This combination enables the penetration of diverse building materials, including concrete (10 cm), wood, and fiber cement. Through controlled experimental trials and field tests, researchers evaluated detection accuracy and response time across varying wall thicknesses and material compositions, utilizing Convolutional Neural Networks (CNN) for advanced signal processing and target identification. The system demonstrated an overall detection accuracy of 89%, with material-specific performance variations: concrete walls showed reduced precision (70%) but maintained high recall (93%). In comparison, wood and open spaces achieved exceptional precision exceeding 97%. Deep learning integration proved crucial, improving system robustness against environmental interference and enabling the identification of stationary subjects through micro-movements associated with breathing. Field validation using a portable cart-mounted unit confirmed operational viability across multiple scenarios, successfully detecting moving adults/children and static individuals through 10 cm concrete barriers. Key findings from 320 test cases revealed an average F1 score of 89% across materials, with confusion matrix analysis showing 285 correct classifications. The technology's effectiveness correlated strongly with material permittivity and signal-to-noise ratio optimization. These results position impulse radar combined with machine learning as a transformative tool for emergency response, providing real-time situational awareness in collapsed structures.

12:45
Automating Digital Forensic Investigations: The Impact of Machine Learning on Electronic Evidence
PRESENTER: Kanika Pandit

ABSTRACT. Digital forensic investigations have become more complicated and larger in scale because of the dramatic expansion of digital data. Conventional data examination techniques depend on manual procedures that remain slow, require significant resources, and are prone to human error. Mobile technology has disrupted electronic evidence analysis; the introduction of machine learning (ML) fundamentally upgrades operational efficiency, measurement accuracy, and data handling capability beyond traditional methods. Digital forensic automation can be achieved through ML algorithms for tasks that include identifying data sources, detecting anomalies, recovering evidence, and analyzing networks and multimedia content. The implementation of ML as a digital forensics solution comes with multiple barriers that affect the results, including inadequate data quality, biased algorithms, the need for explainability, the risk of adversarial attacks, and legal and ethical restrictions. This research investigates the effects of machine learning on digital forensic investigations by discussing modern implementations, the obstacles faced, and expected research pathways. Properly resolving these challenges remains vital to establishing reliable, fair, and lawful ML-based forensic instruments for criminal and civil examination activities.

13:00
A Low Rank Adaptation-Based Convolutional Neural Network and Transformer Model for Cervical Cancer Detection in Histopathological Images

ABSTRACT. Cervical cancer is the fourth most common cancer among women worldwide. The diagnosis and classification of cancer are extremely important, as they influence the optimal treatment and length of survival. Histopathological image analysis, recognized as the gold standard for cervical cancer diagnosis, is vital for its early detection. However, the varied morphological characteristics of cervical cancer make accurate manual classification challenging. Traditional diagnostic methods employed by clinicians are often time-consuming and susceptible to errors. Computer-Aided Diagnosis (CAD) systems can assist in the accurate and efficient detection of cancer in histopathological images. This study introduces an automated classification network leveraging a Low Rank Adaptation-based CNN-Transformer (LoRaCT) model. The LoRaCT model integrates convolutional neural networks (CNNs) for extracting local features with Vision Transformers (ViTs) for capturing global context. To address computational efficiency, Low Rank Adaptation (LoRa) layers are employed, significantly reducing the number of parameters while maintaining model performance. The LoRaCT model achieved an average accuracy of 95.23% on the Caishi dataset, demonstrating its potential for effective and efficient AI-driven cervical cancer detection. The LoRaCT model achieves accuracy comparable to the standard ViT model, which used the same hyperparameters in this study, with approximately 98.34% fewer parameters. This approach not only achieves high accuracy but also offers a computationally efficient solution, advancing the field of automated histopathological image analysis.
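
As a rough illustration of the low-rank adaptation idea mentioned in this abstract (not the authors' LoRaCT implementation), the following minimal PyTorch sketch adds a trainable low-rank update to a frozen linear layer; the class name, rank, and dimensions are assumptions for illustration only.

# Minimal sketch of a Low Rank Adaptation (LoRA) layer, assuming PyTorch.
# Names and dimensions are illustrative; they are not taken from the LoRaCT paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update W + (alpha/r) * B @ A."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)  # trainable
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))        # trainable, zero-init
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(768, 768, rank=8)
    out = layer(torch.randn(4, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable params: {trainable}/{total}")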

13:15
Determination of the dietary needs of a type 2 diabetic patient: A neural network approach
PRESENTER: Sidi Mwakalu

ABSTRACT. Diabetes mellitus is a major global health concern, with an estimated 537 million individuals affected in 2021. More than 90% of these cases are classified as type 2 diabetes. Among the key lifestyle management strategies for diabetic patients, dietary modification plays a crucial role in regulating plasma glucose levels. However, the optimal macronutrient composition for effective glycemic control in type 2 diabetes remains unclear. Existing dietary recommendations for diabetes management are often generalized, lacking specificity in both quality and quantity of dietary intake. To address this challenge, this study developed a neural network-based classifier to determine the dietary suitability of various foods for type 2 diabetic patients with different comorbidities. The classifier categorizes foods for four distinct patient groups: (i) type 2 diabetes with chronic hyperglycemia, (ii) type 2 diabetes with hypertension, (iii) type 2 diabetes with obesity, and (iv) type 2 diabetes with all three conditions. The model was trained using data from the Kenya Food Composition Tables, which contain detailed information on macronutrient and micronutrient content, as well as food processing techniques. The neural network architecture comprised five layers, including three hidden layers with ten neurons each. Key hyperparameters included the tanh and sigmoid activation functions, gradient descent for optimization, cross-entropy as the loss function, and a learning rate of 0.1. During training, the model was optimized with 40 hidden neurons per layer, 60,000 epochs, and a learning rate of 0.2. Model performance was evaluated using five key metrics: Accuracy (91.4%), Precision (86.8%), Recall (88%), F1-Score (86.9%), and Matthews Correlation Coefficient (MCC: 0.808). The high MCC score indicates a strong correlation between the classifier’s predictions and the labeled classes, demonstrating its effectiveness in dietary classification for type 2 diabetic patients with comorbid conditions.
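
For readers unfamiliar with the kind of small fully connected classifier described above (three hidden layers, tanh/sigmoid activations, gradient descent, cross-entropy loss), the sketch below shows one possible PyTorch layout; the feature and class counts are placeholders, not the paper's actual data dimensions.

# Minimal sketch of a fully connected classifier of the type described in the abstract,
# assuming PyTorch; n_features and n_classes are hypothetical placeholders.
import torch
import torch.nn as nn

n_features, n_classes = 20, 4  # hypothetical: nutrient features in, suitability classes out

model = nn.Sequential(
    nn.Linear(n_features, 10), nn.Tanh(),
    nn.Linear(10, 10), nn.Tanh(),
    nn.Linear(10, 10), nn.Sigmoid(),
    nn.Linear(10, n_classes),          # logits; CrossEntropyLoss applies softmax internally
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent

# one illustrative training step on random data
x = torch.randn(32, n_features)
y = torch.randint(0, n_classes, (32,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.4f}")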

13:30
String Current Variability in 3D-NAND Flash Memory: A TCAD Simulation Study

ABSTRACT. In this study, we employ TCAD (Technology Computer-Aided Design) simulations to comprehensively investigate the influence of process-induced variations on the electrical parameters of a 3D NAND flash memory string. While prior research has extensively addressed dimensional variability, this work uniquely emphasizes the impact of material- and transport-level fluctuations, focusing in particular on oxide fixed charge (Qf), interface trap charge (Qtrap), and electron mobility (μn). Our simulations are designed to assess the variability in the key electrical parameters: on-current (Ion), threshold voltage (Vth), maximum transconductance (Gm), and subthreshold swing (SS). Results indicate that Vth is significantly influenced by variations in Qf and Qtrap, leading to a 36% change, while μn has a substantial impact on Ion and Gm, with changes of 21% and 19%, respectively. These findings underscore the critical importance of process stability and variability-aware circuit design methodologies, particularly in the context of aggressively scaled, high-density vertical NAND architectures. As device integration continues to expand into the hundreds of layers, understanding and mitigating the impact of non-idealities such as charge fluctuation and mobility degradation becomes essential for maintaining reliability, yield, and overall system performance. Our results offer valuable insights for device engineers and memory architects aiming to enhance the robustness of next-generation 3D NAND technologies.

13:45
Fleet Management System's Attack Graph
PRESENTER: Ruba Elhafiz

ABSTRACT. Fleet Management (FM) automation relies on preprogrammed systems to perform various functions continuously, minimizing human interaction and resulting in predictable actions that are vulnerable to manipulation. Integrating the Internet of Things (IoT) into the Fleet Management System (FMS) heightens its susceptibility to cyberattacks, which can be challenging to detect and comprehend. This paper highlights the security vulnerabilities within the FMS and presents a framework for utilizing and visualizing its Attack Graph (AG). The system is expressed with the Architecture Analysis and Design Language (AADL), while a model checker, JKind, is employed to continuously verify the model against the security criteria. The generated attack graph, visualized with Graphviz, provides a comprehensive view of potential attack paths and vulnerabilities of the FMS. Furthermore, the AG helps suggest the optimal deployment of Intrusion Detection Systems (IDS) with minimal resources.
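
To make the visualization step concrete, the toy Python sketch below renders a small attack graph with the graphviz package the abstract mentions; the nodes, edges, and labels are invented examples and do not come from the paper's AADL/JKind workflow.

# Toy illustration of rendering an attack graph with Graphviz.
# Nodes and edges are invented examples, not the actual FMS attack graph.
from graphviz import Digraph

ag = Digraph("fms_attack_graph", format="png")
ag.attr(rankdir="LR")

# hypothetical states: attacker foothold -> compromised components -> impact
ag.node("internet", "External attacker")
ag.node("telematics", "Telematics unit compromised")
ag.node("gateway", "IoT gateway compromised")
ag.node("fms", "FMS server manipulated")
ag.node("goal", "Fleet routing disrupted", shape="doublecircle")

ag.edge("internet", "telematics", label="weak credentials")
ag.edge("internet", "gateway", label="unpatched firmware")
ag.edge("telematics", "fms", label="spoofed messages")
ag.edge("gateway", "fms", label="lateral movement")
ag.edge("fms", "goal", label="false dispatch commands")

ag.render("fms_attack_graph", cleanup=True)  # writes fms_attack_graph.png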

14:00
DermNet-CNN: A Hyperparameter-Tuned CNN Model for Accurate Skin Disease Detection

ABSTRACT. The diagnosis of skin disease is a critical area in healthcare, requiring high accuracy and reliability to ensure effective treatment. Traditional diagnostic methods often struggle with the variability in the appearance of skin lesions and noise in medical images, leading to misdiagnosis. This study addresses these challenges by proposing a robust deep learning-based framework for accurate skin disease classification. Motivated by the need for precise and automated diagnostic tools, we focus on enhancing image quality and leveraging advanced convolutional neural networks (CNN) to improve classification performance. The methodology involves comprehensive data preprocessing, including image resizing, morphological black hat transformation, median filtering, and contrast adjustment to highlight fine details and reduce noise. Data augmentation techniques such as flipping, rotating, scaling, and shifting are employed to increase dataset diversity. A systematic evaluation of state-of-the-art and custom CNN architectures is conducted, with rigorous hyperparameter tuning to optimize performance. Our results demonstrate exceptional performance, achieving 98.57% accuracy, high specificity, precision, recall, and F1 score, supported by AUC-ROC analysis and 5-fold cross-validation. The proposed model outperforms existing architectures, showcasing its potential for precise diagnosis of skin disease. This study highlights the effectiveness of combining advanced preprocessing techniques with deep learning models to address the complexities of skin disease classification, paving the way for reliable automated diagnostic systems.
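
As an illustration of the preprocessing chain named above (resizing, morphological black-hat transform, median filtering, contrast adjustment), the following OpenCV sketch shows one way these steps can be wired together; the kernel sizes, the CLAHE-based contrast step, and the input file name are illustrative assumptions, not the authors' exact settings.

# Sketch of a resize + black-hat + median filter + contrast adjustment pipeline, assuming OpenCV.
import cv2

img = cv2.imread("lesion.jpg")                      # hypothetical input image
img = cv2.resize(img, (224, 224))

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # highlights dark hair/artifacts

denoised = cv2.medianBlur(img, 5)                   # median filtering to suppress noise

lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab = cv2.merge((clahe.apply(l), a, b))             # contrast adjustment on the L channel
enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

print(enhanced.shape, blackhat.dtype)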

11:30-14:15 Session 10G: E-Session VII
Location: E-session 3
11:30
Optimization of a Photovoltaic Pumping System Using a Modified ABC Algorithm

ABSTRACT. The renewable energy sector continues to deliver sustainable solutions for critical industrial applications, particularly in water pumping systems. Our research introduces an enhanced Artificial Bee Colony (ABC) optimization approach specifically designed to boost the performance of solar-powered pumping systems. Through comprehensive MATLAB simulations of a complete PV system incorporating a three-phase induction motor and DC-DC boost converter, our modified ABC algorithm successfully achieved exceptional 98% maximum power point tracking efficiency. The system demonstrates remarkable torque stabilization capabilities while reliably maintaining consistent pump flow rates. These significant improvements clearly establish the advantages of our proposed method over conventional MPPT approaches. The findings strongly support the practical potential of ABC-based optimization in renewable energy applications, presenting industries with an effective and dependable sustainable energy solution.

11:45
An Enhanced MPPT Algorithm Based on Adaptive Linear Regression with Contextual Weight Memory (ALR-CWM) for PV Systems
PRESENTER: Rida Amine

ABSTRACT. This paper presents an Adaptive Linear Regression with Contextual Weight Memory (ALR-CWM) model for Maximum Power Point Tracking (MPPT) control in photovoltaic (PV) systems. The proposed approach employs a linear regression model with a similarity-based memory mechanism that uses irradiance and temperature measurements to predict Vmp, enabling direct estimation of the operating voltage without extensive training. On a realistic three-year dataset, the ALR-CWM model achieves a voltage prediction mean absolute error (MAE) of 0.00124 V under various irradiance levels and a tracking efficiency above 97%. MATLAB/Simulink simulations confirm the correctness of the system model, its fast convergence, and its advantages for embedded MPPT applications. Compared with conventional and machine learning-based methods, ALR-CWM offers an interpretable and computationally efficient option for dynamic energy optimization of PV systems.
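
The numpy sketch below only illustrates the general flavor of the approach, namely a linear regression of Vmp on irradiance and temperature weighted by similarity to stored context samples; the exact contextual-weight-memory mechanism of ALR-CWM, and all numbers used, are assumptions for illustration.

# Rough numpy sketch of similarity-weighted linear regression predicting Vmp from
# irradiance G and temperature T. Illustrative only; not the ALR-CWM algorithm itself.
import numpy as np

# hypothetical memory of past contexts: columns are [G (W/m^2), T (degC), measured Vmp (V)]
memory = np.array([
    [200.0, 18.0, 29.9],
    [400.0, 26.0, 29.1],
    [600.0, 24.0, 28.8],
    [800.0, 33.0, 27.9],
    [1000.0, 38.0, 27.3],
])

def predict_vmp(g: float, t: float, bandwidth: float = 150.0) -> float:
    X, y = memory[:, :2], memory[:, 2]
    d = np.linalg.norm(X - np.array([g, t]), axis=1)
    w = np.exp(-(d / bandwidth) ** 2)              # similarity weights: closer contexts count more
    A = np.column_stack([X, np.ones(len(X))])      # design matrix for Vmp = a*G + b*T + c
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * A, np.sqrt(w) * y, rcond=None)
    return float(coef @ np.array([g, t, 1.0]))

print(f"predicted Vmp at 700 W/m^2, 32 degC: {predict_vmp(700.0, 32.0):.2f} V")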

12:00
Two Stage Single Phase PV System Using Sepic Converter
PRESENTER: Nguyen Duc

ABSTRACT. The design and modeling of a two-stage, single-phase grid-connected photovoltaic (PV) system with a DC-DC converter and a DC-AC inverter are presented in this work. The system is built to provide effective power transfer to the grid while optimizing energy extraction from PV panels. In the DC-DC stage, a SEPIC converter regulates the PV array’s output voltage and integrates a maximum power point tracking (MPPT) algorithm to optimize energy harvesting. The Perturb and Observe (P&O) method continuously adjusts the PV operating voltage, ensuring operation at the maximum power point (MPP) under varying conditions. The regulated voltage is then supplied to the DC-AC stage, where a single-phase inverter converts DC power into AC and synchronizes it with the grid. To enhance current control, reduce harmonics, and improve dynamic response, Model Predictive Control (MPC) is applied to the inverter. An L-filter at the inverter output minimizes switching noise and ensures compliance with grid standards. A key component of the system is the phase-locked loop (PLL), which synchronizes the injected current with the grid voltage, ensuring power delivery at the correct frequency and phase. This study analyzes and improves the PLL structure to enhance phase detection accuracy and response speed. MATLAB/Simulink is used to simulate the system in order to evaluate its performance under various operating scenarios. Simulation results confirm effective MPPT operation, stable DC voltage regulation, and high-quality grid-injected current that meets grid standards. By integrating advanced control strategies, this work proposes a robust and efficient solution for grid-connected PV systems, contributing to the reliable integration of renewable energy into the power grid.
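
Since the abstract relies on the classic Perturb and Observe rule, the short sketch below shows the textbook decision logic in Python; the toy PV power curve and step size are illustrative stand-ins, not the SEPIC-based system from the paper.

# Minimal sketch of the Perturb and Observe (P&O) MPPT rule: perturb the PV operating
# voltage and keep moving in the direction that increases power.
def perturb_and_observe(v_prev, p_prev, v_now, p_now, step=0.5):
    """Return the next reference voltage for the PV array."""
    dp, dv = p_now - p_prev, v_now - v_prev
    if dp == 0:
        return v_now                      # at (or oscillating around) the MPP
    if (dp > 0 and dv > 0) or (dp < 0 and dv < 0):
        return v_now + step               # power rose in this direction: keep going
    return v_now - step                   # power fell: reverse the perturbation


def pv_power(v):
    """Toy PV curve with a maximum near 30 V (illustrative only)."""
    return max(0.0, -0.5 * (v - 30.0) ** 2 + 450.0)


v_prev, v_now = 20.0, 20.5
p_prev = pv_power(v_prev)
for _ in range(100):
    p_now = pv_power(v_now)
    v_next = perturb_and_observe(v_prev, p_prev, v_now, p_now)
    v_prev, p_prev, v_now = v_now, p_now, v_next

print(f"reference voltage after 100 steps: {v_now:.2f} V (true MPP at 30 V)")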

12:15
Predictive Phase Shift Control based on Transformer Current Control for Dual Active Bridge
PRESENTER: Pham Hong Duong

ABSTRACT. The Dual Active Bridge is a commonly used bidirectional DC/DC converter, ideal for high-power applications such as electric vehicles, charging stations, and renewable energy systems, to name a few. Being an advanced technique, predictive control is widely adopted in power electronic converters, and when combined with phase shift control, it opens up new possibilities for converter management. To design an effective phase predictive controller, it is important to opt for the right control variables and define their switching model and function. Unlike traditional DC/DC converters (e.g., buck or boost converters), determining the switching function in phase shift control converters is more complicated. In order to address this, the switching functions for the DAB converter are derived using mathematical methods. This paper presents a predictive phase shift controller design based on the transformer current switching model, featuring a cascade control structure for forward power flow. Simulations were conducted with varying setpoint values and load conditions, and the results, obtained with Matlab/Simulink, illustrate that the system responds dynamically with the actual voltage and current values consistently tracking the desired targets.

12:30
Improving Personalized Customer Engagement in E-Commerce through Distributed Web Systems and Advanced CRM Solutions
PRESENTER: Vassil Milev

ABSTRACT. This article explores the opportunities for improving personalized customer experience in e-commerce through the integration of distributed web systems and advanced Customer Relationship Management (CRM) solutions based on the semantic web. The proposed model combines technologies from the semantic web, artificial intelligence, and machine learning to create an intelligent and adaptive environment for managing customer relationships. The results indicate that this approach significantly improves the personalization and efficiency of customer service, providing real added value to the business through personalized interactions and strategic customer relationship management.

12:45
Fractal-Slotted Microwave Bandpass Filter Designs for Wireless Applications

ABSTRACT. This paper presents three novel dual-mode microstrip bandpass filter designs featuring fractal slot patterns on a polyimide substrate. The goal is to develop a filter that meets the demands of high-frequency wireless communication applications. The dual-mode resonator, serving as the foundation for the filter, incorporates symmetric fractal slotted structures with perturbations. The performance of the filter on the polyimide substrate is evaluated using the 3D electromagnetic simulation tool CST Studio Suite. A comparative analysis of snowflake, cross, and star-fractal tree structures offers valuable insights into optimal configurations for achieving the desired filter characteristics. The performance evaluation of the fractal filters involves key simulation parameters such as insertion loss and return loss. The three fractal filter designs obtained using the 3D simulation tool provide a thorough understanding of the filters' behavior and effectiveness. Simulation results demonstrate the effectiveness of the proposed filters, with the cross-fractal design exhibiting a notable insertion loss of 1.38 dB and return loss of 13.78 dB at 2.35 GHz. The proposed dual-mode bandpass filters with fractal slots on a polyimide substrate represent an innovative and promising solution for high-frequency applications in wireless communication systems.

13:00
Design of Current Mode Active filters Using Second Generation Current Controlled Conveyor for Biomedical Applications.
PRESENTER: Syed Zahiruddin

ABSTRACT. Current-mode filters have garnered significant attention in biomedical applications due to their inherent advantages, including low power consumption, wide bandwidth, high slew rate, and compact circuitry. These characteristics make them particularly suitable for processing low-frequency biomedical signals such as electrocardiograms (ECG), electroencephalograms (EEG), and electromyograms (EMG). In this work, first-order Low Pass Filter (LPF), High Pass Filter (HPF), and All Pass Filter topologies are designed using the Second Generation Current Controlled Conveyor (CCCII). The proposed designs are tested using transient analysis, AC analysis, and Monte Carlo analysis. The simulation is carried out using the Cadence OrCAD tool, version 17.2. The proposed topologies are experimentally verified using the commercially available ICs AD844AN, known as a CFOA (Current Feedback Operational Amplifier), and LM13700, known as an OTA (Operational Transconductance Amplifier). The novelty of the proposed configurations is that they require only a simple circuit utilizing a single CCCII and very few external passive components. Such filters find applications in biomedical signal processing, analog signal processing, communication systems, audio processing, sensor interfaces, image processing, industrial automation, neuromorphic circuits, space and defense, optical communication, and many other areas.

13:15
Fault-Tolerant Actuator in Reliable Workcell with Video Sensor
PRESENTER: Alyaa Ismaeil

ABSTRACT. Reliability and availability are becoming more and more important nowadays in order to decrease system downtime. In the context of Networked Control System (NCS) workcells, this paper focuses on two of their main components, namely smart sensors and smart actuators. The first contribution is the design of a fault-tolerant solenoid operating in cold stand-by mode. The second contribution focuses on workcells equipped with a video sensor along with fault-tolerant sensors. Riverbed simulations are used to show that the proposed design succeeds in meeting real-time deadlines despite the extra traffic due to the redundant sensors and the heavy load introduced by the video sensors. The third contribution is the development of a tool which relates the cost of fault tolerance to system downtime and, hence, profit loss. In the context of current supply chain problems, this tool can be used by system managers in order to make appropriate design choices regarding the quality/price of the redundant components they plan to incorporate in the workcell.

13:30
Predictive Current Control with Preselection Vector Scheme for Matrix Converter
PRESENTER: Pham Duc Dai

ABSTRACT. In current research, one of the most frequently used AC/AC converters is the Matrix Converter (MC). It can adjust the frequency and amplitude of the output voltage while ensuring a sinusoidal input current and unity power factor. The predictive current control scheme is simple, effective, and flexible, and it has been successfully applied to the voltage source converter. When this scheme is applied to control the MC, the computational burden is large because all 27 output voltage vectors of the MC are used in the calculation of the predictive model. This study suggests a way to preselect the vectors utilized in the predictive model in order to lessen the computational burden of the predictive current control scheme for an MC with an RL load. According to the simulation findings, the suggested technique performs similarly to the original predictive current control scheme.

13:45
Model Predictive Direct Power Control with Selected Switching State Table for 3L-NPC Rectifier

ABSTRACT. The classical model predictive direct power control structure for the Three-level Neutral-Point-Clamped (3L-NPC) rectifier requires selecting the weighting factor in the cost function for the capacitor voltage balancing problem. This selection usually ensures that the system operates well only at some specific operating points. However, when the operating point changes, the performance of the Model Predictive Direct Power Control structure degrades and may even lead to system instability. The paper proposes a control structure, model predictive direct power control with a selected switching state table, for the 3L-NPC rectifier. The selected switching state table is built based on the operating principle of the Three-level Neutral-Point-Clamped rectifier. The switching states used in the predictive model are pre-selected through this selection table. Besides eliminating the weighting factor selection, the proposed control structure also reduces the computational burden of the predictive control algorithm. Simulation results show that the Model Predictive Direct Power Control with a Selected Switching State Table structure is capable of maintaining good performance even when the operating points change significantly.

11:30-14:00 Session 10H: E-Session VIII
Location: E-session 4
11:30
VirtuFit: Virtual Dressing Room
PRESENTER: Dilip Sutar

ABSTRACT. The reluctance to purchase wearable items such as clothing and accessories online often stems from the difficulty in assessing their fit and appearance on the individual. To address this challenge, we propose the development of an Online Trial Room Application. This innovative solution leverages video capture technology to create a virtual fitting experience. The application records a video of the user through their device’s camera, subsequently extracting individual frames to isolate the user’s body. By employing advanced algorithms to identify joint placements, the application can dynamically transform, rotate, and scale images of wearables, allowing for real-time visualization on the user’s figure. Our literature review explores various methodologies relevant to our project, highlighting their respective benefits and limitations. The implementation utilizes Flask for the web application framework, coupled with OpenCV, a powerful Python library for image processing. The application is designed to function seamlessly on devices equipped with a built-in or external camera, internet connectivity, and a web browser, thereby enhancing the online shopping experience for wearables.
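
A minimal sketch of the Flask + OpenCV building blocks named in this abstract is shown below: an upload endpoint that extracts frames from a recorded video. The endpoint name, frame-sampling rate, and placeholder overlay step are assumptions; the actual joint detection and garment warping of VirtuFit are not reproduced.

# Minimal Flask + OpenCV sketch: receive an uploaded video and extract frames.
import cv2
import tempfile
from flask import Flask, request, jsonify

app = Flask(__name__)


def extract_frames(path, every_nth=10):
    """Read a video with OpenCV and return every n-th frame."""
    cap = cv2.VideoCapture(path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames


@app.route("/upload", methods=["POST"])
def upload():
    video = request.files["video"]
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        video.save(tmp.name)
        frames = extract_frames(tmp.name)
    # a real system would estimate joint positions here and warp garment images onto each frame
    return jsonify({"frames_extracted": len(frames)})


if __name__ == "__main__":
    app.run(debug=True)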

11:45
Comparative Analysis of Deep Learning Models for Long-Term Electricity Demand Forecasting in Bangladesh Using Web-Scraped Data
PRESENTER: Tasnia Nafs

ABSTRACT. As energy demand continues to rise in Bangladesh, there is a growing need for more accurate forecasting methods to improve the balance between electricity supply and consumption. Despite increased generation capacity, the country still experiences frequent disruptions due to limitations in prediction accuracy and structural inefficiencies within the power system. This research carries out an evaluative comparison of notable deep learning (DL) frameworks, including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional LSTM (BiLSTM), and Bidirectional GRU (BiGRU), for forecasting daily peak electricity demand at both the national level and across the country’s eight divisions. The dataset was compiled from the Bangladesh Power Development Board (BPDB) using an automated web scraping pipeline. All models were trained on four years of historical data and evaluated using a one-year testing set. Among the models assessed, the BiGRU architecture outperformed others, achieving the lowest testing Mean Absolute Percentage Error (MAPE) value of 4.75%. The BiGRU model was also employed for division-wise forecasting, effectively capturing regional demand variations. Additionally, it was applied to unseen future dates, which are not included in the dataset, where it recursively predicted energy demand one day at a time and achieved an MAPE of 7.3%, demonstrating strong generalization capability. These results signify the aptitude of deep learning-based methodologies for enabling resilient and scalable energy consumption modeling.
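
As a sketch of the best-performing architecture family above, the following PyTorch snippet builds a small bidirectional GRU that maps a demand window to a next-day prediction; the window length, hidden size, and feature count are placeholder values, not the study's tuned settings.

# Skeleton of a bidirectional GRU forecaster, assuming PyTorch.
import torch
import torch.nn as nn


class BiGRUForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=64, layers=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=layers,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # 2*hidden because of both directions

    def forward(self, x):                      # x: (batch, window, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])        # predict next-day peak demand


model = BiGRUForecaster()
window = torch.randn(8, 30, 1)                 # 8 samples, 30-day history, 1 feature
print(model(window).shape)                     # -> torch.Size([8, 1])

Recursive multi-day forecasting, as described in the abstract, would feed each one-day prediction back into the input window before predicting the next day.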

12:00
Deployable Cylindrical Parabolic Antenna Design for Interferometric Synthetic Aperture Radar Applications
PRESENTER: Eda Taşcı

ABSTRACT. In this paper, a cylindrical parabolic antenna with a dual linear polarization broadside microstrip array feed antenna is designed for Cube Satellite (CubeSat) Interferometric Synthetic Aperture Radar (InSAR) systems operating in the Ka band. The cylindrical parabolic reflector surface has a deployable mesh structure, so it can be easily stowed inside the CubeSat while complying with the CubeSat weight and size restrictions. This mesh structure is realized with Tungsten cables due to their suitability for space conditions, and it is modeled by defining the impedance value of the cylindrical parabolic reflector surface in the HFSS (High Frequency Structure Simulator) software. Also, since the structure is deployable, deformations may occur on the surface while it is being deployed in space or due to physical environmental conditions. The phase distortions introduced into the wave by the deformed reflective surface are calculated, and the effect of these phase distortions on InSAR performance is investigated. The feed antenna structure of the antenna system described in this paper has been designed and optimized to have a reflection loss below -10 dB in the desired frequency range. The proposed antenna design provides a solution both for Interferometric Synthetic Aperture Radar (InSAR) applications and for the size and gain constraints of CubeSats.

12:15
Using Instruction-Following LLM Hidden States as Conditioning for Video Diffusion Model
PRESENTER: R Hema Bhushan

ABSTRACT. Video generation has applications in several fields. With the advent of Generative AI, we see extensive research being conducted on video generation using AI. In this project, we experiment with using LLM hidden states as conditioning to train a Video Latent Diffusion Model, in order to study their ability to pass richer semantic information about the video samples. We carried out a comparative study of the context retention abilities of LLMs for embeddings and hidden states separately. We create a pipeline with three major components: the LLM, a custom Bridge Network, and the Diffusion UNet. We conduct our study using two different datasets: the simpler Captioned Moving MMNIST and a subset of the Sakuga-42M dataset. We conclude by evaluating our model variants on standard benchmarks and metrics and stating our findings, which could serve as ground for future work.
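
To show what "using LLM hidden states as conditioning" can look like in practice, the sketch below extracts final-layer hidden states for a caption with the Hugging Face transformers API; the gpt2 checkpoint and the mean-pooling step are stand-in assumptions, not the project's actual LLM or bridge network.

# Sketch of pulling hidden states from a language model with transformers.
import torch
from transformers import AutoTokenizer, AutoModel

checkpoint = "gpt2"                      # stand-in model; the project likely uses a larger LLM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint, output_hidden_states=True)

caption = "A handwritten digit 7 moves from the top-left corner to the bottom-right."
inputs = tokenizer(caption, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

last_hidden = out.hidden_states[-1]                # (batch, tokens, dim) from the final layer
conditioning = last_hidden.mean(dim=1)             # simple mean pooling before a bridge network
print(conditioning.shape)                          # e.g. torch.Size([1, 768]) for gpt2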

12:30
Automated Washroom and Monitoring System: IoT- Driven Hygiene Management for University Facilities
PRESENTER: Sukhman Singh

ABSTRACT. University washrooms with high occupancy and odor surges, especially in post-class hours with peaks of 25 to 35 users per hour and air quality dropping from 300 to 700 ppm, cause serious hygiene issues. The Automated Washroom and Monitoring System is an IoT-based solution combining two ESP8266 microcontrollers with HC-SR04 ultrasonic sensors for occupancy tracking, a DHT11 sensor for humidity, and an MQ-135 sensor for air quality monitoring. A 5 V relay manages ventilation, Firebase records real-time data, a Django dashboard offers administrative insights showing the real-time sensor data (current ppm, current occupancy, and average ppm for the day), and Twilio sends SMS warnings to maintenance staff. A bi-daily random forest model with 85% accuracy estimates cleaning requirements. Applied in a four-week pilot at Lovely Professional University and validated by fifty user surveys (p = 0.03), the approach reduced sanitary concerns by 30%. This scalable system increases user comfort and operational efficiency at a cost of around ₹1600. By addressing dynamic usage patterns, it aligns with India's Swachh Bharat mission and sets a benchmark for smart hygiene management in public and educational facilities. Future advancements will include multi-washroom scalability and advanced prediction models integrating deep learning to further boost accuracy and adaptability.
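
The scikit-learn sketch below illustrates the kind of random forest cleaning predictor described above; the features and the tiny synthetic dataset are placeholders, not the pilot data from the paper.

# Illustrative random forest predicting whether a cleaning visit is needed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# features: [hourly users, current ppm, humidity %]
X = np.column_stack([
    rng.integers(0, 40, 500),
    rng.uniform(250, 800, 500),
    rng.uniform(30, 90, 500),
])
# synthetic rule: cleaning needed when usage and ppm are both high
y = ((X[:, 0] > 25) & (X[:, 1] > 600)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")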

12:45
SwiftCommit: Integration of Commit Summary Generator Into Version Control Workflow

ABSTRACT. Commit messages are crucial in software development, documenting code changes and ensuring project clarity. However, writing clear and concise messages can be tedious, leading to inconsistencies that affect collaboration and maintainability. This study presents SwiftCommit, an Artificial Intelligence (AI)-powered commit summary generator designed to automate and improve commit message writing within version control workflows. The transformer-based models BART and CodeT5 were trained on the CommitBench dataset, and their performance was evaluated using Bilingual Evaluation Understudy (BLEU), Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and CodeBERT scores, ultimately selecting CodeT5 for its superior accuracy in generating meaningful summaries. SwiftCommit was then integrated into a Visual Studio Code plugin, enabling seamless automatic commit message generation. A total of 10 professional developers from the Information Technology (IT) industry assessed the effectiveness of SwiftCommit. Participants rated the generated commit messages on adequacy, conciseness, and expressiveness. Results indicated a moderate level of agreement among respondents, confirming that SwiftCommit produces reliable commit summaries that align with human-written messages. This study demonstrates the potential of AI-driven tools in improving software documentation and developer efficiency by reducing the manual effort required for commit writing. Future enhancements may include support for additional programming languages, broader integration with development platforms, and further refinement to improve message quality.
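
As a rough sketch of generating a commit summary from a diff with a CodeT5-style model via the Hugging Face transformers API, see below; the public checkpoint name and the prompt format are assumptions, and SwiftCommit's own fine-tuned model and plugin integration are not reproduced here.

# Sketch: summarize a code diff with a CodeT5 checkpoint (checkpoint name assumed).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Salesforce/codet5-base-multi-sum"   # assumed public summarization checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

diff = """-    return a / b
+    if b == 0:
+        raise ValueError("division by zero")
+    return a / b"""

inputs = tokenizer(diff, return_tensors="pt", truncation=True, max_length=512)
ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))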

13:00
Parabolic Modulation Based Non-singular Fast Terminal Sliding Mode Control for Single-Phase Stand-Alone Inverters
PRESENTER: Cagdas Hisar

ABSTRACT. Upgraded sliding surfaces are used to control the output voltage of single-phase stand-alone inverters. As a new generation of sliding surfaces, terminal sliding surfaces, fast terminal sliding surfaces, and non-singular fast terminal sliding surfaces are available in the literature. While quicker dynamic results can be obtained with fast terminal manifolds, the surface-induced chattering effect is very high. Chattering with the non-singular surface is minimal, but its dynamic reactions are sluggish. In comparison to the other two structures, the non-singular fast terminal manifold produces more desirable results. In this paper, a non-singular fast terminal structure is compared with the other ones using various visualizations, and its strengths are displayed in detail. In addition, the parabolic modulation (PM) approach, which was recently described in the literature, is utilized to ensure that the generated switching signals have a constant frequency. To demonstrate the efficacy of PM and to encourage its adoption in further research, this approach is contrasted with the hysteresis modulation (HM) method under various circumstances.

13:15
A Comparative Study of Transformer-Based Models and Machine Learning Techniques for Enhancing Bangla Sentiment Analysis

ABSTRACT. Social media monitoring and business intelligence need accurate sentiment analysis because the digital activity of Bangla-speaking communities keeps expanding. Current classification models do not deliver effective analysis of Bangla context, mainly because of insufficient annotated datasets for this low-resource language. The research examines sentiment classification by comparing deep learning transformer approaches (Flair, Task-Aware Representation of Sentences (TARS), and the Generative Pre-trained Transformer) with traditional machine learning algorithms such as Logistic Regression, Support Vector Classifier, Random Forest, XGBoost, Decision Tree, K-Nearest Neighbors, and Naive Bayes. In tests on our Bangla dataset, the Flair model with Bangla BERT Base embeddings produced the best performance, 94.21% accuracy, surpassing both TARS at 88.5% and the traditional classifiers. Contextual embeddings capture complex sentiment patterns efficiently, which proves integral to boosting performance on low-resource Natural Language Processing tasks. For better sentiment classification in practical settings, future work will concentrate on enhancing the efficiency and diversity of datasets and implementing cross-lingual capabilities.
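
A minimal training skeleton for the Flair + Bangla BERT setup named above is sketched below; the corpus layout, hyperparameters, and the "sagorsarker/bangla-bert-base" checkpoint are assumptions, and some argument names differ slightly between Flair releases.

# Skeleton of a Flair text classifier with transformer document embeddings.
from flair.datasets import ClassificationCorpus
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

# data_dir holds train.txt / dev.txt / test.txt in FastText format: "__label__positive <text>"
corpus = ClassificationCorpus("data_dir", label_type="sentiment")
label_dict = corpus.make_label_dictionary(label_type="sentiment")

embeddings = TransformerDocumentEmbeddings("sagorsarker/bangla-bert-base", fine_tune=True)
classifier = TextClassifier(embeddings, label_dictionary=label_dict, label_type="sentiment")

trainer = ModelTrainer(classifier, corpus)
trainer.fine_tune("bangla-sentiment", learning_rate=5e-5, mini_batch_size=16, max_epochs=3)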

13:30
AI and IoT Integration in Pediatric Phototherapy: Revolutionizing Approach for Neonatal Care
PRESENTER: Akash Katode

ABSTRACT. Neonatal jaundice is a common condition in newborns, treated through phototherapy to degrade excess bilirubin. However, high costs and limited access to advanced incubators remain barriers, especially in regions like Asia and Africa. This paper introduces an affordable IoT-based phototherapy device powered by Xtensa LX6 and LX7 processors, featuring programmable temperature and humidity control, along with sensors to monitor skin temperature and cradle moisture. The system includes a 360-degree rotational light source for effective phototherapy. Using Edge Impulse, a machine learning model is deployed on the hardware to monitor the baby’s position through live video streams for real-time remote monitoring. IoT connectivity enables remote control of the light source and exhaust fans, ensuring a safe environment. The device also includes fire protection, auto-sterilization, and a hand disinfectant system, ensuring hygiene and safety. This system combines AI, IoT, and automation to improve neonatal care in underserved regions.

13:45
A novel approach on Cloudlets based applications using Meta-heuristic technique

ABSTRACT. In general, the computing resources linked with distinct distributed locations can be termed cloudlets. They are mainly provisioned within latency-sensitive and edge computing environments. These decentralized clouds can be deployed at the boundary of the network, close to the end systems. In such a situation, low-latency computation may be required for real-time processing. The primary intention is to bring computing resources closer to the applications that would otherwise be accessed through central cloud data centers. In a cloudlet-based environment, the essential problems are allocating resources to a variety of tasks and optimizing performance while minimizing response delay. Often, several cloudlets can process a variety of applications after receiving requests from other computing resources linked to task execution. In such a situation, after the type of application is verified, the appropriate cloudlet may be selected from among multiple cloudlets. By adopting specific strategies, the latency of applications during execution can be minimized. Accordingly, to optimize the resources within the cloudlets and to prioritize aspects such as task scheduling, resource allocation, and load management on the processing elements, a meta-heuristic technique, specifically particle swarm optimization, can be applied to obtain an optimal solution by simulating a swarm of particles.

14:00
XIMR-Net: A Robust Deep Learning Model for Automated Lemon Leaf Disease Classification
PRESENTER: Md. Alif Sheakh

ABSTRACT. Lemon leaf diseases threaten global citrus production, causing significant economic and agricultural losses. While deep learning offers solutions, existing models often lack robustness under real-world conditions such as variable lighting and disease severity. This study introduces XIMR-Net, an ensemble deep learning framework that synergizes transfer learning and multi-model fusion to achieve unprecedented accuracy in lemon leaf disease classification. Our methodology begins with rigorous preprocessing: image resizing, normalization, and Contrast Limited Adaptive Histogram Equalization, which improves image quality, as validated by the Peak Signal-to-Noise Ratio. We evaluated ten state-of-the-art CNN architectures as transfer learning models. The four main models (InceptionV3, Xception, ResNet101V2, MobileNetV2), each exceeding 94% accuracy, were integrated using weighted averaging in XIMR-Net. Hyperparameter optimization, including focal loss and advanced data augmentation, enhanced precision and recall by over 75% for individual models. XIMR-Net achieved 99.28% accuracy, 98.99% precision, 99.07% recall, and a 98.99% F1 score, outperforming both standalone models and existing ensemble approaches. Five-fold cross-validation confirmed robustness, while confusion matrices revealed near-perfect classification across nine disease categories. By addressing critical gaps in scalability and field adaptability, XIMR-Net provides a deployable tool for precision agriculture, compatible with mobile-based monitoring systems. This work advances AI-driven disease management, offering farmers a reliable early-detection solution to mitigate crop losses and promote sustainable practices.
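
A numpy sketch of the weighted-average fusion step described above is given below: per-class probabilities from several base CNNs are combined with fixed weights. The weights and fake predictions are placeholders, not the tuned values from XIMR-Net.

# Weighted-average ensembling of per-class probabilities (illustrative values only).
import numpy as np

# softmax outputs of 4 base models for a batch of 2 images over 9 disease classes
preds = np.random.default_rng(0).dirichlet(np.ones(9), size=(4, 2))  # shape (models, batch, classes)
weights = np.array([0.30, 0.30, 0.25, 0.15])                         # illustrative model weights

ensemble = np.tensordot(weights, preds, axes=1)      # weighted average -> (batch, classes)
ensemble /= ensemble.sum(axis=1, keepdims=True)      # renormalize (weights already sum to 1 here)
print("predicted classes:", ensemble.argmax(axis=1))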

14:15
Innovative Practices For Managing End-of-Life Aircraft: A Review of Circular Strategies And Digital Solutions
PRESENTER: Zaki El Hani

ABSTRACT. As the global aircraft fleet continues to age, the aviation industry is increasingly confronted with the environmental, economic, and logistical challenges of managing end-of-life (EOL) aircraft. This paper presents a comprehensive review of recent innovations and strategies aimed at improving the dismantling, recycling, and valorization of retired aircraft. Emphasis is placed on circular economy principles, Lean methodologies, and emerging digital technologies that enhance traceability and operational efficiency. Tools such as blockchain, RFID systems, and predictive maintenance models are explored for their capacity to streamline EOL processes and support sustainable resource recovery. Particular attention is given to the growing presence of composite materials and the technical constraints they pose for recycling. By adopting a systematic literature review approach, this study identifies current best practices, persistent barriers, and potential avenues for integrating eco-design, automation, and collaborative frameworks into a unified model for sustainable aircraft decommissioning.

14:30
Toward Sustainable Aviation: A Systematic Analysis of Strategies to Reduce Environmental Impact
PRESENTER: Zaki El Hani

ABSTRACT. Over the past few years, the aviation sector has come under growing pressure to adopt environmentally responsible practices, particularly by cutting down on CO₂ emissions that harm ecosystems. This review synthesizes existing literature on innovative strategies and technological developments aiming to support this transition. Particular attention is given to alternative fuels such as liquid hydrogen, biofuels, and synthetic alternatives which offer substantial potential to reduce carbon emissions, despite persistent barriers related to cost, availability, and infrastructure readiness. The integration of lightweight composite materials has shown benefits in terms of fuel consumption, yet also presents concerns regarding their end-of-life management and recyclability. Electric propulsion systems are gaining momentum for regional and short-haul applications, although their scalability remains limited by energy storage capabilities and insufficient charging infrastructure. Meanwhile, artificial intelligence is emerging as a strategic asset, supporting more efficient operations, resource optimization, and waste reduction. The review methodology is grounded in the PRISMA protocol for article selection, supported by qualitative coding via NVivo and bibliometric analysis using VOSviewer. Overall, the findings underline that achieving meaningful progress in sustainable aviation extends beyond technological solutions, requiring a concerted effort that brings together regulatory frameworks, industrial innovation, and academic research. The insights provided serve as a foundation for guiding future initiatives toward environmentally responsible aviation.

14:00-15:30Lunch Break
15:30-17:00 Session 11A: International Workshop "Emerging Technologies Remodeling the Legal World - Challenges and Questions"
Location: Room 1
15:30
Smart elections: Can artificial intelligence influence voter behavior?

ABSTRACT. Traditional electoral systems have been adapted to the rapid evolution of new technologies, trying to find solutions for the various problems that have arisen in the electoral process. From the use of electronic voting to combat political absenteeism to the use of AI in the electoral campaign, recent advances in the field of artificial intelligence have an unprecedented impact on the electoral process, both positively, through multiple benefits, and negatively, through the possibility of being misused and violating fundamental rights. The ability of artificial intelligence systems to launch cyber attacks and produce deepfakes, thus leading to disinformation of the population, can negatively affect democratic electoral processes. So we do not hesitate to ask ourselves, as in any other field, what are the costs of using artificial intelligence in the electoral process? Is the electorate prepared to validly express its right to vote in the current situation of multiple sources of information? Romania, like other countries, recently experienced this negative experience, during the presidential elections of November 24, 2024, when artificial intelligence was used abusively, which led to the manipulation of the electorate and election fraud by violating freedom of expression, the right to vote, but also the principle of equality regarding the right to be elected.

15:40
THE IMPACT OF USING EMERGING TECHNOLOGIES IN BANKING
PRESENTER: Adriana Panțoiu

ABSTRACT. Emerging technologies are being successfully used in banking and are enjoying real success. But their use is not without challenges, both technological and legal. Emerging technologies, on the other hand, come with a number of benefits, but they also create a great deal of vulnerability for individuals and businesses. The law is challenged to face up to these challenges, to ensure adequate protection for all subjects of law and to keep pace with the rapid evolution of these technologies. The legislator plays an extremely important role in this period, both at international and national level. They must issue rules that are applicable in the long term, which requires them to have a good knowledge of, and anticipate, future developments in the field of emerging technologies, so that they are not surprised by the new realities but must somehow steer them towards healthy development.

15:50
Integrating Digitalization and Artificial Intelligence in Urban Transport: Regulatory Challenges and Smart City Initiatives

ABSTRACT. This article analyzes the integration of digitalization and artificial intelligence (AI) within urban transport systems, focusing on the intricate interplay between technological advancements, regulatory frameworks, and smart city initiatives. It employs a systemic analytical approach, drawing from academic literature, policy documents, and global and local case studies (e.g., Vienna, Seoul, Cluj-Napoca, Bucharest) to highlight current trends, challenges, and solutions. The analysis details the foundational role of digital access, connectivity, and the application of AI and Big Data in optimizing traffic flow and resource management. Concurrently, it scrutinizes key regulatory hurdles, including liability in autonomous vehicles, data protection, privacy concerns, and the imperative for digital inclusion to prevent discrimination. The paper synthesizes the persistent gap between rapid technological evolution and slower legislative adaptation, emphasizing that addressing these regulatory challenges is crucial for fostering truly sustainable, equitable, and efficient urban development. Ultimately, it proposes policy recommendations, such as investing in integrated platforms, promoting public-private partnerships with robust oversight, and establishing adaptive regulatory bodies, to harmonize technology and governance for smart city transport.

16:00
Digital versus Traditional Instruments in the Practice of Legal Professions: An Overview of the Lawyer and Public Notary Professions
PRESENTER: Olivian Mastacan

ABSTRACT. This article aims to highlight the latest developments in legal professions regarding the integration of digital and artificial intelligence instruments, in contrast with traditional, analogue instruments. Among the multitude of legal professions, the roles of lawyer and notary are particularly significant. Despite their deep-rooted connection with the people, defending their rights and aiding them through complex situations, these "liberal" professions are fundamentally interrelated with public aspects of social and economic life. Both professions fulfil essential public functions that underpin the current structure of society. Their liberal nature does not stem solely from progressivism but from the autonomy they offer practitioners and the intelligent application of legal frameworks within a solidly regulated context. These are professions that are simultaneously innovative and traditional, as they are unified by law, legal norms, and regulation, which constitute the foundation of the manifested universe. Thus, what is the future? Can these professions fully embrace technological advancements? And if so, to what extent? Assuming that both professions are inherently tied to human, social, relational, political, and economic needs, we express the hope that the integration of new technologies will occur without dehumanizing the legal act, whether notarial or legal in nature.

16:10
Emerging Trends in Regulating the Use of Digital Technologies for Sustainable Forest Management

ABSTRACT. Law and the legal world are often perceived as archaic, unsuitable, or even behind the times in relation to the evolution of society. However, in the context of unprecedented technological developments in recent decades, it was natural for legislation to take them into account and use them for the benefit of achieving major social objectives, especially related to environmental protection. The sustainable development objectives assumed at a global level and the Regional-European initiatives to achieve the targets thus established represent an opportunity to raise awareness of the need for technology and law to work together for the benefit of people and societies. Even if the introduction of technical notions into legal language is not without complications, and digital technologies are, in fact, transforming the rules and practice of law, requiring a constant effort of adaptation on the part of legal professionals, who must stay abreast of technological developments, the use of modern technologies makes a considerable contribution to the application of law, in terms of effectiveness and accessibility of the data necessary to allow decision-makers to take appropriate measures. This study aims to illustrate the convergence between law and new technologies for sustainable forest management, providing an overview of the most recent initiatives of the European Union and Romania, based on a content analysis of policy documents and legal acts, as well as the literature on the subject.

16:20
Digitalization of the Judicial System in Romania. Important Strategic Objective and Guarantee for a Modern, Fast and Effective Justice System
PRESENTER: Alin Petrea

ABSTRACT. The last decades have demonstrated that the integration of new technologies into daily human activity is an undeniable reality. The phenomenon is complex, has a rapid pace of manifestation, and encompasses all segments of social life, one of which is the act of justice. From this perspective, the study offers a brief analysis of the degree to which IT (information technology) solutions have been assimilated into the way judicial institutions work, of the benefits they bring, of the risks or dangers they present, and also of the prospects for realistic assimilation of these technologies, including those using AI (artificial intelligence). Our paper is placed at the necessary intersection between the digital transformation of the legal system, the ethics of using artificial intelligence, and the development of technological infrastructure for justice.

16:30
The Role of Technology in Mediating and Moderating the Relationship between Academic Pressure and Mental Health
PRESENTER: Thaya Madhavi

ABSTRACT. Students' mental health issues are significantly influenced by academic pressure. This study investigates the dual function of technology as a moderator and mediator in the connection between students' mental health and academic stress. This study uses SPSS for both mediation and moderation analysis, based on data from 384 students. The results show a strong positive correlation between mental health problems and academic pressure, with technology acting as a partly mediating factor. Technology does not, however, have a significant moderating influence on this relationship. The study also identifies harmful and protective technical factors that affect this dynamic. The findings imply that although technology can improve coping strategies, excessive usage of it may make psychological stress worse. Based on the results, this study provides educators, legislators, and mental health specialists with practical insights that highlight the benefits and drawbacks of integrating technology in the classroom and direct the development of instructional strategies that promote student academic achievement and mental health.

16:40
A Study On Effectiveness Of Technological Advancements On Human Resource Practices In Selected It Firms
PRESENTER: Thaya Madhavi

ABSTRACT. Technological advancements have significantly transformed Human Resource Management (HRM) practices, particularly in IT firms, by enhancing efficiency and streamlining operations. This study focuses on analyzing the impact of technology on HR functions such as training and development and recruitment, assessing how digital tools improve HR processes and decision-making. With the increasing adoption of artificial intelligence, automation, and HR analytics, organizations can optimize talent acquisition and employee development strategies. The research also examines employee perceptions of technology adoption in HRM and its impact on job satisfaction and productivity. Understanding these perceptions is crucial for organizations to ensure a smooth transition to digital HRM and enhance employee engagement. While technology offers numerous benefits, resistance to change and a lack of digital literacy can affect its effectiveness. Furthermore, the study identifies key challenges faced by HR professionals in integrating advanced technologies into HR practices, such as data privacy concerns, implementation costs, and adaptability issues. By analyzing primary data from selected IT firms, this research provides valuable insights and strategic recommendations to improve the effectiveness of technology-driven HRM, ensuring a balance between innovation and human-centric HR practices.

15:30-17:00 Session 11B: Bio-medical applications & biomaterials
Location: Room 2
15:30
Basal Ganglia Selection of the Cerebral Cortex Main Conscious Process
PRESENTER: Mihai Popescu

ABSTRACT. The somatic system, the system through which the human body interacts with the environment, needs rigorous control. This effort begins at the spinal level with the spinal reflexes; continues with the feedforward control of posture and equilibrium performed unconsciously at the level of the cerebellum; is set at the basal ganglia, where the main process consciously executed in relation with the environment is selected; and ends at the cortex, the final level of integration, which executes it. At the level of the cerebellum the hub of communication is the brain stem, whereas at the level of the basal ganglia the hub of communication is the thalamus. The purpose of this paper is the discussion and modeling of the direct- and indirect-path motor program selection inside the basal ganglia. Because inexpensive means of investigation at the level of the diencephalon, basal ganglia, brain stem, and cerebellum are lacking, it is convenient to work on cybernetic models and run simulations. To validate the models, the results obtained can then be compared with results from the references. The modeling effort in this paper continues the work of some of our previous publications, referenced throughout the paper.

15:45
Image-Based Artificial Intelligence for Early Diagnosis of Ocular Diseases

ABSTRACT. Early diagnosis of ocular diseases is crucial for preventing vision loss and ensuring timely medical intervention. This study proposes an advanced artificial intelligence (AI)-based approach for the automated classification of ocular diseases using medical imaging. The proposed model employs convolutional neural networks (CNNs) to analyze retinal images and categorize patients into eight diagnostic classes: Normal, Diabetes, Glaucoma, Cataract, Age-related Macular Degeneration (AMD), Hypertension, Myopia, and other diseases/abnormalities. The dataset used in this research consists of annotated ophthalmic images labeled by expert clinicians. A comprehensive preprocessing pipeline, including noise reduction, contrast enhancement, and augmentation techniques, was applied to improve model performance. The classification model was trained using state-of-the-art deep learning architectures, and its performance was evaluated using accuracy, precision, recall, and F1-score metrics. Experimental results demonstrated that the proposed model achieves high classification accuracy and outperforms existing approaches in detecting early-stage ocular diseases. The study highlights the potential of AI-driven diagnostic tools to support ophthalmologists in clinical decision-making and improve the accessibility of eye disease screening.

16:00
Designing a Smart Device for Personal Assistance based on Artificial Intelligence

ABSTRACT. This project proposes a smart watch that combines health monitoring and personal safety using artificial intelligence. The device features facial recognition, voice assistance, and two cameras that analyze the environment to detect dangers, such as suspicious individuals or emergencies, automatically calling emergency services and sending the user’s location. Sensors track heart rate, blood oxygen levels, and stress, while advanced algorithms can anticipate medical issues and better recognize risk situations. Data is protected through encryption and local storage, providing a complete solution for safety and health.

16:15
Enhanced Micro-Vessel Image Segmentation Using the Advanced EyeU-Net Model
PRESENTER: Mariem Qaddour

ABSTRACT. Segmenting blood vessels in medical images is critical for applications such as disease diagnosis and treatment planning. This study introduces EyeU-Net, a novel deep learning model based on the U-Net architecture, specifically designed for vessel segmentation. EyeU-Net utilizes an encoder-decoder structure with skip connections to capture both fine details and global context, enabling precise segmentation. The model was trained and evaluated on the DRIVE dataset, achieving an accuracy of 0.8662, an IoU of 0.6645, and a precision of 0.8333, outperforming existing approaches in vessel segmentation tasks. The results highlight the effectiveness of EyeU-Net as a robust and accurate solution for this challenging task, demonstrating its potential for clinical applications.
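
The compact PyTorch sketch below shows a generic U-Net-style encoder-decoder with skip connections, the architecture family EyeU-Net builds on; channel counts and depth are illustrative, not the authors' exact network.

# Compact U-Net-style encoder-decoder with skip connections (generic illustration).
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)          # 64 = 32 upsampled + 32 skip
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))                   # vessel probability map


print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)   # -> torch.Size([1, 1, 64, 64])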

15:30-17:00 Session 11C: Artificial Intelligence and expert systems III
Location: Room 3
15:30
A Hybrid Technique based on Fusion of Bat Algorithm (BA) - Grey Wolf Optimizer (GWO) - Deep CNN for Classification of Military Ground Vehicles in SAR Aerial Imagery
PRESENTER: Liviu Rujan

ABSTRACT. This paper proposes a novel hybrid classification model obtained by the fusion of two Swarm Intelligence techniques (the Bat Algorithm (BA) and the Grey Wolf Optimizer (GWO)) with a Deep Convolutional Neural Network (CNN). The proposed method is applied for Automatic Target Recognition (ATR) to classify military ground vehicles in SAR aerial imagery. We have evaluated the performance of the model using the MSTAR (Moving and Stationary Target Acquisition and Recognition) dataset and choosing a Residual Neural Network (ResNet) as the CNN classifier to be combined with BA and GWO. We obtain an accuracy of 91.60% using the proposed BA-GWO-ResNet fusion, compared with an accuracy of only 83.4% obtained when a standalone ResNet is used. We point out that the presented classification algorithm does not require a processing phase for object detection.

15:45
An Approach to Lung Cancer Detection and Classification from Chest CT-Scan Images Using Deep CNN and Decision Fusion as a Diagnostic Tool

ABSTRACT. Lung cancer is among the deadliest diseases in the world, and its diagnosis as early as possible is necessary. The first part of this paper is dedicated to the identification of lung cancer in chest CT-scan imaging using a deep learning CNN for subject classification into four categories: normal, adenocarcinoma, large cell carcinoma, and squamous cell carcinoma. The GoogLeNet architecture is chosen, and the classification performances are evaluated for various training conditions. The second part of the paper addresses lung cancer detection in chest CT imaging, namely, diagnosing whether a patient has lung cancer (any of the three categories mentioned above) or is normal. For the lung cancer detection task, we propose an ensemble of two GoogLeNet modules performing decision fusion according to Dempster-Shafer theory. The two GoogLeNet modules of the ensemble have identical architectures, but they use an asymmetric training technique controlled by an asymmetry parameter. The influence of this asymmetry parameter on the performance of the CNN ensemble decision is shown. We also point out the advantage of the proposed decision fusion model over a standalone classifier with the same architecture. We have obtained an Overall Accuracy of 96.37% for decision fusion with optimization of the asymmetry parameter, versus 92.45% for a standalone classifier with balanced training.
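
The abstract does not spell out the fusion formula. As a sketch of the general idea only, and not the authors' implementation (their asymmetric-training scheme is ignored), Dempster's rule of combination can be applied to two classifiers whose softmax outputs are treated as basic probability assignments over singleton hypotheses.

```python
# Sketch only (not the authors' code): Dempster's rule of combination for two
# classifiers whose outputs are masses on singleton hypotheses.
import numpy as np

CLASSES = ["normal", "cancer"]        # detection task; labels are illustrative

def dempster_combine(m1, m2):
    """Combine two mass vectors defined on singleton hypotheses only."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    agreement = m1 * m2               # masses that agree on each hypothesis
    conflict = 1.0 - agreement.sum()  # K: total conflicting mass
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined.")
    return agreement / (1.0 - conflict)

# Example softmax outputs of two hypothetical GoogLeNet modules.
p_module1 = [0.30, 0.70]
p_module2 = [0.20, 0.80]
fused = dempster_combine(p_module1, p_module2)
print(CLASSES[int(np.argmax(fused))], fused)
```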

16:00
Whisper Based Speech Recognition for Emergency Services

ABSTRACT. Artificial intelligence is increasingly integral to emergency response systems, offering capabilities such as real-time call transcription, keyword detection, and prioritization of life-threatening situations. It also facilitates rapid access to protocols, location data, and caller history, supporting more efficient and informed decision-making by emergency operators. This paper presents an enhancement to the speech transcription module of the ODIN112 emergency operator assistant system by replacing the Kaldi-based module with one built on Whisper. The Whisper-based solution leverages transfer learning from extensive multilingual pre-training, requiring significantly smaller Romanian-specific fine-tuning datasets compared to training ASR systems from scratch. Moreover, Whisper’s superior multilingual support is an essential feature in future developments of the ODIN112 system, for accommodating the languages of ethnic minorities and supporting non-native speakers (e.g., tourists).
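
As a minimal illustration of the Whisper side only, and not of the ODIN112 integration or its fine-tuned Romanian checkpoint, transcribing a recording with an off-the-shelf multilingual model might look like the sketch below; the file name and keyword list are hypothetical.

```python
# Illustrative sketch only (not the ODIN112 system): transcribing a Romanian
# recording with an off-the-shelf multilingual Whisper checkpoint.
import whisper  # pip install openai-whisper

model = whisper.load_model("small")                   # multilingual checkpoint
result = model.transcribe("call_recording.wav",       # hypothetical file name
                          language="ro",              # force Romanian decoding
                          task="transcribe")
transcript = result["text"]

# Hypothetical downstream step: flag keywords for the operator's dashboard.
KEYWORDS = {"accident", "incendiu", "ajutor"}         # example Romanian terms
flagged = [w for w in KEYWORDS if w in transcript.lower()]
print(transcript, flagged)
```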

16:15
Hybrid Ant Colony and Q-Learning Algorithm for Swarm Robots: Path Planning and Collision Avoidance in Unknown Environments
PRESENTER: Andrei Dutceac

ABSTRACT. This paper presents a hybrid approach combining Ant Colony Optimization (ACO) and Q-learning for swarm robot path planning and collision avoidance in unknown environments. Traditional ACO-based path planning methods face challenges such as local minima and suboptimal solutions when navigating complex environments. By integrating Q-learning, the proposed method enhances adaptive decision-making, enabling robots to dynamically explore and optimize their routes. The epsilon-greedy strategy balances exploration and exploitation, preventing robots from getting stuck in dead zones and allowing the discovery of more efficient paths. Simulations conducted in MATLAB demonstrate that the ACO-Q-learning algorithm generates multiple alternative paths, improves overall path efficiency, and enhances swarm coordination compared to standard ACO and Artificial Potential Field (APF) methods. The results highlight the algorithm's potential for real-world swarm robotics applications, where decentralized robots must autonomously navigate and adapt to changing environments.
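
Since the paper's simulations are in MATLAB, the Python fragment below is only a simplified illustration of the general idea of blending pheromone information with Q-values under an epsilon-greedy policy; the grid size, weights and update constants are arbitrary assumptions, not the paper's parameters.

```python
# Simplified sketch (not the paper's algorithm): epsilon-greedy action choice
# that blends ACO pheromone desirability with learned Q-values on a grid.
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 100, 4            # hypothetical 10x10 grid, 4 moves
pheromone = np.ones((N_STATES, N_ACTIONS))
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration
beta = 1.0                              # weight of the pheromone term

def choose_action(state):
    """Epsilon-greedy over a mixed score of Q-values and pheromone levels."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))        # explore
    score = Q[state] + beta * np.log(pheromone[state] + 1e-9)
    return int(np.argmax(score))                   # exploit

def update(state, action, reward, next_state):
    """Standard Q-learning update plus a pheromone deposit on rewarded moves."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
    pheromone[state, action] = 0.9 * pheromone[state, action] + max(reward, 0.0)
```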

16:30
A New Approach to Diagnose Driver Drowsiness using an Ensemble of Deep CNN Classifiers with Decision Fusion

ABSTRACT. This paper presents a novel approach to diagnosing driver drowsiness using an ensemble of Deep CNN classifiers based on Dempster-Shafer decision fusion. The proposed method first compares YOLOv5 with the DLIB C++ library for face localization. The detected faces are then classified into one of three drowsiness states – low, vigilant and alert – using an ensemble of M Deep CNN classifier modules. Decision fusion based on Dempster-Shafer theory is applied to combine the outputs of the M independent CNN modules, where the cases M = 2, 3, ..., 7 are considered. VGG16 is chosen as the CNN architecture. Using the proposed decision fusion technique, an improved overall accuracy (OA) of up to 97.66% is obtained for an ensemble of M = 3 CNN classifiers, compared with only 78.66% for the best standalone classifier.

16:45
New Wave Logistics Framework: Enabling Sustainable Practices in Road Transportation
PRESENTER: Younesse Ouahbi

ABSTRACT. The logistics industry is focused on reducing carbon emissions and improving efficiency to meet environmental and regulatory requirements. NW Logistics, an AI-powered platform, aims to achieve these goals by integrating real-time CO₂ emissions tracking, route optimization, and driver behavior monitoring. The system allows logistics managers to monitor fleet performance with increased accuracy and generates individualized reports for drivers. Early simulations show significant reductions in carbon emissions, improvements in route efficiency, delivery tracking accuracy, and driver safety, demonstrating the transformative potential of AI in sustainable logistics management.

15:30-17:00 Session 11D: Electronic circuits and equipment
Location: Room 4
15:30
Integration of SystemC with PSpice Simulation Models

ABSTRACT. This paper focuses on the integration of SystemC with PSpice, allowing the simultaneous simulation of the digital and analog behavior of an electronic system and providing a complete and accurate overview of its performance. The presentation covers configuring and connecting components, creating a co-simulation model, and interpreting the obtained results. As electronic systems become increasingly complex, it is important to accurately simulate their behavior. This requires a combination of analog, digital, and mixed-signal simulations, which can be challenging to perform using traditional analog simulation methods. Co-simulation with PSpice and SystemC can provide a powerful solution for simulating complex systems using both analog and digital models. By integrating these two simulation environments, users can leverage the strengths of each tool and accurately model the behavior of complex mixed-signal systems. The new Infineon Automotive Smart Power Switches provide protection functions and enhanced diagnostic capabilities. The device offers adjustable current limitation for higher reliability in protecting the system. In case of a short circuit to ground, the PCB traces, connectors, and loads can be protected. Furthermore, the device has a capacitive load switching mode implemented to charge capacitors. With all of this in mind, we developed an analog behavioral model using PSpice and SystemC. In our example, creating a model containing a complex finite state machine (FSM) to handle all the functions of the device would be a challenge using circuit-level implementation or SPICE analog behavioral modeling methods. While PSpice is a powerful simulation tool, it is primarily focused on analog circuits. In contrast, SystemC is specifically designed for modeling digital systems and provides a more intuitive and efficient way to create FSMs. Using SystemC modules, we can avoid convergence issues generated by digital hazards, and the simulation time improves significantly. The seamless integration of different modeling techniques into a single environment and simulator has been a significant advantage in overcoming the challenges of simulating complex mixed-signal systems. This has been demonstrated through the integration of SystemC with PSpice, which allows the behavior of such systems to be simulated in a more efficient and accurate manner. As digitalization becomes an essential part of the automotive application design-in process, simulation models provide a valuable solution. The research will be helpful for engineers who need to develop and validate complex electronic systems using co-simulation of SystemC with PSpice.

15:42
Optimized Current-Source based on Brokaw Architecture for Constant Input Transistors Transconductance
PRESENTER: Cristian Stancu

ABSTRACT. The demand for CMOS precision operational amplifiers in critical applications has steadily risen over time, driven by increasing requirements for enhanced accuracy and sensitivity. The input differential pair transconductance impacts important parameters such as the unity-gain bandwidth and offset voltage drift. This paper focuses on designing and implementing an innovative current source that optimizes the bias slope to minimize the variation of the differential input transconductance with temperature, while also achieving lower cost and decreased die area. A total change in transconductance of only 7 µS over the entire studied temperature range is accomplished with this approach. A comparison between the design presented in this paper and other works from the literature is also carried out, with the proposed design showing superior performance. Corner simulations are conducted for the proposed architecture to assess the circuit performance under process variations. The circuit layout is also provided.
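
The design equations are not given in the abstract; purely as background for why a temperature-shaped bias current helps, the standard strong-inversion relation for the transconductance of a MOS input device is recalled below (this is textbook material, not the authors' derivation).

```latex
% Background relation only, not taken from the paper:
% square-law transconductance of a MOS input transistor.
g_m = \sqrt{2\,\mu_n(T)\,C_{ox}\,\frac{W}{L}\,I_D}
\quad\Rightarrow\quad
g_m \approx \text{const.} \;\text{ requires }\; I_D(T) \propto \frac{1}{\mu_n(T)}
```

Since carrier mobility decreases with temperature, the bias current must rise with a matched slope, which is the kind of "bias slope" shaping the abstract appears to refer to.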

15:54
Smart Over-Temperature Protection Connector For DC Electrical Circuits

ABSTRACT. The operating temperature of direct-current circuits is an important parameter for their reliability. Temperature increases caused by the ambient environment, imperfect connections, or the imperfect functioning of some circuit elements can lead to serious damage. This paper presents an advanced smart device designed for over-temperature protection in direct-current electrical circuits. Equipped with a microcontroller and Negative Temperature Coefficient (NTC) sensors, the device monitors temperature, using Inter-Integrated Circuit (I2C) or Universal Asynchronous Receiver/Transmitter (UART) communication during the experimental phase. Once a specific temperature threshold is reached, the device switches the load on or off through metal-oxide-semiconductor field-effect transistors. The experiments involve exposure to high currents over extended periods of time and rapid current variations, highlighting the sensor's precision and the efficacy of the protection mechanism.
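
As an illustration of the control principle only, and not the paper's firmware, the sketch below converts an NTC divider voltage to temperature via the Beta equation and switches the load with hysteresis; the sensor constants, thresholds and hardware stubs are assumptions.

```python
# Illustrative sketch only (not the paper's firmware; constants and hardware
# stubs are assumptions): Beta-equation NTC conversion and threshold/hysteresis
# switching of the DC load through a MOSFET.
import math

R0, T0, BETA = 10_000.0, 298.15, 3950.0   # assumed 10 kΩ @ 25 °C NTC, B = 3950 K
R_FIXED, V_SUPPLY = 10_000.0, 3.3         # assumed divider resistor and supply rail
T_TRIP, T_RELEASE = 85.0, 70.0            # example trip/release thresholds in °C

def ntc_temperature_c(v_ntc):
    """Convert the voltage across the NTC (lower leg of the divider) to °C."""
    r_ntc = R_FIXED * v_ntc / (V_SUPPLY - v_ntc)
    inv_t = 1.0 / T0 + math.log(r_ntc / R0) / BETA
    return 1.0 / inv_t - 273.15

def read_divider_voltage():
    """Hypothetical stand-in for the MCU's ADC reading (reported over I2C/UART)."""
    return 1.2

def set_load(on):
    """Hypothetical stand-in for driving the MOSFET gate."""
    print("load", "ON" if on else "OFF")

load_on = True
for _ in range(5):                          # a few simulated control cycles
    t_c = ntc_temperature_c(read_divider_voltage())
    if load_on and t_c >= T_TRIP:           # over-temperature: disconnect the load
        load_on = False
    elif not load_on and t_c <= T_RELEASE:  # cooled down: reconnect the load
        load_on = True
    set_load(load_on)
```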

16:06
Technical Review of Class TD Audio Power Amplifiers
PRESENTER: Andrei Militaru

ABSTRACT. This paper presents an extensive evaluation of power amplifier classes by analyzing Class TD amplifiers alongside the standard classes A, B, AB, D and G/H. Class TD amplifiers serve high-power applications because they deliver efficient, high-performance audio amplification. Their design supports high power output together with excellent fidelity, which makes them appropriate for professional sound reinforcement, large-scale public address systems and high-power studio monitoring applications. Class TD amplifiers combine Class AB audio quality with Class D power efficiency, delivering strong performance without major heat dissipation. The initial section presents a basic outline of audio power amplifier configurations. A thorough explanation follows of Class TD amplifier operating principles and their distinctions from other amplifier classes. The research evaluates essential performance metrics through a comparative study of efficiency, total harmonic distortion, linearity, complexity, cost, output power, energy consumption, signal-to-noise ratio and frequency response. The evaluation assesses both the benefits and drawbacks of Class TD amplifiers in high-power applications. The paper concludes with general findings about Class TD audio power amplifiers for high-power applications and their potential future development.

16:18
Study on the Development of an Application for Ultrasonic Microscanning of Metal Samples in Demineralized Water
PRESENTER: Popescu Larisa

ABSTRACT. The management of the operating time of a nuclear power plant, together with the estimation and prediction of the operating period of structural components between two periodic inspections, occupies a central place both in plant operation and in the research and development activity that provides technical support to this field. For nuclear power reactors, the operating conditions specific to the active zone involve high temperatures and mechanical stress states, as well as a corrosive environment and a strong radiation field. In this field, nuclear components must operate under conditions of maximum safety and nuclear security. Thus, for the CANDU (Canada Deuterium Uranium) reactors in the Romanian nuclear power industry, maintaining the structural integrity of the pressure tubes throughout the entire designed lifetime represents one of the main factors in ensuring efficient operation under conditions of maximum security. Ultrasonic examination is one of the most widely used non-destructive techniques for investigating materials: being volumetric, it can highlight and evaluate inhomogeneities and discontinuities within the volume of the material. This paper studies the possibility of creating an experimental laboratory system intended for the non-destructive examination of structural materials of nuclear interest (zirconium alloys) by ultrasonic beam microscanning in automatic mode.

16:30
Fuzzy control of skeletal muscles
PRESENTER: Cristian Ravariu

ABSTRACT. There are three types of striated muscle fibers. Each is controlled by a dedicated type of motoneuron, with which it forms a closely bound structure called a motor unit. These motor units work together like three subsystems of a larger system, the motoneuron pool. The pool resembles a gearbox, with the motoneuronal subsystems acting like three sprockets, each dedicated to a certain speed domain, so there are three domains. These domains are not strictly separated; on the contrary, they overlap at their borders and behave like a fuzzy system. Higher-level control and fatigue further complicate the functioning. Shedding light on this problem is a challenge that must be taken up.
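
Purely as an illustration of the fuzzy-border idea described above, and not a model taken from the paper, three overlapping triangular membership functions over a normalized contraction-speed axis can be sketched as follows.

```python
# Illustrative sketch only (not the paper's model): three overlapping
# triangular membership functions for "slow", "intermediate" and "fast"
# speed domains, mimicking the fuzzy borders between motor-unit subsystems.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

speed = np.linspace(0.0, 1.0, 11)            # normalized contraction speed
slow = tri(speed, -0.2, 0.0, 0.5)
intermediate = tri(speed, 0.2, 0.5, 0.8)
fast = tri(speed, 0.5, 1.0, 1.2)

# At the borders, two "sprockets" are simultaneously active to some degree.
for s, m1, m2, m3 in zip(speed, slow, intermediate, fast):
    print(f"speed={s:.1f}  slow={m1:.2f}  intermediate={m2:.2f}  fast={m3:.2f}")
```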

16:42
Performance Evaluation of a New Efficient Energy Management Strategy for Fuel Cell Hybrid Electric Vehicles

ABSTRACT. In this paper, the performance indicators for fuel cell hybrid electric vehicles are evaluated using an energy management strategy based on a new algorithm (named SWA_RTO) proposed here. The analysis is done in comparison with a reference strategy (Static Feed Forward strategy) using performance indicators such as fuel economy, oxygen excess ratio, fuel efficiency, battery state of charge, and electrical efficiency of the fuel cell system. Significant fuel savings were achieved on the main European driving cycles (ECE-15, EUDC and NEDC), highlighting the potential of the new SWA_RTO strategy to choose the best strategy (which offers the lowest fuel consumption) based on the requested power. By controlling the air and hydrogen regulators, the fuel cell system generates a power that follows the requested power profile, so that the batteries operate in a sustained charge mode, increasing their lifespan.
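
The SWA_RTO algorithm itself is not described in the abstract; the fragment below only illustrates the stated selection principle of picking, for each requested power, the candidate strategy with the lowest estimated fuel consumption. The estimator functions are hypothetical placeholders.

```python
# Illustrative sketch only (not the SWA_RTO algorithm): at each requested
# power level, select the candidate strategy with the lowest estimated
# fuel consumption. The estimators below are hypothetical stand-ins.
def strategy_a(p_req_kw):
    """Hypothetical fuel-consumption estimate [g/s] for candidate strategy A."""
    return 0.016 * p_req_kw + 0.05

def strategy_b(p_req_kw):
    """Hypothetical fuel-consumption estimate [g/s] for candidate strategy B."""
    return 0.012 * p_req_kw + 0.20

CANDIDATES = {"A": strategy_a, "B": strategy_b}

def select_strategy(p_req_kw):
    """Return (name, estimated consumption) of the cheapest strategy."""
    return min(((name, f(p_req_kw)) for name, f in CANDIDATES.items()),
               key=lambda item: item[1])

for p in (5, 20, 60):     # example requested power levels in kW
    print(p, "kW ->", select_strategy(p))
```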

18:00-20:00ECAI Gala Dinner