ECAI-2024: 16TH EDITION OF INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE
PROGRAM FOR THURSDAY, JUNE 27TH

08:00-08:50 Authors Registration

Location: Desk Hall
09:45-11:05 Session 2: ECAI KEYNOTE LECTURES I - part 1
Location: AULA
09:45
Recent trends in artificial intelligence
10:25
Perspectives of Artificial Intelligence for Power Electronics Applications
11:05-11:35 Coffee Break
11:35-12:15 Session 3: ECAI KEYNOTE LECTURES I - part 2
Location: AULA
11:35
Computers and the Environment - From Energy Efficiency to Software Sustainability

ABSTRACT. Societal concerns about how human activity contributes to climatic changes continue to grow as weather events increase in intensity and duration. Now intrinsic to almost all human endeavors, computing technologies’ environmental impact has come into focus. In this presentation, sustainability and computing are discussed through three different lenses. First, the carbon footprint of computing devices (in all forms) is discussed together with relevant examples of how technology companies have evolved their consumer message. In addition, curriculum education on green computing is also highlighted. Second, energy efficiency in data centers is considered. As computing drives science and engineering and powers industry and healthcare, its backbone of data centers consumes a significant and growing portion of the world’s electricity. In this context, mechanisms to monitor energy usage and approaches to reduce consumption are introduced. Third, the sustainability of scientific software and data will be discussed by considering the viability of current approaches such as open-source licensing and community support.

12:15-12:45 Session 4: Invited paper
Location: AULA
12:15
IoT and AI Based Smart Event Monitoring Applications - Concepts, Recent Trends, and Future Directions

ABSTRACT. With recent technological advancements in sensors, high-speed processors, and high-data-rate wireless radios, the internet of things (IoT) and artificial intelligence (AI) are widely explored for automatic detection, recognition, localization and prediction of events in major practical applications: health (human, machine and structural) monitoring, environmental (air, water, noise, climate or weather) monitoring, precision agriculture (farm monitoring), industrial (machine and robotics) automation, autonomous vehicles (land, aerial, surface and underwater), smart grid, smart city, and smart surveillance and defence security. To detect and mitigate abnormalities/anomalies in a timely manner, sensed data need to be processed continuously on wearable processors or edge computing devices. Both continuous edge data processing and IoT network-based data transmission can lead to high levels of energy consumption, which demands frequent charging or replacement of batteries in long-term event monitoring applications. Recent signal processing techniques and deep learning networks can play major roles in improving the accuracy and reliability of event monitoring and control systems, but they demand greater computational resources because of computationally expensive signal processing techniques and larger deep models with increased computational burden. A lightweight, low-latency, and low-power DNN architecture is therefore highly sought after, both for enabling real-time event detection and prediction and for maximizing the lifetime of battery-operated IoT sensors and edge AI computing devices. Improving the energy efficiency of battery-operated automation systems is thus essential to maximize battery lifetime in long-term, continuous event monitoring and control application scenarios.

This talk presents various energy consumption strategies which can be adopted for energy-constrained wearable computing or edge data analytics systems by exploring intelligent signal processing protocols, including signal quality assessment and noise-aware signal denoising, quality-aware data compression, and event-triggered IoT communication protocols. Resource-efficient signal processing and deep learning architectures can be developed by exploring analog and digital compressed sensing or imaging techniques. Compressed sensing (CS) is a new, efficient data acquisition technique with a sub-Nyquist sampling rate that allows perfect recovery of sparse signals from only a few random measurements using a sparse recovery algorithm. This talk presents key concepts of analog and digital compressed sensing methods for IoT-enabled sensing and control nodes, and design considerations for selecting the best sensing matrix, which plays a major role in reducing the hardware complexity of the measurement generation architecture and can also enable direct measurement of essential parameters (features) for detecting or predicting events in the compressed sensing domain. Different signal analysis and deep learning architectures will be presented with real-time implementations, directly estimating essential parameters from limited CS measurements, for the development of lightweight, low-latency, and low-power CS-based deep learning architectures that perform event detection, classification and prediction from a limited number of CS measurements without reconstructing the original signal.
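
To make the compressed sensing pipeline above concrete, the following minimal Python sketch acquires a DCT-sparse signal at a sub-Nyquist rate and recovers it with orthogonal matching pursuit. The random Gaussian sensing matrix and the dimensions are illustrative assumptions, not the specific designs discussed in the talk.

    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8                            # signal length, measurements, sparsity
    s = np.zeros(n)
    s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    Psi = idct(np.eye(n), axis=0, norm="ortho")     # inverse-DCT sparsifying basis
    x = Psi @ s                                     # signal that is sparse in the DCT domain
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # illustrative random sensing matrix
    y = Phi @ x                                     # m << n compressed measurements
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(Phi @ Psi, y)                           # sparse recovery in the basis
    x_hat = Psi @ omp.coef_
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))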

12:45-13:45 Session 5: Round Tables

Latest updates in Electron Devices in connection with Nano-Bio-Electronics

Cristian Ravariu, Chairman of IEEE EDS15-Romania

Location: Room 1
14:00-15:30 Lunch Break
15:30-17:00 Session 6A: Electrical engineering applications
Location: AULA
15:30
Equivalent electrical circuit design of a magnetorheological rotary brake
PRESENTER: Grazia Lo Sciuto

ABSTRACT. This paper proposes the equivalent electrical circuit that describes the input/output behavior of a designed and manufactured magnetorheological fluid rotary brake system. The input is a current and the outputs are, respectively, the force and the magnetic field. The transfer function was estimated by means of orthonormal rational basis functions, starting from frequency response measurements carried out in the laboratory during the testing and characterization of the designed prototype. Finally, the circuit synthesis was performed using the well-known Cauer method.

15:45
Complex System for Monitoring Environmental Factors at the Galata Platform in the Black Sea
PRESENTER: Maria-Laura Rosu

ABSTRACT. Offshore wind represents a highly efficient, environmentally friendly, and scalable source of power. In the offshore area, wind turbines are usually anchored on the seabed at less than 50 meters depth. However, floating offshore wind technology is anticipated to revolutionize energy production in deeper ocean areas. The main goal of the Black Sea FLoating Offshore Wind (BLOW) project is to analyze the efficiency of capturing the wind potential in the Black Sea basin. It aims to achieve this by showcasing an innovative and cost-effective floating unit tailored for the low-to-medium wind speed conditions of the Black Sea. The project's pilot initiative seeks to enhance regional energy security while improving the cost-effectiveness of wind energy. This is proposed through the integration of turbines and floats into a semi-submersible structure with a pyramid design. Additionally, the project will feature advanced elements such as an innovative mooring system, subsea monitoring with predictive maintenance, a digital twin for operational optimization, and grid integration control techniques. The BLOW project aims to implement a pilot project involving a 5 MW wind turbine in the Black Sea. The transition to sustainable energy in the region will be accelerated by stimulating synergies with the oil and gas sector, as well as by developing cross-border policy. The energy cost level estimated by BLOW varies between 87 EUR/MWh (until 2028) and 50 EUR/MWh (after 2030). The impact on environmental factors will be reduced by at least 40% through the innovative construction.

16:00
A Study of Intelligent Parking: Urban Efficiency Through Advanced Automated Systems Based on Green Energy Management

ABSTRACT. Solving the problems related to the availability of parking spaces in urban areas requires a smart and sustainable approach. We propose the implementation of a smart system that provides real-time information about parking availability and optimizes traffic management. Our system is based on two main components: solar panels and IoT sensors. Solar panels will generate green energy to power various equipment in parking lots, thus helping to reduce costs and increase sustainability. IoT (Internet of Things) sensors will collect information about the occupancy of parking spaces and transmit this data to our online platform. The platform will provide drivers with information about parking spots through a website, making it easy to quickly find a parking space. In addition, our system incorporates advanced traffic management algorithms, which will optimize the flow of vehicles in parking lots and reduce customers' waiting times. By efficiently integrating solar panels and IoT sensors, we propose an innovative solution for efficient parking management in urban areas, helping to reduce traffic congestion and improve the user experience.

16:15
Robust Methodology to Extract the Electricity Consumption Patterns of End-Users Using Data Mining Techniques

ABSTRACT. In this paper, a Data Mining-based methodology is proposed to identify electricity consumption patterns that characterize the behaviour of end users in the low-voltage active electric networks of distribution network operators. It involves retrieving consumption data (active and reactive power profiles) from smart meters (at the central level, for example, the house) and local sensors that collect information about household appliances and equipment, in order to identify the usage habits and preferences of end users. The methodology discovers the electricity consumption patterns and trends associated with each end user using data mining techniques based on clustering algorithms, regardless of the end-user category they belong to. The proposed methodology uses data from the Pecan Street database sampled at 1 minute, ultimately identifying the consumption patterns of all equipment and appliances monitored at the end-user level. The patterns obtained will be integrated in the next stage, where the energy consumption will be subject to an optimization process to reduce electricity bills for the end users.
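
As a rough illustration of the clustering step described in the abstract, the sketch below groups daily one-minute load profiles with k-means and takes the cluster centers as representative consumption patterns. The synthetic data, the normalization choice, and the cluster count are assumptions made for the example only.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    profiles = rng.random((90, 1440))        # 90 days x 1440 one-minute power samples
    # normalize each daily profile so clustering captures shape rather than magnitude
    norm = profiles / profiles.max(axis=1, keepdims=True)
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(norm)
    patterns = km.cluster_centers_           # representative consumption patterns
    labels = km.labels_                      # which pattern each day follows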

16:30
Sustainability for Estimating Global Energy Efficiency of Data Centers
PRESENTER: Stefan Robila

ABSTRACT. The modern digital landscape has turned data centers into one of the most important technologies to date. They are now tasked with handling very intensive workloads and are responsible for processing large amounts of data. This, in turn, has made energy efficiency an important aspect to consider as technology evolves at its current pace. Data centers now support more than just business-critical workloads and applications; they are the bridge between different technologies like IoT devices, cloud computing and artificial intelligence. Such technologies function at the highest level and require ever more data processing and storage, leading to more power being consumed. In this paper we investigate how global estimates of the energy consumption of data centers could continue to be improved as the centers' components continue to evolve. A model to predict how much energy will be consumed in order to process and store all of the data needed to perform various tasks is discussed. Components like networking equipment, such as switches, play an important role in the efficiency of the data center; another component that is vital to all operations is the hardware architecture, for example the chips used in the switches. Combined, these paint a picture of how much energy and power will be needed to run data centers and the equipment inside them.

15:30-17:00 Session 6B: Energy, Environmental issues, IoT, Energy Internet, Blockchain Technology and Smart Contracts
Location: Room 1
15:30
Techniques for Measuring and Optimizing Dissolved Oxygen Concentration in a Liquid Lead Environment
PRESENTER: Denisa Toma

ABSTRACT. Owing to their superior thermal conductivity, heavy liquid metals (HLMs) are becoming increasingly important as heat transfer and storage media in energy-related technologies such as accelerator-driven systems, generation IV (GEN IV) fast reactors, and concentrating solar energy. Molten lead and lead-bismuth eutectic (LBE) are examples of HLMs that are aggressive against structural materials, which is one of their main drawbacks. The high solubility of steel alloying elements in lead (Pb) or LBE at high temperatures can cause the HLM to selectively dissolve and penetrate the steel, along with structural and phase changes, a decline in mechanical characteristics, and eventually failure. The existence of dissolved oxygen in the liquid metal phase has a significant impact on steel's performance in liquid metals. A continuous layer made of the oxides of the steel's constituent elements may form on its surface if the oxygen content rises above a particular point, separating the steel from the liquid metal. Thus, at low oxygen content, the primary degradation mechanism, the dissolution of steel components into the liquid metal phase, requires diffusion through the oxide layer. For the purpose of characterizing corrosion conditions and qualitatively analyzing the behavior of steel constituents, it is therefore sufficient to know the oxygen potential/activity, for instance through measurements using an electrochemical oxygen sensor. This paper aims to highlight the effects that dissolved oxygen has on liquid lead, to explain the need to monitor the residual oxygen concentration in the liquid metal, and also to present the main types of sensors with which this monitoring is done.

15:45
The Chemical Stability of Nasicon Solid Electrolyte for Seawater Batteries
PRESENTER: Adriana Marinoiu

ABSTRACT. Rechargeable batteries are seen as a key mediator in the sharing of alternative renewable energy sources. The goal of energy storage systems (ESSs) is to store renewable energy so it can be used efficiently at a moment's notice. Lithium-ion batteries hold a significant role in the energy storage systems market due to their increased energy density, capacity, and energy competitiveness. Because of the rising cost of lithium, the growing demand for lithium batteries in electric vehicles and energy storage systems, and the limited availability of lithium, it may be difficult to rely on them over the long term. Rechargeable seawater batteries are viewed as alternatives to Li-ion batteries because of their unending and freely available sodium-ion active materials. The purpose of this work is to test an electrochemical cell of a hybrid battery, which contains a NASICON ceramic membrane as a solid electrolyte. Charge/discharge testing was performed at constant currents of different values, using Pt/C as catalyst and seawater as catholyte. The stability over time of the solid electrolyte was tested in sodium chloride solution at different pHs. The possible applications of seawater-activated batteries in the marine sector were also presented.

16:00
Membrane electrode assemblies fabrication by ultrasonic-spray technique
PRESENTER: Adriana Marinoiu

ABSTRACT. The following paper presents the ultrasonic deposition method of a carbon-based platinum ink solution onto catalytic membranes and its advantages in manufacturing fuel cells. Ultrasonic spray coating is a unique spraying method based on ultrasonic spray nozzle technology. The sprayed material starts in a liquid state, where the liquid can be a solution, a sol, a suspension, etc. The liquid coating is first atomized into fine particles using an ultrasonic atomization device, and then evenly deposited on the surface of the substrate by a certain amount of carrier gas, thereby forming a coating or film. The catalytic ink is a mixture of catalyst powder, water, solvent and ionomer, in suspension form. The method used is effective for obtaining layers with low platinum loading and very good electrochemical characteristics.

16:15
Development, Implementation and Testing of the Application for Measuring and Controlling the Concentration of Oxygen in Molten Lead
PRESENTER: Denisa Toma

ABSTRACT. Nuclear research worldwide is oriented towards the realization of a new generation of nuclear systems with improved performance. The technological challenges for Generation IV systems led to the definition of a framework plan composed of four directions: durability; economy; safety and reliability; and resistance against proliferation and physical protection. Generation IV systems come with a precise and practical response to the demand for clean and safe nuclear energy in modern society, fully satisfying the requirements of sustainable development. Regarding generation IV reactors, the European Union's option is to develop liquid metal (sodium or lead) or gas-cooled systems. The choice of lead as a coolant adds to operational safety given the high boiling point of lead (1745 °C). However, under certain high-temperature conditions, lead has a corrosive effect on structural materials. That is why the choice of structural materials for the manufacture of reactor components is very important, as is controlling the purity of the lead by monitoring the concentration of dissolved oxygen. Therefore, the development of oxygen control methods and techniques is one of the critical issues for nuclear systems using heavy liquid metals as a coolant. The development and implementation of software for acquiring the parameters of interest during mechanical tests in a liquid lead environment requires a period in which the developer evaluates the necessary equipment and analyzes how to integrate it into the installation, so that everything functions correctly once the software is finished and the user can work as simply as possible with the graphic interface provided by the programmer. This paper presents the development of an application for measuring and controlling the concentration of dissolved oxygen in a liquid lead environment, as well as the way this application is implemented in the command and data acquisition software of the mechanical testing machine operating in the liquid lead environment. After the implementation of the application, some preliminary tests were carried out, the results of which are presented in this paper.

16:30
A Proposed Framework for Transparency of Supply Chain using Blockchain and IoT

ABSTRACT. In the global supply chain, ensuring the integrity and safety of sensitive products, such as vaccines, is paramount. This paper introduces a concrete and novel framework for product provenance and supply chain monitoring that leverages Internet of Things (IoT) sensors and blockchain technology to address these challenges. The proposed framework employs IoT sensors to continuously monitor and efficiently record the handling conditions of products during transit and storage. The blockchain solution employed is an Ethereum-based open-source client, and our framework makes use of smart contracts. The targeted measurements are temperature and humidity levels, but the framework is not limited to those. These data, along with comprehensive product provenance information, are securely and transparently logged on a public blockchain, enabling end-to-end traceability and verifiability. Furthermore, any third-party application is able to access this information for all stakeholders, including end users, who can verify the authenticity and handling conditions of their products in a trustworthy manner. Although integrating IoT and blockchain technologies poses significant challenges, our proposed framework finds a good balance between the two. Our initial evaluation demonstrates the framework's effectiveness in providing trustworthy record-keeping and accessible monitoring for supply chains in the current digital era.
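
The paper's logging relies on an Ethereum client and smart contracts; as a loose, dependency-free stand-in, the sketch below shows the underlying tamper-evidence idea by hash-chaining IoT sensor readings, so that altering any earlier record invalidates every later hash. Field names and values are hypothetical.

    import hashlib, json, time

    def record(prev_hash, temperature_c, humidity_pct):
        # link each reading to the previous record through its hash
        body = {"ts": time.time(), "temp": temperature_c,
                "hum": humidity_pct, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {**body, "hash": digest}

    chain = [record("0" * 64, 4.2, 55.0)]              # genesis reading
    chain.append(record(chain[-1]["hash"], 4.4, 54.1))
    # verification: each record must reference the hash of its predecessor
    assert chain[1]["prev"] == chain[0]["hash"]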

16:45
Cost Reduction of Integrated Renewable Energy Sources in Home Energy Management Systems (HEMS) using Particle Swarm Optimization

ABSTRACT. The purpose of this study is to optimize the 24-hour scheduling of various household appliances (schedulable appliances, entertainment appliances, and constrained appliances) to minimize the total cost of energy consumption. This is achieved by calculating and leveraging the lower cost of photovoltaic (PV) power supply during the day and minimizing reliance on grid power, which is more expensive than the PV power supply. Particle Swarm Optimization, inspired by the social behavior of swarming animals such as ants, bees and flocking birds, is used in this study. The optimization determines the optimal on/off schedule for each appliance, ensuring cost efficiency while considering the availability of renewable energy, which is the PV source in this case. The approach differs from conventional optimization methods such as gradient descent or linear programming, and the comparison results are presented here.
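
A minimal sketch of the swarm update for one schedulable appliance is given below, assuming a hypothetical two-level tariff in which PV makes daytime energy cheaper; thresholding each particle position yields the binary on/off schedule, and a penalty term enforces the required run time. None of these numbers come from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    h = np.arange(24)
    price = np.where((h >= 8) & (h < 18), 0.05, 0.20)  # EUR/kWh: assumed cheap PV daytime
    load_kw, hours_needed = 1.5, 6                     # one schedulable appliance

    def cost(x):                        # x: 24 reals, x > 0 means "appliance on"
        on = x > 0
        penalty = 10.0 * abs(on.sum() - hours_needed)  # enforce required run time
        return (price * load_kw * on).sum() + penalty

    n, dim, w, c1, c2 = 30, 24, 0.7, 1.5, 1.5
    pos = rng.uniform(-1, 1, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(200):
        r1, r2 = rng.random((2, n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([cost(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    print("scheduled on-hours:", np.where(gbest > 0)[0])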

15:30-17:00 Session 6C: Electronics, Computers, Sensors and Artificial Intelligence in the Information Age
Location: Room 2
15:30
Experiences in PCB manufacturing with ultra low-cost and home-made CNC machine

ABSTRACT. In the context of printed circuit board (PCB) manufacturing, accessibility and affordability are pivotal, especially for individuals and institutions with limited financial resources. The advent of ultra-low-cost and homemade computer numerical control (CNC) machines offers a promising solution to address these concerns. However, the plethora of online resources often presents solutions fraught with practical challenges. This paper delves into firsthand experiences in utilizing such technologies, navigating the delicate balance between cost-effectiveness and functionality in prototyping. By examining the inherent obstacles, it aims to offer valuable insights for those seeking to achieve PCB manufacturing goals with minimal resources.

15:45
A Smart UAV System to Assess the Health of a Vineyard

ABSTRACT. This paper presents the hardware and software components of a system designed to autonomously detect the vine's health state through the analysis of vine leaf texture, shape, and color. The detection component uses a deep neural network and its execution is supported by an i.MX 8M Plus processor. Based on the NXP HoverGames drone, which flies over the vineyard, the detection system can analyze, in real time, the health state of the different sections of the vineyard. The system, referred to in this paper as agriHoverGames, obtained an average 84.7% classification performance when classifying leaves into three classes (i.e., healthy leaves and leaves affected by two distinct diseases). Although this project was initially implemented to identify two vine diseases, it actually provides a mobile platform capable of identifying any other type of disease, as long as it manifests through specific changes (in texture, shape, color, or size) in the fruits, flowers, or leaves of a plant or fruit tree, and these changes can be identified from the video information.

16:00
Enhancing Driver Safety through A Data Collection Platform

ABSTRACT. This paper introduces an innovative Data Collection Platform (DC Platform) developed to enhance driver safety by enabling effective aggregation of detection data. Designed to interface with external data detection systems, the DC Platform provides advanced functionalities for efficient management and synchronization of data alongside geospatial information on monitored drivers. Based on the collected data, the developed Platform enables the application of analytical tools to assess and predict risk patterns related to impaired-driving parameters. The developed DC Platform was implemented through two different approaches: one hosted on a traditional host-based web server and the second deployed in a virtual environment in Docker containers. The platform's support for real-time data integration facilitates targeted and timely interventions, which may contribute to a reduction in the incidence of impairment-related traffic accidents.

16:15
Constraints on Planar and Planar Interdigital Capacitors used in Sensors

ABSTRACT. Based on geometric considerations, we derive an approximate relation between the capacitance of a planar capacitor and the maximal operation frequency, according to the basic formula of planar capacitors in electrostatics. The reciprocal formula gives the maximal capacitance of a planar capacitor at a given frequency, for operation under the same electrostatic hypothesis. The formulae involve the square root of the ratio of the dielectric thickness to the relative dielectric constant, and a shape factor. Similar conditions are derived from geometric and EMI considerations. The results are extended to interdigital capacitors. Design considerations and limitations of the use of comb capacitive sensors in flexible circuits and in wearable devices are presented. The main concerns are the driving signal frequency and electromagnetic interference (EMI). The discussion avoids the use of Smith charts, for which the value of the inductance is required.
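
For reference, the basic electrostatic formula of planar capacitors that the abstract starts from, together with the usual quasi-static rule of thumb bounding the operating frequency (a hedged restatement; the paper's own shape-factor relations are not reproduced here):

    C = \varepsilon_0 \varepsilon_r \frac{A}{d},
    \qquad
    \ell_{\max} \ll \frac{\lambda}{10} = \frac{c}{10\, f \sqrt{\varepsilon_r}}

where A is the electrode area, d the dielectric thickness, and \ell_{\max} the largest lateral dimension of the structure; the second condition keeps the device electrically small at the driving frequency f, which is when the electrostatic treatment holds.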

16:30
Detection of Vehicles and Travelers in Public Transport System Using Bluetooth and Wi-Fi
PRESENTER: Marius Minea

ABSTRACT. This paper presents research on the assessment of Bluetooth and Wi-Fi technologies for detecting travelers and vehicles in public transport systems. The use of these technologies is much more environmentally friendly and less expensive than conventional solutions. The study focused on the usability of these two technologies, considering detection accuracy, range, and electromagnetic compatibility. Results of the conducted field experiments are presented, along with possible future developments.

15:30-17:00 Session 6D: Bio-medical applications & Advanced Medical Imaging Techniques
Location: Room 3
15:30
Kinematics Skill Modeling of Cardiac Catheterization via Deep Learning Method

ABSTRACT. With advances in robotic surgery, the importance of data-driven techniques that incorporate deep learning methods is expanding quickly, with a focus on objective surgical skill evaluation. Unlike traditional evaluation, where surgeons' skills are evaluated in the real operating room, capturing users' motion kinematics can be used as input for an AI model to assess their skills. For this study, a simulated mechanical setup has been provided for the trainees, focusing on cardiac catheterization procedures. This setup allows users to engage in hands-on practice while simultaneously capturing their hand movements for further evaluation. Trainees have the opportunity to engage in extensive practice on the mechanical setup as a pre-operation procedure, enabling them to develop deeper familiarity and understanding. The task is to pass the tip of a commercial catheter through curves and level intersections on a transparent plastic blood-vessel phantom. The objective is to guide the catheter's tip from the vessel entry point to the designated ablation target. By conducting various experiments involving both novices and experts, a deep recurrent neural network was employed to extract a skill model by solving a binary classification task. The trained model demonstrated a remarkable 92.3% accuracy in discerning between the maneuvers performed by novices and experts, indicating the successful implementation of the proposed methodology.
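
A compact sketch of the kind of recurrent binary classifier the abstract describes is shown below, using Keras: windows of multi-channel kinematic samples are mapped to a novice/expert label. The channel count, window length, and layer sizes are illustrative assumptions, not the paper's architecture.

    import numpy as np
    from tensorflow import keras

    # kinematics windows: (samples, timesteps, channels), e.g. tip pose and velocity
    X = np.random.randn(120, 200, 6).astype("float32")   # synthetic stand-in data
    y = np.random.randint(0, 2, 120)                     # 0 = novice, 1 = expert

    model = keras.Sequential([
        keras.layers.Input(shape=(200, 6)),
        keras.layers.LSTM(64, return_sequences=True),
        keras.layers.LSTM(32),
        keras.layers.Dense(1, activation="sigmoid"),     # binary skill output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=10, batch_size=16, validation_split=0.2)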

15:45
Insights into Risk Management: Leveraging Digital Twins for Ophthalmic Diagnosis

ABSTRACT. The use of Digital Twin in ophthalmology, especially in the process of identifying glaucoma and providing personalized treatment scenarios, emphasizes ensuring the integrity and confidentiality of medical data. Thus, achieving an efficient approach to risk management becomes a priority for stakeholders. Identifying, evaluating, selecting, and adopting an appropriate risk response strategy, as well as monitoring at each layer of the Digital Twin architecture, contribute to improving the quality of medical services and protecting patient data confidentiality. Ensuring reliability and security in the implementation and use of the Digital Twin in ophthalmology facilitates both the early identification of pathologies and the establishment of efficient communication between the doctor and the patient.

16:00
Designing of a new Medical Diagnostic System based on Blockchain and Artificial Intelligence technologies

ABSTRACT. The research allows the integration of the most advanced technologies, such as Artificial Intelligence and Blockchain, into the process of diagnosing and treating patients. The proposed platform is not only limited to providing an accurate diagnosis, but also allows a comparison to be made between the diagnosis, treatment plan and drug therapy prescribed to other patients with similar symptoms existing in the database. The patient can choose the medical center and the doctor who will consult and diagnose them. Regarding the efficiency of the proposed system, the following conclusions are noted: the security of patient data, the accuracy of diagnosis, and the benefits of using chat, which contribute to reducing waiting times and improving the efficiency of the system, both for patients and for the staff of the medical center. The proposed platform enables a health system that is accessible to patients and responsive to their requirements, and ultimately the transformation of the existing medical system.

16:15
Empirical Wavelet Transform in Epilepsy Diagnosis: A Multi-Features Approach to EEG Focal and Generalized Signal Classification

ABSTRACT. The diagnosis of epilepsy has increasingly relied on automated algorithms that detect the type of epilepsy with high precision. The objective of this paper is to accurately discriminate between focal and generalized EEG epileptic signals collected in two states: awake and asleep. This study developed and implemented a method based on the Empirical Wavelet Transform (EWT) for detecting and classifying EEG signals related to epilepsy. Characteristics such as skewness, kurtosis, median, and the fluctuation index are computed after decomposing each EEG signal into five components by EWT. These derived features are then utilized in classifying EEG signals using the K-Nearest Neighbors (KNN), Naïve Bayes (NB), and Support Vector Machine (SVM) classifiers. The findings of this study demonstrate a maximum classification rate of 90.27% for data collected in the awake state and 83.81% for data collected during sleep, achieved with the KNN classifier in both situations. The results have been clinically validated, emphasizing the efficacy of integrating advanced statistical measures and automatic techniques to enhance the diagnostic processes in epilepsy care.
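
The feature/classifier stage maps onto a few lines of Python, as sketched below; the five EWT components are stand-ins here (in practice they would come from an EWT implementation), and the fluctuation index is taken as the mean absolute successive difference, which is an assumption rather than the paper's exact definition.

    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.neighbors import KNeighborsClassifier

    def features(components):
        # skewness, kurtosis, median and a fluctuation index per EWT component
        feats = []
        for c in components:
            feats += [skew(c), kurtosis(c), np.median(c),
                      np.mean(np.abs(np.diff(c)))]
        return feats

    rng = np.random.default_rng(0)
    X_comp = [rng.standard_normal((5, 1024)) for _ in range(40)]  # placeholder EWT output
    y = rng.integers(0, 2, 40)                # 0 = focal, 1 = generalized (toy labels)
    X = np.array([features(c) for c in X_comp])
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)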

16:30
Laboratory rat tracking methods in biomedical experiments
PRESENTER: Mihaela Luca

ABSTRACT. Two video analysis methods for tracking laboratory rats in three biomedical scenarios are presented. In the first scenario, a rat moves freely in a rectangular box, necessitating computation of the number of visits and the visit duration in specific box areas. To assess the recognition of novel objects in the second scenario, two toys are placed on the diagonal of the box, requiring tracking of visits and visit durations for each toy. The third scenario involves a Y-maze environment, with calculation of visits and visit durations for each branch. Initially, a tracking method using a voting procedure was proposed for the first scenario. The method combines results from binarized body detection and identification of the main cluster in the difference image of two consecutive frames. Subsequently, a deep learning-based body localization technique was applied across all three scenarios.

15:30-17:00 Session 6E: Software, databases, and computer applications I
Location: Room 4
15:30
SCAWA: Enabling Smart Card Authentication for Web Applications

ABSTRACT. In the evolving landscape of digital security, the integration of robust Multi-Factor Authentication (MFA) methods stands as a critical defense mechanism against unauthorized access, augmenting the traditional username+password approach. In this paper, we present SCAWA, a publicly available implementation of smart-card authentication in the context of web applications. Our proposal is based on the NexU GitHub project implemented by Nowina Solutions, adapted to fit our needs. For developer convenience, we integrated our implementation into Keycloak, a popular open-source identity management platform. In the paper we present the solution's architecture, implementation details, and achieved results. Using SCAWA, developers can seamlessly integrate smart-card logon into their web applications by implementing only a usual OAuth 2.0 flow. Through this contribution, we aim to lower the entry barrier for implementing sophisticated and secure authentication and authorization methods, thereby fostering a more secure digital environment and enabling a higher adoption rate for secure authentication methods.

15:45
A Blockchain-based Framework for Content Provenance and Authenticity

ABSTRACT. In the current social and technological context, the proliferation of content creation has led to the easy spread of misinformation and fake content. This poses an important threat to individuals and even to countries or democracy itself. This paper presents a novel proposal for ensuring content provenance and authenticity by leveraging blockchain technology. Besides blockchain, we make use of digital signatures and signature preservation services, integrated through standard interfaces. The proposed framework is designed to accommodate various content formats: text, image and video. After a comparison with similar solutions, we also provide a discussion analysing the limitations, opportunities and advantages of our approach. To sum up, the framework outlined in this paper offers a robust and standard-compliant mechanism to ensure the provenance and authenticity of digital content, thereby addressing the pressing challenge of disinformation.

16:00
An Algorithm for Automatic Analysis of the Etymology of Romanian Words

ABSTRACT. The paper presents a computational method and the related software tool for determining the etymology of words (lemmas), as well as examples of results obtained for several Romanian literary texts. The problem of etymological analysis is almost absent from the AI literature; hence the relevance of this paper. The results can be used in research in linguistics and computational linguistics.

16:15
Cybersecurity of Online Financial Systems using Machine Learning Techniques

ABSTRACT. With the evolution of technologies and the increasing use of internet banking applications, cyber-attacks are taking on extended dimensions. The developed system aims to use Machine Learning techniques to combat fraud in banking systems. The architecture of the solution described in this paper integrates components such as a Java server, web client, mobile client and Python server, where the Random Forest model is hosted. The system has been shown to improve the false negative rate in detection, offering security measures through encryption, authentication and secure communication, as well as an efficient user interface in terms of usability. Our solution demonstrates efficiency in the detection of fraud in online financial systems, with future directions involving additional customization based on user behavior, real-time data analysis, and a logging system based on the characteristics of the mobile device.
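
A minimal sketch of the hosted Random Forest component, reporting the false negative rate the paper emphasizes, might look as follows; the features, fraud rate, and hyperparameters are hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((5000, 8))                          # e.g. amount, hour, device score
    y = (rng.random(5000) < 0.02).astype(int)          # ~2% fraudulent transactions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                random_state=0).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
    print("false negative rate:", fn / (fn + tp))      # the metric the system targets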

16:30
Security Analysis and Architecture of a Blazor-Based Web Application

ABSTRACT. As technology advances, the ingenuity of security attacks and vulnerability exploits increases. These attacks target mainly web applications and strive to access and obtain user information for malicious use. A secure user login and registration interface was used to protect these data. This paper presents the architecture and security analysis of a web application developed using Blazor, AuthAPI, and RabbitMQ middleware. The application employs Blazor for the front end, AuthAPI for user authentication, and a Fake API to demonstrate secure message processing with RabbitMQ. A comprehensive security evaluation was conducted using SQLMap, Snyk, and OWASP ZAP to identify and mitigate potential vulnerabilities. The findings confirm the robustness of the implemented security measures and suggest areas for further enhancement.

16:45
System for monitoring the performance of occupational safety and health in enterprises

ABSTRACT. The Occupational Health and Safety (OSH) field is generally concerned with the physical and mental safety and well-being of employees at the workplace. In order to help with proper integration, especially in micro and small enterprises, as they are the majority, the European Union has created a complex legislative framework with clear and detailed directives and has developed various e-tools. Thus, in the first part of the paper some of the e-tools, their functionalities, and their categories of applicability are highlighted. In the second part, the proposed prototype system is described, together with a detailed presentation of the performance metrics, from the OSH point of view, which are the main elements in calculating the performance of a company's employees, of the equipment in use, and of the company itself.

15:30-17:00 Session 6F: Poster session
Location: Poster Hall
AI and Prompt Engineering: The New Weapons of Social Engineering Attacks

ABSTRACT. As Artificial Intelligence technologies continue to advance, their integration into social engineering tactics poses new and evolving threats to cybersecurity. This paper provides an in-depth exploration of the intersection between Artificial Intelligence and social engineering, examining the risks and challenges associated with the malicious use of Artificial Intelligence-driven techniques. The research assesses the capabilities of Artificial Intelligence to create sophisticated and targeted social engineering attacks that exploit human vulnerabilities. The results revealed some intriguing insights into the efficacy of Artificial Intelligence-generated phishing emails compared to those composed by humans. Our motivation behind this work is to assess the impact of Artificial Intelligence used in social engineering for warfare on the Internet-connected world and to recognize such attacks because, in addition to kinetic attacks, malicious forces can employ non-kinetic attacks as well, such as cyber-attacks and social engineering attacks, to disrupt critical infrastructure. In the ongoing cyberwar, the prompt recognition of phishing attempts is of utmost importance.

Automated Recognition of Structures in Scanning Electron Microscopy Images Using Specialized Algorithms in MATLAB and Python: An Overview

ABSTRACT. Applied physics serves as the bridge connecting theoretical principles with their practical implementations in everyday situations. Informatics plays a crucial role in this subject because of its comprehensive and intricate capacity to simulate and forecast physical phenomena. This research highlights the indispensability of computer-based tools for analyzing scanning electron microscopy images of the obtained materials, particularly materials synthesized by plasma. Herein, the conducted research highlights the advantages of image processing via MATLAB and Python algorithms and frameworks in order to analyze the synthesized structures. Furthermore, the gathered materials offer fresh perspectives on physical processes through image processing techniques. Optimization approaches were used and future directions were identified.

Edge-detection method for low-quality color pictures in lossy compression image file formats

ABSTRACT. The objective of this article is to propose a robust edge detection algorithm appropriate for processing color images originating from image acquisition systems with unknown parameters, with the extreme case of low-quality color pictures in lossy compression image file formats taken from the internet, and to assess the performance of the proposed solution in comparison with the Canny edge detection algorithm, using proof-of-concept software developed in Python. The output of the software is intended to feed unassisted AI processes related to artificial vision; thus, it also involves a method of establishing a hierarchy among the detected edges, as well as an identification strategy for re-establishing the continuity of curves that can be assimilated with mathematical primitives and for de-fractalizing the noise-induced features.
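
For context, the Canny baseline the article compares against takes only a few lines with OpenCV in Python; the input file name and thresholds below are hypothetical.

    import cv2

    img = cv2.imread("photo.jpg")                      # hypothetical low-quality JPEG
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 1.4)         # attenuate JPEG block noise
    edges = cv2.Canny(blur, threshold1=50, threshold2=150)
    cv2.imwrite("edges.png", edges)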

Employing the simulation platforms Flux 2D and FEMM4.2 to numerically optimize the eddy current phenomenon during the edge-hardening process

ABSTRACT. This paper aims to develop advanced numerical methods for an in-depth examination of the edge-hardening process using eddy current heating. We present a distinct procedure for heat treatment, in our attempt to achieve a particular thermal profile defined by a higher level of heating in the tip area of the workpiece compared to the remainder of the piece surface. In order to ensure the necessary mechanical strength, this targeted method applies selective hardening to the tip area while maintaining the elasticity of the rest of the part. The fundamental objective of the work is to present advanced models for optimizing the inductor input parameters in order to achieve differentiated and efficient heating of the workpiece tip. The precise control of the hardening process and the enhanced mechanical performance of the workpiece are two major implications of this approach.

Employing Comparative Study Between Frontend Frameworks. React Vs Ember Vs Svelte

ABSTRACT. This paper presents a comparative study of three frontend frameworks: React, Svelte and Ember. The article presents the differences and similarities between the frameworks, how things are handled in one framework versus another, and why one is more popular and widely used than another. The study uses as its point of reference for comparison a ToDo application written in all three frameworks.

Access Control Based on Self-Sovereign Identity

ABSTRACT. In the rapidly growing interest in digital identity management, there is an important shift toward adopting verifiable credentials, often based on blockchain technology. This transition highlights a growing interest in the concept of Self-Sovereign Identity (SSI), a paradigm that empowers individuals with control over their own digital identities through the use of verifiable credentials. This paper introduces a novel access control framework that leverages the principles of Self-Sovereign Identity to significantly enhance the security of system access based on the users' credentials. By integrating the decentralized and user-centric nature of SSI, our proposed framework aims to address existing vulnerabilities in traditional access control systems. It provides a robust, scalable, and controlled mechanism that ensures higher levels of trust and security in digital interactions. This approach aligns with the broader digital identity trends favoring transparency, user autonomy, and trust.

Comparative study of experimental data and numerical simulations using 2D FLUX for hardening the teeth edges in a portable inductor gear

ABSTRACT. The aim of this paper is to model the semi-finished gear wheel tip using a coil and the resources provided by the FLUX 2D program. In this case, the inductor was assumed to be supplied with a voltage at its terminals, using a coil for modeling. The problem is axisymmetric. Important information about the process of surface heating by induction was obtained through the cooperative analysis of numerical simulations with Flux 2D and experimental data on edge hardening. These data were applied to a semi-finished gear wheel, with the treatment focusing on the tip of the ferromagnetic steel gear wheel, and were helpful in the design of such equipment. The scientific novelty of this paper lies in applying this combined simulation-experiment approach specifically to the tip of a ferromagnetic steel gear wheel, offering practical information that can be instrumental in designing similar equipment. The integration of simulation and experimental data enhances the accuracy and efficiency of the gear wheel hardening process, demonstrating a significant advancement in the field of induction heating and material treatment.

Flexible system architecture used to collect and store signals acquired by IoT devices

ABSTRACT. As part of the effort to develop predictive maintenance systems, especially when Digital Twins are involved, special attention is paid to data acquisition using IoT devices, since these data serve as inputs for the predictive algorithms. Usually, when a new IoT device is connected to a data management system, several configurations and sometimes even code changes must be made, which increases the implementation costs and duration. The goal of this paper is to propose an architecture for a data management system that permits connecting a new IoT device in such a way that the configurations are made automatically on demand, allowing the IoT device to push the acquired data without any other intervention.

Ladder Logic Implementation for Romanian Railway Interlockings
PRESENTER: Florin Bădău

ABSTRACT. Interlockings are critical systems for the railway network which ensure the safety of train operations. While previous generations of interlockings use hardware logic to implement safety functions, electronic interlockings employ different techniques and architectures to comply with strict reliability and redundancy standards. Ladder Logic has been used for decades for industrial automation, which has witnessed a transition from relay-based to PLC-based systems. This paper proposes a standard for translating typical Romanian relay interlocking schematics into Ladder programmes. The proposed model is evaluated against a physical relay interlocking.

Assessing performance of long-range ZigBee for road infrastructure communications

ABSTRACT. The rapid advancement of Intelligent Transport Systems necessitates reliable and efficient communication between Vehicles and Infrastructure (V2I). ZigBee technology, known for its low power consumption and robust mesh networking capabilities, is a promising solution for such applications. This paper presents an empirical evaluation of long-range ZigBee communication in an interurban environment. This study examines how distance and antenna type affect communication performance for vehicle-to-infrastructure (V2I) systems. Two main tests were conducted on a straight, obstacle-free road. The first test, a radio range test, measured Received Signal Strength Indicator (RSSI) values to assess signal strength over different distances. The second test, a throughput test, evaluated communication reliability by measuring the average transfer ratio and the number of successfully transmitted packets. The tests determined the maximum operational range of the communication modules, the overall performance of the technology, and the most suitable antenna type for V2I applications.

15:30-17:00 Session 6G: E-session I
Location: E-session room
15:30
Autoencoder-Based Image Steganography With Least Significant Bit Replacement

ABSTRACT. Information confidentiality is critical to the progress of sustainable smart cities. Cover images are commonly used in classic image steganography to safely conceal secret information or images. This paper proposes a new autoencoder (AE) neural network-based image steganography algorithm. Within the AE, the model employs the Least Significant Bit (LSB) replacement technique, embedding a secondary image within the LSBs of the pixel values in the cover image. During the extraction phase, the decoder uses the learned features to reconstruct both the original cover image and the hidden secondary image, allowing for the covert integration of a whole image within the AE framework. The model was trained and evaluated using the MNIST and CIFAR-10 datasets. The suggested method's effectiveness was measured using three performance assessment metrics: mean square error, peak signal-to-noise ratio, and structural similarity index. The best results were 0.0018, 47.89, and 0.96, respectively, which are promising and outperform the findings of the most-significant-bits technique. The model successfully conceals the secret, as seen in the strong similarity between the cover and stego images.
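
The LSB replacement component on its own is easy to demonstrate; the sketch below, in plain NumPy, embeds a bitstream in the least significant bits of a cover image and recovers it exactly. The AE encoder/decoder around this step is the paper's contribution and is not reproduced here.

    import numpy as np

    def embed_lsb(cover, bits):
        # overwrite the least significant bit of the first len(bits) pixels
        flat = cover.flatten()
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
        return flat.reshape(cover.shape)

    def extract_lsb(stego, n_bits):
        return stego.flatten()[:n_bits] & 1

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (32, 32), dtype=np.uint8)   # stand-in cover image
    secret = rng.integers(0, 2, 256, dtype=np.uint8)         # secret bitstream
    stego = embed_lsb(cover, secret)
    assert np.array_equal(extract_lsb(stego, 256), secret)
    # each pixel changes by at most one grey level, keeping the stego image
    # visually indistinguishable from the cover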

15:40
Efficient Water Quality Monitoring Using Unmanned Aerial Vehicles and Internet of Things Technologies

ABSTRACT. Climate change, human activities, and manufacturing have significantly impacted water quality. Consequently, water quality is increasingly threatened, leading to pollution in many lakes that renders them unsuitable for drinking water treatment or agricultural activities. The need for emergent and frequent monitoring of water quality is vital. Traditional methods of water quality monitoring typically involve sampling water and analyzing it in laboratories. While this approach guarantees the accuracy of water parameter assessment, it is time-consuming and incapable of capturing real-time changes in water quality. Moreover, in certain hazardous areas (e.g., toxic lakes), accessing water samples can pose risks to human safety. This paper proposes an efficient solution for water quality monitoring using unmanned aerial vehicles (UAVs), enabling real-time and spatial water quality monitoring. The UAV is equipped with a commercial controller, IoT system, and water quality mapping tool to provide an efficient method for water quality monitoring and management.

15:50
Nonlinear Model Predictive Control of Quadrotor Using Direct Multiple Shooting

ABSTRACT. Quadrotors have been applied in many areas. Controlling a quadrotor to track a desired trajectory is challenging. This paper proposes to apply nonlinear model predictive control (NMPC) to guide the quadrotor while remaining collision-free, using the concept of moving hyperplane constraints. To address the NMPC, the multiple shooting method is applied to discretize the optimal control problem into a nonlinear program (NLP), which is efficiently solved by a standard NLP solver. Numerical computations show that the application of the multiple shooting method gives a more efficient NLP with a smaller number of optimization variables.
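
In generic form, the multiple-shooting transcription the abstract refers to keeps the shooting states as decision variables and imposes the dynamics as equality constraints, with the moving hyperplane constraints acting on the predicted positions (a schematic restatement, not the paper's exact formulation):

    \min_{x_0,\dots,x_N,\;u_0,\dots,u_{N-1}}
        \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N)
    \quad \text{s.t.} \quad
    x_0 = \hat{x}(t), \qquad
    x_{k+1} = F(x_k, u_k), \; k = 0,\dots,N-1, \qquad
    a_k^{\top} p(x_k) \le b_k,

where F integrates the quadrotor dynamics over one shooting interval, p(x_k) extracts the predicted position, and each pair (a_k, b_k) defines a separating hyperplane that keeps the predicted trajectory collision-free.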

16:00
Securing the Internet of Medical Things: AI-Based Intrusion Detection
PRESENTER: Mariam Ibrahim

ABSTRACT. Recent advancements in the Internet of Medical Things (IoMT) have influenced traditional medical treatment and developed data communications in the Smart Healthcare scenario. Unfortunately, this has created fruitful ground for adversaries. As a consequence, classic intrusion detection (ID) schemes as well as innovative detection strategies for IoMT applications have been implemented. Examining the call sequences made by system processes is one way of determining typical system behavior. In this paper, an ID system built on Multinomial Naive Bayes was developed. The suggested ID model performed well in terms of accuracy, detection rate, and false alarm rate.
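
Treating each process trace as a bag of system-call n-grams makes the Multinomial Naive Bayes fit straightforward with scikit-learn, as in the toy sketch below; the traces and labels are made up for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    traces = ["open read read write close",       # hypothetical benign trace
              "open mmap execve socket send"]     # hypothetical intrusive trace
    labels = [0, 1]                               # 0 = benign, 1 = intrusion

    vec = CountVectorizer(ngram_range=(1, 2))     # call uni/bi-grams as features
    X = vec.fit_transform(traces)
    clf = MultinomialNB().fit(X, labels)
    print(clf.predict(vec.transform(["open read write close"])))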

16:10
Feedback Linearization Control and Current Control Mode for Boost Converter
PRESENTER: Pham Hong Duong

ABSTRACT. Linear control techniques have proven effective in controlling DC/DC converters such as the boost converter. This approach relies on averaged small-signal models and can be implemented using either voltage-mode or current-mode control structures. However, linear controllers are often greatly affected by changes in load and model parameters because they are designed on the basis of a small-signal model established by linearization. Consequently, nonlinear control strategies for DC/DC converters need to be studied. Feedback linearization is one of the popular nonlinear control techniques and has been widely used in the field of power electronics control. This paper suggests a technique for designing a current-mode control structure for a boost converter. The internal current loop controller is designed using the feedback linearization method, and the outer voltage loop controller is designed using the affine parameterization approach. Simulation results in Matlab/Simulink show that the boost converter in current control mode using the proposed design technique has a good dynamic response and is less affected by fluctuations in the load and model parameters.
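
The nonlinearity that motivates feedback linearization is visible in the standard averaged model of the boost converter, where the duty cycle d multiplies both states:

    L \frac{di_L}{dt} = V_{in} - (1 - d)\, v_o,
    \qquad
    C \frac{dv_o}{dt} = (1 - d)\, i_L - \frac{v_o}{R},

with i_L the inductor current, v_o the output voltage, and R the load. A feedback-linearizing current controller chooses d so that the first equation reduces to a linear relation between a virtual input and di_L/dt, to which the outer voltage loop is then attached.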

16:20
Construct a Fuzzy Rule Table with the Direct Torque Control Principle

ABSTRACT. Direct torque control (DTC) is one of the popular induction motor control structures in industry. This control structure is simple, flexible, and high-performance. However, large torque ripple and poor performance in low-speed regions are disadvantages of this structure. Therefore, overcoming these disadvantages of the DTC structure is always an issue of concern in the field of motor control. In order to address the drawbacks of the conventional DTC structure, this paper investigates a DTC structure employing a fuzzy controller (FDTC). The fuzzy controller in this structure replaces the stator flux controller, torque controller, and switching table of the classic DTC structure. The fuzzy rules proposed in this paper are built on the principle of direct torque control and are derived from the DTC table with 12 sectors. This ensures the stability and robustness of the FDTC control structure. Simulation results in Matlab/Simulink show that the fuzzy rules suggested in this paper, when combined with the FDTC structure, perform exceptionally well, and the torque ripple and stator flux ripple are significantly reduced compared to the traditional DTC structure. Furthermore, even at low speeds, FDTC has a strong dynamic response.

16:30
Mushroom Classification using ANN, KNN, Naive Bayes, RF, SVM and the Gradient Boosting Algorithm XGBoost
PRESENTER: Irum Matloob

ABSTRACT. Of the millions of different varieties of mushrooms that exist worldwide, some are edible and others are toxic. Differentiating between edible and deadly mushrooms is challenging and requires knowledge. To this end, various machine learning models were employed to assess the toxicity and edibility of mushrooms. We also developed a model for classifying mushrooms using various machine learning algorithms. The Kaggle website provided the dataset that was used for this task. The dataset consists of 8124 samples with a total of 23 features, which are divided into two categories: edible and poisonous. According to the distribution of mushrooms by class, 51.8% of the dataset's mushrooms were labeled as edible while 42.8% were poisonous. Principal component analysis (PCA), data preprocessing, and exploratory data analysis (EDA) were some of the stages included in the project. K-Nearest Neighbours (KNN), Random Forest, Naive Bayes, Support Vector Machine (SVM) and the gradient boosting algorithm XGBoost were some of the ML algorithms applied in the comparison. The ideal value of K for KNN was established through experimentation. The accuracy scores for each algorithm were determined, and the results indicated that KNN, Random Forest, and XGBoost attained the highest accuracy, with accuracies of 100%, 100%, and 99.9%, respectively. Accuracy rates for SVM and Naive Bayes were 99.5% and 92.43%, respectively. The ANN also attained the highest training accuracy of 100%, with 100% test accuracy.

16:40
Hydrogen Production and Storage Methods: Recent Trends and Technologies

ABSTRACT. Hydrogen is an emerging technology that offers a sustainable energy pathway for the transportation industry and buildings. It can help reduce environmental pollution and climate change. In this context, this paper investigates hydrogen production and storage methods to calculate the carbon emissions (CEs) of an electrical energy system and assess the environmental benefits of carbon reduction. An analytical model for carbon emission flow (CEF) in the electricity network is used to evaluate the carbon emissions. Case study results are presented to evaluate CO2 emissions and the average emission intensity with respect to hydrogen production. This paper provides a thorough investigation of the hydrogen energy landscape with future implications. By emphasizing this unique focus, it is envisaged that the literature will be enriched and the understanding of hydrogen as a promising energy source will gradually increase.

15:30-17:00 Session 6H: E-session II
Location: E-session room
15:30
Photoplethysmography Signal Quality Assessment Using Neighbour Edge Restricted Horizontal Visibility Graph and Machine Learning Classifiers
PRESENTER: Zahir Khan

ABSTRACT. Photoplethysmography (PPG) signals are vital for monitoring pulse rate, blood pressure, and more, but they are prone to motion artefacts and noise, leading to unreliable data. Assessing PPG signal quality is therefore crucial for reliable healthcare and accurate medical diagnoses. By transforming the PPG time-domain signal into a Horizontal Visibility Graph (HVG) network and combining features extracted from the HVG with machine learning algorithms, the PPG signal can be classified as clean (high quality) or noisy. We propose a new version of HVG, called the Neighbour Edge Restricted Horizontal Visibility Graph (NERHVG), which imposes extra conditions for joining edges in the HVG, for PPG signal quality assessment (SQA). We use the average degree (AD) of the graphs produced by the HVG and NERHVG algorithms as a feature in three machine learning classifiers, Random Forest (RF), Gaussian Naive Bayes (GNB), and Decision Tree (DT), to classify four standard untrained PPG datasets. The resulting classifier models are named DT-HVG, DT-NERHVG, RF-HVG, RF-NERHVG, GNB-HVG, and GNB-NERHVG. The performance of the HVG and NERHVG algorithms with the AD feature is then compared across these classifier models. The NERHVG algorithm outperformed the HVG algorithm with the AD feature on all four datasets, with maximum accuracies of 99.09%, 95.03%, 96.56%, and 84.63% using the GNB classifier.
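For concreteness, a Python sketch of the baseline HVG average-degree feature follows: two samples "see" each other if every sample between them is strictly lower than both. The paper's exact NERHVG edge restrictions are not given in the abstract, so the max_span guard below is only a hypothetical stand-in for such a restriction:

import numpy as np

def hvg_average_degree(x, max_span=None):
    """Average degree (2E/N) of the horizontal visibility graph of series x."""
    n, edges = len(x), 0
    for i in range(n - 1):
        top = -np.inf                      # running max of samples between i and j
        for j in range(i + 1, n):
            if max_span is not None and j - i > max_span:
                break                      # hypothetical neighbour-edge restriction
            if top < min(x[i], x[j]):
                edges += 1                 # all intermediate samples are lower
            top = max(top, x[j])
            if top >= x[i]:
                break                      # nothing beyond j can see sample i
    return 2.0 * edges / n

One AD value per PPG segment would then be fed, per the abstract, to the RF, GNB, and DT classifiers.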

15:40
Effective Sparse Reconstruction Algorithms for Compressed ECG Sensing with Deterministic Binary Block Diagonal Sensing Matrix

ABSTRACT. For low-power, energy-efficient wearable edge health devices and health monitoring systems, compressed sensing (CS) achieves significant advances by sampling sparse signals such as electrocardiograms (ECG) at sub-Nyquist rates. This paper explores the best sparse reconstruction algorithm to pair with a sparsifying matrix composed of a discrete cosine (C) and a discrete sine (S) basis [C S] and a deterministic binary block diagonal (DBBD) sensing matrix. Six sparse reconstruction algorithms are used for the recovery of CS ECG: orthogonal matching pursuit (OMP), approximate message passing (AMP), L1-minimization (L1-min), compressive sampling matching pursuit (CoSaMP), iterative hard thresholding (IHT), and iterative soft thresholding (IST). ECG signals from the 48 records of the MIT-BIH arrhythmia database (mitdb) were tested for CS reconstruction at compression ratios (CR) of 2.4, 3, 4, 4.8, 6, and 8. Results showed that the combination of the OMP algorithm, the DBBD sensing matrix, and the [C S] basis surpassed the other recovery algorithms in terms of lower average percentage root-mean-square difference (PRD), achieving the lowest average PRD of 1.04 on mitdb record 213 at a CR of 2.4. Although L1-min is competitive with OMP in terms of PRD, OMP is computationally simpler and has a shorter recovery time.
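Two of the named ingredients are simple enough to sketch directly; the DBBD sensing matrix and the PRD score are shown below (the OMP recovery over the [C S] dictionary is omitted, and the block layout is the usual non-overlapping one, assumed rather than taken from the paper):

import numpy as np

def dbbd_matrix(m, n):
    """m x n binary block-diagonal sensing matrix; n must be a multiple of m."""
    assert n % m == 0
    block = n // m
    phi = np.zeros((m, n))
    for i in range(m):
        phi[i, i * block:(i + 1) * block] = 1.0   # one block of ones per row
    return phi

def prd(x, x_hat):
    """Percentage root-mean-square difference between original and recovery."""
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

# e.g. CR = n/m = 4: a 64 x 256 DBBD matrix compresses 256 samples to 64.
phi = dbbd_matrix(64, 256)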

15:50
Noise-Aware Atrial Fibrillation Detection for Resource-constrained Wearable Devices
PRESENTER: Nabasmita Phukan

ABSTRACT. Atrial fibrillation (AF) is characterized by RR intervals of unequal length, fibrillatory waves, and an absent P-wave. AF raises the risk of ischemic stroke, so early diagnosis is essential. Because of the intermittent nature of AF, early diagnosis relies on continuous electrocardiogram (ECG) monitoring. This work presents a lightweight, single-stage, noise-aware AF detection method based on a 1D convolutional neural network (CNN), implemented on a computing platform with limited memory and battery capacity. Across 5 datasets, the 5-layer CNN with optimal hyperparameters (kernel size: 4x1; number of kernels: 8, 16, 32, 64, and 128; loss function: sparse categorical cross-entropy; optimizer: adaptive moment estimation) demonstrated an accuracy, sensitivity, and specificity of 99.89%, 99.95%, and 99.81%, respectively, with a model size of 3.15 MB and a latency of 0.30 ms for a 5 s ECG segment. Deployed on a Raspberry Pi 4B computing platform, the CNN model detects AF with an accuracy and sensitivity of 99.67% and 99.62%, respectively. These results show the method is feasible for wearable health monitoring devices, reducing false alarm rates while increasing performance.
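A hedged Keras sketch of a 5-layer 1-D CNN using the hyperparameters quoted above; the input length (5 s at an assumed 250 Hz sampling rate), the pooling layers, and the dense head are guesses, as the abstract does not give the full topology:

import tensorflow as tf
from tensorflow.keras import layers, models

def build_af_cnn(input_len=1250, n_classes=2):
    m = models.Sequential([layers.Input(shape=(input_len, 1))])
    for filters in (8, 16, 32, 64, 128):          # five conv layers, as quoted
        m.add(layers.Conv1D(filters, kernel_size=4, activation="relu"))
        m.add(layers.MaxPooling1D(2))             # pooling is an assumption
    m.add(layers.GlobalAveragePooling1D())
    m.add(layers.Dense(n_classes, activation="softmax"))
    m.compile(optimizer="adam",                   # adaptive moment estimation
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

model = build_af_cnn()
model.summary()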

16:00
ECG Quality Detection and Noise Classification for Wearable Cardiac Health Monitoring Devices
PRESENTER: Achinta Mondal

ABSTRACT. Electrocardiogram (ECG) signals are continuously acquired from the body surface and analyzed or transmitted by wearable devices for continuous cardiac health monitoring under resting, ambulatory, and exercise conditions. Automatic checking of ECG signal quality is essential to reduce false alarms and improve the trustworthiness of automatic ECG diagnosis. Furthermore, identifying the type of ECG noise source allows a noise-specific denoising approach to be selected, improving noise removal effectiveness at reduced computational load. In this paper, we present a convolutional neural network (CNN) based ECG quality detection and classification method, exploring optimal hyperparameters to achieve a lightweight CNN model with acceptable performance in classifying noise into electrode movement artifacts, muscle artifacts, and random noise. The proposed method achieves an accuracy of 94.87% in detecting ECG signal quality and a sensitivity above 98% in identifying the three noise types, using two convolutional layers and three dense layers with the exponential linear unit as the best activation function. The model has a size of 1409 kB and a computational time of 71.62 ms for processing a 5 s ECG signal. By discarding severely corrupted signals and enabling noise-specific ECG denoising, the proposed quality checking and noise type identification method has great potential to reduce false alarms in automated cardiovascular disease diagnosis.

16:10
CNN based Heart Rate Classification Using ECG Signal Without R-peak Detection for Rhythm-Aware Health and Emotion Monitoring

ABSTRACT. Heart rate is an important vital sign in health and wellness monitoring for identifying various cardiac arrhythmias, such as sick sinus syndrome with slow heart rates (bradyarrhythmias) and atrial fibrillation with fast heart rates (tachyarrhythmias). In this paper, we present a one-dimensional convolutional neural network (CNN) based heart rate classification (HRC) method that uses the electrocardiogram (ECG) waveform without R-peak detection, with the major objective of developing heart rhythm-aware health and emotion monitoring systems. The proposed CNN-ECG-based HRC method consists of two major stages, preprocessing and a CNN architecture, for directly classifying the ECG signal into normal-rate, slow-rate, and fast-rate ECG without R-peak detection. The CNN model is trained on ECG signals (normal, slow, and fast heart rates) from the Massachusetts Institute of Technology Beth Israel Hospital arrhythmia (MIT-BIHA) database. On untrained ECG databases, the CNN-based method achieves overall accuracies of 89.18%, 92.74%, and 88.98% on the Apnea-ECG database, the MIT-BIH polysomnographic (MIT-BIH SLP) database, and the MIT-BIH atrial fibrillation (MIT-BIH AFib) database, respectively. Evaluation results show that the deep ECG waveform-based HRC method achieves promising results for ECG signals with similar morphological patterns within 10-second segments.

16:20
A Unified ECG Paper Digitization Framework with R-peak Detection for RR Interval Analysis and Deep ECG Learning Applications

ABSTRACT. Electrocardiogram (ECG) signals are commonly acquired using recording systems that, in most hospitals and clinics, produce printed ECG waveforms together with essential ECG parameters and patient information. To preserve old ECG records with essential clinical information (e.g., arrhythmias) and to utilize large volumes of paper ECGs in deep learning based ECG analysis systems, an ECG paper digitization method is needed to extract digitized ECG signals from printed ECG papers. In this paper, we present a fully automated ECG paper digitization framework that generates one-dimensional ECG signals and detects R-peaks, enabling automated ECG signal analysis in low-resource settings. The proposed framework comprises the major steps of preprocessing, edge detection, ECG lead segmentation, 1-D ECG signal generation, and R-peak detection for beat-to-beat interval (BBI) analysis. We evaluate the framework on scanned ECG papers of different image qualities (low-quality scans, skewed papers, dark colors, and different paper colors) and on ECG papers with different kinds of PQRST complexes, different numbers of leads, and printed text. Performance is validated in terms of the number of detected R-peaks and the Pearson correlation coefficient (PCC); the framework achieves PCC values greater than 0.93. Evaluation results demonstrate that the proposed unified framework with R-peak detection has great potential for beat-to-beat interval analysis and deep learning based ECG analysis applications.
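The validation metric is easy to make concrete; a minimal numpy sketch of the Pearson correlation between a digitized trace and a reference digital signal (both assumed already resampled to a common length) follows:

import numpy as np

def pearson_cc(x, y):
    """Pearson correlation coefficient between two equal-length 1-D signals."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Values above 0.93, as reported, indicate close waveform agreement.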

16:30
Monitoring, control and optimization of wastewater treatment plants: a brief review
PRESENTER: Popescu Gheorghe

ABSTRACT. Wastewater treatment processes are essential for ensuring water quality and protecting the environment, and effective operation depends on the monitoring, control, and optimization of process-specific parameters in treatment plants. In this context, this paper analyzes the different methods used to monitor, control, and optimize process-specific parameters in wastewater treatment plants, including classical rule-based control, methods based on mathematical models and simulations, evolutionary algorithms, machine learning techniques, and artificial intelligence. Each method is examined in terms of its advantages, limitations, and areas of application, providing a comprehensive view of the available solutions. Aspects of the practical implementation of these methods in real wastewater treatment plants are also discussed, along with future trends in their research and development. Proper appreciation and application of these methods can contribute significantly to the efficiency and sustainability of wastewater treatment processes, with a positive impact on the environment and the community as a whole. The importance of wastewater treatment, and of optimization algorithms for improving the efficiency of biological treatment processes in sewage treatment plants, is very high today: the discharge of mechano-biologically purified wastewater into natural outfalls has consequences ranging from effects on human health to complex ecological, technical, and economic problems. Thus, as a new contribution to the literature, this study reviews and analyzes the history, current issues, and future directions of the control of specific process parameters in wastewater treatment plants in the context of sustainable development. The study should guide researchers and industry partners towards the further development of process optimization in sewage treatment plants.

15:30-17:00 Session 6I: E-session III
Location: E-session room
15:30
Improvement Of An Untrained Brain-computer Interface System Combined With Target Recognition
PRESENTER: Jihong Xu

ABSTRACT. In the commonly used Steady State Visual Evoked Potential (SSVEP) paradigm, the stimuli are mostly white flashing blocks superimposed on a black background; this is monotonous and readily causes subject fatigue during prolonged flashing stimulation. The stimulus paradigm is usually divorced from the actual control environment and lacks a direct connection to the control task. Mainstream classification algorithms usually analyze the data with a fixed window length, which lacks generalizability across subjects, and their classification performance needs further improvement. In this study, the SSVEP stimulus paradigm was improved by incorporating the YOLOv5 algorithm, changing from the traditional black background to the actual control environment: SSVEP stimulus blocks of different frequencies are superimposed at each recognized target location, so the stimulus paradigm is not stripped from the control scene. The Filter Bank Canonical Correlation Analysis (FBCCA) algorithm was chosen for analysis and further improved with a dynamic window strategy that automatically adjusts the window length of each experiment according to the characteristics of each subject, improving the versatility of the algorithm and increasing recognition accuracy and the Information Transfer Rate (ITR). On offline experimental data, the improved algorithm achieved an average accuracy of 87.08%, which is 17.29 percentage points higher than the original algorithm, and an average ITR of 74.28 bits/min, which is 36.51 bits/min higher than the original algorithm.
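For readers unfamiliar with FBCCA, a condensed Python sketch follows: band-pass the EEG into sub-bands, correlate each sub-band with sin/cos reference templates per candidate frequency, and fuse the sub-band correlations with weights. The band edges, weights, and harmonic count below are common defaults from the FBCCA literature, not the authors' settings, and their dynamic-window extension is omitted:

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS, N_HARM = 250, 3   # assumed sampling rate and harmonic count

def references(freq, n_samples):
    """Sin/cos templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, N_HARM + 1)
                            for f in (np.sin, np.cos)])

def fbcca_classify(eeg, candidate_freqs, n_bands=5):
    """eeg: (n_samples, n_channels). Returns the most likely stimulus frequency."""
    n = eeg.shape[0]
    weights = [(k + 1) ** -1.25 + 0.25 for k in range(n_bands)]  # standard FBCCA weights
    scores = []
    for f in candidate_freqs:
        ref = references(f, n)
        rho = 0.0
        for k in range(n_bands):
            # Sub-band k spans roughly 8*(k+1) Hz to 88 Hz.
            b, a = butter(4, [8 * (k + 1) / (FS / 2), 88 / (FS / 2)], "bandpass")
            xk = filtfilt(b, a, eeg, axis=0)
            cca = CCA(n_components=1).fit(xk, ref)
            u, v = cca.transform(xk, ref)
            rho += weights[k] * np.corrcoef(u[:, 0], v[:, 0])[0, 1] ** 2
        scores.append(rho)
    return candidate_freqs[int(np.argmax(scores))]

The dynamic-window idea in the abstract would wrap this classifier, growing the analysis window until a per-subject confidence criterion is met.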

15:45
TIACE: A Transformer-Inspired Attentional CNN Encoder for Enhanced ECG Classification with Signal Processing

ABSTRACT. The role of Artificial Intelligence in the healthcare industry continues to expand, with significant potential for Electrocardiography (ECG) analysis and arrhythmia detection. Despite much quality research on ECG classification, the correlation between local and global feature segments extracted from ECG signals has been under-exploited. Moreover, there have been few user-friendly interfaces, which hinders accessibility for the general public. To address this challenge, we propose a new approach called TIACE, an acronym for Transformer-Inspired Attentional Convolutional Encoder. Our method leverages a transformer-inspired combination of 1D Convolutional Neural Networks (CNNs) and multi-head attention networks for enhanced ECG classification. TIACE ensures that spatial features from the CNN layers and informative features from the attention layers are combined and preserved for classification. Furthermore, we integrate signal processing techniques to provide the model with quality, filtered signals. We have also deployed TIACE on the cloud using Docker containers, ensuring accessibility for remote healthcare monitoring and rapid diagnoses, a significant advancement towards real-world integration. TIACE achieved an accuracy of 98.72% with an F1 score of 99.16%, outperforming the established state-of-the-art algorithms evaluated across multiple performance metrics.
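A hedged Keras sketch of the described combination, 1-D convolutions for local features plus multi-head attention for global context, with both feature streams concatenated before classification; layer sizes and wiring are illustrative guesses, not the authors' architecture:

import tensorflow as tf
from tensorflow.keras import layers, models

def build_tiace_like(input_len=360, n_classes=5):
    inp = layers.Input(shape=(input_len, 1))
    x = layers.Conv1D(32, 7, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    local = layers.Conv1D(64, 5, padding="same", activation="relu")(x)  # local features
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=16)(local, local)
    attn = layers.LayerNormalization()(attn + local)                    # transformer-style residual
    # Preserve both streams by concatenating their pooled features.
    merged = layers.Concatenate()([layers.GlobalAveragePooling1D()(local),
                                   layers.GlobalAveragePooling1D()(attn)])
    out = layers.Dense(n_classes, activation="softmax")(merged)
    m = models.Model(inp, out)
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m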

16:00
IMPLEMENTATION OF DATA WAREHOUSE WITH SNOWFLAKE SCHEMA IN ELECTRIC VEHICLES REALM

ABSTRACT. As electric vehicles (EVs) produce less pollution than conventional fuel-powered vehicles, governments seek to promote EVs as an urban transportation service to protect environmental and economic sustainability. An optimal and sufficient charging infrastructure is therefore necessary to enable a rapid transition to EVs and their widespread adoption. This supports a good driving experience, so that drivers can recharge their EVs comfortably and quickly, maximizing the return on investment and minimizing the impact on the network; it requires the skilful application of sophisticated tools and technologies to optimize EV charging infrastructure sites. In this sense, the goal of this paper is to propose a data warehouse system in the field of electric vehicles to integrate, analyze, and utilize data from various sources, assisting policymakers and administrators in making efficient decisions for forward planning and strategies that optimize charging infrastructure, provide the best services for drivers, and improve the overall efficiency and effectiveness of electric vehicle operations and development.

Keywords: Data warehouse (snowflake schema), electric vehicle, charging station

16:15
A Deep Learning-Powered Web Service for Optimal Restaurant Recommendations Based on Customers Food Preferences

ABSTRACT. The surge of food images on social media demands effective classification algorithms for applications like restaurant recommendation, personalized health management, nutrition analysis, and dietary monitoring. Food classification has recently made substantial progress through Deep Learning, driven by the availability of large-scale food datasets and improvements in Deep Learning models. This paper presents a web service designed to help users identify the optimal restaurant for a desired dish based on quality, price, and location, using a Deep Learning (DL) engine for food image classification. The research evaluates the efficiency of several DL algorithms, namely You Only Look Once (YOLO) V8, YOLO V5, ResNet 50, ResNet 18, Inception V3, VGG 16, and MobileNet, using a Jetson Nano board for training (100 epochs). The Food-101 dataset is used for training, with the LabelImg tool employed for annotation; the annotated version of the dataset is used to train YOLO V8 and YOLO V5. The findings reveal that YOLO V8 attains a notable accuracy of 96.3%, surpassing YOLO V5, ResNet 50, ResNet 18, Inception V3, VGG 16, and MobileNet, which achieve 89.7%, 89.35%, 67.23%, 76.01%, 78%, and 57.90% accuracy, respectively. Consequently, the research advocates the adoption of YOLO V8 as the DL algorithm for the proposed web service. Future enhancements will include integrating textual data as a feature to improve detection efficiency. The web service will be deployed on Amazon Web Services (AWS), opening up possibilities for expansion through a mobile application on platforms such as the App Store or Google Play.
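Serving the chosen model is straightforward with the ultralytics package; the sketch below assumes a hypothetical weights file ("food101_yolov8.pt") trained on the annotated Food-101 data and a placeholder input image:

from ultralytics import YOLO

model = YOLO("food101_yolov8.pt")      # hypothetical trained weights
results = model("uploaded_dish.jpg")   # one image -> list with one Results object
for box in results[0].boxes:           # detected dishes with confidences
    cls_id = int(box.cls[0])
    print(model.names[cls_id], float(box.conf[0]))

The web service described in the abstract would wrap a call like this behind an upload endpoint and join the predicted dish label against restaurant quality, price, and location data.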

16:30
Assessing Cybersecurity Awareness among The Hashemite University Students In Terms Of Computer Usage
PRESENTER: Ashraf Aljammal

ABSTRACT. Internet usage among information technology users has expanded substantially, and so have cybercrimes with potentially disastrous repercussions. Internet users therefore have to employ the available security measures and policies. Nowadays, university students make up a sizable portion of internet users. This study assesses cybersecurity awareness among students of The Hashemite University in terms of computer usage. A questionnaire survey is used to assess students' cybersecurity knowledge while using their PCs and connecting to the internet, concentrating on their behaviors and use of security measures.

16:45
Development of a Laboratory Testbed for Cybersecurity Evaluation of Distribution Substations Using Open Source Tools

ABSTRACT. This paper details the development of a laboratory testbed for cybersecurity testing of distribution substations, focusing on the GOOSE communication protocol specified by IEC 61850. The testbed uses open source tools to replicate the operational conditions of a real substation, employing Intelligent Electronic Devices (IEDs) with a Technology Readiness Level (TRL) of 7. This ensures high fidelity to real-world applications while maintaining a safe, isolated environment. The increasing digitization and automation of substations introduce new cybersecurity vulnerabilities that could disrupt power distribution, highlighting the need for robust security measures. The developed testbed provides a crucial platform for testing and improving cybersecurity defenses to protect critical infrastructure from evolving threats.

17:00-17:30Coffee Break
17:00-18:00 Cultural program

Guided tour through the Hall of Lost Steps & Library