ISPA 2022: THE SEVENTH INTERNATIONAL CONFERENCE ON IMAGE AND SIGNAL PROCESSING AND THEIR APPLICATIONS
PROGRAM FOR SUNDAY, MAY 8TH

08:30-08:45 Session 2: Official opening

Honorary Chair, ISPA 2022

Professor Belabbas Yagoubi,

Rector of Abdelhamid Ibn Badis University of Mostaganem, Algeria

08:45-09:30 Session 3: Plenary conference I
08:45
The emerging role of deep learning in multimodality medical imaging

ABSTRACT. This talk presents the fundamental principles and major applications of artificial intelligence (AI), in particular deep learning approaches, in multimodality medical imaging. To this end, the applications of deep learning in five generic fields of multimodality medical imaging are discussed: imaging instrumentation design, image denoising (low-dose imaging), image reconstruction, quantification and segmentation, radiation dosimetry, and computer-aided diagnosis and outcome prediction. Deep learning algorithms have been widely utilized in various medical image analysis problems owing to the promising results achieved in image reconstruction, segmentation, regression, denoising (low-dose scanning) and radiomics analysis. This talk reflects the tremendous increase in interest in quantitative molecular imaging using deep learning techniques in the past decade to improve image quality and to obtain quantitatively accurate data from dedicated combined PET/CT and PET/MR systems. Deploying AI-based methods on a different test dataset requires ensuring that the developed model has sufficient generalizability; this is an important part of quality control measures prior to implementation in the clinic. Novel deep learning techniques are revolutionizing clinical practice and now offer unique capabilities to the clinical medical imaging community. Future opportunities and the challenges facing the adoption of deep learning approaches, and their role in molecular imaging research, are also addressed.

10:00-12:20 Session 4A: Image Processing
10:00
Transfer Learning for Plant Disease Detection on Complex Images
PRESENTER: Amina Aarizou

ABSTRACT. In the last few years, the use of deep Convolutional Neural Networks (CNN) for the detection and classification of plant diseases from leaf images has become an active research area showing very good results. However, using deep learning classifiers requires large labelled datasets. The large and freely available plant disease dataset widely used by researchers is the PlantVillage dataset. The main issue with this dataset is that it contains only laboratory images, which reduces classifier performance when tested on complex field images. In this paper, we study the use of both laboratory and field images for training deep CNNs to classify healthy and unhealthy plants. We combine laboratory and complex field images, taken respectively from the PlantVillage and EdenLibrary datasets, to build a dataset containing 54,000 images equally distributed over two classes: 'healthy' and 'unhealthy'. This dataset is used to fine-tune three state-of-the-art image classifiers pre-trained on the ImageNet dataset: AlexNet, ResNet34 and DenseNet121. The experimental results show that using the combined dataset significantly improves the classification accuracy for complex images.
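
As an illustration of the fine-tuning step described in the abstract, the sketch below adapts an ImageNet-pretrained ResNet34 to a two-class healthy/unhealthy task; the dataset path, transforms and hyperparameters are assumptions for illustration, not the authors' actual settings.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained ResNet34 for the two-class
# (healthy/unhealthy) task. Dataset path and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder with 'healthy'/'unhealthy' subdirectories.
train_set = datasets.ImageFolder("combined_plant_dataset/train", transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-class head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:          # one pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```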

10:20
Contactless Palmprint Recognition System Using ICANet-Based Deep Features
PRESENTER: Abdelhakim Fares

ABSTRACT. Feature extraction is an important task in image-based pattern recognition applications, due to the large number of different features existing in an image and its multiple application areas. Considerable effort has therefore been made by researchers in this direction, leading in many cases to excellent classification results. In this paper, the impact of deep learning techniques on the performance of these systems is evaluated. For reliable assessment, a contactless palmprint-based biometric system, a typical pattern recognition application, has been developed. In this study, a simple and lightweight deep learning architecture (ICANet) is used for the feature extraction process. The experimental results of ICANet are compared to other lightweight deep learning architectures (PCANet and DCTNet), and the comparison shows the effectiveness of ICANet in terms of classification rate.

10:40
Combining hand-crafted and deep-learning features for single sample face recognition

ABSTRACT. Single Sample Face Recognition (SSFR) is considered one of the most challenging issues in biometrics. This paper suggests a hybrid model to overcome the SSFR problem using two-dimensional face images. Two kinds of features are employed for recognition: the first type is extracted using the robust Multi-block Color Binarized Statistical Image Features (MB-C-BSIF) descriptor, also called hand-crafted features. The other consists of deep-learning features derived by applying a Convolutional Neural Network (CNN) model to each face image. This is the first study that combines hand-crafted with deep-learning characteristics for the SSFR issue. We explored whether combining both features can improve recognition performance. Comparative experiments using the AR database indicate that performance improvements can be attained by combining both features, and the fusion of VGG-16-19 with MB-C-BSIF methods achieved the highest accuracy among all the combinations.

11:00
Securing medical images: A crypto-watermarking and biometrics-based scheme

ABSTRACT. The protection of medical images during their transmission is imperative in healthcare applications. In order to achieve the needed security, it is essential to provide authentication, integrity control, non-repudiation and data confidentiality. Therefore, we propose in this paper a crypto-watermarking and biometrics-based scheme. An AES-CFB encryption algorithm is used with a 256-bit key generated from a hashed extracted biometric feature. The simulations were applied to a medical image and the performance was analysed based on the characteristics of the methods used. The results show that the proposed scheme provides all security requirements and can resist several attacks.
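
A minimal sketch of the encryption step named in the abstract, assuming the 256-bit AES key is obtained by hashing an extracted biometric feature vector with SHA-256; the feature vector and image below are placeholders, and the IV handling is an illustrative choice rather than the authors' exact protocol.

```python
# Sketch: AES-CFB encryption of an image with a key derived from a hashed
# biometric feature. Feature extraction is out of scope; arrays are placeholders.
import os
import hashlib
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

feature_vector = np.random.rand(128).astype(np.float32)   # placeholder biometric feature
key = hashlib.sha256(feature_vector.tobytes()).digest()   # 256-bit key

iv = os.urandom(16)                                        # per-image initialization vector
cipher = Cipher(algorithms.AES(key), modes.CFB(iv))

image_bytes = np.random.randint(0, 256, (256, 256), dtype=np.uint8).tobytes()
ciphertext = cipher.encryptor().update(image_bytes)
recovered = Cipher(algorithms.AES(key), modes.CFB(iv)).decryptor().update(ciphertext)
assert recovered == image_bytes
```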

11:20
Autoencoder-based Local Descriptor for Target Recognition in Infrared Images
PRESENTER: Billel Nebili

ABSTRACT. Target recognition systems consist of two stages: feature extraction and classification. To improve the performance of these systems, we require high-quality features. The features extracted from RGB images are generally rich in information, which is not the case for infrared images. The processing of the latter is challenging, mainly due to their properties of low contrast, competitive background and lack of edges. Traditional hand-crafted feature extractors are limited by these challenges. In this work, an autoencoder-based local descriptor is explored for target recognition in infrared images. Firstly, we generate the local descriptors using a convolutional autoencoder. Next, a global representation is built from these local descriptors, based on the bag-of-visual-words model. Finally, we feed the global descriptor to a classifier. The evaluation of this approach is carried out on two thermal datasets: the FLIR thermal starter dataset and the VAIS dataset.

11:40
SwinT-Unet: Hybrid architecture for Medical Image Segmentation Based on Swin transformer block and Dual-Scale Information
PRESENTER: Sarra Atek

ABSTRACT. The fast development of Convolutional Neural Networks (CNN) based on the U-shaped architecture has brought innovative improvements to the field of image segmentation. However, these approaches cannot learn global information in images due to the locality of the convolution operation. This paper deals with the design of a hybrid method for medical image segmentation. Taking advantage of the Shifted windows (Swin) transformer block to extract fine-grained features and the Transformer Interactive Fusion (TIF) module to establish global dependencies between features of different scales, the proposed approach consists of a dual-scale encoder Swin transformer U-shaped architecture (SwinT-Unet). The effectiveness of this method has been evaluated on the Synapse multi-organ CT dataset. The suggested segmentation proved more efficient than the results of some other current methods.

12:00
Classification of Fundus Images based on Multifractal Features

ABSTRACT. This paper presents a new approach for the classification of fundus images based on multifractal analysis. The first step of the proposed method consists in removing the image background in order to reduce computation time. The next step is the multifractal analysis, which consists in computing the multifractal spectrum of each image using the generalized fractal dimensions and the Legendre spectrum. Then, five multifractal features are extracted from the multifractal spectrum; in parallel, GLCM characteristics and entropy are extracted from each image. Finally, the ten extracted features are fed to three classifiers (SVM, KNN and DT). The proposed approach was tested on a mixed database containing 1778 images, including 816 normal images and 962 abnormal images, where the abnormal cases cover a set of thirty-nine pathologies. The SVM classifier gives the best results with a sensitivity of 94.85%.
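
The final classification stage could look like the following sketch, where a ten-dimensional feature vector per image is fed to SVM, KNN and DT classifiers; the feature matrix and labels are random placeholders standing in for the real multifractal, GLCM and entropy features.

```python
# Illustrative sketch of the classification stage only; X and y are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

X = np.random.rand(1778, 10)              # 10 features per fundus image (placeholder)
y = np.random.randint(0, 2, 1778)         # 0 = normal, 1 = abnormal (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("DT", DecisionTreeClassifier())]:
    clf.fit(X_tr, y_tr)
    sens = recall_score(y_te, clf.predict(X_te), pos_label=1)  # sensitivity
    print(f"{name}: sensitivity = {sens:.3f}")
```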

10:00-12:20 Session 4B: Telecommunication
10:00
Design of Algeria Flag shape Antenna Based On A New Circular SIW-WUVM Topology
PRESENTER: Turkiya Abes

ABSTRACT. A new design of a circular antenna using Substrate Integrated Waveguide Without Upper Vias Metalization (SIW-WUVM) is proposed. For comparison and demonstration, Algeria circular flag-shaped antennas based on both SIW-WUVM and conventional circular SIW with upper metalized vias (CSIW), operating at 24 GHz, are designed and simulated. The good antenna characteristics obtained, such as gain and radiation pattern, illustrate and demonstrate the validity of the direct integration of SIW technology without upper metalized holes. This design allows the miniaturization of antenna dimensions in RF, which is useful for wireless communication systems.

10:20
A New Distributed-STBC Scheme for Cooperative Relaying in Wireless Networks
PRESENTER: Hakim Tayakout

ABSTRACT. Space-time encoded communications aim at improving the quality and reliability of a wireless high-data-rate link by exploiting both the temporal and spatial signal dimensions. However, these techniques are impeded by the typically low-profile requirements of the terminals, making the deployment of multiple antennas impractical, particularly at the relay side. In this paper, we propose a new and simplified distributed space-time block coding (D-STBC) scheme in which the STBC code is artificially generated by relying on three single-antenna nodes in the cooperative network and using fewer time slots than the conventional counterpart. The proposed scheme is then generalized to encompass both multiple cooperative relays and multiple receive antennas. Furthermore, our proposal is investigated with both decode-and-forward (DaF) and amplify-and-forward (AaF) relaying protocols. At the destination, a zero-forcing detector is adopted for the D-STBC decoding, and maximum ratio combining (MRC) of the relay signals is performed when the multiple-relay configuration is retained. It is shown that, with one relay, our proposed scheme exhibits similar performance in terms of data reliability to the conventional STBC alternative, with the advantage of resorting to only one antenna per transmitting/retransmitting node. Moreover, the viable capabilities of the proposed scheme are fully pointed out with an increased number of relays.
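
For reference only, the sketch below shows the zero-forcing/MRC combining step for a conventional 2x1 Alamouti STBC block; it is the textbook detector, not the authors' distributed three-node scheme, and is meant only to make the decoding step at the destination concrete.

```python
# Textbook Alamouti 2x1 detection sketch (not the paper's distributed variant).
import numpy as np

rng = np.random.default_rng(0)
s = (2 * rng.integers(0, 2, 2) - 1) + 0j          # two BPSK symbols s1, s2
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # flat fading h1, h2
n = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Alamouti transmission over two time slots
r1 = h[0] * s[0] + h[1] * s[1] + n[0]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + n[1]

# Zero-forcing / maximum-ratio combining at the destination
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g

print(np.sign(s1_hat.real), np.sign(s2_hat.real), s.real)  # estimates vs. transmitted
```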

10:40
Seasonal Adjustment for traffic modeling and analysis in IEEE 802.15.4 networks
PRESENTER: M'Hammed Achour

ABSTRACT. Capturing seasonality is one of the most important processes when modeling any data signal. It helps to isolate repetitive behavior from the signal variations and, hence, to have a comprehensive view of the different components of that signal. However, this procedure is sometimes expensive in terms of space and time, due to the length of the season that incorporates all possible scenarios. This is strongly the case for the periodic traffic that follows the beacon-enabled mode of the IEEE 802.15.4 standard. The IEEE 802.15.4 superframe alters the traffic periodicity in a deterministic way, to the extent that capturing the seasonality can take months or even years for simple network configurations. In this work, we introduce an exponential adjustment that enormously reduces the season and makes traffic modeling and performance analysis feasible. Quantitative and qualitative evaluations of our approach reveal its comprehensiveness and effectiveness with minimal alteration of the network configuration.

11:00
Investigation on different Rain Conditions over 40 Gbps DPSK-FSO Link: ALGERIA Climate
PRESENTER: Amar Tou

ABSTRACT. Free Space Optical (FSO) links have recently become attractive means of transmission thanks to their simple, fast and economical deployment. However, their fragility in the face of severe weather conditions that may reduce visibility to a few meters is a challenge. Thus, more knowledge of the FSO channel under all weather conditions, like rain, snow and fog, is necessary in order to provide solutions to make this type of link more reliable, in particular through the choice of an appropriate modulation format. This paper presents the performance analysis of a 40 Gbps Return-to-Zero Differential Phase Shift Keying FSO (RZ-DPSK-FSO) system under different rain conditions, taking the weather of ALGERIA as a case study. According to OptiSystem simulation results, in terms of Bit Error Rate (BER) variations versus range, it has been observed that as the atmospheric condition changes from clear weather to rain, the link distance range decreases. Furthermore, the outcomes showed that the proposed system based on the DPSK modulation format exhibits acceptable performance levels.

11:20
A Dual Image Watermarking Scheme Based on WPT And Chaotic Encryption for Medical Data Protection
PRESENTER: Hadjer Abdi

ABSTRACT. Medical data security has always been a major concern for researchers due to its importance. In this context, digital watermarking is a security technique widely used to protect the integrity and authenticity of medical images. We propose in this paper an image watermarking algorithm based on the Wavelet Packet Transform, in which we insert a watermark image and the electronic patient record (EPR) inside the medical image. Then, to increase the scheme's security and confidentiality, the watermarked image is encrypted using a chaotic logistic map. Medical images from various modalities, as well as normal and pathological images, were used to test our method. The Peak Signal-to-Noise Ratio, Normalized Correlation, and Bit Error Rate were used to evaluate robustness and imperceptibility. According to the results, our approach outperforms other watermarking systems for both the watermarked images and the retrieved information, when compared to other approaches and under a variety of common attacks.
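
A minimal sketch of the confidentiality step, assuming the chaotic logistic map is used to generate an XOR keystream over the watermarked image; the map parameters act as the secret key and the image is a placeholder.

```python
# Sketch: logistic-map keystream encryption of a watermarked image (illustrative key values).
import numpy as np

def logistic_keystream(n_pixels, x0=0.3141592, r=3.99):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and quantize to 8-bit values."""
    x = x0
    stream = np.empty(n_pixels, dtype=np.uint8)
    for k in range(n_pixels):
        x = r * x * (1.0 - x)
        stream[k] = int(x * 256) % 256
    return stream

watermarked = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # placeholder image
ks = logistic_keystream(watermarked.size).reshape(watermarked.shape)

encrypted = watermarked ^ ks          # XOR-based diffusion
decrypted = encrypted ^ ks            # same keystream recovers the image
assert np.array_equal(decrypted, watermarked)
```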

11:40
Bidirectional transmission over optical fiber with Intermediate frequency for 5G system

ABSTRACT. This work demonstrates a promising technology for the future 5th generation (5G) system. The proposed full-duplex Radio over Fiber (RoF) transmission with Wavelength Division Multiplexing (WDM) uses the Non-Return-to-Zero (NRZ), Alternate Mark Inversion (AMI) and Carrier-Suppressed Return-to-Zero (CSRZ) modulation formats. These techniques provide efficient transmission and a high-speed communication system with an Intermediate Frequency (IF) signal. The models convey a 10 Gb/s data rate with external optical modulation, and the system performs transmission of 32 channels for the downlink and the uplink over a long-distance bidirectional optical fiber. The simulation results show a minimum Bit Error Rate (BER) and a high Quality factor (Q-factor) at reception.

12:00
Centralized Wideband Cooperative CRs Combined with PCA Anomaly Detector in the Presence of Jammer Attacks

ABSTRACT. In this work, a centralized wideband cooperative compressive sampling (CWCCS) scheme based on a sub-Nyquist sampling technique, combined with a principal component analysis (PCA) anomaly pattern detector, is presented. In the presence of jamming attacks and assuming Additive White Gaussian Noise (AWGN), two hypotheses are considered in this paper: the first corresponding to the absence of jammers and the second to their presence. The wideband analog signal transmitted by the primary user (PU) and received at each cognitive radio (CR) receiver is transformed into a digital signal using an analog-to-information converter (AIC) based on sub-Nyquist sampling theory via a centralized collaborative strategy, and all CR receivers then share the minimum number of samples. All these compressed measurements from each CR are collected in a compressed sampling matrix, which is used directly as the input of the PCA detector. The multivariate PCA anomaly detector, placed at the Fusion Center (FC), is based on the characteristics of the principal components. Simulations give good results in the presence of different jammers.
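
The PCA-based anomaly detection at the fusion center might be sketched as follows: the compressed sampling matrix is projected onto the principal subspace learned from jammer-free data, and a large reconstruction error flags a jammer. Matrix sizes, the number of components and the threshold are assumptions.

```python
# Hedged sketch of a PCA reconstruction-error anomaly detector on placeholder data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Rows = CR receivers' compressed measurement vectors (placeholder data).
M_clean = rng.normal(size=(200, 64))
pca = PCA(n_components=8).fit(M_clean)          # principal subspace of jammer-free data

def anomaly_score(samples):
    recon = pca.inverse_transform(pca.transform(samples))
    return np.linalg.norm(samples - recon, axis=1)  # residual energy per measurement

threshold = np.percentile(anomaly_score(M_clean), 99)   # calibrated false-alarm level
M_test = M_clean[:5] + np.r_[np.zeros((4, 64)), 5 * rng.normal(size=(1, 64))]
print(anomaly_score(M_test) > threshold)        # last row should be flagged
```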

13:20-15:20 Session 5A: Signal Processing
13:20
Enhancement of a compact support time-frequency distribution derived from a polynomial kernel using image processing

ABSTRACT. The polynomial Cheriet-Belouchrani distribution (PCBD) is considered to be one of the best time-frequency distributions (TFDs) available for the analysis of time-varying signals. As for all quadratic distributions, due to the unavoidable smoothing effects of the kernel, the recently proposed PCBD suffers from degradation of time-frequency localization. The aim of this paper is to enhance the readability and improve the concentration of this distribution. The obtained time-frequency map is enhanced using a specific method based on a two-dimensional mean filter, automatic binarization and morphological image processing techniques. The enhanced PCBD is compared to the original distribution using several tests on real-life and multi-component signals with linear and nonlinear frequency modulation (FM) components, including noise effects. Comparisons are also made with a selection of the best-known time-frequency representations (TFRs).

13:40
Non-contact estimation of breathing parameters based on CW radar
PRESENTER: Hamidou Ghilassi

ABSTRACT. The present paper addresses the realization of a contactless respiration monitoring system based on continuous wave radar (CWR). The developed system is composed of two parts, hardware and software. In the hardware part, the main purpose is to realize a CWR using low-cost components. The software part is dedicated to developing algorithms for tracing the breathing rhythm and estimating the respiration frequency. Firstly, the received signal is processed and its phase is exploited in the estimation of the respiratory rhythm. Secondly, the respiratory frequency is estimated based on either the periodogram method, the autocorrelation method, or a combination of both. Numerical simulations, together with an experimental study, have shown the efficiency of the proposed solution in estimating the respiratory rhythm and frequency.
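
A short sketch of the periodogram-based estimation mentioned in the abstract, using a synthetic phase signal in place of real CWR data; the sampling rate and breathing band are assumed values.

```python
# Sketch: respiration frequency estimation from a (synthetic) radar phase signal.
import numpy as np
from scipy.signal import periodogram

fs = 20.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
phase = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)  # 15 breaths/min

f, pxx = periodogram(phase, fs=fs)
band = (f > 0.1) & (f < 0.7)                # plausible breathing band: 6-42 breaths/min
f_resp = f[band][np.argmax(pxx[band])]
print(f"Estimated respiration rate: {60 * f_resp:.1f} breaths/min")
```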

14:00
Speech Steganography based on Double Approximation of LSFs Parameters in AMR Coding
PRESENTER: Hamza Kheddar

ABSTRACT. A steganographic method for VoIP applications based on line spectral frequency (LSF) modification is proposed in this paper. The aim of this research is to securely transmit secret binary data within an AMR cover bitstream. The proposed scheme uses Newton-based interpolation of the LSF coefficients of the four sub-frames, alongside the original linear interpolation. Newton interpolation gives a perfect similarity to linear interpolation when no secret information is being transmitted. Besides that, Newton interpolation provides high flexibility, which does not exist in linear interpolation, to modify eight of the ten LSF coefficients, excluding the first and the tenth LSFs. The results show that after a slight modification of ±0.01 (D) during LSF interpolation using our proposed scheme, intelligibility is preserved: the maximum steganographic quality change is 0.08 MOS. The proposed scheme provides a high capacity of 1.05 kbps, high undetectability, and a high security level against statistical steganalysis. Hence, our proposed steganographic method is effective, secure, and avoids raising the suspicion of an eavesdropper listening to the channel.

14:20
The Effect of the Width of non-stationary Spikes on their Detection using the Kalman Filter

ABSTRACT. The Kalman filter (KF) has been used for the detection of nonstationary epileptic spikes (ESs), under the assumption that the nonstationary electroencephalogram (EEG) can be modeled by a time-varying autoregressive (TVAR) model. The KF estimates the autoregressive (AR) coefficients, and another signal is then computed from these coefficients. In this estimated signal the epileptic spikes are accentuated and the background activity is attenuated. The detection step is based on thresholding to localize the ESs. Knowing this detection method is not sufficient to apply it; it is also necessary to know the conditions under which its results can be trusted. In this paper we study the effect of the spike width on the detection result, using a synthetic EEG containing triangular spikes that constitute the nonstationarities.

14:40
Epileptic Activity Prediction Within Pre-ictal Epochs: Application on Selected Channels

ABSTRACT. Automatic epileptic seizure prediction (AESP) is a challenge in epileptology, since it has the potential to improve seizure incidence control while also providing a better understanding of seizure origin. We created a system that utilizes the Discrete Wavelet Transform (DWT) and a kNN classifier to investigate effective channels among the numerous channels of the CHB-MIT Scalp EEG database. Pre-ictal and normal signals were decomposed with a 5-level DWT and then classified using selected key features. We attained an accuracy of 98.27%, a sensitivity of 100%, and a specificity of 96.66%, indicating the effectiveness of the deployed technique. Additionally, it outperforms previous results in terms of classification accuracy, computation time for selected channels, pre-ictal period, and time available for seizure avoidance techniques.
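
A compact sketch of the described DWT-plus-kNN pipeline, with placeholder EEG segments, simple sub-band statistics as features and a 5-nearest-neighbour classifier; the exact features and parameters of the paper are not reproduced here.

```python
# Sketch: 5-level DWT features + kNN classification on placeholder EEG segments.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(segment, wavelet="db4", level=5):
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:                      # approximation + detail sub-bands
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return np.array(feats)

rng = np.random.default_rng(0)
segments = rng.normal(size=(100, 1024))              # placeholder EEG segments
labels = rng.integers(0, 2, 100)                     # 0 = normal, 1 = pre-ictal
X = np.vstack([dwt_features(s) for s in segments])

knn = KNeighborsClassifier(n_neighbors=5).fit(X[:80], labels[:80])
print("Accuracy on held-out segments:", knn.score(X[80:], labels[80:]))
```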

15:00
Study of the Performance of Speaker Verification System based on Multi-resolution Cochleagram Features in Noisy Environment
PRESENTER: Ahmed Krobba

ABSTRACT. In this paper we propose to use multiresolution cochleagram (MRCG) features for noise-robust Speaker Verification (SV). Specifically, we study the performance of an i-vector based SV system when tested in noisy conditions using an MRCG-based front-end. The performance evaluation of the proposed methods and their extended variants is carried out on the NIST 2008 corpus under noisy conditions, using various SNR levels with noises extracted from NOISEX-92. Experimental results show that the proposed methods provide a better representation of the speech spectrum. Moreover, we obtained a significant improvement in performance under noisy conditions when compared to the MFCC and GFCC feature extractions.

13:20-15:20 Session 5B: Biomedical engineering
13:20
Miniature Circular Implantable antenna for Wireless Biomedical Applications
PRESENTER: Amaria Saidi

ABSTRACT. This paper presents a miniature circular implantable antenna suitable for biomedical applications. The suggested antenna operates in the industrial, scientific, and medical (ISM) band at 2.4-2.5 GHz. The proposed antenna design uses several miniaturization methods to obtain a size of 4 × 4 × 0.254 mm^3: a high-dielectric-constant substrate, a shorting pin, and slots in the radiating patch. The HFSS simulator is used to design and simulate the proposed antenna. Return loss, gain, and radiation pattern are estimated in one-layer and three-layer tissue models to demonstrate the performance of the proposed antenna. In order to observe the sensitivity of human tissues to electromagnetic energy, the specific absorption rate (SAR) for 1 g and 10 g in one and three layers is also determined.

13:40
Heart Outer Surface Segmentation from Computed Tomography Images
PRESENTER: Asmae Mama Zair

ABSTRACT. Volumetric heart segmentation is an essential operation in cardiology; it makes it possible to locate cardiac lesions, quantify them, and follow their evolution for treatment. From the segmented heart outer surface, several characteristics can be extrapolated and interpreted as coronary artery calcification (CAC) lesions. Image segmentation is the appropriate tool for such localization. In this work, we have realized an automatic system which segments the heart outer surface using only native computed tomography acquisitions (CCT). A multi-level threshold, mathematical morphology, and 3D connected-component labeling are performed for this purpose. In the end, the 3D view is rendered. We used 32 patients from the OrCascore dataset. We compared our segmentation with ground-truth volumes plotted by two experienced cardiologists. Our approach achieved an accuracy, F1-score, DICE, and JACCARD coefficient of 96.99%, 92.01%, 91.81%, and 85.23% respectively, in a short computation time.
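
The three processing steps named in the abstract could be sketched as below on a synthetic CT volume; the intensity window, structuring elements and largest-component rule are illustrative assumptions, not the authors' tuned parameters.

```python
# Sketch: thresholding, morphology and 3D connected-component labeling on a fake volume.
import numpy as np
from scipy import ndimage

volume = np.random.randint(-1000, 1000, (64, 128, 128)).astype(np.int16)  # fake HU volume

# 1) Multi-level threshold: keep voxels in a soft-tissue intensity window.
mask = (volume > -200) & (volume < 400)

# 2) Mathematical morphology: remove small structures and fill gaps.
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3, 3)))
mask = ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))

# 3) 3D connected-component labeling: keep the largest component as the heart region.
labels, n = ndimage.label(mask)
if n > 0:
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    heart = labels == (1 + int(np.argmax(sizes)))
    print("Segmented voxels:", int(heart.sum()))
```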

14:00
Affordable Custom made Physiological Measurement Instrumentation Based on Raspberry PI
PRESENTER: Sabrina Difallah

ABSTRACT. Physiological measurement is at the center of several healthcare applications; accurate recording and processing of these signals allow the extraction of health-characteristic information that the scientific community needs. Biopotential, bioimpedance and optical measurements are non-invasive approaches to measure and evaluate human body conditions and activity. This paper presents the feasibility of acquiring physiological signals non-invasively using an affordable custom-made measurement device based on the Raspberry Pi. The acquired signal waveforms are satisfactory, encouraging and consistent with results from the literature. Thus, the device can be used for several biomedical applications, supporting developers in research and engineering education.

14:20
All Optical Biosensors for Glucose and Urea Detection Based on Photonic Crystal Nano Technology
PRESENTER: Iman Ouahab

ABSTRACT. Photonic sensing nano-technology is widely used for diagnosing diseases in clinical settings. In this paper, we design and simulate all-optical biosensors based on two-dimensional (2D) photonic crystals intended for biomedical applications. The sensing operation is performed by refractive index detection, where each wavelength represents a specific concentration. The first sensor detects the glucose concentration level of any fluid analyte, while the second measures the urea level in urine. The simulation is performed using the FDTD and PWE methods. The proposed biosensors are very compact, with competitive characteristics in terms of quality factor, sensitivity and transmission rate.

14:40
Validation of machine learning for Clinical Assessment of lumbar spine pathologies Using Graphic User Interface
PRESENTER: Sarah Arab

ABSTRACT. Degenerative pathologies of the vertebral column represent an important part of the activity in neuro and spine surgery, in particular lumbar pathologies. Humans have certain limitations in terms of accuracy and the time needed to perform a diagnostic prediction. The diagnosis by the radiologist is not always as accurate as computer-aided technology, which becomes necessary to overcome these limitations in diagnosing diseases and will help a neurosurgeon make rapid and effective decisions. The purpose of our work is to introduce machine learning computer-aided methods to perform the diagnosis of pathologies of the vertebral column based on the K-Nearest Neighbor classifier. Several patients with lumbar spine pathologies are presented to evaluate and validate the performance of the proposed technique. The simulation results show that the proposed technique can correctly predict the pathologies with a significantly high recognition rate (100%), and training and test accuracies in the ranges of 82.6%~100.0% and 70.2%~95.2%, respectively. This method can help to diagnose a patient effectively based on the statistical performance metrics: accuracy, precision score, sensitivity, F1-score, and specificity. To conclude, the proposed model works well, indicating good prediction and a high recognition rate in detecting a new patient's disease effectively. This will result in reducing heavy physician workloads and diagnostic time. We recommend that the proposed technique be integrated into clinical decision support systems to serve as a primary screening and diagnosis tool.

15:00
Augmented Reality for COVID-19 Aid Diagnosis: Ct-Scan segmentation based Deep Learning
PRESENTER: Kahina Amara

ABSTRACT. New variants of the Coronavirus disease 2019 (COVID-19) virus continue to appear, making the situation more challenging and threatening. The COVID-19 pandemic has profoundly affected health systems and medical centres worldwide. The primary clinical tools used in diagnosing patients presenting with respiratory distress and suspected COVID-19 symptoms are radiology examinations. Recently emerging artificial intelligence (AI) technologies further strengthen the power of imaging tools and help medical specialists. This paper presents an Augmented Reality (AR) tool for COVID-19 aid diagnosis, including deep-learning-based segmentation of Computerised Tomography (CT) scans, 3D reconstruction, and AR visualisation. Segmentation is a critical step in AI-based COVID-19 image processing and analysis; we use popular segmentation networks, including the classic U-Net. Quantitative and qualitative evaluation showed reasonable performance of U-Net for lung and COVID-19 lesion segmentation. The AR-COVID-19 aid diagnosis system could be used for medical education and professional training, and as a support visualisation and reading tool for radiologists.

15:20-16:20 Session 6: Poster Session I
Location: Poster Session
Epileptic Disease Prediction Using Graphic User Interface –Machine Learning Algorithm
PRESENTER: Aissa Boudjella

ABSTRACT. In this investigation, based primarily on an analysis of performance metric characteristics, we have developed a graphical user interface application to perform the diagnosis of epileptic seizure disease based on the K-Nearest Neighbor classifier. The system is implemented and simulated in Anaconda, and its performance is tested on a real dataset that contains 178 features, with a total of 4600 instances and two classes; the abnormal and normal classes consist of 2300 instances each. The simulation results achieved a (87.7±4.3)% precision score, (83.7±7.9)% recall, (83.0±8.5)% F1-score, (69.3±17.2)% specificity, (94.9±7.3)% Q, and (67.4±15.7)% Kappa, which gives training and test accuracies in the ranges of 78.48%~100% and 75.96%~91.51%, respectively. The proposed method validates the effectiveness of recognizing normal and abnormal tumor tissues from the statistical properties of the brain MRI image. The prediction accuracy of the class status may be optimized by combining the dataset size with the k-neighbors parameter. It can be improved by adjusting k and the size in the ranges of 2~10 and 15%~35%, respectively, which increases the accuracy to the range of 81.90%~100% for the training set and 82.80%~100% for the test set. For quality analysis, the proposed methodology can serve as a test platform for measurement and verification, to be used as a performance metrics guideline that tells us how much better the proposed model is at making predictions. These results will be applied to design a better GUI maximizing prediction accuracy, helping medical doctors to diagnose a patient effectively in a reduced time and take rapid decisions.

Generalized Pareto distribution exploited for ship detection as a model for sea clutter in a Pol-SAR application
PRESENTER: Hichem Mahgoun

ABSTRACT. Ship remote sensing can be regarded as an essential tool for managing maritime traffic. It can contribute promptly to the monitoring network system by increasing ship safety at sea. This work aims to verify a new model for sea clutter based on an analytic distribution. Our objective lies in integrating it to develop a constant false alarm rate procedure for ship detection. The Generalized Pareto distribution was exploited for this objective on Polarimetric-SAR images. In this paper, we compute the detector based on the aforementioned distribution and assess its ship detection capabilities. The data processed in this paper were acquired by the RADARSAT-2 satellite over a port area of the province of Vancouver, located in the coastal region of Canada. It was verified that the Generalized Pareto distribution is a valid model of sea reflectivity. Moreover, the derived ship detection procedure related to this stochastic process achieves a high detection probability (93%) for the HH polarization while showing a small false alarm level (0.5%).
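
To make the detection principle concrete, the following sketch fits a Generalized Pareto distribution to simulated clutter intensities and derives a CFAR threshold for a chosen false-alarm probability; the clutter here is synthetic, not RADARSAT-2 data.

```python
# Sketch: GPD fit to simulated sea clutter and CFAR thresholding.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clutter = stats.genpareto.rvs(c=0.2, scale=1.0, size=50_000, random_state=rng)

# Fit GPD parameters (shape, location, scale) to the clutter intensities.
c_hat, loc_hat, scale_hat = stats.genpareto.fit(clutter, floc=0)

pfa = 0.005                                             # desired false-alarm rate
threshold = stats.genpareto.isf(pfa, c_hat, loc=loc_hat, scale=scale_hat)

pixels = np.append(clutter[:1000], 10 * clutter.mean())  # last sample mimics a ship
print("Detections:", np.flatnonzero(pixels > threshold))
```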

Improving ViBe-based Background Subtraction Techniques Using RGBD Information
PRESENTER: Ihssane Houhou

ABSTRACT. In this paper, we propose a framework for improving Background Subtraction techniques. This framework is based on two types of data, RGB and Depth. Our study consists in obtaining preliminary results of the background segmentation using the Depth and RGB channels independently, then using an algorithm to fuse them and create the final results. Experiments on the SBM-RGBD dataset using four methods (ViBe, LOBSTER, SuBSENSE, and PAWCS) proved that the proposed framework achieves an impressive performance compared to the original RGB-based techniques from the state of the art.

On the framework of cardiac arrhythmia characterization using morphological and statistical features
PRESENTER: Manel Labdi

ABSTRACT. The electrocardiogram (ECG) signal represents the electrical activity of the heart. Long signal recordings, namely the Holter method, are widely used in cardiology, as they are deemed an essential tool for detecting cardiac arrhythmias and may contain pivotal data concerning the nature of diseases that can affect the heart. The ECG signal mainly conveys the amplitudes and durations of the heartbeat waves, the duration of the cardiac cycle, and the durations of the intervals and segments, which are of paramount importance in the automatic analysis of the ECG signal. These features can provide important information to support more accurate clinical decision-making about the cardiac status of the patient under examination. The objective of this work is to study two kinds of ECG features, namely morphological characteristics and statistical characteristics. Feature extraction is performed for each segmented beat signature from the ECG signal. The morphological feature vector comprises 13 parameters, whereas the statistical feature vector consists of 7 parameters. Four types of arrhythmias are considered in this study, namely Ventricular Extrasystole (VE), Atrial Extrasystole (AE), Right Bundle Branch Block (RBBB) and Left Bundle Branch Block (LBBB), together with the normal beat (N). The two types of features are extracted from ECG signals acquired from the MIT-BIH database and fed to a Support Vector Machine (SVM) classifier. The performance of the proposed method is assessed using Global Accuracy, F1-score and positive likelihood (Gρ+) metrics. The yielded results indicate a detection rate exceeding 98% for all classes of arrhythmias considered, in the case of morphological features.

Recognizing the style of a fine-art painting with EfficientNet and Transfer learning
PRESENTER: Baya Lina Menai

ABSTRACT. Due to the digitalization of the content of museums and art galleries in the last decades, a huge amount of digital fine art paintings became available on the internet. Therefore, the number of digitized painting databases has increased rapidly, and it became difficult to manipulate their content manually. With the great performance of deep learning approaches and computer vision techniques, we are now able to categorize paintings automatically. In this paper, we investigate the effectiveness of the pre-trained EfficientNet model family for the task of identifying the style of a painting, and propose custom models based on pre-trained EfficientNet models. We used transfer learning to fine-tune different pre-trained EfficientNet models, from B0 to B6, trained on ImageNet. In addition, we added other layers to the base architectures of the EfficientNet models to create our custom models. Finally, we analyzed the effect of retraining the last eight layers of our custom models. In our experiments, we used the standard fine art painting classification dataset Painting-91 with the same experimental setup. Deep retraining of the last eight layers of our custom models achieved the best performance, with a 5% difference compared to the base models.

New Type of Power Dividers of Microstrip Transmission Lines

ABSTRACT. A miniature power divider is proposed in this work. The power divider uses the microstrip-line transition characteristics and was realized using a stub structure. The proposed power divider is realized on an FR4 dielectric substrate and the simulations are performed using HFSS. Insertion losses are better than 3.34 dB on each access line, and isolation between the output ports is very good, better than 26 dB in the measurement phase. The developed power divider occupies only 15.6 × 18.8 mm2.

TH-UWB Cooperative Relaying Network with Filtering

ABSTRACT. Cooperative communication has become one of the major axes of research on time-hopping ultra-wideband (TH-UWB) wireless communications. Cooperative technology is utilized to increase the system's capacity. The effect of discoloration and interference in a dense broadcast environment can be combated by using channel shortening equalizers (CSEs) at the relays and destination, to take advantage of multipath propagation. In this paper, a low-complexity CSE technique, namely zero-forcing (ZF), is used to reduce the complexity of the cooperative system and enable a rake receiver implementation with fewer fingers.

New Circular Polarization Flexible Ku Microstrip Antenna Design for Direct Broadcast Satellite Application.
PRESENTER: Wahiba Belgacem

ABSTRACT. In this paper, we propose a new circularly polarized flexible microstrip antenna operating in the (DBS) Ku band for space applications. The proposed Ku antenna is designed with a Rogers substrate, which withstands the conditions of the space environment. The flexible substrate has a thickness of h = 1.575 mm, a relative permittivity of 2.2 and a loss tangent of 0.009. The overall dimension of the proposed antenna is 10 × 15 × 1.575 mm3. A new rectangular slot is integrated in the middle of the rectangular patch to miniaturize the proposed design, lower the manufacturing cost, and ease integration into a satellite payload. The antenna operates in the uplink direct broadcast service (DBS) frequency range (17.3-17.8 GHz). The simulation results obtained using CST Microwave Studio are satisfactory: a high gain of 6.47 dBi at 17.47 GHz, an omnidirectional radiation pattern, circular polarization and a good return loss of -40.5 dB at 17.47 GHz. The results obtained are verified by comparing them with published work in the literature.

DEEP CONVOLUTIONAL NEURAL NETWORKS FOR DETECTION AND CLASSIFICATION OF TUMORS IN MAMMOGRAMS: A survey
PRESENTER: Kadda Djebbar

ABSTRACT. Despite its proven track record as a breast cancer screening tool, mammography is time-consuming and has known limitations, such as limited sensitivity in women with dense breast tissue. In the last 10 years, improvements in neural networks have been applied in mammography to help radiologists increase their efficiency and accuracy. This review seeks to present the existing knowledge base of convolutional neural networks (CNNs) in mammography in an orderly and systematic manner. The survey begins with classic Computer-Aided Detection (CAD) and then moves on to more recently developed CNN-based models for computer vision in mammography. It then presents and examines the literature on currently accessible mammography training datasets. Following that, the study summarizes and examines recent research on CNNs for mass detection and classification, including the presentation and comparison of quantitative approaches for those tasks and the advantages and disadvantages of the different CNN-based approaches. Finally, this study identifies prospective future directions for work in this field. The information supplied and discussed in this survey may serve as a blueprint for creating CNN-based solutions to enhance mammographic detection and classification of breast cancer.

Direction-of-arrival estimation with new versions of Min-Norm algorithm based on Nyström method
PRESENTER: Naceur Aounallah

ABSTRACT. Direction-of-arrival estimation is a significant problem in many telecommunication systems whose architecture includes antenna arrays. In the literature, several algorithms have been proposed, for different scenarios, to rapidly and accurately detect the direction of signals impinging on the antenna array. In this paper, we first use the Nyström method to develop a low-computational-cost Min-Norm version for solving the direction-of-arrival estimation problem by spectral peak searching. Then, we generalize the root version of the proposed algorithm. Finally, we present some simulation results demonstrating the efficiency and performance of the new proposed algorithm versions.

Blind Digital Modulation Classification for Cooperative STBC-OFDM Systems based on Random Subspace and AdaBoost Classifiers
PRESENTER: Hakima Moulay

ABSTRACT. Software defined radio (SDR) allows software to control physical layer modulation and waveforms for wireless communications. Intelligent radio is the fusion of machine learning (ML) with cognitive radio (CR), which enables SDR to perform more complex and autonomous tasks, such as channel estimation and automatic modulation classification (AMC). In this paper, we investigate Adaptive Boosting (AdaBoost) and the Random Subspace Classifier (RSC) for cooperative space-time block coding (STBC) Orthogonal Frequency Division Multiplexing (OFDM) applying the Amplify-and-Forward (AF) protocol in wireless communication systems. Based on the results, the best-performing algorithm is determined to provide a simple AMC method, which is then evaluated employing HOS derived from the received signal to discriminate between modulation types and orders, namely {2PSK, 8PSK, 8PAM, 16QAM}.

U-Net-based COVID-19 CT Image Semantic Segmentation: A Transfer Learning Approach
PRESENTER: Abdesselam Ferdi

ABSTRACT. Deep learning (DL) algorithms are widely applied in many disciplines such as medical imaging, bioinformatics, and computer vision. In medical imaging, DL models have been used to perform image segmentation, classification, and detection. During the outbreak of the COVID-19 pandemic, DL was extensively used to develop COVID-19 screening systems. RT-PCR is the gold standard method for COVID-19 screening; however, DL has been proposed to detect patients infected with COVID-19 through radiological imaging in CXR and chest CT images. This paper proposes transfer learning to train modified U-Net models to segment COVID-19 chest CT images into two regions of lung infection (ground-glass and consolidation). The proposed modified U-Net models were constructed by replacing the encoder part with a pretrained convolutional neural network (CNN) model. Three pretrained CNN models, namely efficientnetb0, efficientnetb1, and efficientnetb2, were used. The proposed models were evaluated on the COVID-19 CT Images Segmentation dataset available in an open Kaggle challenge. The obtained results show that the proposed Efficientnet-b2_U-Net model yielded the highest F-score of 0.5666.

A Robust dynamic EEG Channel selection using Time-frequency Extended Renyi Entropy
PRESENTER: Fayza Ghembaza

ABSTRACT. Epilepsy is a chronic disorder characterized by repeated seizures that may be detected by examining Electroencephalogram (EEG) data. Several analysis approaches represent seizure-related non-stationary content within EEG signals in long-term multi-channel EEG data. However, the over-dimensionality issue inherent in processing many EEG channels hampers the required performance. Therefore, many channel selection algorithms define the most relevant channels to overcome this dimensionality issue. Nonetheless, these techniques adopt a static selection of EEG channels, making them unable to follow the dynamic behavior of cerebral activity. In this paper, we propose a dynamic channel selection algorithm based on the time-frequency extended Renyi Entropy (RE) and apply it to high-resolution quadratic time-frequency distributions, namely the Spectrogram (SP), the Smoothed Pseudo Wigner-Ville Distribution (SPWVD), and the Choi-Williams Distribution (CWD), for comparison purposes. With the combination of the SPWVD and a KNN classifier, the suggested algorithm produces encouraging results, reaching an accuracy of 99.23% and a sensitivity of 100%, which makes it a reliable technique for selecting relevant channels to reduce EEG dimensionality.

Creation of an application under MapBasic for the drive test reports automation in cellular networks

ABSTRACT. Telecommunications networks have a very important impact on our society. In order to best satisfy the needs and interests of customers, it is imperative to ensure the proper functioning of the network; this is why operators strive to offer subscribers a good quality of communication. It is within this framework that the problems of planning and optimizing cellular networks are addressed. This paper aims to optimize the radio part of 2G/3G/LTE networks by creating a macro developed under MapBasic to extract figures that represent the distribution of radio coverage and radio link quality on a geographical map. These figures are obtained by classifying the samples recorded during the drive test operation, then exported by TEMS Investigation in order to display and process them in the MapInfo GIS software. This macro generates reports after the analysis of the drive test operation. One example of an application assisted by our macro concerns a network operator in Algeria, applied to the GSM network for a cluster of sites after a swap operation. The validation of the results obtained confirms the effectiveness of our developed tool.

Accurate visible light positioning system using neural network

ABSTRACT. Visible light indoor positioning has significant advantages, such as low cost, high durability, and environmental protection. In this paper, we investigate the use of artificial neural networks in an indoor positioning system based on visible light. The position of the receiver is estimated by a typical back-propagation neural network, whose inputs are the powers of the light received from several LEDs. Several experiments have been carried out to evaluate the performance of the proposed method in terms of accuracy. The obtained results show that this method achieves high-accuracy positioning, with an average error of 0.7 cm over a quarter area (100 cm × 100 cm × 160 cm) of the testing space.
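
A hedged sketch of the idea: a neural-network regressor maps the received LED powers to a 2D position. The LED layout and the inverse-square propagation model below are simplifying assumptions used only to generate training data; they are not the authors' experimental setup.

```python
# Sketch: neural-network regression from received LED powers to receiver position.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
leds = np.array([[25, 25], [25, 75], [75, 25], [75, 75]], dtype=float)  # LED x,y in cm

def received_powers(pos, h=160.0):
    d2 = np.sum((leds - pos) ** 2, axis=1) + h ** 2   # squared LED-receiver distance
    return 1.0 / d2                                   # idealized inverse-square falloff

positions = rng.uniform(0, 100, size=(5000, 2))       # random points in a 100x100 cm area
powers = np.array([received_powers(p) for p in positions])

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp.fit(powers[:4500], positions[:4500])

err = np.linalg.norm(mlp.predict(powers[4500:]) - positions[4500:], axis=1)
print(f"Mean positioning error: {err.mean():.2f} cm")
```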

PSO and CPSO Based Interference Alignment for K-User MIMO Interference Channel
PRESENTER: Fatiha Merazka

ABSTRACT. This paper investigates how to use a metaheuristic-based technique, namely Particle Swarm Optimization (PSO), in carrying out Interference Alignment (IA) for the K-User MIMO Interference Channel (IC). Despite its increasing popularity, mainly in wireless communications, IA lacks explicit and straightforward design procedures. Indeed, IA design results in complex optimization tasks involving a large number of decision variables, together with a problem of convergence of the IA solutions. In this paper, the IA optimization is performed using PSO and Cooperative PSO (CPSO), which is more suitable for large-scale optimization. A comparison between the two versions is also carried out. The cooperative-based approach seems promising.

Non-Sliding Window Decoding of Braided Convolutional Codes
PRESENTER: Imed Amamra

ABSTRACT. This paper proposes a new approach for window decoding of Braided Convolutional Codes (BCC) using the iterative LogMAP algorithm. The approach is based on a window of component decoders which operate in parallel, as they exploit the tail-biting termination of the component convolutional encoders to obtain decoupled circular trellises. Using a non-sliding parallel window scheme for decoding blockwise BCC, instead of the progressive sliding window technique, achieves a significant reduction in computational complexity with fewer decoding iterations for a given error correction performance. Simulation results over the AWGN channel show that the performance of the proposed approach exceeds that obtained by other algorithms presented in the existing literature, with far fewer decoding iterations and an improved gain of up to 0.4 dB. Additionally, the proposed algorithm has much lower implementation requirements and complexity compared to the sliding window scheme.

Koch Snowflake Fractal Patch Antenna with ENG Metamaterial loads for WiMAX and Wi-Fi
PRESENTER: Abderrahim Annou

ABSTRACT. In this paper, a compact fractal antenna using the metamaterial concept with a dual-band characteristic is presented and investigated. The proposed antenna is developed by loading a new Complementary Split Ring Resonator (CSRR) cell into the center of the square radiating patch, and then the Koch snowflake fractal is introduced along the square patch edges. The reflection/transmission method is considered to extract the intrinsic properties of the CSRR cell as a metamaterial. The antenna has two resonance frequencies that match the Electric Negative metamaterial (ENG) characteristic, at 2.64 GHz and 5.6 GHz. The antenna is designed on a Rogers 5880RT substrate using CST Microwave Studio. The obtained results show that the antenna covers the main wireless bands, IEEE 802.16 WiMAX and the IEEE (802.11n, 802.11ac) 5 GHz ISM band, with sufficient gain and high efficiency (typically more than 90%). The metamaterial is used to control the resonances and the Koch snowflake for further improvement.

Improving LTE network retainability KPI prediction performance using LSTM and Data Filtering technique
PRESENTER: Hamza Chekireb

ABSTRACT. The theme discussed in this article is the possibility of predicting the KPI counters by analyzing the variation of their values over the 7 days preceding the date to be predicted. As the KPIs, and more precisely their counters, are sequences of numerical values that evolve in time, the problem posed in this article is the prediction of the values of a time series. Therefore, the approach used in this paper to solve this prediction problem is a combination of Long Short-Term Memory (LSTM) networks and training-data filtering; this combination proved to have a good ability to predict the time series data efficiently, and thus a good performance in predicting the values of the counters of the retainability KPIs.
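
A minimal sketch of the forecasting setup, assuming a sliding window of the 7 preceding daily values is fed to an LSTM that predicts the next day's counter; the series, network size and training schedule are synthetic stand-ins, and the paper's data-filtering step is omitted.

```python
# Sketch: LSTM forecasting of a daily KPI counter from a 7-day sliding window.
import numpy as np
import tensorflow as tf

series = np.sin(np.arange(365) * 2 * np.pi / 7) + 0.05 * np.random.randn(365)  # fake daily KPI

window = 7
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:300], y[:300], epochs=20, verbose=0)

print("Test MSE:", model.evaluate(X[300:], y[300:], verbose=0))
```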

MACHINE LEARNING ON THE DIAGNOSIS OF BRAIN TUMORS

ABSTRACT. Brain tumors are an uncontrolled proliferation of abnormal cells in the brain. They are classified according to the World Health Organization based on: 1) their originating cell, as astrocytoma, meningioma or oligodendroglioma; 2) their grade of malignancy, from grade I to grade IV, benign tumors with low grades I and II, and malignant tumors with high grades III and IV; 3) the number of cells in mitosis; and 4) their rate of proliferation as well as the presence of areas of necrosis. In this investigation, we focus on evaluating the performance of machine learning models in detecting the abnormal class of brain tumor. Our aim is to use computer-aided technology to perform the diagnosis of brain tumor pathology based on the Cluster K-Nearest Neighbor (CKNN) classifier, and compare the obtained results with the anatomopathologist's diagnosis. This study will serve as a guideline that tells us how good a model is at making correct predictions. Several patients with a brain tumor have been diagnosed to assess the performance metrics of the proposed method. To conclude, this method works well, indicating good prediction and a high recognition rate in detecting a patient's disease effectively, which results in reducing heavy physician workloads and diagnostic time.

SNCF workers detection in the railway environment based on improved YOLO v5
PRESENTER: Yahia Hathat

ABSTRACT. In recent years, object detection techniques have become the key to solving several problems in computer vision. In this work, we introduce our enhanced YOLO v5 detector for detecting SNCF (the French national railway company) workers in the railway environment. Our contribution is twofold: we create a new dataset of SNCF workers to train our detector, and we improve YOLO v5 by reducing the number of its parameters, restricting the number of classes in the YOLO layers to a single class, which increases both the detection speed and the accuracy of our detector. Finally, we apply the four versions of YOLO v5 (S, M, L, X) and compare them. We achieved a high detection speed for SNCF workers with YOLO v5-S (0.1 ms) and high precision with YOLO v5-X (a rate of 0.9731).

Automatic surface defect recognition for hot-rolled steel strip using AlexNet convolutional neural network
PRESENTER: Said Benlahmidi

ABSTRACT. Quality control of the surfaces of rolled products has received wide attention due to the crucial role that these products play in the manufacture of various car bodies, planes, ships, and trains. The quality control process has undergone remarkable development: previously, it was based on the human eye and characterized by slowness, fatigue, and error. To overcome these problems, quality control is nowadays based mainly on computer vision. In this context, this paper uses the AlexNet convolutional neural network to develop a modified model for recognizing surface defects in hot-rolled steel strip, using the technique of transfer learning. The experimental results of this model gave a recognition rate of 98.6%, which is a good result compared to some of the models presented in recent studies and research work.

Brain Tumor Segmentation on MRI using a GVF Snake Model

ABSTRACT. One objective of neuroimaging is to study the brain structures of healthy and pathological subjects. The considerable variation of these structures requires implementing specific study methods, often addressed through Magnetic Resonance Imaging (MRI). Currently, segmentation constitutes a big step in the treatment and interpretation of medical images. Many approaches have been proposed for segmentation tasks, and several methods have resulted. Among these methods, we find the Gradient Vector Flow (GVF) Snake Model, which is the subject of our work. The basic idea of the GVF Snake Model is to evolve an initial contour according to specific equations to reach the desired object boundaries; this method helps us obtain a closed and thin contour (one pixel thick). We opted for a contour segmentation method to refine the initial segmentation; parametric deformable models have been successfully applied to tumour segmentation, so we use active contours guided by GVF with constraints from spatial relations. The results show that our method significantly improves brain tumour extraction and segmentation. Through this work, we look forward to generalizing this method to 3D space, to obtain global information about the pathology and help the practitioner in the diagnosis.

Fashion Classification using Machine Learning, Deep Learning Models and Transfer Learning
PRESENTER: Soraya Zehani

ABSTRACT. Fashion, the way we present ourselves, relies mainly on vision and has attracted great interest from computer vision researchers. Fashion classification is generally used to search for fashion products in online shopping malls and to obtain descriptive information about the products. The main objective of our paper is to use deep learning (DL) and machine learning (ML) methods to correctly identify and categorize clothing images. In this article, we used ML algorithms (SVM, KNN, Decision Tree (DT), Random Forest (RF)) and DL algorithms (CNN, AlexNet, GoogleNet, LeNet, LeNet5), trained and tested with the TensorFlow and Scikit-Learn libraries, which support deep learning and machine learning in Python. The main metric used in our study to evaluate the performance of the ML and DL algorithms is accuracy. The best ML result is obtained with an ANN (88.71%) and the best DL result with the GoogleNet architecture (93.75%). The results show that the number of epochs and the depth of the network affect the quality of the results obtained.
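
A minimal Keras sketch of the kind of CNN baseline described above, assuming the Fashion-MNIST dataset (the abstract does not name the dataset explicitly); the architecture and number of epochs are placeholders, not the paper's settings.

    # Minimal CNN baseline on Fashion-MNIST (assumed dataset).
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
    x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),   # 10 clothing categories
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))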

A New lightweight solution against the version number attack in RPL-based IoT networks

ABSTRACT. IoT networks run a routing protocol named RPL (Routing Protocol for Low-Power and Lossy Networks) defined by the IETF. This protocol is suitable for constrained wireless networks, where devices have limited processing and storage capabilities. However, security remains a challenging concern for IoT, since devices are connected to the internet and exposed to various threats affecting the network topology and resources. The version number attack is one of the well-known detrimental attacks against RPL-based networks: by falsifying the current network version number, an intruder creates topology volatility and exhausts device resources, directly affecting the network lifetime. In the present work, we propose a lightweight decentralized solution, performed by each node, allowing it to decide on the validity of the version number received from its neighborhood. The aim of our algorithm is to keep the version number delivered by the sink node consistent across the entire network. Simulation results obtained with the Cooja simulator under Contiki OS show that our solution yields promising results, with energy savings of about 58% and a reduction of control overhead by 81%, depending on the attacker's position.
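
A conceptual sketch of the kind of per-node version-number check described above. The class, method and threshold logic are hypothetical illustrations; the actual solution runs inside Contiki OS, not Python.

    # Conceptual sketch of a per-node version-number sanity check (hypothetical names).
    class Node:
        def __init__(self, rank, version):
            self.rank = rank              # RPL rank (distance to the sink)
            self.version = version        # last version accepted from the sink side

        def on_dio_received(self, sender_rank, advertised_version):
            """Accept a new version number only if it plausibly comes from the sink."""
            if advertised_version == self.version:
                return True                               # nothing new
            if advertised_version > self.version and sender_rank < self.rank:
                # Higher version propagated downward from a parent: accept it.
                self.version = advertised_version
                return True
            # A child or sibling advertising a newer version is suspicious:
            # ignore it instead of triggering a costly global repair.
            return False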

Localization of an Unmanned Aerial Vehicle with an Improved Technique based on New Form of the Smooth Variable Structure Filter
PRESENTER: Fethi Demim

ABSTRACT. This paper focuses on developing a robust solution to the Simultaneous Localization and Mapping (SLAM) problem to increase the autonomy of Unmanned Aerial Vehicles (UAVs). The original contribution of this work is a new filter based on the Smooth Variable Structure Filter (SVSF), compared with the Extended Kalman Filter (EKF), to solve the Inertial Navigation System (INS)/3D-laser UAV navigation problem. Simulation results for a 3D flight scenario demonstrate the advantages of the hybrid SVSF-based localization over the EKF-based technique. In order to achieve a better trade-off between optimality and robustness of UAV navigation, the novel form of the SVSF is proposed as an alternative; it makes no assumptions about noise characteristics and provides accurate estimation results. Furthermore, a new SLAM algorithm with significant robustness to parameter uncertainties and modeling errors is proposed. Experimental scenarios under realistic conditions are presented, and the obtained results confirm the effectiveness of the SVSF approach compared to the EKF.

CNN-based Concrete Cracks Detection Using Multiresolution Analysis
PRESENTER: Ahcene Arbaoui

ABSTRACT. This article proposes an efficient non-destructive method for the auscultation of concrete structures using an ultrasound device (Pendit L-200). For the purposes of this work, we prepared specimens with dosage defects of the concrete constituents, namely sand, different gravels, cement and water, each varied in progressively increasing proportions while the others were kept at the values recommended by the standards. For the defects considered, ultrasonic signals are acquired transversally at the centre of the specimens in order to build a database. The key element of the method is a multi-resolution analysis based on wavelets, coupled with an automatic identification scheme for the types of dosage defects based on deep learning with a convolutional neural network (CNN), a technique at the cutting edge of machine learning, especially for pattern recognition applications.
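
To illustrate the multi-resolution step, the sketch below decomposes a one-dimensional ultrasonic trace with PyWavelets before feature extraction for a CNN. The synthetic signal, the 'db4' wavelet and the energy features are placeholders, not the paper's configuration.

    # Illustrative multi-resolution decomposition of a 1-D ultrasonic signal.
    import numpy as np
    import pywt

    signal = np.random.randn(1024)                  # stand-in for a recorded ultrasonic trace
    coeffs = pywt.wavedec(signal, 'db4', level=4)   # [cA4, cD4, cD3, cD2, cD1]

    # One simple way to build a fixed-size input: per-level energy features.
    features = np.array([np.sum(c ** 2) for c in coeffs])
    print(features.shape)                           # (5,)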

Low level Syntax Elements Study in Intra HEVC/H.265 Video Codec
PRESENTER: Wahiba Menasri

ABSTRACT. Ultra High Definition Television (UHDTV) imposes extremely high throughput requirements on video encoders based on VVC (Versatile Video Coding) and High Efficiency Video Coding (H.265/HEVC). HEVC adopts many advanced techniques to compress and decompress video sequences, meeting real-time requirements while preserving quality. The decoded video is obtained according to the syntax-element language specified by the HEVC standard. HEVC defines 74 low-level syntax elements divided into four partitions: quadtree partitioning, intra and inter prediction, transform and quantization, and loop filtering. This work proposes an in-depth study of the HEVC low-level syntax elements. The four partitions of syntax elements are defined, detailed and their roles specified. The encoding process and order of each syntax element are then given as block diagrams, followed by examples of the syntax-element encoding process for 4x4 and 8x8 transform blocks (reference blocks) using Matlab. To our knowledge, this work gives the only detailed study demonstrating the functionality of the complete all-intra syntax-element encoding process in HEVC CABAC.

Real-Time FPGA Implementation of Digital Video Watermarking Techniques using Co-Design Approach: Comparative Study
PRESENTER: Redouane Kaibou

ABSTRACT. This paper presents a Genesys-2 FPGA implementation of three video watermarking techniques in both the spatial and frequency domains, followed by a comparative analysis. Video acquisition is realized using an OV7670 camera for real-time watermarking with each technique, in both visible and invisible schemes, with Video Graphics Array (VGA) display validation. The implemented techniques are based on Least Significant Bit (LSB), additive spatial and additive frequency watermarking. All implementations have been realized as Software/Hardware (SW/HW) co-designs using the Vivado-HLS tool, achieving low area, high speed of up to 130 MHz and low power consumption not exceeding 827 mW, along with good watermarking imperceptibility and fairly good robustness against most geometric and image processing attacks. Keywords—Watermarking, FPGA, DWT, Vivado, Image Authentication.
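
As a software illustration of the simplest of the three techniques, LSB embedding overwrites the least significant bit of pixels with watermark bits. The NumPy model below illustrates the principle only; it is not the FPGA/HLS implementation evaluated in the paper.

    # NumPy model of LSB watermarking (principle only, not the FPGA design).
    import numpy as np

    def embed_lsb(frame, watermark_bits):
        """Replace the LSB of the first len(watermark_bits) pixels with watermark bits."""
        flat = frame.flatten()                       # flatten() returns a copy
        n = watermark_bits.size
        flat[:n] = (flat[:n] & 0xFE) | watermark_bits
        return flat.reshape(frame.shape)

    def extract_lsb(frame, n_bits):
        return frame.flatten()[:n_bits] & 1

    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in video frame
    bits = np.random.randint(0, 2, 64, dtype=np.uint8)              # watermark payload
    marked = embed_lsb(frame, bits)
    assert np.array_equal(extract_lsb(marked, 64), bits)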

Kinship Verification System based on the Color Spaces Analysis

ABSTRACT. Metric learning has attracted wide attention in face and kinship verification, and a number of such algorithms have been presented over the past few years. Kinship verification has several applications, such as organizing image collections, recognizing resemblances among humans and finding missing children. In this work, we propose a novel approach based on the Weber Local Descriptor (WLD) with colour spaces and a Multi-Level (ML) representation, together with rank-based feature selection (T-test) to reduce the number of features and a support vector machine (SVM) for kinship classification. Our approach consists of six stages: (1) face preprocessing, (2) colour space conversion, (3) feature extraction using WLD with the multi-level face representation, (4) pair feature representation, (5) feature selection, and (6) classification using SVM. The proposed approach is tested and analyzed on five publicly available databases (Cornell KinFace, UB KinFace, Family 101, KinFaceW-I and KinFaceW-II).

16:20-18:20 Session 7A: Application I
16:20
Analysis of the U-shaped geometry of a frequency selective surface using the WCIP

ABSTRACT. We study a frequency selective surface (FSS) with a U-shaped geometry using the Wave Concept Iterative Procedure (WCIP). The development of this method, based on the Fast Modal Transform (FMT), is presented for the FSS frequency response. The results obtained with the WCIP method exhibit a single resonance at 5.6 GHz with a bandwidth of 2.75 GHz when the structure is excited by a normally incident plane wave polarized in the x direction, and a resonance at 11.6 GHz with a 414.61 MHz bandwidth when the structure is excited in the y direction. The results obtained by the WCIP method are compared with HFSS simulations, and good agreement is observed.

16:40
Assessment of drought conditions in Algeria using satellite images on the period (2000-2021)

ABSTRACT. Algeria has recently suffered from severe droughts, which can have negative effects on agriculture as well as on water resources management. Drought is a complex phenomenon that is hard to quantify and predict. Remote sensing is an effective tool that provides large-coverage products at different spatial resolutions for various environmental quantities. In this work, we use remote sensing data to study the evolution of the vegetation cover based on the MODIS VI products. The SVI index is used to map the vegetation cover based on the EVI index, which shows better reliability than NDVI. Four cities are selected to investigate regional effects. Statistics are calculated over the study period (2000-2021) to identify both the temporal and spatial distributions of drought. Results reveal that the SVI starts degrading for almost all cities from late 2019 until the end of 2021, coinciding with the severe drought observed in the last year. The obtained results can be used to improve the management of water resources and to enable earlier intervention to avoid severe impacts.
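
In its usual definition, the SVI is the standardized anomaly of a vegetation index. The sketch below computes it per pixel from an EVI time series using synthetic data in place of the MODIS product; it illustrates the standard definition rather than the paper's exact processing chain.

    # Standardized Vegetation Index (SVI) as a per-pixel standardized anomaly of EVI.
    import numpy as np

    evi = np.random.rand(22, 100, 100)          # (years 2000-2021, rows, cols), synthetic stand-in
    mean = evi.mean(axis=0, keepdims=True)      # long-term mean per pixel
    std = evi.std(axis=0, keepdims=True)        # long-term standard deviation per pixel

    svi = (evi - mean) / std                    # negative values indicate drier-than-normal conditions
    print(svi.shape)                            # (22, 100, 100)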

17:00
Machine Learning-Based Classification of Chest X-Ray MRI Images into Covid-19 -Graphic User Interface
PRESENTER: Aissa Boudjella

ABSTRACT. To predict the best performance metrics for the diagnosis of Covid-19 from MRI image features, a preliminary analysis is required to determine optimal setting parameters such as the number of nearest neighbours k, the test size, and the random state. In this investigation, metrics that tell us how well a model makes predictions are presented. The system is implemented and simulated in Anaconda, and its performance is tested on a real dataset that contains 6 features and two classes: an abnormal class (patients with Covid-19) with 343 instances (images) and a normal class (patients without Covid-19) with 234 instances (images). With the random state fixed at 66, test performance is measured for test sizes ranging from 10% to 50% as the number of nearest neighbours k varies from 1 to 20. For quality analysis of the proposed technique, the simulation results achieved average train accuracy, test accuracy, precision score, sensitivity, F1-score, and specificity in the intervals (100.0±0.0~74.3±0.9)%, (82.9±3.4~71.6±2)%, (82.5±3.5~66.2±2.8)%, (82.0±6.4~60.1±4.1)%, (80.6±2.3~66.3±3.4)%, and (90.0±2.3~71.8±3.4)%, respectively. The KNN classifier combined with the optimal setting parameters shows good performance, predicting the normal and abnormal class labels accurately. Based on these results, accuracy can be further improved in the range k = [1, 2, 3, 4, 6, 7] with test sizes of 10% to 35%. Building on this preliminary analysis of the performance metrics, we have developed a graphical user interface application for Covid-19 diagnosis based on the K-Nearest Neighbor classifier.
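
A condensed sketch of the parameter sweep described above, with synthetic features standing in for the Covid-19 image features; it is not the authors' Anaconda application or GUI.

    # Sweep over k and test size with a KNN classifier (synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=577, n_features=6, random_state=66)

    for test_size in (0.10, 0.20, 0.30, 0.40, 0.50):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=66)
        for k in range(1, 21):
            knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
            print(f"test_size={test_size:.2f} k={k:2d} "
                  f"train={knn.score(X_tr, y_tr):.3f} test={knn.score(X_te, y_te):.3f}")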

17:20
Three-dimensional photometric reconstruction of a single view using machine learning techniques
PRESENTER: Lyes Abada

ABSTRACT. In computer vision, recovering the structure and geometry of the three-dimensional world from two-dimensional images remains a difficult problem despite rapid research progress. Three-dimensional (3D) reconstruction is the inverse problem of image formation: it consists in obtaining a 3D representation of an object from one or more images, thus recovering the dimension lost during image formation. 3D reconstruction is very useful in many applications, such as visual inspection of surfaces, augmented reality and medical diagnostic assistance. Several techniques aim to reproduce human vision and build a 3D model. Among them is shape from shading (SFS), which is known to be ill-posed since its solution is not unique; the problem must therefore be modelled differently to become well-posed. This motivates moving to a more general, less constrained method that reduces the complexity of generating 3D objects: photometric stereo (PS). In this paper, we propose a 3D reconstruction method based on machine learning, using different architectures and parameters.
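
For context, classical photometric stereo recovers a surface normal per pixel by solving a least-squares system I = L n from several images taken under known light directions. The sketch below shows this classical baseline on synthetic data; the paper's learning-based variant replaces or augments this step.

    # Classical photometric-stereo baseline (least squares), shown for context only.
    import numpy as np

    L = np.array([[0.0, 0.0, 1.0],        # one light direction per image (unit vectors)
                  [0.7, 0.0, 0.7],
                  [0.0, 0.7, 0.7]])
    I = np.random.rand(3, 64 * 64)        # stacked intensities: (n_images, n_pixels), synthetic

    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G = albedo * normal, shape (3, n_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / (albedo + 1e-8)               # unit surface normal per pixel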

17:40
Mean Teacher for Weakly Supervised Polyphonic Sound Event Detection: An Empirical Study
PRESENTER: Zhor Diffallah

ABSTRACT. Sound event detection refers to the task of categorizing the types of events occurring in an audio recording, in addition to pinpointing the start and end times of each occurrence. This task has recently grown in popularity as a result of its aptitude to enhance a myriad of applications. Building sound event detection systems heavily relies on the representational power of deep neural network architectures, which require a large amount of strongly annotated audio data where the exact temporal locations of each sound event are indicated. However, manually annotating audio recordings with the types of events present and the corresponding time boundaries is both costly and laborious. To remedy this, learning from weak labels has been adopted in an attempt to bypass the labeling barrier. In this paper, we examine the effect of incorporating weakly-labeled data into the training process of sound event detection systems. Moreover, we analyze the behavior of the Mean Teacher framework under various deep learning configurations. Our experimental results reveal that training a well-calibrated Mean Teacher structure on weakly-labeled data can improve the predictive performance of sound event detection systems.
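
The core of the Mean Teacher framework examined here is that the teacher's weights track an exponential moving average (EMA) of the student's weights. Below is a minimal PyTorch sketch of that update, using a generic placeholder model rather than the paper's sound-event-detection architecture; the EMA decay of 0.999 is a common default, not necessarily the paper's value.

    # Minimal sketch of the Mean Teacher EMA weight update in PyTorch.
    import copy
    import torch
    import torch.nn as nn

    student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)          # the teacher is never updated by gradients

    @torch.no_grad()
    def update_teacher(student, teacher, alpha=0.999):
        """teacher <- alpha * teacher + (1 - alpha) * student, parameter by parameter."""
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)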

18:00
Classification of Breast Cancer Histopathological Images using DenseNet201
PRESENTER: Hossena Djouima

ABSTRACT. Diagnosing and classifying breast cancer tumors is a rather complex activity for pathologists due to the heterogeneous nature of the tumor cells. The wide use of artificial intelligence (AI) and the rise of Deep Learning (DL) have led to promising results in breast histopathology image classification. The outcomes depend largely on two main factors, namely the number and the quality of images. The BreaKhis dataset shows an imbalance in the class distribution of its images, which degrades classifier performance through a bias towards the majority class. In this paper, a Deep Convolutional Generative Adversarial Network (DCGAN) is applied to bring the number of images in the minority (benign) class into line with that of the majority (malignant) class. Data augmentation is then used to create more data from the limited samples. The DenseNet201 pre-trained model is chosen and used with the concatenation of features from various DenseNet blocks. Instead of considering all the layers of the pre-trained network, features are extracted from the lower layers of DenseNet201 via global average pooling (GAP) and passed to a softmax classifier to classify breast cancer. The model is evaluated using the two-class BreaKhis dataset, provided at four magnification levels (40x, 100x, 200x, and 400x). The proposed method yielded test accuracies of 96%, 95%, 88%, and 92%, respectively, for each magnification factor. As these results indicate, the proposed method, based on data augmentation by DCGAN and feature concatenation using the DenseNet201 pre-trained model, can produce efficient predictions for breast cancer image classification.
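
As an illustration of the feature-extraction step, DenseNet201 with global average pooling can be used as a fixed feature extractor in Keras. The 224x224 input size is an assumption, and the DCGAN-based class balancing and block-wise feature concatenation described above are omitted.

    # Illustrative feature extraction with DenseNet201 + global average pooling.
    import numpy as np
    import tensorflow as tf

    base = tf.keras.applications.DenseNet201(
        weights='imagenet', include_top=False, pooling='avg', input_shape=(224, 224, 3))

    images = np.random.rand(8, 224, 224, 3).astype('float32')       # stand-in histopathology patches
    images = tf.keras.applications.densenet.preprocess_input(images * 255.0)
    features = base.predict(images)                                  # shape (8, 1920)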

16:20-18:20 Session 7B: Applications II
16:20
Image reconstruction and enhancement for radar imaging using IR-UWB signals
PRESENTER: Hamza Abadlia

ABSTRACT. Impulse-radio ultra-wideband (IR-UWB) radar has many applications in the remote sensing of individuals, as well as in imaging subjects behind a wall. In the present paper, we first introduce the theory behind IR-UWB radar for through-the-wall imaging (TWI). Then, we present the system realization, which combines a single radar with an adequate rail displacement mechanism. Finally, we describe real-life image reconstruction using spatial filtering followed by spectral processing of the synthetic-aperture data for signal enhancement. Experimental results demonstrate the effectiveness of the IR-UWB radar and the processing technique.

16:40
Keypoint-based copy-move forgery detection in digital images: a survey
PRESENTER: Azeddine Bensaad

ABSTRACT. Copy-move (also called copy-paste) is one of the most common image forgeries, where one or more areas of an image are copied and pasted into another location of the same image. The objective of such a forgery is to hide useful elements or duplicate areas in some sections of an image. Copy-move forgery (CMF) thereby poses a serious threat to society and to forensic experts. Many methods have been proposed for copy-move forgery detection (CMFD), which can be categorized into keypoint-based and block-based methods. Generally, keypoint-based methods perform relatively better in terms of computational efficiency, complexity, and robustness against many transformations. In this paper, a comprehensive survey of recent keypoint-based methods based on the Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), KAZE, and Binary Robust Invariant Scalable Keypoints (BRISK) is presented.
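
To make the keypoint-based pipeline concrete, the generic OpenCV sketch below matches an image's ORB keypoints against themselves and keeps pairs of similar descriptors found at distant locations. It illustrates the general pipeline surveyed above, not any specific published method; the file path and thresholds are hypothetical.

    # Generic keypoint-based copy-move check with ORB (illustrative only).
    import cv2

    img = cv2.imread('suspect.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical input path
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints, descriptors = orb.detectAndCompute(img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(descriptors, descriptors, k=2)

    suspicious_pairs = []
    for m, n in matches:
        # The best match of a descriptor against itself is itself, so inspect
        # the second-best match (n).
        p1 = keypoints[n.queryIdx].pt
        p2 = keypoints[n.trainIdx].pt
        spatial_dist = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
        if n.distance < 40 and spatial_dist > 20:   # similar descriptors, distant locations
            suspicious_pairs.append((p1, p2))
    print(len(suspicious_pairs), "candidate duplicated keypoint pairs")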

17:00
Contribution of spectral indices of chlorophyll (RECl and GCI) in the analysis of multi-temporal mutations of cultivated land in the Mostaganem plateau

ABSTRACT. The coastal strip of Mostaganem is an area with an agricultural vocation, characterized in recent decades by profound changes in land use resulting from biophysical processes and complex interactions between societies and their environment. These processes transform the landscape on several spatial and temporal scales. In this context, the evaluation of changes in cultivated areas has become one of the major priorities of recent years; it makes it possible to develop action plans and to propose management approaches leading to sustainable agricultural development. The use of remote sensing techniques and Landsat multispectral data for the diachronic mapping of agricultural land is therefore a necessity. This work highlights the considerable contribution of spectral chlorophyll indices to the delimitation of cultivated areas and the analysis of their changes. We applied two spectral indices: the RECl index (Red-Edge Chlorophyll Vegetation Index), which reflects the photosynthetic activity of the plant cover, and the GCI index (Green Chlorophyll Vegetation Index), which estimates the chlorophyll content of the leaves of various plant species and indicates the physiological state of the vegetation. These applications allow the evaluation of the state of vegetation growth and the monitoring of ecological changes in the environment of agricultural areas. The comparison between the results of the two indices shows that the GCI index is very effective for the automatic extraction of agricultural areas and discriminates between plant formations, whereas the RECl index includes all formations with significant chlorophyll activity (agriculture, forest and natural vegetation). The analysis of the time series (1989, 2011, and 2021) globally confirms a trend of stability of the cultivated areas from 1989 to 2011 and a trend of improvement at the regional scale from 2011 to 2021, as a result of improved rainfall conditions from 2011 onwards and the effective action programs carried out by the state to improve the agricultural sector. Diachronic analyses with the GCI index revealed a significant increase of 58% in agricultural areas between 2011 and 2021 and near stability between 1989 and 2011, with an increase of 0.17%. Plant biomass, on the other hand, grew by 22% between 1989 and 2011 and by 7.25% between 2011 and 2021.
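
For reference, the two indices follow the usual chlorophyll-index form shown below (standard definitions, not the paper's processing chain); the band arrays are synthetic stand-ins, and the exact band assignment depends on the sensor used.

    # Standard chlorophyll-index formulas (illustrative NumPy version).
    import numpy as np

    nir = np.random.rand(100, 100) * 0.5 + 0.2        # near-infrared reflectance (synthetic)
    green = np.random.rand(100, 100) * 0.2 + 0.05     # green reflectance (synthetic)
    red_edge = np.random.rand(100, 100) * 0.3 + 0.1   # red-edge reflectance (synthetic)

    gci = nir / green - 1.0        # Green Chlorophyll Index
    reci = nir / red_edge - 1.0    # Red-Edge Chlorophyll Index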

17:20
Combination of MUSIC inversion algorithm with SKP decomposition for forest height estimation in a Tomography SAR application
PRESENTER: Hichem Mahgoun

ABSTRACT. This research work addresses an application of SAR (Synthetic Aperture Radar) tomography that aims to estimate forest height. Tomo-SAR is a method based on the acquisition of N SAR images of the same zone at different altitudes to estimate the reflectivity map of the studied area. The proposed algorithm is built upon the combination of MUSIC (Multiple Signal Classification) inversion and the Sum of Kronecker Products (SKP) decomposition. A LIDAR (Light Detection and Ranging) dataset of the study area was used as ground truth for the validation of the procedure. The results of the proposed inversion are verified qualitatively and quantitatively by comparing the estimated vegetation height with the digital surface model extracted from the LIDAR dataset. After analyzing the obtained results, we conclude that the forest height estimated by the proposed inversion presents a bias relative to the LIDAR data, corresponding to an overall underestimation of the vegetal structures; this means that other parameters need to be extracted and combined from the SKP decomposition to enhance the generated DSM.

17:40
Hardware in The Loop Simulation for robot Navigation with RFID
PRESENTER: Isma Akli

ABSTRACT. The main goal of this article is to propose a software architecture allowing the integration of RFID (Radio Frequency Identification) sensors into the motion strategy of a simulated mobile robot. RFID systems are composed of antennas, readers and tags. A passive RFID tag consists of a memory storing information, which the reader reads/writes through the antennas. In a robotic context, the tags contain information about the environment surrounding the robot, such as obstacles, humans, other robots, and different types of objects. The RFID system reads the tag data and controls the movement of the simulated mobile robot accordingly (stopping, changing its velocity, regulating its speed, setting a new destination, etc.) based on the information stored in the tags. A Hardware-In-the-Loop Simulation (HILS) system is proposed, in which the RFID device is the hardware component and the virtual mobile robot is the simulated component.

18:00
Ensembling Residual Networks for Multi-Label Sound Event Recognition with Weak Labeling
PRESENTER: Hadjer Ykhlef

ABSTRACT. Sound event recognition is concerned with the development of systems that are able to identify and distinguish sound events. In realistic settings, sound events originate from different sources and often overlap, which makes the design of such systems more challenging. To cope with this, state-of-the-art recognition systems rely substantially on training multi-label deep neural networks. This process usually requires a large set of labeled audio data; however, most existing datasets are either small or large but unlabeled, and hand-labeling is a very costly and time-consuming process. In this paper, we design a recognition system that learns from both labeled and unlabeled audio clips, following the semi-supervised learning paradigm. Our system operates in three main stages: (1) train a baseline, a Residual Network (ResNet), on the labeled data; (2) use the baseline to generate pseudo-labels for the unlabeled data; (3) resume training the baseline on both the labeled and the unlabeled data, along with the inferred pseudo-labels. To demonstrate its efficacy, we have conducted an experimental comparison on the FSDKaggle2019 dataset, made of sound clips with annotations of varying reliability. We have tested the improvement over the baseline and compared with a multitask-ResNet model trained using the unverified labels. In addition, we have studied ensembling various variants of our approach. The experimental results indicate the superiority of our system over the other alternatives.
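
A compact, self-contained sketch of the three-stage pseudo-labeling scheme described above, using synthetic single-label data and a simple classifier in place of the multi-label ResNet; the confidence threshold is an assumption, not taken from the paper.

    # Three-stage pseudo-labeling sketch (synthetic data, simple classifier).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_lab, y_lab = X[:200], y[:200]          # small labeled subset
    X_unlab = X[200:]                        # remaining clips treated as unlabeled

    # Stage 1: train a baseline on the labeled data.
    baseline = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

    # Stage 2: generate pseudo-labels for the unlabeled data, keeping only
    # confident predictions (the 0.9 threshold is an assumption).
    proba = baseline.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.9
    pseudo_y = proba.argmax(axis=1)[confident]

    # Stage 3: resume training on labeled + pseudo-labeled data.
    X_all = np.vstack([X_lab, X_unlab[confident]])
    y_all = np.concatenate([y_lab, pseudo_y])
    final = LogisticRegression(max_iter=1000).fit(X_all, y_all)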