ABSTRACT. The key challenge that students face as they take their first step into the working world is navigating a sea of unknowns. Education equips us with a basic foundation for applying theories through continuous learning. However, there is a huge gap between what education has prepared us for and what is needed to be successful in a career. Thus, answering the question "How do you know what you do not know?" is important. Join me as I discuss the three steps, spanning the technical, mindset, and soft-skill aspects, that bridge the unknowns.
Memristor Applications with Analog Neural Networks
ABSTRACT. Memristors are a broad class of resistive devices that can be used as non-linear resistors, as memories, or to emulate neuron responses. They consume little chip area and are compatible with CMOS logic, which makes them an interesting class of devices for a variety of applications. In this talk, an overview of advances in the use of memristor devices for building neural networks is presented. In particular, the issues and challenges faced when designing analog neural networks for practical applications are discussed. We look at why analog computing is useful for near-sensor computing and how it can be effectively used for building low-power intelligent applications. Case studies involving imaging applications and tactile sensing are presented to illustrate the usefulness.
ABSTRACT. The semiconductor industry has seen significant advancements in the past decade due to the advent of highly complex mobile SoCs, 5G wireless networks, high-speed online gaming, automotive applications, and petabyte-scale memory storage requirements, which continue to grow as the world becomes digitally connected. These in turn necessitated breakthroughs in high-speed yet low-power design architectures, while pushing transistor technology to 3 nm and potentially into sub-nanometre nodes in the future. These advancements have given rise to dedicated AI/ML chipsets, which enable a very low cost of prediction for software applications. A single modern high-end chip has tens of billions of transistors, a feat that seemed out of reach just a few years ago.
FPGA IMPLEMENTATION OF ERROR DETECTION AND CORRECTION IN SRAM EMULATED TCAMS
ABSTRACT. Ternary content-addressable memory (TCAM) is often used to categorize packets in network devices; packet forwarding, software-defined networking (SDN), and security are just a few of the applications for this technology. TCAMs may be implemented in networking application-specific integrated circuits (ASICs) either as a stand-alone device or as an intellectual property (IP) unit. FPGAs do not come with dedicated TCAM blocks, but TCAM functionality can be added to them. FPGAs are an excellent solution for SDN applications because of their versatility, and most FPGA suppliers provide SDN development kits. To implement TCAM capabilities, the logic blocks of the FPGA must emulate TCAMs. Modern FPGAs include a plethora of memory blocks that may be used to construct TCAMs, and some of these systems take advantage of them. However, soft errors in these memories may lead to data loss, which is undesirable. Faults can be remedied by using error protection methods such as a parity check, although this increases the overall memory size by one bit per word.
The major concern here is safeguarding the memory used to emulate TCAMs. Because only a subset of the possible memory contents is valid, a parity bit can be used to correct most single-bit errors. In this project, a novel methodology is implemented to safeguard the memory used for emulating TCAMs, using parity check, block parity, and Hamming code techniques. The proposed design is modelled and verified using the Xilinx Vivado tool and implemented on a Xilinx Nexys DDR based Artix-7 FPGA.
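As a concrete illustration of the protection scheme, the following minimal Python sketch shows Hamming(7,4)-style single-error correction of the kind applied to words of the emulated TCAM memory; the 4-bit word width and function names are illustrative, not the paper's implementation.

```python
# Minimal sketch of single-error correction with a Hamming(7,4) code.
# Word width and function names are illustrative only.

def hamming74_encode(d):
    """Encode 4 data bits [d0..d3] into 7 bits with 3 parity bits."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3          # covers codeword positions 1,3,5,7
    p1 = d0 ^ d2 ^ d3          # covers codeword positions 2,3,6,7
    p2 = d1 ^ d2 ^ d3          # covers codeword positions 4,5,6,7
    return [p0, p1, d0, p2, d1, d2, d3]     # positions 1..7

def hamming74_correct(c):
    """Recompute parity; a nonzero syndrome points at the flipped bit."""
    s0 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s1 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s2 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s0 | (s1 << 1) | (s2 << 2)    # 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1                 # correct the single-bit error
    return c

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                                 # inject a single-bit soft error
assert hamming74_correct(code) == hamming74_encode(word)
```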
FPGA BASED HARDWARE ACCELERATOR FOR CONVOLUTION NEURAL NETWORK
ABSTRACT. Optical sensors are employed in a diverse range of products. The necessary image processing should be performed locally on edge computing platforms, considering the resource restrictions imposed by mobility and the associated privacy concerns. Convolutional Neural Networks (CNNs) with dedicated acceleration can accomplish these goals with enough versatility to tackle numerous vision tasks. Because CNN inference involves a large amount of repeated image processing, engineering an accelerator for such a function is a difficult task. In this work, we demonstrate how to quantize a facial expression recognition CNN trained on the Facial Expression Recognition dataset (FERD), profile the model to identify the computationally heavy tasks, and exploit parallelism to optimize the calculations on a ZYNQ (7200) platform. By exploiting parallelism in the CNN model, the face emotion detection time has been reduced to 0.02 seconds per image, increasing the classification rate from 2 to 50 images per second.
Class Topper Optimization with Regrouping Strategy to Solve Combined Emission and Economic Dispatch Problem
ABSTRACT. In this article, a class topper optimization with regrouping strategy (CTO-GS) is proposed to solve the combined emission and economic dispatch (CEED) problem. The suggested method is an enhanced version of the existing class topper optimization (CTO) that incorporates a regrouping technique. To assess the effectiveness of the suggested technique for resolving CEED problems, two test cases are considered. To demonstrate the superiority of the suggested strategy, the obtained results for the two cases are compared with those obtained using other optimization strategies. The results show that the proposed method is effective in solving the CEED problem.
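For context, a CEED-style fitness function that an optimizer such as CTO-GS would minimize can be sketched as follows; the unit coefficients, penalty factor, and demand below are illustrative placeholders, not the paper's test-case data.

```python
import numpy as np

# Illustrative 3-unit data; published CEED test cases supply real values.
a, b, c = np.array([0.01, 0.012, 0.008]), np.array([2.0, 1.8, 2.2]), np.array([10., 12., 9.])
alpha, beta, gamma = np.array([0.002, 0.003, 0.001]), np.array([0.1, 0.12, 0.09]), np.array([1., 1.2, 0.8])
h = 50.0          # price penalty factor combining $/h cost with kg/h emission
demand = 300.0    # MW, power balance constraint

def ceed_fitness(P):
    cost = np.sum(a * P**2 + b * P + c)                  # fuel cost ($/h)
    emission = np.sum(alpha * P**2 + beta * P + gamma)   # emission (kg/h)
    balance_penalty = 1e4 * abs(np.sum(P) - demand)      # enforce sum(P) = demand
    return cost + h * emission + balance_penalty

print(ceed_fitness(np.array([100.0, 120.0, 80.0])))
```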
PASTA based Design of Hybrid ARPS Motion Estimation Algorithm
ABSTRACT. The paper introduces an effective Motion Estimation (ME) VLSI design that uses the Parallel Self-Timed Adder (PASTA) technique. The proposed block-based technique plays a paramount role in video coding by exploiting, and thereby reducing, the redundancy between adjacent frames in a video sequence. Adaptive Rood Pattern Search (ARPS) is one of the most common fast methods for motion estimation, requiring relatively few hardware resources without affecting the speed at which HD videos are transmitted in real time. The suggested architecture takes advantage of spatial similarity only. Furthermore, by incorporating PASTA into the ARPS module, a Hybrid Adaptive Rood Search (HARPS) is designed. The primary objective of this paper is to implement the ARPS motion estimation algorithm and the hybrid HARPS method and to evaluate their delay and area. The proposed design is implemented in the Xilinx Vivado tool; an Altera Cyclone III EP3C55F484C6 device is used. The proposed design has 15% less area and 10% less power consumption when compared with existing designs.
An optimal Fuzzy-PI Stabilizer to compensate Low Frequency Oscillations using Kho-Kho Optimization Technique
ABSTRACT. Low frequency oscillations (LFOs) are one of the primary reasons why power transmission becomes unpredictable. LFOs are generally classified as small-signal stability disturbances in the interconnected power system, so it is important that these oscillations are regulated. This is why power system stabilisers are employed to damp the oscillations and maintain the stability of the system. In this work, an optimal proportional integral (PI) stabiliser based on fuzzy logic is developed that is capable of suppressing LFOs in interconnected systems. The parameters of the suggested stabiliser are calibrated using a Kho-Kho optimization (KKO) scheme. To optimise the controller variables, a fitness function is constructed using the integral time square error (ITSE). A one-machine infinite-bus (OMIB) system with an IEEE type DC-1 exciter unit is considered in order to illustrate the efficacy of the recommended optimal controller design. The obtained findings are compared with those of other approaches to demonstrate the superiority of the recommended strategy.
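The ITSE criterion itself is straightforward to evaluate numerically, as this short sketch shows; the error signal here is a synthetic decaying oscillation, not a result from the paper.

```python
import numpy as np

# ITSE = integral of t * e(t)^2 dt; weighting late errors more heavily
# rewards controllers that damp oscillations quickly.
def itse(t, e):
    return np.trapz(t * e**2, t)

t = np.linspace(0, 10, 1001)
e = np.exp(-0.5 * t) * np.sin(5 * t)   # a decaying LFO-like error signal
print(itse(t, e))
```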
High performance Single Walled CNT-ZnO NW hybrid heterostructure for UV photodetection
ABSTRACT. In this work, the optoelectronic properties of single-walled carbon nanotubes (SWCNTs), especially in UV photodetection, have been explored by fabricating a heterostructure (HS) incorporating SWCNTs with zinc oxide (ZnO). The structural analysis confirms the formation of the proposed SWCNT/ZnO NW HS. The inclusion of SWCNTs has drastically improved the performance of the HS, yielding a high responsivity of up to 50 A/W with rise and fall times on the order of milliseconds. The HS compares favourably with existing UV SWCNT/metal-oxide heterostructures.
Implementation of N-bit Universal BIST for Testing of Memory and Arithmetic Operator Using FPGA
ABSTRACT. Linear Feedback Shift Registers (LFSRs) are very widely used in Built-In Self-Test (BIST) and encryption applications. The LFSR is one of the finest pseudo-random number generators for testing various devices such as microprocessors, microcontrollers, DSP processors, arithmetic operators, and memory devices. To test modules of different bit sizes, an individual BIST module must be designed for each bit size, which increases the design time, the validation effort for each BIST, and the cost of production. A novel design of an N-bit Universal BIST for testing arithmetic and memory devices is implemented, run-time configurable from 2-bit to 32-bit sequences. The design target is to minimize the latency and satisfy various design constraints. The logic is implemented in such a way that a single LUT stores the different tap-point equations, reducing BRAM and CLB utilization on the FPGA.
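A behavioural sketch of the underlying idea, a Fibonacci LFSR whose tap points are looked up per configured width, is given below; the tap table covers only a few widths and the code is illustrative, not the paper's RTL.

```python
# Fibonacci LFSR with run-time selectable width, mirroring the idea of
# one lookup of tap-point "equations" per bit size. A full design would
# store maximal-length tap entries for every width from 2 to 32.
MAXIMAL_TAPS = {2: [2, 1], 4: [4, 3], 8: [8, 6, 5, 4],
                16: [16, 15, 13, 4], 32: [32, 22, 2, 1]}

def lfsr(width, seed, steps):
    taps = MAXIMAL_TAPS[width]            # tap points for this width
    state = seed & ((1 << width) - 1)
    for _ in range(steps):
        fb = 0
        for t in taps:                    # feedback = XOR of tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        yield state

print([f"{s:04b}" for s in lfsr(4, 0b1001, 10)])   # pseudo-random test patterns
```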
ABSTRACT. The evolution of technology has made human life easier; compare, for example, the timeline from the landline to the smartphone. Today every human being is surrounded by smart gadgets, and the human effort required for mechanical work has been reduced. Advertising or circulating a notice on paper requires significant physical effort; this problem is overcome by using microcontroller-based digital LED boards. In such a system, the message is programmed into a microcontroller wired to the LED board, and the microcontroller passes the message to the LED board, which displays it. These LED displays are becoming a primary need in heavily crowded places such as malls, railway stations, and educational institutions, to display information about offers, platforms, or important notices. To change the message, the microcontroller must be reprogrammed each time; this limitation is removed by integrating wireless IoT technology. This paper presents the development made in "IOT BASED REAL TIME LED DISPLAY BOARD" using the ESP32 and Arduino.
ABSTRACT. Persons with impairments do not have many advantages; this paper concentrates on their issues. The system can also be utilized by someone who has been in an accident or in a variety of other scenarios. The goal of this endeavour is to alleviate their difficulties. This paper shows how to construct a low-cost machine control system that uses IoT technology and includes features like speech control and hand gestures. The Internet of Things refers to the interconnection of machines, transportation, buildings, and other entities that are equipped with electronics, software, sensors, actuators, and network connectivity to collect and share data. IoT technology is utilized in this paper to control items wirelessly over the internet. A NodeMCU development board was used as the computing module. The concept also aims to provide users with a voice control and hand gesture interface via an app. MIT App Inventor uses an online speech-to-text infrastructure to provide speech control. When the gadget is used, it listens for the user's voice and recognizes the hand's angular positions. When a defined phrase is recognized, it triggers a corresponding action to turn the machines on or off. Differently-abled people can control machines far more easily via voice control and hand gestures. This concept uses an app to manage machinery in industries and to make it easier for personnel to control it, from any location on the planet.
Software Test Automation of Electronic Control Unit
ABSTRACT. Nowadays in the automotive sector, a lot of emphasis is placed on making a vehicle superior and robust in every environmental situation. Customer demand for luxury features and comfort is also increasing day by day, making hardware and software systems much more complex. A dedicated Electronic Control Unit (ECU) is assigned to perform a specific function that handles electrical systems or subsystems in a vehicle. A vehicle contains a complex network of ECUs communicating with each other using various communication protocols such as CAN, LIN, MOST, etc. Vector CANoe is widely used software because its multi-bus concept allows various protocols to operate simultaneously; CANoe testing also offers a very accurate, reusable, and simple method of testing. Manually testing such complicated software is a tough procedure and requires a significant amount of time. Various automation techniques exist currently, such as dSPACE, CAPL, vTESTstudio, etc. In this paper, a novel method is discussed to automate the software testing process using Excel, Vector tools, NI TestStand, and LabVIEW. CAPL scripting can also be integrated into this automation, which helps with some test cases. A precise report is generated containing detailed information such as start time, end time, and recorded values. Reports are stored in the cloud with the help of codebeamer.
Development of numerical model for centrifugal casting solidification
ABSTRACT. Centrifugal casting is used to manufacture a wide variety of axisymmetric parts, in which molten metal is introduced into a pre-heated mould rotating at a certain RPM. The current work focuses on mathematical modelling of the centrifugal casting process to study the effect of pouring temperature and mould pre-heat temperature on the rate of solidification. A finite difference model is developed, and the Finite Element Method is used to analyze the temperature distribution for comparison. When the two are compared, a maximum percentage error of 4.79% and a minimum percentage error of 0.62% are observed. This comparison is done for the temperature distribution in the cast and mould regions about 30 seconds after the molten metal is introduced into the mould, and it shows that the finite difference results of this research agree with the FEA results. It is observed that an increase or decrease in either the pouring temperature or the mould pre-heat temperature significantly affects the solidification time. The results represent the temperature distribution in both the mould and the molten metal regions. The molten metal is introduced at a temperature of 1500°C into a mould pre-heated to 250°C to minimize thermal shock. About 30 seconds after pouring, a drop in temperature from 1419.9°C at a distance of 4.5 cm to 1074.9°C at 6.5 cm is observed in the cast region, and from 574.4°C at 6.5 cm to 351°C at 15.5 cm in the mould region, measured from the centre of the mould.
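To illustrate the kind of numerical model involved, here is a heavily simplified 1D explicit finite-difference sketch of heat conduction from the cast into the mould; the material properties, geometry, and boundary handling are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Simplified 1D explicit finite-difference sketch of radial conduction
# from the cast (poured at 1500 degC) into the mould (pre-heated to 250 degC).
nr, dr, dt = 60, 0.005, 0.01            # nodes, grid step (m), time step (s)
alpha = np.full(nr, 5e-6)               # thermal diffusivity (m^2/s), illustrative
T = np.where(np.arange(nr) < 20, 1500.0, 250.0)   # cast region | mould region

for _ in range(int(30 / dt)):           # march 30 s, as in the comparison
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
    T[1:-1] += dt * alpha[1:-1] * lap   # explicit update (stable: dt*a/dr^2 < 0.5)

print(f"cast-mould interface after 30 s: {T[20]:.1f} degC")
```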
Experimental Investigation for Enhancing Mechanical Properties of Friction Stir Welding AA6351-T6 Alloy with Copper Powder Inclusion at Welding Zone
ABSTRACT. The present research work focuses on friction stir welding (FSW) of butt joints of 6 mm thick AA6351-T6 aluminium alloy with copper powder inclusion at the welding zone. A straight cylindrical tool pin profile is used to carry out the joining process at the chosen parameters, and the mechanical properties, such as tensile strength, yield strength, and Vickers hardness, together with the microstructural characterization of the weldments, are evaluated as per standards. An ultimate tensile strength (UTS) of 228 N/mm² and a yield strength (YS) of 201 N/mm² are observed at parameters of 1200 rpm and 30 mm/min. Vickers hardness tests performed at various zones of the weldments on all the specimens show the highest hardness, 93 HV at the weld zone, for the same parameters of 1200 rpm and 30 mm/min. Scanning Electron Microscopy (SEM) was carried out on the tensile rupture surfaces of the specimens.
An Experimental Study on Flexural and Water Absorption Behaviour of Double Sandwich Composites
ABSTRACT. Composite materials are developed to balance properties for various applications. Due to properties such as corrosion resistance and superior strength, the applications of these materials are endless. Most previous research was carried out using lightweight cores of various forms and thin face sheets of various compositions. Building on past work, the present research focuses on the fabrication and behaviour of double-core sandwich composites. Single-sandwich and double-sandwich composites are fabricated with the hand lay-up process using polyethylene and elastomer foams as cores and fine woven and unidirectional glass fibres reinforced with epoxy as face sheets. Material properties such as flexural strength, core shear stress, load versus displacement curves, and water absorption of all the possible configurations were determined according to ASTM standards. It is observed that the moisture percentage is 13.88% after 24 hours in a configuration fabricated with polyethylene foam and woven fibre reinforced epoxy face sheets. On the other hand, flexural strengths of 33.43 MPa and 42.46 MPa were identified for the unidirectional polyethylene foam and unidirectional elastomer foam configurations, respectively.
ABSTRACT. In recent years, government, universities, and industry have paid attention to the distributed approach in machine learning (ML) and cybersecurity for the developing Internet of Things (IoT). Federated cybersecurity (FC) is a strategy for making the IoT more secure and effective in the future. This new approach has the ability to detect security problems, implement solutions, and effectively limit the propagation of vulnerabilities across the IoT systems it manages. Forming a federation of shared information is one way to attain cybersecurity. Federated learning (FL) is a machine learning paradigm that is especially effective for securing the sensitive IoT environment. The origins of FL, as well as FL for cybersecurity, are presented in this article. Various security attacks and responses are also discussed. Experiments are carried out in Google Colaboratory using well-known Python libraries. The results show that FL provides a higher level of security in comparison with centralised learning.
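A minimal federated averaging (FedAvg) sketch conveys the core FL idea used here, local updates plus server-side weight averaging, so raw data never leaves the client; the linear model and synthetic data are illustrative.

```python
import numpy as np

# FedAvg sketch: each IoT client updates a linear model locally on its own
# data; the server only averages the model weights.
def local_update(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):                           # four simulated IoT devices
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w_global = np.zeros(2)
for rnd in range(10):                        # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)     # server-side aggregation
print(w_global)                              # approaches w_true
```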
MOBILITY AND TRUST AWARE ADAPTIVE LOCATION AIDED ROUTING FOR MANETS
ABSTRACT. Mobile Ad Hoc Networks (MANETs) are highly susceptible to security threats due to their openness and decentralized administration. Even though several standard cryptography and encryption methods exist, they induce additional computational and storage burden on the resource-constrained mobile nodes in MANETs. To address this issue, this paper proposes a simple trust management mechanism called Mobility and Trust Aware Adaptive Location Aided Routing (MTALAR). Initially, MTALAR forms a request zone whose sides are parallel to the line connecting the source and destination nodes. Next, the source node finds a trustworthy route through multi-hop nodes based on a new factor called the Mobile-Trust Factor (MTF). The MTF is a combination of communication trust and mobility: communication trust ensures correct detection of malicious nodes, and mobility ensures proper protection of innocent nodes. After route discovery, the source node periodically measures the MTF of the multi-hop nodes through HELLO packets. Based on the obtained MTF values, the source node declares the corresponding node malicious or not. Extensive simulations of the proposed method prove its superiority in the identification of malicious nodes.
Composite User Behaviour Assisted Rumour Detection over Online Social Media
ABSTRACT. Recently, rumour spreading on online social media has become a serious problem that damages society at both the organizational and individual levels. Hence, rumour detection has emerged as an active research area that aims to identify rumours automatically. In a rumour detection system, the features that describe the characteristics of rumour-related posts play a major role. In this paper, we propose composite user-behaviour features to describe the characteristics of rumours. Under the composite-user concept, we consider the behaviour of both kinds of users: the author and the reader. In total, we derive ten features from both users' behaviour and feed them to machine learning algorithms to train the classifiers. Here, two classifiers, namely Support Vector Machine and K-Nearest Neighbour, are used. For performance evaluation, a standard benchmark dataset is considered, and the performance is assessed through precision, recall, and F1-score.
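The classification stage can be sketched as follows, assuming scikit-learn is available; the synthetic ten-feature data stands in for the paper's composite user-behaviour features.

```python
# Ten numeric user-behaviour features per post (synthetic stand-ins) fed
# to SVM and KNN, scored with precision/recall/F1.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_recall_fscore_support

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    print(type(clf).__name__, f"P={p:.2f} R={r:.2f} F1={f1:.2f}")
```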
GSM BTS Range Increment-Proof of Concept using SDR
ABSTRACT. 2G technology, the second generation of cellular communication, was introduced in 1991. GSM (Global System for Mobile Communications) is the standard used in 2G mobile communication. GSM operates in two bands, 890-915 MHz and 935-960 MHz, which are the uplink and downlink frequencies respectively. A GSM network consists of the Base Transceiver Station (BTS), the core network comprising the Home Location Register (HLR), Visitor Location Register (VLR), and Authentication Center (AuC), and the Mobile Station (MS). This paper explains a proof of concept for incrementing the range of a BTS, improving the communication range of a one-cell GSM Base Transceiver System (BTS) by up to 50 meters using a power amplifier and an attenuator with feasible antennas.
Modeling and Simulation of channel models for link and system level simulations for selected LTE propagation scenarios
ABSTRACT. This paper presents channel models for link-level and system-level simulations of local area, metropolitan area, and wide area wireless communication systems. The models have evolved from the LTE channel models. The covered propagation scenarios are indoor, indoor-to-outdoor, outdoor-to-indoor, and rural networks. The proposed channel model follows a geometry-based stochastic channel modelling approach, which allows creating an arbitrary double-directional radio channel model. The channel parameters are determined stochastically, based on statistical distributions extracted from channel measurements. Different scenarios are modelled using the same approach but different parameters. The parameter tables for each scenario are included in this paper for both line-of-sight (LOS) and non-LOS (NLOS) conditions. The novel features of the proposed models are their parameterisation, the use of the same modelling approach for both indoor and outdoor environments, new scenarios such as outdoor-to-indoor and indoor-to-outdoor, elevation in indoor scenarios, smooth time and space evolution of large-scale and small-scale channel parameters, and scenario-dependent polarisation modelling. The models are scalable from a single single-input-single-output (SISO) or multiple-input-multiple-output (MIMO) link to a multi-link MIMO scenario including polarisation among other radio channel dimensions. The proposed models can be applied not only to LTE systems but also to any other wireless system operating in the 2-6 GHz frequency range with up to 100 MHz RF bandwidth. The models support multi-antenna technologies, polarisation, multi-user, multi-cell, and multi-hop networks.
Model for capturing noise free traces for Side Channel Power Cryptanalysis based on SAKURA-G FPGA and Case study of AES
ABSTRACT. Side channel cryptanalysis is one of the minimum requirements for the design of a new cryptosystem. The main research is focused on the power attack branch of side channel cryptanalysis, in which capturing noise-free traces is the most important aspect. This paper presents a methodology for side channel power attacks based on the SAKURA-G FPGA, a mixed-signal oscilloscope, the Hamming distance model, and statistical analysis. The method is tested with a standard cipher, the Advanced Encryption Standard (AES), and its secret information is recovered. In this methodology, clearly visible, noise-free power traces are captured using the SAKURA-G FPGA.
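A toy correlation power analysis (CPA) sketch conveys the statistical step; it assumes a Hamming-weight leakage model (the zero-reference special case of the Hamming-distance model) and uses a random permutation as a stand-in for the AES S-box, with simulated rather than SAKURA-G traces.

```python
import numpy as np

rng = np.random.default_rng(7)
SBOX = rng.permutation(256)              # stand-in for the AES S-box
hw = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

key = 0x3C
plaintexts = rng.integers(0, 256, size=2000)
# Simulated traces: leakage proportional to HW(SBOX[pt ^ key]) plus noise.
traces = hw[SBOX[plaintexts ^ key]] + rng.normal(0, 1.0, size=2000)

def cpa_guess(pts, trs):
    best, best_corr = None, -1.0
    for guess in range(256):
        model = hw[SBOX[pts ^ guess]]    # hypothetical leakage per key guess
        corr = abs(np.corrcoef(model, trs)[0, 1])
        if corr > best_corr:
            best, best_corr = guess, corr
    return best

print(hex(cpa_guess(plaintexts, traces)))  # recovers 0x3c
```

The cleaner the captured traces, the fewer of them the correlation step needs, which is why the noise-free capture setup matters.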
DESIGN OF AREA AND POWER OPTIMIZED VLSI ARCHITECTURE OF ALU DESIGN USING SIGNED MULTIPLIER
ABSTRACT. Technology in this digital age relies on Arithmetic Logic Unit (ALU) operations to determine system performance. The ALU is a necessary component of any Central Processing Unit (CPU), and its importance is equal to that of the CPU itself. In order to increase device flexibility and reusability, this work deals with the design of a 32-bit ALU using a structured Hardware Description Language (HDL). In addition, a signed multiplier and a few additional arithmetic and logical functions are used to create the ALU's logic. This research focuses on the multiplier's optimal design, directly recoding the sum into its Modified Booth (MB) form. The MB encoding unit is converted into a single datapath block. There is just one adder at the end of the add-multiply (AM) component (the final adder of the parallel multiplier); in turn, the critical path latency of the recoding process is decreased and decoupled from the bit-width of its inputs. In this paper, a novel method for direct recoding of two integers into MB form is presented, and the ALU is implemented in Verilog HDL and synthesized with Xilinx ISE 14.7. It is observed that there is no significant increase in area, power, or delay overhead.
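The radix-4 Modified Booth recoding at the heart of such multipliers can be sketched behaviourally as follows; this is an illustrative software model, not the paper's Verilog.

```python
# Radix-4 Modified Booth recoding: overlapping 3-bit windows of the
# multiplier map to digits in {-2,-1,0,1,2}, halving the partial products.
def booth_radix4_digits(x, width):
    x &= (1 << width) - 1
    padded = x << 1                          # append the implicit 0 below the LSB
    digits = []
    for i in range(0, width, 2):
        window = (padded >> i) & 0b111       # bits x[i+1], x[i], x[i-1]
        digits.append({0: 0, 1: 1, 2: 1, 3: 2, 4: -2, 5: -1, 6: -1, 7: 0}[window])
    return digits                            # least-significant digit first

def booth_multiply(a, b, width=8):
    acc = 0
    for i, d in enumerate(booth_radix4_digits(b, width)):
        acc += (d * a) << (2 * i)            # one partial product per digit
    return acc

assert booth_multiply(13, 11) == 143
```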
Design of Area Efficient Adders using Quantum-dot Cellular Automata (QCA)
ABSTRACT. Full adder circuits are the basic units of digital arithmetic and logic circuits. Quantum-dot Cellular Automata (QCA) is one of the emerging nanoelectronics technologies for overcoming the scaling problems of CMOS technology at the nanoscale. Majority gates and inverters are the basic QCA circuits, and they are used in this paper to design a full adder, a ripple carry adder, and a carry skip adder. In this paper, the multilayer QCA adders occupy less area compared to the coplanar QCA adders. QCA-based coplanar and multilayer adders are designed using the coplanar and multilayer full adders respectively, and the total energy dissipation and the functional simulation are carried out using the QCADesigner-E 2.0.3 and QCADesigner 2.0.3 software respectively.
Deep fully Connectedness Convolutional Broken Stick Regressive Brent-Kung Adder Enhancement for Efficient VLSI Design
ABSTRACT. Very Large Scale Integration (VLSI) is used to create hardware, starting from the description of an algorithm and employing computer programs. In digital systems, multipliers and adders are the fundamental components in the design of integrated circuit applications. High-speed devices play an essential role in VLSI applications. Adder enhancement is used in miscellaneous applications in modern VLSI systems, such as multiplier design and ALUs. An efficient adder circuit design is used to optimize various metrics; however, existing VLSI designs do not reduce design complexity. To handle these limitations, the Deep Fully Connectedness Convolutional Broken Stick Regressive Brent-Kung Adder Enhancement (DFCCBSRBKAE) technique is introduced for VLSI circuits. The DFCCBSRBKAE technique comprises four layers: an input layer, two hidden layers, and an output layer. The carry inputs are fed to the input layer, and the input data is transferred to hidden layer 1, where broken-stick regression analysis is performed to pre-process the data. Next, the pre-processed data is transferred to hidden layer 2, where carry generation and post-processing are carried out and the output results are combined by convolution. At last, the outcomes are obtained at the output layer. This in turn enables efficient adder enhancement of the digital multiplier with minimal power and time consumption. The performance of the DFCCBSRBKAE technique is evaluated in terms of power and time consumption; it reduces the area, delay, and power consumption of the enhanced adder compared with conventional methods.
Design and Development of a Fully Balanced Differential Op-Amp For Biomedical Applications
ABSTRACT. The Op-Amp topology is used to construct and analyze high-voltage and low-voltage circuits, and its power utilization is low. The suggested design is based on the fully differential operational amplifier, a newer op-amp structure. Op-Amps are fundamental building blocks of mixed-signal design, and one of the most common structures is the two-stage Op-Amp. The majority of CMOS Op-Amps are made for specific applications. With 0.16 mW of power consumption, the result reveals an Op-Amp gain of 44.74 dB. Cadence software is used to implement this design, which uses 45 nm CMOS technology.
ANALYSIS OF SAR ADC BLOCKS USING LT SPICE FOR LOW VOLTAGE APPLICATIONS
ABSTRACT. Power-efficient successive approximation register (SAR) analog-to-digital converters (ADCs) are frequently employed in electronic applications in today's digitized world. In comparison to other ADC designs, this architecture is rather small, and SAR ADCs are excellent for low-power, high-resolution applications. This work discusses the simulation, in LTspice, of the sub-blocks of a SAR ADC based on a CMOS technology that produces its signals internally for low-power applications. A 10-bit low-voltage SAR ADC using 16 nm CMOS technology is presented in this research. It is reasonable to utilize a serial interface in this architecture. Standard CMOS technology is used with a low-cost, low-power VLSI implementation in this design. SAR ADCs are widely employed in most low-power electronic devices and in high-resolution, low-power applications.
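The SAR conversion itself is a binary search, as this behavioural sketch shows; the reference voltage and resolution are illustrative.

```python
# Behavioural sketch of the SAR ADC conversion loop: one bit is resolved
# per cycle by comparing the input with the internal DAC output.
def sar_adc(vin, vref=1.0, bits=10):
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)              # tentatively set this bit
        vdac = vref * trial / (1 << bits)    # internal DAC output
        if vin >= vdac:                      # comparator decision
            code = trial                     # keep the bit
    return code

print(sar_adc(0.637))   # ~ 0.637 * 1024 = 652
```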
CONVOLUTION MERGING TECHNIQUE FOR IMAGE ENCRYPTION APPLICATION
ABSTRACT. Multimedia apps on the Internet have grown in popularity in recent years. Because of the rapid growth in the usage of multimedia information, data storage and transmission security have become increasingly crucial. Encryption is a useful tool for keeping multimedia data private, and various approaches for encrypting images to make them more secure are proposed on a regular basis. We have developed a method to merge two types of images: grayscale and RGB. Initially, the image is converted into a digital format with the help of MATLAB. It is then fed to the top image module for merging, and a digital output file is generated. This file is then converted back to analog form for visualization purposes with the help of MATLAB. For an RGB image, the image is split into three channels containing the reds, greens, and blues of the image. These channels are separately convolved using the image multiplier module designed in Xilinx ISE. This results in the generation of three digital output files: Output_red, Output_green, and Output_blue. These files are then combined to form a single colour image and visualized using MATLAB. For a grayscale image, a single image multiplier is sufficient to perform image merging; there is no need to split the image into RGB channels because the image consists only of different shades of gray. Upon processing, a single digital file is generated, and this file is visualized using MATLAB.
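The split-convolve-merge flow described above can be sketched in software as follows; the 3x3 averaging kernel stands in for the hardware image-multiplier module, and the array sizes are illustrative.

```python
import numpy as np

# Split an RGB image, convolve each channel separately, then recombine,
# mirroring the Output_red/green/blue flow described in the abstract.
def conv2d(channel, kernel):
    kh, kw = kernel.shape
    padded = np.pad(channel, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(channel, dtype=float)
    for r in range(channel.shape[0]):
        for c in range(channel.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3)).astype(float)   # RGB image
kernel = np.ones((3, 3)) / 9.0                               # stand-in kernel
merged = np.stack([conv2d(img[..., ch], kernel) for ch in range(3)], axis=-1)
print(merged.shape)   # (32, 32, 3): the three channel outputs recombined
```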
Implementation of N-Point FFT/IFFT processor based on Radix-2 Using FPGA
ABSTRACT. The Fast Fourier Transform (FFT) is an efficient algorithm to compute the Discrete Fourier Transform (DFT). It is one of the most important operations in the area of digital signal and image processing. The direct computation requires a high computational load (N² complex multiplications and N·(N-1) additions), which makes computation and implementation very difficult. This work implements an N-point FFT/IFFT with a 32-bit data width (16-bit real and 16-bit imaginary), a run-time configurable radix-2 architecture, configurable FFT size, and fixed-point data type, with compile-time configurable data and twiddle factor precision. The design target is to minimize latency and meet the design constraints. The logic is implemented in such a way that only one memory is used for the entire computation process. Hence, this gives a novel architecture design for the N-point FFT.
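The radix-2 decimation-in-time idea behind such architectures can be sketched recursively; this floating-point model is illustrative, not the fixed-point hardware design.

```python
import numpy as np

# Recursive radix-2 decimation-in-time FFT: each stage replaces O(N^2)
# DFT work with N/2 butterflies, giving O(N log N) overall.
def fft_radix2(x):
    n = len(x)                      # n must be a power of two
    if n == 1:
        return x
    even, odd = fft_radix2(x[0::2]), fft_radix2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n)   # twiddle factors
    return np.concatenate([even + tw * odd, even - tw * odd])

x = np.random.default_rng(0).normal(size=16).astype(complex)
assert np.allclose(fft_radix2(x), np.fft.fft(x))       # matches reference
# IFFT via conjugation: ifft(x) = conj(fft(conj(x))) / n
assert np.allclose(np.conj(fft_radix2(np.conj(x))) / 16, np.fft.ifft(x))
```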
Classification of Glaucoma Using K-Mean Clustering with Optic Disk and Cup Segmentation
ABSTRACT. Image segmentation is significant in digital image processing since it helps partition the image. Many segmentation approaches have been utilized in the field of image processing to find ailments. One of the dangerous common causes of impaired vision is glaucoma. The size, shape, and structure of the optic cup and optic disc are used to diagnose it. In glaucoma patients with damaged eyes the cup grows in size while the disc region remains the same, resulting in a rise in the CDR (Cup-to-Disk Ratio). The ratio between the optic disc and the optic cup region is estimated using a glaucoma diagnostic measure called the CDR. Compared with other image segmentation approaches, K-means segmentation is the most frequently used; it is one of the most essential unsupervised learning methods for addressing the well-known clustering problem. Unlike approaches that rely on complicated partial differential equations, this method is easy to develop, and it can examine a large number of clinical images in a brief period, which was troublesome with previous techniques. In comparison with the coefficient of correlation and the area under the curve, the recommended strategy yields a lower CDR error. The cup-to-disc ratio (CDR) has long been the main factor in glaucoma diagnosis.
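A heavily simplified sketch of the pipeline, k-means clustering of intensities followed by a CDR estimate, is given below; the synthetic image, the k=3 cluster roles, and the area-based CDR are illustrative assumptions.

```python
import numpy as np

# Cluster fundus-image intensities with 1D k-means (k=3: background, disc,
# cup), take the brightest cluster as cup and the two brightest as disc,
# then compute the cup-to-disc area ratio.
def kmeans_1d(values, k=3, iters=20):
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(k)])
    return labels, centers

rng = np.random.default_rng(1)
img = rng.normal(0.3, 0.05, size=(64, 64))          # dark retinal background
img[20:44, 20:44] += 0.3                            # brighter optic disc
img[28:36, 28:36] += 0.3                            # brightest optic cup
labels, centers = kmeans_1d(img.ravel())
order = np.argsort(centers)                         # dark -> bright
cup_area = np.sum(labels == order[-1])
disc_area = cup_area + np.sum(labels == order[-2])
print(f"CDR (area ratio) ~ {cup_area / disc_area:.2f}")
```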
Image Filtration using Graph-based Low Pass Filter
ABSTRACT. This paper addresses filtering various noises from image data through a graph-based low-pass filter. Salt & pepper noise and Gaussian noise are taken for the research and are randomly added to the image data. In this paper, the graph filter is proposed to remove both linear and non-linear noise. This graph-based low-pass filter also removes the noise from bulk image datasets. In the graph-based low-pass filter, the image is converted into a graph-like structure and then treated as matrix data. Each pixel is connected with its neighboring pixels, and these connections capture the direction and position of a pixel relative to its neighbors, with each connection defining an angle. These two properties make Graph Signal Processing highly adaptable to image data. In the simulation, the performance of the proposed model is compared with three conventional filters (the Volterra filter, the median filter, and stack filters). The performance of the models is measured using the Peak Signal-to-Noise Ratio, Mean Square Error, Root Mean Square Error, and Structural Similarity Index.
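One way to realize a graph low-pass filter on a pixel graph is repeated normalized neighbourhood averaging, as sketched below; the 4-neighbour grid graph, the update rule, and the periodic boundary from np.roll are illustrative choices, not the paper's exact filter.

```python
import numpy as np

# Pixels are graph nodes and 4-neighbour links form the adjacency. One step
# of x <- (1-t)*x + t*A_norm@x attenuates high graph frequencies (noise).
def grid_lowpass(img, t=0.5, steps=5):
    x = img.astype(float)
    for _ in range(steps):
        acc = np.zeros_like(x)
        deg = np.zeros_like(x)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            acc += np.roll(x, shift, axis=axis)   # sum over the 4 neighbours
            deg += 1                              # (np.roll wraps the border)
        x = (1 - t) * x + t * acc / deg           # low-pass update
    return x

noisy = np.eye(16) * 255 + np.random.default_rng(0).normal(0, 25, (16, 16))
print(np.round(grid_lowpass(noisy)[:3, :3]))
```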
Efficient Re-configurable Multiply and Accumulate Unit for Convolutional Neural Network
ABSTRACT. A conventional Multiply and Accumulate (MAC) unit consists of a multiplier and an accumulator. In the hardware realization of these systems, the multiplier plays a vital role. In this paper, we have explored Booth multipliers, array multipliers, ripple-carry array multipliers with the row bypassing technique, Vedic multipliers, Wallace-tree multipliers, and Dadda multipliers in terms of area, delay, and power. The combination of a radix-4 Booth multiplier and a carry-save adder has shown better performance in terms of area and delay in the design of the MAC unit. The design is synthesized on an Artix-7 FPGA using Xilinx Vivado. Further, we tested the MAC unit in a 2D convolution engine application.
A Light Weight Deep Learning based Technique for Patient-Specific ECG Beat Classification
ABSTRACT. An electrocardiogram (ECG) is an important medical tool in diagnosing different cardiac disorders. In general, the length of long-term ECG records is 24-48 hours, and it is not easy to analyse these records manually. Therefore, an intelligent computer-based automated tool is required to analyse them. Deep learning techniques have shown early promise in analysing complex ECG signals, particularly in classifying heartbeats and detecting arrhythmias. Despite this, there is still significant room for improvement in the analysis of such data. Additional training is made possible by bi-directional LSTMs (Bi-LSTMs), since they traverse the input data twice (i.e., left-to-right and right-to-left). Bi-LSTMs outperform standard unidirectional LSTMs because of their sequence-to-sequence prediction and increased training capacity. In this work, a series connection of convolutional neural network (CNN) and Bi-LSTM based deep learning techniques is proposed to classify ECG beats in a patient-specific way as per the Association for the Advancement of Medical Instrumentation (AAMI) standard. The performance of the proposed work is tested on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) dataset and on acquired real-time datasets. The proposed method performs better compared to the state-of-the-art techniques.
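A minimal sketch of such a CNN-to-Bi-LSTM cascade, assuming TensorFlow/Keras, is shown below; the layer sizes, the 250-sample beat length, and the five AAMI classes are illustrative choices, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Conv layers capture local beat morphology; the Bi-LSTM reads the feature
# sequence in both directions before the softmax over the AAMI classes.
model = models.Sequential([
    layers.Input(shape=(250, 1)),                 # one ECG beat segment
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),        # left-to-right + right-to-left
    layers.Dense(5, activation="softmax"),        # AAMI classes N, S, V, F, Q
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```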
Hermitian Multi-Wavelet Kriging Regressive Feature Transformation for Tumor Detection
ABSTRACT. Automatic identification of brain tumors (BTs) using MRI images is a noteworthy process for avoiding early death. A BT is an unrestrained growth of cancerous cells in the brain. The process of distinguishing tumor margins from healthy cells is still a difficult job owing to the inconsistency and complexity of the location, size, shape, and texture of these lesions. Some approaches perform automatic detection of BTs, but timely and accurate detection of BTs from multi-spectral MRI scan images remains a challenging issue. A novel technique called Hermitian Multi-wavelet Kriging Regressive Feature Transformation based Brain Tumor Detection (HMKRFT-BTD) is proposed for better brain tumor detection (BTD) in less time. The segmented image is provided to HMKRFT-BTD to perform the brain tumor detection. Initially, the Hermitian multi-wavelet transform is applied to decompose the input into several sub-bands. For each sub-band, image features are acquired to detect the tumor while minimizing time consumption. Finally, Kriging regression is used to compare the extracted features with the test disease features. Accordingly, normal tissue or tumor malignancy is correctly identified. Experimentation is carried out on factors such as brain tumor detection accuracy (BTDA), false-positive rate (FPR), and tumor detection time (TDT). The qualitative and quantitative results indicate that HMKRFT-BTD is more effective, achieving higher tumor detection accuracy and a lower false positive rate as well as a lower detection time than the existing state-of-the-art approaches.
Improved Rice Leaf Disease Detection using Fusion of Otsu Thresholding and Thepade SBTC Features
ABSTRACT. Rice leaf disease has many forms, including Rice Blast, Sheath Blight, Sheath Rot, Hispa, Blade Blast, and Leaf Smut, among others. One of the important difficulties that farmers are concerned about is detecting the type of disease infecting rice leaves. Identifying symptoms and understanding this class of disorders are crucial functions of the International Rice Research Institute (IRRI), an organization whose mission is to reduce poverty and hunger among the populations and individuals who depend on rice-based agri-food systems. The technique proposed in the paper performs rice leaf disease identification using the fusion of features generated with Thepade sorted block truncation coding (Thepade SBTC) and Otsu thresholding. These features are utilized for training the machine learning algorithms, namely Simple Logistic Regression (SLR), Random Tree (RT), J48, Random Forest (RF), BayesNet, and Naive Bayes, as well as an ensemble of these algorithms. The experimental validation is done using a rice leaf dataset with four categories: Healthy, Brown Spot, Hispa, and Leaf Blast. The accuracy of rice leaf disease identification is used as the performance metric. The proposed feature fusion has given better accuracy of rice leaf disease identification, and the ensemble of machine learning algorithms (with majority voting) has shown better rice leaf disease identification.
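The Otsu-thresholding half of the fusion can be sketched directly: pick the gray level that maximizes the between-class variance. The bimodal synthetic data below stands in for leaf-image intensities.

```python
import numpy as np

def otsu_threshold(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

rng = np.random.default_rng(0)
leaf = np.concatenate([rng.normal(80, 10, 500), rng.normal(180, 12, 500)])
print(otsu_threshold(leaf.clip(0, 255)))     # lands between the two modes
```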
Estimation of PSD and Non-Linear Entropy Parameters of FHRV
ABSTRACT. Cardiotocography (CTG) records the changes in fetal heart rate (FHR) in response to uterine contractions and helps identify fetal distress symptoms. More than three decades of research on electronic fetal monitoring (EFM) revealed that the study of FHR variability helps in assessing fetal well-being during labor. Neonatal outcomes were predicted based on the characteristics of FHR changes. Research studies have not demonstrated any significant reduction in overall perinatal mortality with the use of continuous EFM. As a result, the area of fetal heart rate monitoring still faces challenges related to the absence of standard terminology, disagreements over how to interpret FHR tracings, and significant differences in how to define and handle unsettling FHR patterns. Various algorithms have been developed to extract and analyze features from the recorded fetal heart rate signal. In the present study, FHR tracings were analyzed in the frequency domain to estimate the power spectral density in the different frequency bands of FHR signals. The nonlinearity in these signals due to FHR variability was measured by calculating statistical non-linear entropy parameters.
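The PSD-estimation step can be sketched with Welch's method, assuming SciPy is available; the 4 Hz sampling rate, the synthetic FHR series, and the band edges are common FHRV choices, not necessarily the paper's.

```python
import numpy as np
from scipy import signal

fs = 4.0                                        # FHR series resampled at 4 Hz
t = np.arange(0, 300, 1 / fs)
fhr = 140 + 5 * np.sin(2 * np.pi * 0.1 * t) \
      + np.random.default_rng(0).normal(0, 1, t.size)   # synthetic tracing

f, pxx = signal.welch(fhr - fhr.mean(), fs=fs, nperseg=256)
for name, lo, hi in [("VLF", 0.003, 0.04), ("LF", 0.04, 0.15), ("HF", 0.15, 0.4)]:
    band = (f >= lo) & (f < hi)
    print(name, np.trapz(pxx[band], f[band]))   # band power by integration
```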
COMPRESSION AND DENOISING OF MEDICAL IMAGES USING AUTOENCODERS
ABSTRACT. In medical image analysis, image compression and denoising are important processing steps for remote analytics. A number of algorithms have been proposed in the literature with varying degrees of denoising performance. In this paper, a 3-layer autoencoder model to compress and denoise grayscale medical images is proposed. Gaussian noise with a noise factor of 0.5 is added, and the image is passed through three convolution and three max-pooling layers; the reverse process is applied for image denoising. After training, the proposed model learns to denoise data with a high PSNR and a high compression ratio. Most of the denoised images show over 30 dB PSNR at a 4:1 compression ratio. Images can be combined to increase the sample size and the overall denoising performance.
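A minimal sketch of such a convolutional autoencoder, assuming TensorFlow/Keras, is given below; the filter counts and the 128x128 input are illustrative, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Three conv + max-pool stages compress the image; the mirrored decoder
# reconstructs it, so the network learns to undo the added noise.
def build_denoiser(shape=(128, 128, 1)):
    inp = layers.Input(shape=shape)
    x = inp
    for filters in (32, 16, 8):                        # encoder
        x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D(2)(x)
    for filters in (8, 16, 32):                        # mirrored decoder
        x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
        x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    return models.Model(inp, out)

model = build_denoiser()
model.compile(optimizer="adam", loss="mse")
# Training pairs: (clean + 0.5 * gaussian_noise) as input, clean as target.
model.summary()
```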
Internet of Things based Portable Preterm Baby Incubator
ABSTRACT. In recent years, due to advancements in technology, the medical industry has been stepping towards greater heights. However, many premature babies have lost their lives due to the lack of proper monitoring of incubation, which may lead to accidents; it is therefore essential to ensure the safety of the infant. In this work, a cost-effective embedded device is designed and developed to monitor various physiological and physical parameters, such as the pulse rate of the baby, humidity, essential gases, and temperature, together with a fingerprint sensor. The acquired data from all the sensors are transmitted using Bluetooth technology and displayed in the Android Blynk application, which enhances the ease of monitoring the sensor parameters. This work appears to be of high clinical significance, since baby incubators play a vital role in saving the lives of premature babies.
COPULA BASED MODEL FOR TEMPERATURE AND PRECIPITATION DATA – KARIMNAGAR
ABSTRACT. The present study deals with the analysis of two interdependent variables: temperature and precipitation (rainfall). When the components have different marginal distributions and the variables are interdependent, the joint distribution can be obtained using a technique called copula analysis. There are approximately 226 copula families altogether. Out of these families, we can find the best-fitted distribution along with its parameter estimates. The selection of the best copula distribution is based on the AIC and BIC criteria; the distribution with the least AIC value is identified as the best. The research utilized historical monthly average precipitation and monthly average temperature data spanning 102 years for the Karimnagar district, Telangana state. The outcome of the research implies that temperature and precipitation are negatively associated. Utilizing the inferred bivariate copula distribution, we can find estimated temperature values for the observed time points, and the same distribution can also be used for forecasting future temperatures. It has been observed that the fitted copula distribution yields good estimates and accurate forecasts. The results and analysis are presented in this paper.
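The AIC-based selection step can be sketched as follows; the candidate set here contains marginal distributions from SciPy (the same criterion ranks candidate copula families), and the synthetic rainfall series is illustrative.

```python
import numpy as np
from scipy import stats

# Fit candidate distributions by maximum likelihood and keep the one with
# the least AIC, where AIC = 2k - 2 ln L.
def aic(dist, data):
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

# ~102 years of synthetic monthly rainfall (12 * 102 = 1224 observations).
rainfall = stats.gamma.rvs(a=2.0, scale=30.0, size=1224, random_state=0)
candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "norm": stats.norm}
scores = {name: aic(d, rainfall) for name, d in candidates.items()}
print(min(scores, key=scores.get), scores)   # gamma should win here
```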
An Interactive Application for Detection and Treatment of Phobias Using Virtual Reality and Machine Learning
ABSTRACT. A phobia is a fear that occurs in humans when they are subjected to certain objects, situations, etc. in their daily routine. It causes anxiety and an increased heart rate for a couple of minutes until the person confirms there is no danger. Phobias are very common among people; some know their fears, while others choose to refrain from discovering their phobias. A phobia can become critical when a person continuously undergoes a particular situation that leads to anxiety and finally causes depression. To help people overcome their fears, we propose a solution that could help them diagnose their specific phobias and treat them accordingly. The novelty of this solution lies in the PhobiCure mobile application, which can accurately detect the specific phobias of a particular person. We use a VR system with inbuilt eye-tracking cameras to detect the eye movements of a person encountering fear, while the mouth expressions are simultaneously captured through the front camera of the mobile; both eye movements and mouth expressions are fed as input to the CNN model, which obtained an accuracy of 76.2%. The increase in heart rate is also monitored, which helps determine the level of fear while images are displayed in the VR headset. After a phobia is confirmed by determining the fear level using a 4-choice scale, treatment is suggested to the user using Virtual Reality Exposure Therapy (VRET).
A Comprehensive Survey on the AI based Fully Automated Online Proctoring Systems to detect Anomalous Behavior of the Examinee
ABSTRACT. In the last decade, the education system has shifted from the traditional classroom to online or blended modes. Conducting examinations in online mode has posed a challenge to the education system. Considering this challenge and the ongoing research on automated online proctoring for the education system, this paper presents a comprehensive survey of the published literature and suggests future directions. This review aims to summarize and analyze AI-based algorithms for examinee detection, examinee recognition, examinee head pose estimation, eye-gaze estimation, examinee spoofing, and gadget detection. Furthermore, an extensive appendix is presented.
Automatic detection of sub-surface weld defects using machine learning approach
ABSTRACT. In this paper, automatic detection of sub-surface weld defects is performed using CNN, CNN plus SVM, CNN plus Random Forest, and CNN plus LightGBM machine learning algorithms. High-level features are automatically extracted by a Convolutional Neural Network (CNN). A radiographic test is performed on the weldments to acquire images of sub-surface defects, and synthetic images are generated using data augmentation techniques. The SVM, LightGBM, and Random Forest classifiers are trained on the features extracted by the CNN to classify acceptable weld beads and sub-surface weld defects such as slag inclusion and incomplete penetration. CNN plus Random Forest as well as CNN plus LightGBM exhibited an accuracy of 87%, the CNN plus SVM model exhibited an accuracy of 81%, and the CNN alone manifested 92% accuracy.
A DEEP LEARNING APPROACH FOR LUNG SEGMENTATION AND PNEUMONIA DETECTION FROM X-RAYS
ABSTRACT. Lung segmentation is a process supporting the detection and identification of lung cancer and pneumonia with the help of image processing techniques. Deep learning algorithms can be incorporated to build computer-aided diagnosis (CAD) systems for detecting or recognizing broad classes of conditions such as acute respiratory distress syndrome (ARDS), tuberculosis, pneumonia, lung cancer, Covid, and several other respiratory diseases. This paper presents pneumonia detection from lung segmentation using deep learning methods on chest radiographs. The chest X-ray is the most useful among the existing techniques due to its lower cost; its main drawback is that it cannot detect all problems in the chest. Thus, convolutional neural networks (CNNs) are implemented to perform lung segmentation and obtain correct results. The "lost" regions of the lungs are reconstructed by an automatic segmentation method from raw chest X-ray images.
Allergen-Free Food Dataset and Comparative Analysis of Recommendation Algorithms
ABSTRACT. Food sensitivities brought about by allergens, for example gluten, shellfish, nuts, soy, and dairy, are growing at an outstanding rate, affecting at least 3% of the world's population at any given time. In developed nations, allergen-free food is widely available because of better clinical and patient awareness, significant access to substitutes, and the financial stability to keep an allergen-free diet. This is not the situation in the rest of the world, especially in countries like India, where unavailability is a huge issue. This paper proposes a dataset of allergen-free foods and a comparative analysis of algorithms such as Normalized Levenshtein, Damerau-Levenshtein, Cosine Similarity, Jaro-Winkler, and Metric LCS in the context of recommending hypoallergenic variants of products from our dataset to customers.
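One of the compared metrics, normalized Levenshtein distance, can be sketched directly as a recommendation matcher; the catalogue entries and query below are illustrative.

```python
# Normalized Levenshtein: edit distance divided by the longer string length,
# used to match a queried product against allergen-free substitutes.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalized_levenshtein(a, b):
    return levenshtein(a, b) / max(len(a), len(b), 1)

catalog = ["gluten-free oat cookies", "soy-free tamari", "dairy-free ice cream"]
query = "glutenfree oat cookie"
best = min(catalog, key=lambda item: normalized_levenshtein(query, item))
print(best)   # recommends the closest allergen-free variant
```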
Comparative Evaluation of Machine Learning Development Lifecycle Tools
ABSTRACT. The ML development lifecycle is the machine learning equivalent of the SDLC. While the ML code is at the core of a real-world ML production system, it frequently represents only 5% or less of the system's entire code. This study examines and contrasts the technologies utilised in the machine learning development lifecycle and focuses largely on the distinction between ML programming and ML development. According to Forrester Research, AI adoption is ramping up: 63% of business technology decision makers are implementing, have implemented, or are expanding their use of AI. The main motivation behind this research is that the majority of organizations do not have ML/AI solutions that have gone beyond the PoC/PoV stage, ML code in Jupyter notebooks cannot be distributed, and AI/ML solution deployment at scale remains a challenge.
Machine learning services are evolving at a dizzying rate, opening up a variety of opportunities for on-field applications, especially for brands and businesses with the infrastructure and resources required to integrate ML into their operational structures as a decision-making fulcrum. Nearly 65% of stock market swings may be predicted by Azure Machine Learning. By incorporating ML into its operational framework, Amazon has successfully decreased its "click-to-ship" time by 225%. Breast cancer can be identified with 89% accuracy using Google's deep learning. Thus, we have compared these commercial tools for ML lifecycle support and arrived at a conclusion about which tool is most suitable.
Prediction of Heart Disease Using Machine Learning
ABSTRACT. Heart disease is defined as any interruption of the usual activity of the heart. Heart disease is estimated to kill millions of people every year, accounting for 32% of all deaths worldwide. To lower the death rate, it is important to detect heart diseases in their beginning phases so that the patient can start treatment as soon as possible. Machine learning is an effective approach for detecting heart disease. The primary goal of this paper is to attempt to predict heart disease at an early stage by analyzing different attributes, in order to avoid disastrous consequences. Four machine learning models comprise the entire system: a Logistic Regression model, a Decision Tree model, a Random Forest model, and an XGBoost classifier model. The system uses a dataset of 1025 UCI records collected from Kaggle, consisting of 13 attributes in total. The dataset is evaluated and used in a variety of machine learning models; it is divided into two segments for training and testing respectively. The models are trained on the training dataset and then evaluated on the test dataset, yielding different accuracies for different models. Comparing the results, the Random Forest and XGBoost models show 92.20% and 95.61% accuracy respectively, which is better than the Logistic Regression and Decision Tree models at 87.32% and 86.83% accuracy for predicting heart disease. As the Random Forest model shows good accuracy, a mobile application was developed based on this model. Thus, the study will benefit individuals by predicting whether a patient has heart problems or not.
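The training and evaluation loop can be sketched with scikit-learn as follows; the synthetic 1025x13 data stands in for the Kaggle/UCI dataset, so the printed accuracies are not the paper's results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Stand-in for the 1025-record, 13-attribute heart-disease dataset.
X, y = make_classification(n_samples=1025, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(),
              RandomForestClassifier(n_estimators=200)):
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(type(model).__name__, f"accuracy = {acc:.4f}")
# XGBClassifier from the xgboost package drops into the same loop.
```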