Saptarsi Goswami (AKCSIT, CTS, India)
11:40 | A multifactor authentication framework for usability in education sectors in Uganda PRESENTER: Hilda Mpirirwe ABSTRACT. Reducing academic dishonesty in e-assessment has been identified as a necessity for improved security, which can be achieved by implementing multifactor authentication. In e-assessment, learner authentication is a challenge that has grown significantly in importance. To counteract academic dishonesty tactics such as impersonation, the Multifactor Authentication Framework is used to authenticate a user and track their presence during the assessment. The authentication factors coupled together to increase security in e-assessment were one biometric factor (facial recognition) and two knowledge-based factors (username and password, and security questions generated from the user profile). The study used hybrid (qualitative and quantitative) data collection and analysis techniques. The design science methodology was chosen because it entails a systematic design, development, and evaluation process, and it guided the creation of the Multifactor Authentication Framework. The framework's effectiveness at preventing identity fraud and ensuring the security of e-assessments was then evaluated in a simulated environment. A multifactor authentication framework is useful for improving security in e-assessment where single-factor authentication cannot satisfy the required security levels. Results show that usernames and passwords rated highly for short login time (79.6%), ease of use (73.4%), and working properly in authentication (72.1%), while a majority (87%) of respondents agreed that biometrics such as fingerprints were secure, efficient, and effective for the login process. 
By increasing the security factors of biometrics in e-assessment, the proposed multifactor authentication framework (E-MuAF) continually monitors learners as they are being assessed and ensures there is no cheating, such as impersonation. This paper proposes a multifactor authentication framework to upgrade from single-factor to three-factor authentication systems, strengthening authentication security in e-assessment where necessary. |
11:55 | Facial features extraction and classification: a machine learning approach PRESENTER: Shashank Pratap ABSTRACT. This research-based project introduces a methodology for accurate and real-time facial recognition and feature detection. Leveraging machine learning algorithms and Haar cascades, the system achieves high accuracy in classifying faces using a trained Support Vector Classification (SVC) model. Real-time detection of facial features, including eyes and smiles, is accomplished through the implementation of Haar cascades. The proposed methodology is evaluated using the Labeled Faces in the Wild (LFW) dataset, demonstrating its effectiveness in various applications such as security systems, user interfaces, and human-computer interaction. Despite challenges related to privacy, biases, and environmental factors, the project offers avenues for future research and improvements. Responsible development and ethical deployment of facial recognition technology are emphasized to ensure its positive impact on society. |
12:10 | Towards Space Efficient Semantic Querying with Graph Databases PRESENTER: Sumukh Sirmokadam ABSTRACT. Manually surveying vast databases consumes a lot of precious time and human resources. The aim of this proposed solution is to identify potential business opportunities for a client, matched against their products and features stored in a knowledge graph, by scraping text from target websites provided by the client. We have considered a chemical entities data source for demonstration. Automating this process of identifying business opportunities can significantly reduce the resources used. This organization of rich data in a graph format enables efficient querying of compound data based on specific properties or material classes. As an essential contribution, this proposed solution gives a graph format of databases which helps carry out semantic search operations that give useful results from the available input corpus. |
12:25 | Crop recommendation and irrigation system using machine learning with integrated IoT devices PRESENTER: Mohammad Umair Khan ABSTRACT. In agriculture, timely and efficient irrigation is crucial to achieving maximum crop yield. However, traditional irrigation methods are often inefficient and wasteful, leading to water scarcity and reduced crop productivity. To solve these problems, we propose an intelligent irrigation system that combines IoT devices and machine learning techniques to suggest the best crop for a particular area and deliver optimal irrigation based on real-time weather data and soil moisture levels. The system collects data on soil nutrients, temperature, and humidity using IoT sensors and uses XGBoost, a popular machine learning algorithm, to recommend the most profitable crop based on historical data. The system additionally incorporates real-time weather data from APIs and water level sensors to provide customized irrigation for each crop. Our system aims to improve crop productivity, minimize water waste, and allow farmers to make data-driven decisions to maximize profits. |
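The real-time irrigation decision described in the abstract above can be sketched as a simple rule combining sensor and forecast inputs. This is an illustrative sketch only: the function name, thresholds, and units are assumptions rather than values from the paper, and the XGBoost crop-recommendation step is omitted.

```python
def irrigation_minutes(soil_moisture_pct, rain_forecast_mm, crop_need_pct):
    """Decide how long to irrigate (in minutes) from real-time sensor and
    weather inputs. All thresholds are illustrative, not taken from the paper."""
    if rain_forecast_mm >= 5:          # meaningful rain expected: skip watering
        return 0
    deficit = crop_need_pct - soil_moisture_pct
    if deficit <= 0:                   # soil already wet enough for this crop
        return 0
    return min(60, deficit * 2)        # 2 minutes per % deficit, capped at 1 hour

# Hypothetical readings: dry soil, little rain expected, crop needs 55% moisture.
print(irrigation_minutes(soil_moisture_pct=30, rain_forecast_mm=0.5, crop_need_pct=55))
```

A production system would replace the fixed thresholds with crop-specific values learned from the historical data the abstract mentions.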
12:40 | PRESENTER: Vignesh K ABSTRACT. Agriculture plays a pivotal role in global food security and economic sustainability. To meet the increasing demand for food, it is essential to maximize agricultural productivity while conserving resources and mitigating the environmental impact. The convergence of the Internet of Things (IoT), predictive analytics, and machine learning has ushered in a transformative era known as precision agriculture. This interdisciplinary approach leverages data from IoT sensors and satellites, applies predictive analytics, and harnesses the power of machine learning algorithms to optimize farming practices. Precision agriculture offers a myriad of benefits. It enables resource optimization by providing real-time data on soil conditions, weather patterns, and crop health. This data-driven approach minimizes waste, enhances yield, and reduces the need for pesticides and fertilizers. Furthermore, machine learning models provide predictive insights, facilitating early disease detection, precise pest management, and accurate yield forecasting. These advancements empower farmers to make informed, data-driven decisions, thereby promoting more efficient and sustainable farming practices. The future of agriculture with IoT, predictive analytics, and machine learning holds great promise. Ongoing research focuses on advanced sensors, edge computing, and data integration. Additionally, ethics, data privacy, and regulatory considerations are essential areas of exploration. As technology continues to advance, it is evident that precision agriculture will play a pivotal role in addressing the food security and sustainability challenges of the 21st century. The suggested strategy is assessed using experimental data gathered from a maize farm. The findings demonstrate that the suggested method has a 95% accuracy rate for predicting crop yields. 
By decreasing the negative effects of farming techniques on the environment and boosting crop output, this research may help promote sustainable agriculture. |
Dr. Selvakumar R (Professor Higher Academic Grade , VIT ,Vellore, India)
11:40 | An optimized machine learning model for crop yield prediction by applying a weighted ensemble technique PRESENTER: Dr. Shivani S. Kale ABSTRACT. Ensemble learning can improve machine learning models' prediction results by integrating the results of multiple models. Individual machine learning models are combined into a single prediction model, which reduces variance and bias through collective learning and improves predictability. This paper discusses how to calculate the weight to be used in the ensemble model by solving an objective function for the optimum weight of each model, and proposes an optimized machine learning model applying this weighted ensemble technique. A comparison with the average ensemble model shows that the proposed model achieves 6% higher prediction accuracy and 6% lower error. |
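The weight-calculation step described above (solving an objective function for the optimum weight of each model) can be illustrated for two base models with a small grid search over the weight simplex. All names and data here are hypothetical; the paper's actual objective function and solver are not specified in the abstract.

```python
def weighted_mse(preds, weights, truth):
    """Mean squared error of the weighted combination of model predictions."""
    n = len(truth)
    return sum(
        (sum(w * p[i] for w, p in zip(weights, preds)) - truth[i]) ** 2
        for i in range(n)
    ) / n

def best_weights(preds, truth, grid=20):
    """Grid-search the two-model weight simplex (w, 1 - w) in steps of 1/grid,
    returning the weights that minimise MSE; a stand-in for the paper's
    objective-function solution."""
    return min(((k / grid, 1 - k / grid) for k in range(grid + 1)),
               key=lambda ws: weighted_mse(preds, ws, truth))

# Hypothetical yield predictions from two base models against observed yields.
model_a = [3.1, 2.8, 4.0, 3.5]
model_b = [2.5, 2.4, 3.2, 3.0]
truth   = [3.0, 2.7, 3.9, 3.4]
w = best_weights([model_a, model_b], truth)
print(w, round(weighted_mse([model_a, model_b], w, truth), 4))
```

With more base models or a finer grid, the same objective is usually minimised in closed form or with a constrained optimiser instead of a grid.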
11:55 | Predicting chronic kidney disease progression using classification and ensemble learning PRESENTER: Valli Mayil Velayutham ABSTRACT. Chronic kidney disease (CKD) is a severe health problem that affects millions of individuals worldwide. Accurate CKD progression prediction is essential for disease management and early intervention. Machine learning (ML) has shown great potential in predicting the course of CKD by employing a variety of medical, laboratory, and demographic factors, and numerous studies have demonstrated the effectiveness of ML algorithms in predicting CKD patterns. In this research, we propose an ensemble learning model with a high level of prediction accuracy and contrast its results with well-known machine learning models such as neural networks, decision trees, random forests, and support vector machines. Our model is trained on a large dataset of CKD patients with progression outcomes that are either positive or negative, and it is tested on the Chronic Kidney Disease dataset from Kaggle, a platform for data science competitions and challenges. Standard metrics including accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC) are used to assess the performance of the model. In comparison to conventional methods, our experimental results demonstrate 100% accuracy in pattern prediction. |
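A minimal sketch of the ensemble idea above, assuming a simple per-sample majority-vote combination (the abstract does not state which ensemble scheme is actually used); the base-model predictions below are hypothetical labels, not outputs from the Kaggle dataset.

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine class labels from several classifiers by per-sample majority
    vote, the simplest form of classifier ensembling."""
    combined = []
    for sample_preds in zip(*predictions_per_model):
        combined.append(Counter(sample_preds).most_common(1)[0][0])
    return combined

# Hypothetical CKD / not-CKD labels from three base models on four patients.
svm_p = ["ckd", "notckd", "ckd", "notckd"]
rf_p  = ["ckd", "ckd",    "ckd", "notckd"]
dt_p  = ["notckd", "notckd", "ckd", "ckd"]
print(majority_vote([svm_p, rf_p, dt_p]))
```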
12:10 | From pixels to insight: enhancing metallic component defect detection with GLCM features and AI explainability PRESENTER: Amoga Varsha ABSTRACT. Steel surface defect detection is a practically important problem in manufacturing industries. Recently, several efforts have been made to synergistically combine computer vision with artificial intelligence to achieve this goal. In this work we have employed an automated detection methodology capable of handling large volumes of image data. We have used the NEU database, which comprises 1440 grayscale images covering six defect classes. From these images we have extracted texture features using Gray-Level Co-occurrence Matrix (GLCM) analysis. GLCM enables quantification of image intensities by exploring spatial relationships between pixels in close proximity. The extracted features were used as input to a Random Forest classifier with the aim of building a robust classification model. With these six informative attributes we obtained an overall test accuracy of 89%. As the overall working principle of a random forest is complex to understand, we employed SHAP plots to unbox the model and explain its outputs with easily interpretable visual illustrations. |
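The GLCM feature-extraction step described above can be sketched in a few lines for a single pixel offset; the tiny patch, the horizontal-neighbour offset, and the choice of contrast and energy as the two features are illustrative, not the paper's exact configuration.

```python
def glcm(image, levels):
    """Gray-Level Co-occurrence Matrix for the horizontal (right) neighbour
    offset: counts how often level i appears immediately left of level j."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast_energy(m):
    """Two classic texture features computed from a normalised GLCM."""
    n = len(m)
    total = sum(sum(row) for row in m)
    contrast = sum((i - j) ** 2 * m[i][j] for i in range(n) for j in range(n)) / total
    energy = sum((m[i][j] / total) ** 2 for i in range(n) for j in range(n))
    return contrast, energy

# Tiny 4-level grayscale patch standing in for a NEU defect image.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
print(contrast_energy(glcm(patch, 4)))
```

In practice such features are computed for several offsets and angles and concatenated before being fed to the Random Forest.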
12:25 | A comprehensive literature review on emerging potentials of machine learning algorithms on geospatial platform for medicinal plant cultivation management in existing scenario. PRESENTER: Dr. Pradeep Ambavane ABSTRACT. Medicinal plants have been an essential part of the traditional medicine systems of India over the centuries. Medicinal plants are used in different industries such as health care, pharmaceuticals, food, cosmetics and many more. The world trade in botanicals is US $32.702 billion; Asian botanical trade accounts for US $14.505 billion and 6.634 million tonnes, or 44.35% and 53.13% of world trade in terms of value and volume, respectively. India is the second largest exporter of medicinal plants with an 8.75% share in Asian trade. With the growing demand for natural remedies and the need for sustainable agricultural practices, the cultivation of medicinal plants has gained significant attention. To ensure optimal plant growth and bioactive compound production, effective management of medicinal plant cultivation is essential. In spite of the huge demand for medicinal plants in the global market, the sector faces several issues such as process standardization, lack of adoption of new-age technology, minimum support price, inconsistent supply of medicinal plants, lack of market linkages, underdeveloped cultivation technology, poor awareness of species conservation and many more. The machine learning approach uses techniques such as supervised learning, unsupervised learning, and reinforcement learning. It has shown great potential in various fields, including agriculture, and can analyze complex data from various sources such as soil composition, climate data, and plant health indicators. 
The machine learning approach has various advantages such as predictive analysis, recommendation engines, data-driven decision making, precision farming, optimal resource management, crop monitoring, yield prediction, cultivar selection, effective harvest planning, and knowledge sharing. Integrating data-driven decision-making with traditional agricultural practices paves the way for a more sustainable and efficient approach to meet the growing demand for medicinal plants while preserving biodiversity and ecosystem health. The study demonstrates the potential of machine learning techniques on geospatial platforms for sustainable medicinal plant cultivation management. Along with this, the research paper tries to recommend a suitable machine-learning framework that can be effectively used during the medicinal plant cultivation processes. |
12:40 | Predicting mental health disorders in the technical workplace: a study on feature selection and classification algorithms PRESENTER: Sumitra Mallick ABSTRACT. Mental disorders are increasingly common among technical employees, posing significant challenges in the workplace due to high levels of stress. Recognizing and accurately predicting diverse situations is crucial for promoting mental health in healthcare settings. This study aims to predict mental health disorders using feature selection algorithms on the Tech Survey dataset, which consists of 61 features related to mental health attributes and their frequency in the global technical workplace. Multiple machine learning classification algorithms were applied to the best features selected by RFECV, LASSO, and RFE. Performance metrics such as precision, accuracy, and recall were used to determine the optimal models. The results, discussed in an aggregated table, reveal the percentage of technical employees experiencing mental disorders. Our method's classification accuracy proved much higher than that of many competing feature selection strategies. The proposed research achieved a 79% accuracy rate by evaluating various classification algorithms alongside feature selection methods. |
11:40 | PRESENTER: Rishabh Patil ABSTRACT. In today's society, drowsiness and fatigue have become prominent factors contributing to road accidents. These risks can be effectively mitigated by ensuring sufficient sleep, consuming caffeine, or taking breaks when signs of drowsiness manifest. Currently, complex methods such as EEG, ECG, steering wheel angle, and steering wheel pressure sensors are commonly employed to detect drowsiness. Despite their high accuracy, these methods rely on contact-based measurements and have limitations in monitoring driver fatigue and drowsiness in real-time driving scenarios. Consequently, they are not ideal for immediate use while driving. This research introduces an alternative approach that utilizes the rate of eye closure and the occurrence of yawning as indicators of drowsiness in drivers. The paper outlines a methodology for identifying the eyes and mouth in videos or images, extracting relevant features from the visual input, and determining whether the driver is drowsy or alert. The proposed system focuses on the facial region captured in the video or image, specifically targeting the eyes and mouth. By identifying the face, the eyes and mouth can be detected, facilitating eye and mouth state assessment as well as yawn detection. The parameters for eye and mouth detection are derived from the facial image itself. The video is transformed into individual frames, enabling the localization of the eyes and mouth within each frame. Once the eyes are located, features from the eye area and the overall face region are extracted to determine if the eyes are open or closed, while also extracting a yawn score. If the eyes are identified as closed for a certain duration, such as four consecutive frames, it confirms that the driver is in a drowsy state. |
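The final decision rule in the abstract above (eyes identified as closed for a certain number of consecutive frames, such as four, confirms drowsiness) can be sketched as a per-frame counter. The boolean eye-state inputs stand in for the output of the Haar-cascade eye-detection stage, which is not reproduced here.

```python
def detect_drowsiness(eye_closed_frames, threshold=4):
    """Return True once the eyes stay closed for `threshold` consecutive
    frames. `eye_closed_frames` is a per-frame sequence of booleans
    (True = eyes closed), standing in for the eye-state classifier output."""
    consecutive = 0
    for closed in eye_closed_frames:
        consecutive = consecutive + 1 if closed else 0
        if consecutive >= threshold:
            return True
    return False

# Short blinks should not trigger the alarm; sustained closure should.
blink = [True, True, False, True, False, True, True, False]
doze  = [False, True, True, True, True, True]
print(detect_drowsiness(blink), detect_drowsiness(doze))
```

A full implementation would run this counter alongside a yawn-score accumulator over the localized mouth region, as the abstract describes.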
11:55 | Forecast of energy demand using temporal fusion transformer PRESENTER: Chandreyi Chowdhury ABSTRACT. This research paper presents an application of the Temporal Fusion Transformer (TFT) model for time series forecasting using deep learning techniques. The novelty of the TFT model lies in combining the strengths of recurrent and convolutional neural networks, ultimately resulting in improved forecasting accuracy. The dataset used in this study consists of quarter-hourly power consumption data for 370 consumers over a four-year period. The results show that the TFT model outperformed traditional time series models, achieving a lower Mean Absolute Error (MAE) and a lower Root Mean Squared Error (RMSE) in predicting future energy consumption. The paper indicates that the TFT model can be an effective tool for accurate and reliable time series forecasting in various industries, including energy and finance. |
12:10 | An enhanced deep learning method to generate synthetic images with features that are comparable to original images using neural style transfer PRESENTER: Logash Kumar ABSTRACT. Image-to-sequence tasks have evolved greatly as a result of deep learning architectures achieving SOTA results on open-source datasets. From an industrial perspective, using these pre-trained models to attain such results mostly falls short. The main reasons are the data-hungry architectures and the limited amount of data available for retraining these models. In this study, we discuss a deep learning method that generates synthetic data with properties similar to the real data in order to improve image-to-text tasks. We provide a novel deep learning approach with two distinguishing steps: 1) take an image from an OCR-based source dataset and extract features to create a new image with no text in it; 2) use this newly formed collection of synthetic images, which is very similar to the original images, for fine-tuning. |
12:25 | A framework for analyzing legal documents by leveraging knowledge graphs PRESENTER: Sumukh Sirmokadam ABSTRACT. It is increasingly difficult for legal practitioners to effectively extract useful information and insights from the vast number of legal papers. Traditional manual document analysis methods are time-consuming, error-prone, and unable to uncover underlying relationships and patterns in legal texts. We propose a novel approach that combines intelligent document analysis techniques with knowledge graphs (KG) in order to enhance the efficacy and efficiency of processing legal documents. In this study, we present a comprehensive framework for natural language processing (NLP)-based automated extraction and structuring of legal information from a large corpus of documents. A knowledge graph that serves as a rich semantic representation of legal ideas and their interactions is incorporated into our method along with these strategies. By embedding legal knowledge into the KG, we offer enhanced reasoning and inference capabilities that find latent linkages and deliver deeper insights into legal texts. According to the early findings, activities related to document comprehension, such as entity extraction, relationship extraction, and legal concept identification, are much more accurate and efficient when intelligent analytic approaches are combined with KG-based representation. Advanced functions like automatic legal document summarization, precedent recognition, and legal case similarity analysis are also made possible by the KG. By reducing the time and effort required to analyze documents while increasing the quality of the information obtained, the proposed framework has the potential to fundamentally alter how legal document analysis is done. 
The combination of intelligent procedures with KG-based representation equips legal practitioners with useful insights, speeds up the decision-making process, and makes it possible for them to better negotiate complicated legal environments. This work opens new possibilities for AI in the legal sector, paving the path for future advancements in intelligent legal document analysis. |
12:40 | VGGish deep learning model: audio feature extraction and analysis PRESENTER: Mandar Diwakar ABSTRACT. The use of speech recognition has become increasingly popular in recent years due to its many applications in the realm of human-computer interaction. One of the key challenges in speech recognition systems is the efficient extraction of discriminative features from raw audio signals, which directly impacts their performance. This study recommends a distinct method to enhance the precision and resilience of speech recognition systems in which audio feature extraction is performed by the VGGish model. This model, originally designed for visual recognition tasks, has demonstrated remarkable capabilities in extracting hierarchical and representative features from audio signals. By leveraging the pre-trained VGGish model, we aim to exploit its ability to identify high-level acoustic patterns and transform audio inputs into a more informative feature representation. The proposed method involves two main stages: feature extraction and speech recognition. The feature extraction stage uses the VGGish model to process the raw audio signals and create a feature representation that is both concise and comprehensive. The resulting feature vectors capture various acoustic attributes such as pitch, tempo, and spectral content. These features are subsequently fed into the speech recognition mechanism. To determine the effectiveness of our methodology, we conduct comprehensive examinations on speech recognition datasets that meet industry standards. Our findings suggest that the use of VGGish-based feature extraction significantly enhances the efficiency of speech recognition systems, resulting in greater accuracy and improved resilience in high-noise environments. 
Furthermore, we conduct a comparative analysis with traditional feature extraction techniques commonly used in speech recognition, such as MFCCs and Mel spectrograms. The approach based on VGGish has been found to exhibit excellent recognition accuracy and robustness against noise, as demonstrated by the experimental results. |
Dr. Balakrushna Tripathy (Professor Higher Academic Grade , VIT ,Vellore, India)
11:40 | Boosting tiny object detection in complex backgrounds through deep multi-instance learning PRESENTER: Sudipta Mukhopadhyay ABSTRACT. Our research paper presents a groundbreaking computer vision network architecture that effectively detects small objects by integrating object detection and multi-instance learning (MIL). Conventional MIL models often underperform when the assumption that negative bags contain solely negative instances and positive bags contain at least one positive instance is violated, which commonly occurs during MIL for object detection. Additionally, detecting small objects using MIL poses challenges such as false positives and imprecise categorizations due to factors like varying scale and complex backgrounds. To address these challenges, our model incorporates two Recursive Feature Pyramid Networks (RFPNs) and employs the generation of exclusive negative bags. Through extensive experimentation, we demonstrate that our model surpasses the performance of state-of-the-art models for detecting tiny objects. Moreover, our work represents the first empirical study on detecting small objects in complex backgrounds using MIL. Additionally, our solution improves the accuracy and speed of multi-stage methods by incorporating multiple architectural enhancements. |
11:55 | Blending psychological models with modern HCI techniques to develop artificial emotional intelligent “affective” systems PRESENTER: Ajay Kapase ABSTRACT. Affective computing is one of the youngest fields of interdisciplinary research, drawing on computer science, psychology, and cognitive science to recognize, interpret, process, and simulate human emotions. Given the advances in artificial intelligence, affective computing, also known as Emotion AI, has gained huge popularity over the past decade. However, the major task of affective computing systems depends on understanding the human psychology behind different emotions and utilizing it to develop systems that interact with humans and understand an individual's current emotional status and overall emotional behavior, referred to as the affective event index (AEI) and the emotional personality index (EPI), on which the affective system is based. These systems can be further enhanced to provide an affective response according to the emotional state of individuals. From understanding human psychology and decoding different human emotions to providing a proper affective response, HCI plays a very crucial role. The paper investigates various types of human emotions, the sources used to identify them, and a range of traditional to modern HCI techniques that prove helpful in developing fruitful affective systems. |
12:10 | Shift of customer from unorganized to organised sector in retail: is adoption of technology a catalyst PRESENTER: Harikrishnan R ABSTRACT. The retail industry is witnessing a huge change in its business model through the adoption of technological advancements as well as various convenience factors valued by customers. This study focuses on the shift of customers from the unorganised sector to the organised sector in the retail industry and the factors that fuel the shift. The study was conducted by distributing questionnaires in different organised and unorganised retail stores, collecting data from around 208 customers. The study employed a multinomial logistic regression model with willingness to shift from the unorganised to the organised sector as the dependent variable. We grouped different factors into three major variables, namely purchase convenience, purchase transaction, and technology factors. In the study we identify and examine the impact of different technological factors on the shift as well as the cross-interaction of technical factors with service and convenience factors. The research concludes on a positive note that the technology and transaction variables fuel the shift of customers from unorganised to organised firms in retail. |
12:25 | Offshore wind power energy develops energy storage capacity for low carbon footprint to implement green artificial intelligence and sustainability PRESENTER: Bindiya Jain ABSTRACT. World economic growth needs energy, and both energy consumption and carbon emissions directly affect sustainability. Renewable energy such as offshore wind energy can be stored in lithium-ion batteries, which energy-hungry companies use in various IoT devices. This study examines how to reduce CO2 emissions from AI and IoT devices. A model based on offshore wind energy, lithium-ion batteries, and artificial intelligence devices can calculate how much CO2 emission is generated in the environment. For this study, the researchers used linear regression, a statistical ML method, with IoT devices to verify that the calculated CO2 emission is stable, after which they analyse how offshore wind energy storage can be used for AI technology; the resulting reduction in carbon emissions is demonstrated by Green AI. This research focuses on energy consumption scenarios in which CO2 emissions remain stable or decrease. |
12:40 | PRESENTER: Mrinmayee Deshpande ABSTRACT. Mental health disorders have become a significant public health concern worldwide, necessitating accurate and timely diagnostic methods. This study aims to predict the type of mental disorder using artificial intelligence, specifically the Random Forest algorithm, which is known for its effectiveness in classification tasks. The motivation for this study is the lack of a model that can accurately predict the type of mental health disorder of a person. The main objective of mental health prediction is to predict the mental health of a patient on the basis of symptoms and diagnose the exact disorder, in order to address serious mental health issues that society ignores by treating disturbed mental health as a taboo. This paper surveys various mental health symptoms and related problems in our society that are addressed using AI technologies. To test the performance of our proposed system we used several machine learning algorithms, such as Support Vector Machines (SVMs) and Random Forest (RF). These algorithms are mainly used for diagnosing mental health disorders on the basis of the given input (i.e., a verified dataset of symptoms). The Random Forest model achieved an overall accuracy of 95% in predicting the type of mental disorder, and gains in precision, recall, and F1-score were also noted. The model is implemented as a chatbot that accurately predicts the type of a person's mental disorder, if any. Expected outcomes of this model include early detection of mental disorders, self-diagnosis facilitated through the bot, and free interaction between patients and the bot. |
11:40 | Single person occupancy detection using PIR sensors PRESENTER: Ranjit Kolkar ABSTRACT. The occupancy detection system presented in this study utilizes a combination of two PIR sensors and a microcontroller board to detect and store occupancy information in different rooms accurately. The PIR sensors detect motion within their field of view while the microcontroller processes the sensor inputs and controls the storage of occupancy data in a memory device. The circuit provides real-time occupancy status updates and allows for data retrieval for further analysis. The setup offers significant advantages such as energy efficiency, simplicity, and cost-effectiveness. The experimental results demonstrate the system's effectiveness in accurately detecting and storing occupancy information, and show how much time the elderly spend in various rooms. The combined circuit has potential applications in various domains, including smart homes, energy management, and security systems, where knowledge of room occupancy patterns is crucial for optimizing resources and enhancing user experiences. |
11:55 | PRESENTER: Omprakash Kambli ABSTRACT. The evaluation of research impact is a crucial aspect of academic research. In recent years, the h-index has become a widely used metric for assessing research productivity and impact. In this paper, we explore the use of web crawling techniques and the Python programming language to collect publication data from Google Scholar and calculate the h-index of academic researchers. We demonstrate how the Scholarly package in Python can be used to retrieve publication data and perform h-index calculations, providing an efficient and objective means of evaluating research impact. Our proposed methodology combines the power of web crawling, the h-index, and Python to enable comprehensive research analysis and evaluation. This paper presents a valuable contribution to the academic research community by providing an objective and efficient method for evaluating research impact. |
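The h-index calculation at the heart of the pipeline above is simple once citation counts have been retrieved; this sketch uses hypothetical citation counts in place of the Scholarly/Google Scholar retrieval step, which is omitted.

```python
def h_index(citations):
    """Largest h such that the researcher has h papers with >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's publications.
papers = [25, 8, 5, 3, 3, 0]
print(h_index(papers))  # 3: three papers have at least 3 citations each
```

In the paper's pipeline, the `citations` list would be populated by crawling the researcher's Google Scholar profile.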
12:10 | A simplified hasse diagram for visualising large datasets PRESENTER: Santhosh Y M ABSTRACT. In this paper, an automated method to generate Hasse diagrams for large datasets is proposed. This approach uses a divisibility algorithm that takes the number of elements as an input and generates the diagram by printing the relationships between each pair of elements. This method provides a quick and efficient way to generate Hasse diagrams, especially when dealing with large datasets, reducing manual effort and saving time. Further, this method can be easily integrated into existing systems and used to visualize and analyze complex data structures and datasets. |
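A divisibility algorithm of the kind described above can be sketched as follows. This is an assumed reading (elements 1..n ordered by divisibility, emitting one edge per covering pair), not the authors' exact implementation: a pair (a, b) is an edge of the Hasse diagram precisely when b // a is prime, since then no element sits strictly between a and b in the order.

```python
def divisibility_cover_pairs(n):
    """Edges of the Hasse diagram of {1..n} under divisibility:
    (a, b) is kept only when b // a is prime, i.e. no element
    lies strictly between a and b in the divisibility order."""
    pairs = []
    for a in range(1, n + 1):
        for b in range(2 * a, n + 1, a):  # proper multiples of a
            q = b // a
            if all(q % p for p in range(2, int(q ** 0.5) + 1)):  # q prime
                pairs.append((a, b))
    return pairs

# Print the relationships between each pair of related elements.
for a, b in divisibility_cover_pairs(8):
    print(f"{a} -- {b}")
</```

Because only covering pairs are printed (for n = 8, edges like 1 -- 2 and 4 -- 8 but not the transitive 1 -- 4), the output stays compact even for large n.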
12:25 | Unbreakable passwords: fortifying cryptographic security with derangement keys PRESENTER: Shreyas R ABSTRACT. Protecting sensitive information through robust password security is of utmost importance. However, the increasing availability of pre-computed tables like rainbow tables has made traditional hashing algorithms vulnerable to easy cracking. To address this issue, this paper presents a novel approach that enhances password security by incorporating a derangement key into the hashing process. By employing this technique, a powerful one-way function is formed, significantly impeding an attacker's ability to reverse-engineer the process and retrieve the original password. Our research demonstrates that the implementation of a derangement key remarkably strengthens password security, rendering it substantially more challenging for unauthorized individuals to gain access to sensitive data stored in databases. This solution offers an effective means of elevating password security in an era where precomputed tables have rendered the cracking of traditional hashes a trivial endeavor. |
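One plausible reading of the scheme above (an assumption, since the paper's exact construction is not given here) is to permute the password with a secret derangement key before hashing, so that stored digests no longer match any precomputed table of plain hashes. A minimal sketch, with a hypothetical three-element key:

```python
import hashlib

def is_derangement(key):
    """A permutation of 0..n-1 with no fixed points."""
    return (sorted(key) == list(range(len(key)))
            and all(i != k for i, k in enumerate(key)))

def deranged_hash(password, key):
    """Permute the password block-wise with the derangement key,
    then hash the permuted string with SHA-256."""
    assert is_derangement(key)
    n = len(key)
    padded = password + "\0" * (-len(password) % n)  # pad to a multiple of n
    blocks = [padded[i:i + n] for i in range(0, len(padded), n)]
    permuted = "".join("".join(block[k] for k in key) for block in blocks)
    return hashlib.sha256(permuted.encode()).hexdigest()

# The deranged digest differs from the plain SHA-256 digest,
# so a rainbow table of plain hashes no longer applies.
plain = hashlib.sha256("secret".encode()).hexdigest()
print(deranged_hash("secret", [1, 2, 0]) != plain)  # → True
```

The function remains deterministic and one-way, so normal verification (re-hash and compare) still works; only the attacker's precomputed tables are invalidated.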
12:40 | Green banking awareness: a study on tier 3 location – with special reference to kerala PRESENTER: Anjalidevi S ABSTRACT. Customers have shown interest in green banking, which advocates for financially responsible practices that are environmentally sustainable both now and for future generations. This paper examines the awareness and adoption of various green banking products, such as green loans, green credit cards, and green savings accounts, among customers from a tier-3 location. Data from 212 respondents were analyzed to assess customers' knowledge, attitudes, and behaviors regarding green banking products. Additionally, this study examines the factors influencing their awareness and adoption, such as socio-demographic characteristics, educational background, and exposure to sustainability-related information. |
15:30 | Enhanced artificial neural networks for prostate cancer detection and classification ABSTRACT. Histopathological scans identify prostate cancer, which medical professionals say is common in men. Modest differences persist among pathologists in grading prostate cancer with the Gleason system. This work replicates automated prostate cancer segmentation and classification. The proposed method uses gland-oriented segmentation and classification. Prostate cancer is one of the major killers of men in the US, and complex masses can cause radiologists to overlook it. Recent prostate cancer detection technologies are weak. A robust deep learning ANN and transfer learning are used in this research. Decision Tree, SVM with different kernels, and Bayes classifiers are compared. GoogleNet models and machine learning classifiers employ features from a cancer MRI database, including morphological, entropy-based, textural, SIFT, and elliptic Fourier descriptors. Performance assessment uses numerous indicators: specificity, sensitivity, and accuracy. Transfer learning with ANN (GoogleNet) yielded the best results. |
15:45 | An intelligent system for prediction of lung cancer under machine learning framework PRESENTER: Antara Bhandari ABSTRACT. In recent years, there has been a significant increase in the prevalence and severity of lung cancer, rendering it a grave health concern. Lung cancer stands as the primary cause of mortality associated with cancer on a global scale. Early detection is of utmost importance as the failure to do so may lead to fatality. Lung cancer metastasis was observed in both male and female individuals. This paper presents a machine learning-based model developed for the purpose of predicting lung cancer. The primary aim of this study is to construct a predictive model that can effectively and promptly forecast the occurrence of lung cancer. The suggested endeavour has employed many machine learning algorithms, including Support Vector Machine (SVM), Logistic Regression, Decision Tree Classification, K-Nearest Neighbours (KNN), Gaussian NB, and Artificial Neural Network. The highest level of accuracy, reaching 99%, has been attained using Decision Tree and Artificial Neural Network methodologies. The originality of this study is in the application of various machine learning approaches and their comprehensive comparative analysis in the prediction of lung cancer. This research endeavour aims to enhance the efficiency of the treatment process by facilitating early and precise disease prediction. |
16:00 | Identification of misinformation using word embedding technique Word2Vec, machine learning and deep learning models PRESENTER: Arati Chabukswar ABSTRACT. Real-time news is widely disseminated through the internet on a global scale. One of the factors contributing to its success is the simple and speedy spread of news. Social networking platforms have a huge user base that includes people of all ages, genders, and social backgrounds. Alongside these positive aspects, a serious drawback is the propagation of misinformation, as most individuals read and spread information without giving any thought to its veracity. Research into techniques for verifying news authenticity is therefore essential. To address this problem, a fake news identification system is created by training on COVID-19 tweets, roughly 12427 records taken from Kaggle and GitHub repositories across five different sets, annotated manually as Fake (0) and Real (1) by cross-checking through fact-verification websites, using machine learning classifiers such as RF, SVM, LR, and NB, and the deep learning models LSTM and Bi-LSTM. The feature extraction process makes use of the Word2Vec word embedding technique. According to the findings, Bi-LSTM outperformed all the other models in terms of accuracy, scoring 87.3%. |
16:15 | Rank prediction for indian universities based on national institutional ranking framework PRESENTER: Harshali Patil ABSTRACT. India has a great legacy in its education system; the world's first residential university, Nalanda, was established in the 13th century. As of last year, 1,218 Indian universities of different types were imparting knowledge. The National Education Policy 2020 is going to create new opportunities for domestic and international students to earn degrees through India's National Digital University. Indian universities are recognized worldwide, and the value of the Indian education system has promoted India as the 7th most represented country in world university rankings. University rankings have the power to attract the best and brightest students from India as well as abroad. The National Institutional Ranking Framework (NIRF) uses quantitative methods to rank the universities. This paper focuses on university rank prediction based on the top five highly correlated dimensions of any educational institute/university. The five dimensions are Teaching, Learning, and Resources (TLR), Research and Professional Practice (RP), Graduation Outcomes (GO), Outreach and Inclusivity (OI), and Perception (PR). University ranking data from 2016-2023 is used for the prediction of university rank. Various machine learning regression algorithms and neural network algorithms are used and tuned to get an accurate prediction. The model is deployed on a Flask server, and a user-interactive interface is created for prediction. |
16:30 | Agricultural indicators as predictors of annual water quality: an analysis of interconnectedness and prediction using machine learning PRESENTER: Lukas Maier ABSTRACT. Water, the chief constituent of ecosystems, critically influences key facets of civilizations like urbanization and agriculture. Its quality, often impacted by agricultural practices such as pesticide use, plays a central role in sustainable development. Recognizing the intertwined nature of water quality and agriculture, this study’s mission is to predict a country's aggregated annual water quality using agricultural indicators. Focusing on the annual volume of pesticide sales and the agriculturally used area per country as primary indicators, the research aimed to provide a streamlined perspective on agriculture's effect on water quality. Analyzing global water quality through the Water Quality Index presented a macroscopic view, categorizing countries based on their mean scores. Through detailed data analysis, relationships between agricultural variables and water quality metrics were established. For the prediction, multiple machine learning techniques were tested and k-Nearest-Neighbor was chosen for the best performance. The predictive model was successful in estimating the Water Quality Index with commendable accuracy. |
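The prediction step chosen above can be illustrated with a minimal k-Nearest-Neighbor regressor over the two indicators the study names. The feature values and WQI scores below are hypothetical, and the study's actual preprocessing and model tuning are not reproduced here:

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Predict a Water Quality Index as the mean WQI of the k
    training countries closest to the query in indicator space."""
    ranked = sorted(
        (math.dist(x, query), y) for x, y in zip(train_X, train_y)
    )
    return sum(y for _, y in ranked[:k]) / k

# Hypothetical (pesticide sales, agricultural area) per country, with WQI.
X = [(1.0, 2.0), (1.1, 2.1), (5.0, 5.0), (5.2, 4.9)]
y = [80.0, 82.0, 40.0, 42.0]
print(knn_predict(X, y, (1.05, 2.05), k=2))  # → 81.0
```

The prediction for a new country is simply the average WQI of its nearest neighbours, which is why k-NN suits this kind of low-dimensional indicator data.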
Dr. Aswani Kumar Cherukuri (Professor Higher Academic Grade , VIT ,Vellore, India)
15:30 | Gender gaps in the context of cryptocurrency literacy: evidence from survey data in europe and asia PRESENTER: Ralf Hoechenberger ABSTRACT. Although the term cryptocurrency is widely used nowadays, it is unclear if people have basic knowledge about the underlying concepts. Literature on financial literacy posits that often such knowledge is missing, especially for women. In this study, we utilize survey data across five nationalities in Europe and Asia and analyze differences between self-assessed expertise and actual performance in various cryptocurrency knowledge items. We find that women on average report a lower self-assessed cryptocurrency literacy than men do. At the same time, we obtain the surprising result that women on average are not less literate than men, and in some literacy items, they even perform better than men. Further analysis shows that female self-assessed experts indeed perform better than female self-assessed non-experts. On the contrary, male self-assessed experts do not perform better than male self-assessed non-experts. Hence, women have a more accurate evaluation of their cryptocurrency literacy than men, who exhibit overconfidence in their abilities. We also find that both men and women generally have an upwards biased perception of their abilities, i.e. they are not aware of the fact that the absolute quality of their answers is far from what one would expect from a cryptocurrency expert. |
15:45 | Fruit and vegetable segmentation with decision trees ABSTRACT. The objective of our study is to develop a method to quickly and efficiently identify different types of fruits and vegetables travelling on a conveyor. Twenty-four different categories of fruits and vegetables, with 80 images per category, are used for training. Segmentation is first performed on the images with two segmentation masks, which are then downsampled using max-pooling to 25% of the original size. The masked images before downsampling are used with local binary patterns and HOG methodologies for feature extraction to capture textures and shapes. PCA was performed on the downsampled images along with the extracted features to reduce the number of principal components, but was discarded due to too great a loss in accuracy. Finally, these features are fed into the classifier to identify the category of fruit or vegetable. Classification is performed using bagged decision trees. The results indicate the high precision of our proposal, along with a faster runtime than Inception. |
16:00 | Analyzing UNO statistics on land use of agricultural practices by using k-means clustering and SARIMA: irrigated, organic, and overall agricultural activities on a global scale PRESENTER: Moritz Wüst ABSTRACT. Ongoing and self-enhancing climate change, population growth and urbanization put a lot of pressure on world-wide agriculture. To ensure food production, agriculture must become either more efficient or grow in size. The United Nations offer a broad range of statistics on global land usage, irrigated land, and certified organic land. This paper aims to analyze the development of overall land usage and the shifting interests of modern-day agriculture by using the Silhouette Score, k-Means clustering and SARIMA. It looks at ongoing trends concerning organic food production as well as the growing amount of irrigated land for growing crops. An outlook on worldwide, French, and Indian land use is produced with the SARIMA algorithm. As results, we identified clear clusters of countries that irrigate their farmland and saw a rise in organic farmland all around the globe. Additionally, a clear loss of land for all agricultural purposes was detected in most countries in the dataset. Especially in highly developed countries, such as those in Western Europe, agricultural land surface has been declining massively for decades. This trend will most likely continue in the future, as we discovered by using the predictive SARIMA algorithm. |
16:15 | Handling missing data in longitudinal anthropometric data using multiple imputation method PRESENTER: Dhruv Varma ABSTRACT. Diabetes mellitus, a prevalent and ever-growing metabolic syndrome, has grown to be a widespread global health challenge. Given its tremendous occurrence, complexity, and the continuously rising healthcare costs related to it, there is an urgent need for research to advance our knowledge and treatment of this condition. This paper focuses on addressing missing data in a longitudinal clinical study centred around diabetes. The study, published in October 2003, aimed to examine 14 different strategies for imputing missing data within a long-term study of older adults. Missing data encompassed an extensive range of variables, including factors like depression, weight, cognitive functioning, and self-rated fitness, especially relevant to older adults. To address the problem of missing data, we carried out a thorough examination of well-established imputation strategies: K-Nearest Neighbors (KNN) and Multiple Imputation by Chained Equations (MICE). The MICE technique iteratively imputes missing information while respecting temporal dependencies, resulting in the formation of multiple imputed datasets. Our study found that the MICE imputation method outperformed the KNN approach in terms of maintaining the mean and standard deviation. Also, rigorous statistical evaluation confirmed the MICE approach's remarkable ability to preserve the nuanced temporal characteristics of the data. In conclusion, this study underscores the paramount significance of preserving temporal consistency in longitudinal research, specifically when coping with diabetes-related statistics. |
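The regression step at the core of MICE can be sketched in a deterministic, single-variable simplification: fill each missing value of one variable from a regression on an observed variable. Real MICE cycles this over several incomplete variables and adds noise to preserve variability; the weight values below are hypothetical.

```python
def regression_impute(a, b):
    """One MICE-style step: fill missing entries of b (None) using a
    linear regression of b on a, fitted over the fully observed rows."""
    obs = [(x, y) for x, y in zip(a, b) if y is not None]
    n = len(obs)
    # Ordinary least squares via the normal equations.
    sx = sum(x for x, _ in obs)
    sy = sum(y for _, y in obs)
    sxx = sum(x * x for x, _ in obs)
    sxy = sum(x * y for x, y in obs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return [y if y is not None else slope * x + intercept
            for x, y in zip(a, b)]

# Hypothetical baseline measurements (a) with one missing follow-up in b.
a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 4.0, None, 8.0]
print(regression_impute(a, b))  # → [2.0, 4.0, 6.0, 8.0]
```

Chaining such regressions across all incomplete variables, and repeating until the imputations stabilize, is what gives MICE its ability to respect dependencies between variables that simple mean imputation ignores.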
16:30 | Enhancing the detection of fake news in social media: A comparison of support vector machine algorithms, hugging face transformers, and passive aggressive classifier PRESENTER: Shamik Misra ABSTRACT. In this research, we compare and contrast many different AI-algorithms designed to improve the ability to spot false information on social media. In particular, it assesses the efficacy of Passive Aggressive Classifier [7], Hugging Face [5], and Support Vector Machine Algorithms [1][2][3][4][8]. Amid the increasing menace of misinformation on social media, the need for effective fake news detection mechanisms cannot be overstated. The study begins with an overview of the algorithms under review, followed by an explanation of their application in fake news detection. This analysis then moves into a comparison mode, assessing each method according to several criteria including computational complexity, accuracy, precision, and recall. The research goes further into the pros and cons of each model, illuminating how well they perform with various sets of data and varieties of disinformation. In order to create reliable and accurate false news detection systems, it is important to determine which algorithms are the most successful. The results of this comparison not only add to the body of knowledge on disinformation identification, but they also provide concrete strategies for bolstering the trustworthiness of content shared on social media. |
16:45 | Multi script handwriting recognition using RNN-transformer architecture PRESENTER: Murali Krishna G ABSTRACT. Multiscript handwriting recognition is used to recognize handwritten text from multiple scripts. It is a challenging task because each script has its own unique characteristics, such as writing direction, character shapes, and ligatures. The system should therefore be able to recognize text in a variety of handwriting styles, encompassing different writing speeds, pen pressures, and slants. The task is difficult because different people write in many different ways, and even the same person's handwriting can vary depending on the situation. The system should be able to recognize text quickly and accurately, even when the amount of data is large. This is important because multiscript handwriting recognition systems are used to process large datasets and to digitize historical documents or medical records. |
K. Saravanan (College of Engineering, Guindy, Anna University, Chennai, India)
15:30 | PRESENTER: Rucha Shriram ABSTRACT. A plant nursery is a place where plants are propagated, fostered, cultivated, and sold for use in a personal garden or for commercial purposes. Improved quality seedlings are grown under favorable conditions until they are ready for planting on a small or large scale. Whenever people want to buy plants, they visit the plant nursery. The nurseries must be able to assist with all types of enquiries regarding plants and purchases. Sometimes people do not have certain information about plants, and it may happen that the seller also cannot give exact information regarding plants. Similarly, it is quite difficult to detect plant diseases with human eyes. There is a need to develop an automated system to detect plant diseases. In such a scenario, developing a mobile application can help customers identify a plant, get information about plants, buy plants and detect plant diseases using plant images. Therefore, the system takes an image of a plant leaf as input, processes it and uses a deep convolutional neural network to identify the plant and detect disease if any. The model is trained with a large dataset of plant leaf images consisting of different species and diseases. The trained model is lightweight and is used to build an Android application. Thus, the developed mobile application is user-friendly and very useful in identifying plants and detecting diseases by just clicking an image of the plant using a mobile. |
15:45 | Diabetic retinopathy detection using real-world dataset of fundus images PRESENTER: Raksheet Jain ABSTRACT. Diabetic Retinopathy (DR) is an eye condition that affects people who have diabetes and damages their retina over time, potentially resulting in blindness. Traditional methods of diabetic retinopathy (DR) screening, overseen by ophthalmologists, exhibit limitations such as the scarcity of expert professionals, time intensiveness, and high costs. This highlights the need for a more precise and efficient approach. Our study introduces an innovative solution by harnessing deep learning. Leveraging the Residual Neural Network (ResNet) model, we analyse a diverse dataset of 7000+ authorised real-time eye images of patients from "Anand Eye Hospital, Jaipur." This methodology facilitates credible and early detection of diabetic retinopathy, a pivotal factor in averting vision loss. The severity scale separates the images into five classes: normal, mild, moderate, severe nonproliferative diabetic retinopathy (NPDR), or proliferative diabetic retinopathy (PDR), ranging from a healthy eye to the presence of proliferative diabetic retinopathy. Remarkably, our approach attains an impressive 93.5% accuracy, bolstering the model's reliability in diabetic retinopathy diagnosis from fundus images. With potential applications spanning urban and rural healthcare landscapes, our study underscores the transformative potential of advanced technology in reshaping retinopathy diagnosis and patient outcomes. |
16:00 | Blending motion capture and 3D human reconstruction techniques for enhanced character animation PRESENTER: Anshuman Giramkar ABSTRACT. 3D human modeling has become increasingly important in recent years due to its broad spectrum of applications. It can create realistic representations of human anatomy, which is useful in healthcare for training, simulation, and surgical planning. The entertainment industry also uses 3D human modeling to create digital characters, while the fashion and retail industry uses it for virtual fitting rooms and personalized clothing recommendations. With advancing technology and new applications emerging, the relevance of 3D human modeling is expected to continue growing. The following research paper investigates the combination of motion capture and 3D facial reconstruction techniques for enhanced character animation. We advocate a blended method that integrates the strengths of both techniques, creating a more lifelike and expressive animation. Our method combines a deep learning-based 3D facial reconstruction algorithm with motion capture data to produce high-quality facial animation. We evaluate our method by comparing it to traditional animation methods and show that our approach produces more realistic and natural facial expressions. Our results demonstrate the potential for combining motion capture and 3D facial reconstruction techniques to enhance character animation in the entertainment industry and beyond. |
16:15 | A graphical neural network-based chatbot model for assisting cancer patients with dietary assessment in their survivorship PRESENTER: Jhilam Mukherjee ABSTRACT. Cancer diagnosis and treatment are challenging procedures requiring not just specialized medical expertise but also individualized support and guidance. In recent years, the application of artificial intelligence in healthcare has opened up new possibilities for improving patient care. This article outlines the development of a special chatbot for cancer patients that employs Graph Neural Networks (GNNs) to provide individualized information, emotional support, and dietary suggestions. By creating a knowledge graph that captures intricate connections between medical data, treatment plans, food recommendations, and patient preferences, the GNN-powered chatbot offers a complete approach to patient care. It provides individualized dietary guidance, monitors patients' health, and modifies its responses in real time based on the patient's medical history and current needs. This innovative approach enhances the patient's quality of life while reducing the workload for healthcare workers and encouraging a more collaborative and effective healthcare environment. By demonstrating the potential of GNNs to address the unique challenges faced by cancer patients as they pursue full recovery, this study contributes to the increasing corpus of research on AI-driven healthcare solutions. The accuracy of our suggested model is 94.11%, which is pretty encouraging. |
16:30 | Preserving tamil brahmi letters on ancient inscriptions: a novel preprocessing technique for diverse applications PRESENTER: Poornimathi K ABSTRACT. Inscriptions play a crucial role in preserving historical, cultural, and linguistic information. The identification and analysis of patterns in Tamil letters found in inscriptions provide valuable insights into the evolution of the Tamil language and its script. However, manual analysis of these inscriptions is time-consuming and prone to errors. In recent years, deep learning techniques have shown promising results in pattern recognition tasks, motivating the exploration of various strategies to identify the patterns of Tamil letters on inscriptions. This paper focuses on leveraging deep learning algorithms for the automated identification of Tamil letter patterns in inscriptions. Firstly, a dataset of digitized Tamil inscriptions is collected, consisting of high-resolution images representing a wide range of letter variations. Preprocessing techniques are employed to enhance the quality and clarity of the images, removing noise and artifacts. Various deep learning models, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), are then trained using the preprocessed image dataset. CNNs excel in extracting spatial features from images, enabling the recognition of letter shapes and contours. RNNs, on the other hand, capture temporal dependencies within sequences of letters, aiding in deciphering the structure and connectivity of the inscriptions. To improve the performance of the models, data augmentation techniques are employed to increase the dataset size and enhance its diversity. However, preprocessing plays a major role in sharpening the features present in the image. Hence, this paper addresses preprocessing techniques such as image blur, binarization and edge detection with respect to inscriptions. Preprocessing techniques were identified and tested with the inscription images. Based on the results and response time, it is suggested that the Median filter with Canny edge detection works well for inscription images. After preprocessing, the results were tested with edge detection, and it was found that the Median filter with Canny edge detection gives the best accuracy in comparison with other algorithms. |
15:30 | Comparative analysis of deep learning models for car part image segmentation PRESENTER: Anupama M A ABSTRACT. Accurate and efficient car part instance segmentation is a fundamental requirement in the automotive industry, with applications ranging from vehicle diagnostics and maintenance to insurance claim assessments. In this study, we present a quantitative approach to car part segmentation, evaluating and comparing the power of YOLOv8, the Detectron2 Mask R-CNN with ResNet 101, and Mask R-CNN with the ResNeXt 101 32x8d configuration + FPN backbone architecture. Notably, our study focuses on a dataset comprising 18 distinct car part labels, adding complexity and relevance to real-world scenarios. Car damage assessment is often a multifaceted challenge, with varying damage types and degrees of severity. Identifying and delineating these parts accurately is essential for decision-making in the repair and insurance sectors. The presence of noise factors, such as dirt, grease, and varying lighting conditions, further exacerbates the instance segmentation task. In this study, we have trained and rigorously evaluated our models on a diverse internally labelled dataset consisting of 18 unique car part labels. Our results demonstrate the efficacy of our approach in achieving precise car part segmentation. The Detectron2 Mask R-CNN R101+FPN and ResNeXt 101 32x8d configuration models excelled in real-time car part detection and segmentation, and their powerful backbone architectures exhibited superior performance in handling intricate part boundaries and fine-grained segmentation. The YOLOv8 model also performed well in real-time car part detection and segmentation, displaying its versatility in identifying and delineating car parts. |
15:45 | Stacking ensemble-based approach for sarcasm identification with multiple contextual word embeddings PRESENTER: G.R.S. Murthy ABSTRACT. Prior research has emphasized the efficacy of pre-trained word embedding techniques in gauging and identifying emotions conveyed in text documents. However, relying exclusively on a singular word embedding approach can hinder our capacity to capture the intricate interconnections between words within texts. To address this issue, this paper introduces a stacking ensemble model for sarcasm detection that harnesses the strengths of multiple contextualized word embeddings. The proposed model accomplishes sarcasm recognition through the utilization of a trio of state-of-the-art contextual word embedding techniques, namely XLNet, BERT, and RoBERTa. The model utilizes these three sets of contextualized word embeddings to train a stacking ensemble classifier. This ensemble comprises base-level classifiers, including CNN, BiLSTM, and BiGRU, and a meta-level classifier based on SVM. The effectiveness of the proposed method has been examined using a self-annotated reddit corpus of political comments. The results from the experiments demonstrated that the proposed approach achieved an accuracy of 80.52%, which signifies a 2.02% enhancement over the previous state-of-the-art techniques. |
16:00 | A two-stage CNN based satellite image analysis framework for estimating building-count in residential built-up area PRESENTER: Shambo Chatterjee ABSTRACT. The assessment of population within a certain region holds significant importance in the field of urban planning and the allocation of resources. The primary objective of this proposed research endeavour is to aid in assessing the population of a region by estimating building counts through a two-stage framework that analyses low-resolution satellite images using convolutional neural network (CNN)-based models. The first phase of the framework focuses on the segmentation of built-up areas in a satellite image using a Mask-RCNN model, while the second phase employs a convolutional neural network (CNN)-based regression model to estimate the number of buildings within each segmented built-up area without the need for individual extraction of roof-tops. The extraction of roof-tops from low-resolution satellite images for population estimation still poses a huge challenge to the researchers due to the lack of visual clarity. Further, in densely populated areas, the low contrast of the built-up areas causes huge difficulty in the detection of roof-tops individually. In view of such challenges, we develop a Mask-RCNN model for segmentation of probable built-up areas in low-resolution satellite images instead of attempting to extract every building individually. Subsequently, a CNN-based regression model is developed to estimate the count of buildings within the segmented built-up areas in low-resolution satellite images. The proposed framework exhibited a promising level of accuracy while working with low-resolution satellite images. 
The experimental results indicated that the proposed framework can provide a cost-effective solution for estimating the population in a region, which is useful for the assessment of demographic variation, resource allocation for disaster management, smart city construction, and many other socio-economic planning activities. |
16:15 | E-CNN-FFE: an enhanced convolutional neural network for facial feature extraction and its comparative analysis with FaceNet, DeepID, and LBPH methods PRESENTER: Srinivas S ABSTRACT. Facial feature extraction plays a pivotal role in modern-day computer vision tasks, and the effectiveness of these methods is imperative for applications ranging from facial recognition to emotion detection. In this paper, we introduce E-CNN-FFE, an Enhanced Convolutional Neural Network designed specifically for Facial Feature Extraction. Built on the foundation of existing convolutional neural network (CNN) architectures, E-CNN-FFE incorporates novel modifications to optimize the extraction of intricate facial features, aiming for enhanced performance in both accuracy and computational efficiency. We embark on a comprehensive comparative analysis, positioning E-CNN-FFE against established algorithms: FaceNet, DeepID, and LBPH. Evaluations are carried out in terms of feature extraction capability, accuracy, computational speed, and robustness against varying facial conditions and distortions. Preliminary results suggest that E-CNN-FFE showcases significant improvements in specific domains over its counterparts, elucidating its potential as a robust and reliable tool for facial feature extraction tasks. The implications of E-CNN-FFE span across various facial recognition applications, potentially setting a new benchmark in the field. This study not only offers insights into the algorithmic enhancements of E-CNN-FFE but also serves as a comprehensive guide for researchers and practitioners aiming to harness the power of neural networks for facial analysis tasks. |
16:30 | Improving sentiment analysis by handling negation on twitter data using deep learning approaches PRESENTER: Mamatha M ABSTRACT. Sentiment analysis is a tool to identify and measure the emotion in a piece of text. Negation handling is an important aspect of natural language processing (NLP) for Twitter data. In this article, a negation handling technique using a Convolutional Neural Network (CNN) model for classification is proposed. The system is evaluated on the SemEval-2017 dataset. The classification performance is improved by applying the CNN to negative tweets. The paper compares the performance of ANNs and CNNs in handling negation words and evaluates them on Twitter data. The proposed negation strategy attains superior accuracy over machine learning models by preventing the misclassification of tweets. |
Prashant Joshi (Founder & Director, Leap & Scale, India)
15:30 | Quantum computing based banking system ABSTRACT. In this work, a quantum computing based banking system is proposed using a quantum blockchain model. The digital currencies, which are qubits, are used to purchase commodities on online platforms. The qubits are stored in quantum wallets. Each qubit carries a tag ID to identify the buyer and seller, a phase angle to check the authenticity of the transaction, and the currency information for the transaction. The quantum banking system checks the price of the commodity against the value of the qubit currency; if they match, the commodity is shipped to the customer. Compared to conventional blockchain networks, the quantum computing based banking system improves reliability for authentic transactions and unique transactions by 32% and 42%, respectively. |
15:45 | PRESENTER: Sanket Salvi ABSTRACT. The traditional conduct of examinations requires physical copies of question papers and answer sheets, which are then marked by evaluators. However, in a post-pandemic era, where the emphasis is on minimal use of physically transferable materials, the safe conduct of examinations in classroom environments becomes challenging. Conducting examinations in fully online mode requires wireless access through Wi-Fi access points; however, because the range of a Wi-Fi access point extends beyond the classroom, the network can be accessed from outside the classroom, which is not desired. To address these issues, this paper designs, implements, and tests a novel approach that provides dynamically changing passwords via visible light communication for connecting to the wireless network. The setup is useful in environments where restricted physical access is needed to ensure system and network security. |
16:00 | Smart farming based on IoT-edge computing: exploiting microservices architecture for service decomposition PRESENTER: Dr. Nitin Rathore ABSTRACT. In today's world of digitization, IoT plays a significant role. It has made everything smart (e.g., smart cities, smart healthcare, industrial automation, commerce), and even farming has not been untouched by IoT. As a result, the data generated by IoT devices will reach up to 180 zettabytes by 2025, as forecast by the International Data Corporation. To handle such enormous data, the edge computing approach has created innovation opportunities within the IoT ecosystem by bringing cloud services to the network edge to reduce network latency and serve IoT applications in real time. Edge computing has applications where Internet connectivity is poor or unreachable because, in such cases, sensors are not able to communicate with the cloud efficiently. In this paper, we therefore implement a distributed edge computing platform for smart farming to improve crop productivity on remote agricultural land. Since edge computing nodes are resource-constrained, diverse, and distributed in nature, edge applications need to be built as a set of smaller, interdependent modules. These tactics are in line with microservices architecture, so the proposed smart farming platform is based on a microservices architecture that allows service distribution across discrete computing nodes in the IoT-edge-cloud architecture. Results show that by applying service distribution and local processing at the edge layer, we achieve an 89.85% decrease in the quantity of data migrated to the cloud server. |
16:15 | Design and analysis of quantum transfer fractal priority replay and mirdad priority loss algorithms for quantum reinforcement learning PRESENTER: Palanivel Rajalingam ABSTRACT. The research explores Quantum Transfer Fractal Priority Replay (QTFPR), an innovative algorithm aimed at improving quantum system performance using reinforcement learning techniques. By combining Quantum Q-learning with prioritization strategies, QTFPR demonstrates remarkable convergence and efficiency improvements. It utilizes Transfer Neural Networks and Fractal techniques to effectively prioritize experiences that are relevant to specific tasks within the quantum replay buffer. The proposed Mirdad Loss Priority (MLP) function, which incorporates quantum amplitude damping, outperforms traditional loss functions. The study highlights the practical implementation of QTFPR on real quantum hardware, such as the IBM Quantum Computer with Qiskit libraries, promising significant advancements in quantum machine learning efficiency. Various metrics, including priority, experience, total reward, average reward, and accuracy, are employed to evaluate the algorithm's performance. QTFPR is integrated with Quantum Q-learning, and both demonstrate remarkable success in Atari games. |
16:30 | Chatbot development simplified: an in-depth look at JIGYASABOT platform and alternatives PRESENTER: Amol R Madane ABSTRACT. The rapid integration of chatbot technology in diverse industries has revolutionized the way businesses engage with their customers and users. This research paper presents a comprehensive overview of recent chatbot platforms, exploring their essential components, functionalities, and profound impact on the chatbot ecosystem. The study further investigates the potential applications of these platforms and their transformative influence on artificial intelligence applications and innovations. Furthermore, the paper analyzes different categories of chatbot platforms, considering factors such as goal orientation, conversational capabilities, and ease of programming. The study showcases various tech-giant platforms such as Google Dialogflow, IBM Watson, RASA NLU, ManyChat, and Microsoft Bot Framework, highlighting their unique features and applications. Moreover, the research discusses how these platforms can be employed to develop chatbots for distinct purposes, from educational and medical consultancies to marketing and customer care. RASA NLU stands out as an open-source NLP library, enabling designers to customize natural language processing for chatbots, while ManyChat offers simplicity and rapid deployment for Facebook Messenger bots. Python's ChatterBot library and TensorFlow, a versatile machine learning platform, are also explored for their automated response capabilities and deep neural network training, respectively. This research paper sheds light on the diversity and potential of chatbot builder platforms. The findings offer valuable insights for researchers, developers, and industry professionals in harnessing the power of chatbots and artificial intelligence to shape the future of customer engagement and user experiences. |
16:45 | Maximizing cloud resource utility: region-adaptive optimization via machine learning-informed spot price predictions PRESENTER: Kavita Srivastava ABSTRACT. This research paper presents a comprehensive study on the use of machine learning models for price prediction of spot instances in various geographic regions of Amazon Web Services (AWS). The work focuses on forecasting prices across eleven unique locations using XGBoost and Random Forest regressors, with the goal of revealing significant insights into pricing dynamics and prediction accuracy. The research explores how well these models anticipate prices, identifies factors that influence price fluctuation, and assesses the practical consequences of these predictions for enterprises. The study methodically employs a dataset containing pricing data from several locations. The study's findings reveal noteworthy trends. The predictive performance of the models varies by region, providing region-specific insights into price forecast accuracy. To measure prediction performance, the models are evaluated using Mean Squared Error (MSE) and Mean Absolute Error (MAE). Significantly accurate forecasts show that the models can successfully capture pricing changes. A comparison of the XGBoost and Random Forest models also sheds light on their relative performance, which will aid algorithm selection in future investigations. |
17:00 | Guarding the gateway: data privacy and security in metaverse tourism ABSTRACT. As the boundaries between physical and digital worlds blur, Metaverse Tourism is emerging as a groundbreaking domain, promising immersive experiences and limitless possibilities for exploration. However, this rapid development raises critical concerns regarding data privacy and security, necessitating a comprehensive investigation into these issues. This research paper is dedicated to addressing these concerns, proposing solutions, and shedding light on the current state of research in this field. The primary objective of this paper is to identify, analyze, and mitigate the privacy and security challenges that arise within the metaverse tourism ecosystem. As a metaverse is a virtual world where users can interact with one another, share information, and explore diverse environments, ensuring the confidentiality, integrity, and availability of personal and sensitive data becomes a paramount concern. In this context, this study endeavors to provide practical solutions that not only safeguard the privacy and security of metaverse tourists but also promote the growth and development of this emerging industry. To provide a foundation for this research, we conducted a bibliometric analysis of existing literature in the field. By analyzing a diverse range of scholarly works, we aimed to gain insights into the current state of research, identify key trends, and understand the most pressing issues related to data privacy and security in metaverse tourism. This analysis served as a precursor to our proposed solutions, allowing us to build upon the existing knowledge and tackle the challenges more effectively. This paper aims to contribute to the academic discourse on metaverse tourism while also providing actionable recommendations to protect the privacy and security of tourists in this exciting new world. |