The Conference Chair will formally open the conference and introduce our keynote speaker. The Program Chair will discuss any last-minute changes to the program.
Tabitha Samuel (National Institute for Computational Sciences - UTK, United States)
Keynote Address
ABSTRACT. Tabitha Samuel is the Interim Director and HPC Operations Group Leader at the National Institute for Computational Sciences, University of Tennessee, Knoxville. She has over 16 years of experience in advanced research computing, spanning user support and systems programming to executing and advancing the overall vision and mission of a nationally recognized supercomputing center. An active Research Computing and Data (RCD) community member, Tabitha is co-PI of the Building Research Innovation at Community Colleges (BRICCs)-Pathways project, which focuses on creating meaningful pathways for collaborative research computing between community colleges and research-intensive R1 universities. She is also the co-founder of the Tennessee Research, Education, and Computing Collaborative (TRECC), a platform for collaboration and regional advancement in cyberinfrastructure for RCD professionals in Tennessee. Tabitha also serves on the Position Committee of the Coalition for Academic Scientific Computation (CASC). She is also chair of the ACM Practice and Experience in Advanced Research Computing (PEARC) Conference Series, and will be its co-general chair in 2026.
Tabitha earned her PhD in Computer Science from the University of Tennessee, Knoxville, where her research focused on using natural language processing techniques to improve vocabulary comprehension and retention in children. In her free time, Tabitha loves to go on awe-inspiring adventures around the world (using miles and points, of course!), cook world cuisine, and play tabletop RPGs.
Bridging Traditional and Generative AI in the Classroom
ABSTRACT. The rapid growth of generative artificial intelligence (AI) is reshaping how AI is taught in computer science programs. Traditional AI instruction has emphasized symbolic reasoning, search, and classical machine learning, often with well-defined problems and transparent algorithms. Teaching generative AI, by contrast, requires new approaches: models are larger and less interpretable, the outcomes are probabilistic, and applications such as text, image, and code generation raise unique opportunities and challenges in the classroom. This presentation focuses on the pedagogical differences between teaching traditional and generative AI, including adjustments in course design, assessment strategies, and student engagement. Examples from classroom practice illustrate how generative AI projects encourage creativity and critical thinking, while traditional AI projects reinforce core principles of logic, search, and problem decomposition. We also discuss challenges such as the need for computational resources, addressing ethical concerns, and balancing hands-on experimentation with conceptual understanding.
09:35
Saman Sargolzaei (University of Tennessee, United States) Curt Lynch (University of Tennessee at Martin, United States) Seth Hatchett (University of Tennessee at Martin, United States) Kyle Byassee (University of Tennessee at Martin, United States) Isaac Copeland (University of Tennessee at Martin, United States) Arman Sargolzaei (Florida International University, United States)
Integrating EEG, HRV, and Eye Tracking in XR for Advanced Autonomous Driving Simulations
ABSTRACT. The study introduces a novel driving simulation system that integrates electroencephalography (EEG), heart rate variability (HRV) via photoplethysmography (PPG), and eye tracking to assess drivers' physiological and cognitive responses during autonomous vehicle scenarios. By combining these modalities within an extended reality (XR) environment, the system enables real-time monitoring of brain activity, stress levels, mental workload, and visual attention, offering a multidimensional view of human-machine interaction. The work paves the way for future research in intelligent transportation systems, aiming to enhance the safety, usability, and emotional well-being of users in autonomous driving contexts.
09:55
Masoud Naghedolfeizi (Fort Valley State University, United States) Nabil Yousif (Fort Valley State University, United States) Xiangyan Zeng (Fort Valley State University, United States)
Prime Targets in the Cyber Threat Landscape: Understanding Risks and Building Defenses
ABSTRACT. Cybercrime today is more than just random hackers looking for fast money. It has grown into a massive, structured business, often backed by nation-states, with damages expected to hit $10.5 trillion globally by the end of 2025. This makes cybercrime one of the biggest threats facing modern organizations, no matter their size or sector. Industries that store huge amounts of sensitive information such as healthcare, finance, retail, and education are often prime targets. Government agencies and critical infrastructure sectors like energy and manufacturing are also under attack, not only for financial gain but also for political reasons.
Healthcare systems, for example, are constantly targeted by ransomware because of the value of patient records and the need to keep systems running. Retail and e-commerce face credential theft, identity fraud, and now AI-powered scams that are harder to detect. Universities are often unprepared, even though they hold valuable intellectual property. And government agencies deal with nation-state actors using cyber tools to steal data or disrupt services, sometimes in partnership with criminal groups. It has been reported that the average cost of a data breach was over 4 million dollars in 2024.
The costs of a breach go beyond money. They damage trust, disrupt essential services, and in extreme cases, threaten lives. To reduce these risks, solutions include stronger use of multi-factor authentication, better patching and encryption, training and education, and AI-based monitoring tools. Higher education and healthcare organizations need more investment in cybersecurity training and compliance. For government and critical sectors, zero-trust models and international cooperation are key.
In summary, the threat landscape is evolving fast, but proactive and layered defenses can help organizations stay one step ahead.
Lessons learned from my attempt to ChatGPT-proof a class for better student outcomes
ABSTRACT. After ChatGPT reached the masses, the structure of our introductory computer programming class needed an update to stay relevant for student learning. Hear about how we flipped the classroom, restructured assignments, and used technology to help students learn. There were tears along the way, but better outcomes at the end.
Emil Lopez (Fort Valley State University, United States) Kamiya Ridley (Fort Valley State University, United States) Solomon Rayton (Fort Valley State University, United States)
Making AI Trustworthy & Interpretable for Medical Field
ABSTRACT. Machine learning models in healthcare often operate as "black boxes," limiting their adoption in critical medical decisions where transparency is essential. Heart disease remains a leading cause of mortality worldwide, with early detection crucial for patient outcomes. Traditional machine learning approaches for heart disease prediction, while achieving high accuracy, fail to provide interpretable explanations that medical professionals require for clinical decision-making. The rationale for this study stems from the critical need to bridge the gap between model performance and clinical trust, particularly in high-stakes healthcare environments where understanding the reasoning behind predictions is paramount for patient safety and physician confidence.
The purpose of this research is to develop and evaluate an explainable AI tool that predicts heart disease while providing transparent, interpretable explanations for each prediction.
The methodology involved training multiple machine learning models, including Decision Trees, Random Forest, Logistic Regression, and Multi-Layer Perceptron, on the Cleveland Heart Disease Dataset from the Kaggle website. The data include 13 factors contributing to heart attack and a heart attack severity score ranging from 0 (none) to 4 (severe). The dataset contains 304 records in total. Preprocessing included handling missing values, feature normalization, and categorical variable encoding.
In the initial phase of the study, the models were trained without incorporating explainable AI techniques in order to evaluate their baseline predictive performance. It was observed that the output variable, representing five levels of heart disease severity (ranging from 0 to 4), was highly imbalanced, with the five classes accounting for 57%, 19%, 12%, 8%, and 4% of the data, respectively. This imbalance led to relatively low prediction accuracy, ranging from 51% to 57%.
However, when these five levels were consolidated into two categories (No Heart Attack vs. Yes Heart Attack), the model accuracy significantly improved, reaching up to 85%.
Explainable AI techniques, specifically SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), will be integrated to generate both global and local explanations for model predictions. Additionally, a web-based visualization tool will be developed using Streamlit to present predictions alongside interpretable explanations in an accessible interface.
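A minimal sketch of this kind of SHAP integration (not the authors' implementation; the file name, target column, and model settings are hypothetical stand-ins):

    # Hedged sketch: global SHAP explanations for a tree-based heart-disease model.
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("heart.csv")                     # hypothetical file path
    X, y = df.drop(columns=["target"]), df["target"]  # binary: heart attack yes/no
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Global view: which of the 13 features drive predictions across the test set.
    shap.summary_plot(shap_values, X_test)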
Future research could extend this approach to larger, more diverse datasets, such as medical imaging and sonographic scans. Additionally, further model-agnostic XAI techniques could be explored to enhance model interpretability across different medical conditions.
Research advisor: Dr. Masoud Naghedolfeizi
09:35
Kamia Ridley (Fort Valley State University, United States)
Developing a Chatbot to Improve Information Accessibility on a University Website
ABSTRACT. Integrating chatbots into websites has become an effective way to improve user engagement and simplify information access. This study focuses on developing a chatbot using the Rasa framework tailored for a university website. Unlike traditional search engines, which often deliver broad or irrelevant results, the chatbot is designed to provide precise, conversational responses specific to users’ inquiries. It also offers dynamic interaction by suggesting additional information when needed.
The chatbot was created using Python on the PyCharm platform with essential Rasa libraries. The core components of the Rasa framework, including the natural language understanding (NLU), Domain, and Stories files, were configured with information on the university’s programs, administration, and services. After training the chatbot, its accuracy was tested, achieving 99.8% on training data and 76% on test data.
To test its functionality, a simple web interface was developed, allowing users to ask questions via a chat box, and the chatbot responds with relevant information. JavaScript was used to enable communication between the website and the Rasa server. Queries entered in the chat box are sent to the Rasa server through HTTP requests, and the chatbot’s responses are returned and displayed in real time. Initial testing indicates seamless communication between the chatbot and the web interface, although further evaluation is planned to ensure secure data exchange.
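For reference, the browser-to-server round trip described above can be reproduced against Rasa's standard REST channel in a few lines; the sketch below uses Python instead of JavaScript, and the local URL assumes a default Rasa setup:

    import requests

    RASA_URL = "http://localhost:5005/webhooks/rest/webhook"  # Rasa's default REST endpoint

    def ask_chatbot(sender_id, message):
        """Send one user message and return the bot's text replies."""
        resp = requests.post(RASA_URL, json={"sender": sender_id, "message": message})
        resp.raise_for_status()
        return [m["text"] for m in resp.json() if "text" in m]

    print(ask_chatbot("demo-user", "What degree programs do you offer?"))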
Future work will involve expanding the chatbot’s knowledge base to include more comprehensive university-related topics, such as academic course catalogs, and improving its test accuracy to at least 85%. This will be achieved by refining training parameters, including increasing the number of epochs, adjusting learning rates, and incorporating more diverse training data. Collaboration with the university’s IT department is also planned to facilitate full integration of the chatbot into the official university website.
This research highlights the potential of chatbots to enhance user experiences on university websites by providing efficient and targeted support. It also identifies methods to optimize performance and expand interactive capabilities.
Mentors: Dr. Masoud Naghedolfeizi and Dr. Sarwan Dhir
Supporting Parent Understanding of Tennessee’s K–12 Computer Science Standards: An Evaluation of Data Visualizations and Resource Tools in K–2
ABSTRACT. Recently, new standards for Computer Science education were introduced in Tennessee for K-2 students. Parents often lack access to the tools and resources needed to understand these new standards and topics, which affects their ability to support their children's learning. The aim of this project is to provide a parent-focused resource for navigating and understanding the new requirements. For this project, we are developing an interactive, web-based platform to provide parents and their children with the necessary resources to understand the new curriculum and its related topics. Our platform integrates data from the K-2 CS standards and survey responses from local parents to develop the resources and visualizations needed for navigating these changes. Through the use of our platform, we hope to bring transparency to the new curriculum and contribute to the future development of CS education in Tennessee.
10:15
Madison Gardner (Western Kentucky University, United States) Jacob Kennedy (Western Kentucky University, United States) Michael Galloway (Western Kentucky University, United States) Greg Arbuckle (Western Kentucky University, United States)
Machine Learning and Predictive Maintenance
ABSTRACT. The Fourth Industrial Revolution (4IR), commonly referred to as Industry 4.0, ushered new data-driven technology into manufacturing processes. Advancements such as the integration of AI and Machine Learning give manufacturers a relationship with data that they have never had before. For all of this to work, however, efficient and organized machine-to-machine (M2M) communication between machines and processes is vital. This is why the Industrial Internet of Things, or IIoT, was created. IIoT allows a manufacturer to create their own network of machines, controllers, sensors, and other field devices, all transmitting data simultaneously. The pairing of IIoT and AI makes it possible to develop Machine Learning algorithms and provides the basis for a Predictive Maintenance System. Predictive Maintenance allows manufacturers to address unexpected halts in production before they happen, because they can predict when a failure will occur. This prediction is made by analyzing historical data of the process, paying particular attention to prior failures and looking for patterns in the data before the failure occurred.
In the spring of this year, the IIoT research team established its own IIoT network on campus, consisting of two robots (Mitsubishi and FANUC), their respective controllers, two Keyence cameras, two computers, and two Groov RIO (Remote I/O) modules, all of which are routed to the project's own server rack. This fall, five teams are collaborating to outfit a FANUC robot with IMU sensors and ArUco markers, and then create a Digital Twin of the robot. The groups plan to use data from the Inertial Measurement Units and ArUco markers to verify the robot's "actual" position and compare it to the robot's own positional data. If the positions don't match, the team can see whether the deviation stays consistent during movement and how far the actual position is from the desired position after movement. With historical data of this kind, or a simulation, engineers could begin to predict the variance in the robot's position and when it would no longer function within manufacturer specifications: the basis on which every Predictive Maintenance System is created.
3D Spatial Reconstruction Using Low-Cost 2D LiDAR and Iterative Closest Point Alignment
ABSTRACT. This project presents the development of a low-cost static 3D LiDAR scanner system. To implement this, a 2D LiDAR scanner will be used in conjunction with a stepper motor to generate three-dimensional spatial data. To compensate for deviations between the positioning of multiple scans of the same room, the use of the Iterative Closest Point (ICP) Algorithm is implemented. The objective of this work is to demonstrate that affordable hardware can approximate the same functionality as a commercial 3D LiDAR system. The proposed approach has multiple applications in indoor mapping, robotics, and low-cost reconstruction, offering an affordable and adaptable solution for use in research and education.
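For illustration, the ICP alignment step can be prototyped in a few lines with the Open3D library; this is a generic sketch, not the project's code, and the scan file names and 5 cm correspondence threshold are assumptions:

    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("scan_a.pcd")  # hypothetical scan files
    target = o3d.io.read_point_cloud("scan_b.pcd")

    threshold = 0.05   # max correspondence distance in meters (assumed)
    init = np.eye(4)   # start from the identity transform

    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print("fitness:", result.fitness)        # fraction of points matched
    source.transform(result.transformation)  # bring both scans into one frame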
ABSTRACT. GoGoGecko is a pathfinding delivery bot with a detachable package carrier designed to be customized for each service it caters to. Pathfinding is done using an A* algorithm and a dataset reflecting a subdivided map of a given area. We use sound sensors for obstacle detection and encoder DC motors to track the bot's location in real time. The delivery bot is built with Arduino microcontrollers on a chassis with omnidirectional wheels. The outer shell and package carrier are custom designed and 3D printed. The package carrier is made to be easily removed and replaced, with each one having a specialized design.
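As a sketch of the search at the core of such a bot (a generic textbook A*, not GoGoGecko's code; the toy grid stands in for the subdivided map):

    import heapq

    def astar(grid, start, goal):
        # 4-connected grid: 1 = free cell, 0 = obstacle.
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        frontier = [(h(start), 0, start, [start])]
        seen = set()
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                    heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                              (nr, nc), path + [(nr, nc)]))
        return None  # no route found

    grid = [[1, 1, 0], [0, 1, 0], [1, 1, 1]]
    print(astar(grid, (0, 0), (2, 2)))  # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]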
Effective Methods of Real-Time Raytracing with Light-Ray
ABSTRACT. Light-Ray is a lightweight, effective real-time dynamic raytracing engine meant to render photorealistic computer graphics across platforms. We implemented the engine in C++11 with OpenGL 4.6 compute shaders; the engine leverages GPU parallelization with GLSL, multithreading, and scalable rendering algorithms with the aim of achieving interactive frame rates of ~60 FPS. The prototype will have core features including importing Blender objects, mesh handling, and an interactive editor built with ImGUI. Light-Ray simulates light behavior using a combination of pathtracing algorithms and optimizations, allowing for nearly photorealistic rendering. Light-Ray will also be integrable into other projects by exposing an API in which the main render/event passes are virtual and implemented by the user.
Structural Feature Extraction of Hand-Drawn Kanji Characters through Stroke Identification and Graph Generation for GCN-Based Image Classification
ABSTRACT. Character classification through image recognition is a key task in the field of computer vision due to its application in translation, education, and document scanning. This can be a challenge for languages with a large number of characters such as Japanese, which contains thousands of logographic Kanji characters. Kanji characters are composed of strokes drawn in specific positions and directions, which may overlap with other strokes. Due to these properties, Kanji characters contain structural information that can be useful for recognition. Traditional character classification methods that employ convolutional neural networks (CNNs) rely on image texture data and do not capture underlying structural information from characters. Graph convolutional networks (GCNs) have the potential to utilize these structural features by classifying graphs derived from the strokes of character images, though the application of GCNs to Kanji character recognition is under-researched. To address this, we propose a methodology for extracting structural features from images of Kanji characters through stroke segmentation and identification, followed by generating corresponding graphs for GCN-based classification. We evaluate our approach using the ETL9B dataset to demonstrate the effectiveness of generating structural graphs from hand-drawn Kanji characters.
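A minimal sketch of the classification stage only, using PyTorch Geometric (the stroke-extraction step is omitted, and the node features, edges, and hidden size below are placeholder assumptions):

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv, global_mean_pool

    class StrokeGCN(torch.nn.Module):
        def __init__(self, in_dim=8, hidden=64, num_classes=3036):  # ETL9B: 3,036 classes
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.out = torch.nn.Linear(hidden, num_classes)

        def forward(self, x, edge_index, batch):
            x = F.relu(self.conv1(x, edge_index))
            x = F.relu(self.conv2(x, edge_index))
            return self.out(global_mean_pool(x, batch))  # one vector per character graph

    # Toy graph: 3 stroke nodes with 8-dim geometric features and 2 undirected edges.
    x = torch.randn(3, 8)
    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
    batch = torch.zeros(3, dtype=torch.long)
    logits = StrokeGCN()(x, edge_index, batch)  # shape: (1, 3036)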
Dylan Varney (Tennessee Technological University, United States) Mike Rogers (Tennessee Technological University, United States)
EVLibScale: A Discrete Event Simulator of EV Charging Stations
ABSTRACT. Electric Vehicles (EVs) are becoming increasingly common in the modern world. City planners face many challenges in the placement and maintenance of charging stations due to those stations' effects on the power grid, especially as the number and scale of those stations increase. Testing and evaluation before deployment are required to avoid unnecessary and potentially harmful efforts, and need to be done in a timely manner to meet deadlines and avoid the costs of prolonged development. An effective tool for such evaluation is simulation. Several works have focused on the geographic placement of charging stations using simulation, either software-only or software with Hardware-In-the-Loop (HIL), as a platform for their analysis and evaluation. However, comparatively few of these works have considered the behavioral aspect of consumers and their Electric Vehicles. In this work, we propose EVLibScale, a discrete event simulator based on the EVLib Java library, to model the workings of multiple charging stations throughout an urban area and observe the effects of traffic at each one, including simulating individual drivers balking at long wait times and choosing another charging station. We have implemented the simulator and tested its performance. Our results indicate that EVLibScale can provide city planners with a performant, useful tool for charging station placement, as well as a platform for future, information-capturing updates.
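The balking behavior can be pictured with a toy discrete event loop (illustrative only, not EVLibScale itself; the arrival pattern, service time, and queue threshold are made-up parameters):

    import heapq, random

    BALK_THRESHOLD = 3   # driver leaves if this many EVs are already waiting
    SERVICE_TIME = 30.0  # minutes per charge, assumed constant for the sketch

    events = []          # (time, kind) min-heap drives the simulation clock
    for i in range(20):
        heapq.heappush(events, (random.uniform(0, 240), "arrival"))

    queue_len, busy, balked, served = 0, False, 0, 0
    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            if queue_len >= BALK_THRESHOLD:
                balked += 1                  # driver picks another station
            elif busy:
                queue_len += 1
            else:
                busy = True
                heapq.heappush(events, (now + SERVICE_TIME, "departure"))
        else:  # departure
            served += 1
            if queue_len:
                queue_len -= 1
                heapq.heappush(events, (now + SERVICE_TIME, "departure"))
            else:
                busy = False

    print("served=%d balked=%d" % (served, balked))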
09:35
Lydia Ray (Columbus State University, United States) Kisun Kim (Columbus State University, United States) Brian Ganzler (Columbus State University, United States)
Analysis of Social Biases in Text-to-image AI Models
ABSTRACT. This study investigates social biases in text-to-image AI models by analyzing 7,080 headshots generated with DALL·E 2, OpenJourney, and Stable Diffusion. Using automated pipelines for image generation, skin tone classification, and gender detection (with manual validation), we examined representations across professions, education levels, and crime likelihood. Results reveal consistent biases: high-income and highly educated roles are predominantly depicted as male and light-skinned, while crime-related prompts skew toward darker skin tones. These findings demonstrate how generative AI systems reinforce entrenched stereotypes, highlighting the urgent need for bias detection, mitigation strategies, and transparent evaluation in text-to-image AI research.
09:55
Pooja Pandey (Southeastern Louisiana University, United States) Paulo Alexandre Regis (Southeastern Louisiana University, United States)
Benchmarking Machine Learning Techniques for Soccer Match Outcome Prediction
ABSTRACT. Sports prediction has garnered increasing interest in recent years due to its growing relevance in sports analytics, betting markets, and enhancing fan engagement. In this study, we compare the performance of six established machine learning algorithms in modeling soccer match outcomes. The dataset we constructed incorporates a rich set of match details, including both player-level and team-level features. To ensure data quality, we applied imputation techniques for handling missing values and used appropriate encoding for categorical variables. Additionally, we employed Principal Component Analysis (PCA) and feature selection through Random Forest with varying importance thresholds to reduce dimensionality and highlight the most informative variables. The machine learning models evaluated include logistic regression, random forest, XGBoost, decision tree classifier, and two variants of multilayer perceptron. Experimental results yielded classification accuracies around 65%, and further analysis offered meaningful insights into feature significance and their influence on match outcomes.
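A minimal sketch of the described dimensionality-reduction pipeline in scikit-learn (not the study's code; the synthetic features, thresholds, and variance target are assumptions):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 60))    # placeholder match features
    y = rng.integers(0, 3, size=1000)  # home win / draw / away win

    # Keep only features whose Random Forest importance clears a threshold.
    selector = SelectFromModel(
        RandomForestClassifier(n_estimators=300, random_state=0),
        threshold="median")            # one of the "varying thresholds"
    X_sel = selector.fit_transform(X, y)

    # Then compress the surviving features with PCA.
    X_pca = PCA(n_components=0.95).fit_transform(X_sel)  # keep 95% of variance
    print(X_sel.shape, X_pca.shape)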
10:15
Leah Spalding (Western Kentucky University, United States) Mark Simpson (Western Kentucky University, United States) Jeffrey Galloway (Western Kentucky University, United States)
Piloting a Replicable Model for Undergraduate User Experience Education: An Interdisciplinary, Capacity-Building Framework
ABSTRACT. The rapid growth of User Experience (UX) as a professional field has created a significant gap between industry demand and academic preparation. Higher education institutions, particularly regional comprehensives, face systemic barriers to developing the interdisciplinary programs needed to train a job-ready UX workforce. This paper presents a case study of a novel, STEM-designated Bachelor of Science in User Experience (UXBS) at Western Kentucky University, the first of its kind in the state. We introduce a replicable framework for institutional transformation structured around three interdependent pillars: (1) Student Support, (2) Faculty and Curriculum Development, and (3) Alumni and Industry Engagement. This model provides a structured, evidence-based approach to building institutional capacity, closing skills gaps, and establishing a sustainable pipeline of ethically-grounded UX professionals.
Robert Lowe (University of Tennessee at Martin, United States)
Gang of Four and GPT Make Five
ABSTRACT. For three decades, the Gang of Four design patterns have shaped how we teach object-oriented software design. Yet, students often struggle to move from pattern recognition to pattern application—understanding not just what a pattern does, but why it fits a given problem. This presentation proposes an approach that brings a new “member” to the design table: the large language model.
By pairing students with GPT-based assistants, we can guide them through a conversational process of design reflection. Instead of using an LLM to generate code, students use it to articulate intent, identify relevant patterns, compare alternatives, and reason about trade-offs. For instance, given a scenario requiring event-driven updates, a student might prompt GPT for potential designs. The resulting dialogue—evaluating the Observer pattern versus Mediator—becomes a structured exercise in pattern-based reasoning.
This presentation will include segments of a chat with an LLM resulting in a well-designed program. Links will be provided to this sample, along with additional information about how LLM-guided dialogue may be used to enhance students' understanding of design patterns and Software Engineering principles.
11:00
Lixin Wang (Columbus State University, United States) Yogesh Botcha (Columbus State University, United States)
Stepping-stone Intrusion Detection Resistant to Intruders’ Chaff Attacks through Packet Matching
ABSTRACT. Professional hackers usually launch cyberattacks through several compromised hosts (called stepping-stones) to reduce the chance of being detected. The hacker's identity is not revealed because it is hidden behind a long interactive chain of stepping-stone hosts when the attack is launched through a stepping-stone intrusion (SSI). Many approaches for SSI detection (SSID) have been proposed over the last thirty years. Most of these existing SSID methods only work well for attack traffic without chaffed packets, and are thus poorly equipped to resist intruders' chaff attacks. This paper proposes an innovative SSID algorithm resistant to intruders' chaff attacks through packet matching and computing the crossover ratios of packets. The proposed SSID algorithm is verified by well-designed network experiments. Our experimental results show that the proposed SSID algorithm, based on packet matching and packet crossover, works effectively in detecting SSI as well as resisting intruders' chaff attacks.
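To picture what a packet crossover ratio measures (a simplified toy, not the paper's algorithm), one can count order inversions between matched send and echo packets; chaff perturbs this ratio, which is the signal such a detector exploits:

    def crossover_ratio(pairs):
        """pairs: list of (send_time, echo_time) for matched packets."""
        pairs = sorted(pairs)  # order by send time
        crossovers = sum(
            1
            for i in range(len(pairs))
            for j in range(i + 1, len(pairs))
            if pairs[j][1] < pairs[i][1]  # a later send echoed before an earlier one
        )
        total = len(pairs) * (len(pairs) - 1) / 2
        return crossovers / total if total else 0.0

    print(crossover_ratio([(0.0, 0.4), (0.1, 0.3), (0.2, 0.9)]))  # 1/3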
11:20
David Frazier (University of Virginia's College at Wise, United States)
AI as a Paradigm Shift in Computer Programming
ABSTRACT. A computer programming paradigm is a description of how a program's logic and data are organized and expressed. Examples include Object-Oriented Programming or Event-Driven Programming. Each has a set of concepts, abstractions and principles that guide the programmer in creating solutions to problems. Paradigm shifts often arise from advances in computer hardware, the growing complexity of software systems, changes in application domains, or broader societal and economic forces.
AI fundamentally changes the way that programmers create programs. My presentation will compare traditional programming paradigms with AI-driven approaches and explore practical steps for integrating this paradigm shift into computer programming education.
11:40
Suk Jin Lee (James Madison University, United States)
Electric Vehicle Driving Range Estimation using On-Board Diagnostics Technology
ABSTRACT. Electric vehicles (EVs) have many advantages over internal combustion engine (ICE) vehicles, including environmental benefits, fuel cost savings, lower maintenance costs, and better performance. Range anxiety, however, remains one of the main barriers to widespread adoption of EVs. This study assesses the performance of current range estimates. We collected vehicle parameter IDs (PIDs) using On-Board Diagnostics (OBD)-II scanners attached to the port typically located under the dashboard on the driver's side. We used the Python-OBD library to handle data from the car's OBD-II port, selecting data parameters and transferring PIDs to a Raspberry Pi over a wireless communication channel while driving the vehicle. We designed fuzzy logic rules to estimate the driving range by considering personal behavior and environmental factors, such as state of charge (SoC), temperature, cruise, and aggressiveness. We compared the fuzzy model's estimated range against the actual observed values. The model achieved an RMSE of 5.15 miles, which allowed us to test how well the fuzzy rules reflected real driving data.
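As a rough sketch of the data-collection side with the python-OBD library (the fuzzy inference is reduced here to a single illustrative rule; the PIDs queried, the weights, and the base-range figure are assumptions, not the paper's model):

    import obd

    connection = obd.OBD()  # auto-detects the OBD-II adapter

    speed = connection.query(obd.commands.SPEED)            # vehicle speed PID
    temp = connection.query(obd.commands.AMBIANT_AIR_TEMP)  # ambient temp PID (library's spelling)

    def cold_penalty(celsius):
        """Fuzzy membership in 'cold': 1 at -10 C and below, 0 at 20 C and above."""
        return min(1.0, max(0.0, (20.0 - celsius) / 30.0))

    soc = 0.80                # state of charge, placeholder value
    base_range = 250.0 * soc  # assumed miles-per-full-charge figure
    estimate = base_range * (1.0 - 0.3 * cold_penalty(temp.value.magnitude))
    print("estimated range: %.1f mi at %s" % (estimate, speed.value))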
Machine Learning Based Anomalous Network Traffic Detection
ABSTRACT. As network intrusion attacks become more prevalent, many businesses and other institutions are turning to machine learning to train their intrusion detection systems (IDS), using a variety of algorithms to create an efficient IDS. In this paper, we use the publicly available NSL-KDD network intrusion dataset to train and test the isolation forest, autoencoder, and XGBoost algorithms. We experiment with different preprocessing methods and then tune hyperparameters to achieve greater performance. After tuning the algorithms, we analyze their performance at detecting anomalous network data in order to select the best method for using NSL-KDD to create an accurate IDS.
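A minimal sketch of one of the three approaches, an Isolation Forest on NSL-KDD-style records (the synthetic data and contamination rate below are assumptions; real preprocessing and dataset loading are omitted):

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 41))  # NSL-KDD records have 41 features
    X_test = np.vstack([rng.normal(size=(95, 41)),
                        rng.normal(loc=4.0, size=(5, 41))])  # injected anomalies

    clf = IsolationForest(n_estimators=200, contamination=0.05, random_state=0)
    clf.fit(X_train)

    pred = clf.predict(X_test)  # +1 = normal, -1 = anomalous
    print("flagged %d of %d records" % (np.sum(pred == -1), len(X_test)))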
11:00
Tommy Nguyen (CSU Student Chapter of ACM, United States) Charles Pruitt (CSU Student Chapter of ACM, United States) Hassan Stewart (CSU Student Chapter of ACM, United States) Joshua Bandoo (CSU Student Chapter of ACM, United States)
Phishing in the Age of Deepfakes: Emerging Cybersecurity Risks and Defenses
ABSTRACT. Phishing remains one of the most common and damaging forms of cyberattack, but advancements in artificial intelligence have created new, highly convincing methods of deception. A major concern is the rise of deepfakes—synthetic audio, video, and images generated using AI—to strengthen phishing campaigns. Attackers can now imitate trusted individuals, such as company executives or family members, to manipulate victims into transferring money or revealing confidential information.
Although phishing has been studied extensively, the integration of deepfakes significantly increases the complexity and effectiveness of these attacks. This issue is critical because it exploits human psychology and exposes the weaknesses of current detection technologies.
This project investigates how deepfakes are being used in phishing schemes, analyzes real-world case studies, and evaluates both technical and organizational defense strategies. The study highlights the importance of AI-driven detection systems, Zero Trust frameworks, and employee awareness training in mitigating deepfake phishing threats.
This topic holds strong relevance in the fields of Computer Science, Information Technology, and Cybersecurity. It illustrates how AI innovation can simultaneously advance technology while introducing new security vulnerabilities. The ultimate goal is to raise awareness and provide actionable strategies for identifying and preventing deepfake-based phishing attacks.
Works Cited:
Kietzmann, Jan, et al. “Deepfakes: Trick or Treat?” Business Horizons, vol. 63, no. 2, 2020, pp. 135–146. Elsevier, https://www.sciencedirect.com/science/article/abs/pii/S0007681319301600?via%3Dihub
Smith, John. “AI and Phishing Threats.” Cybersecurity Today, 2024, www.cybersecuritytoday.com/ai-phishing-threats
Smaller but Smarter: Why Small Language Models May Be the Future of AI Accuracy
ABSTRACT. As artificial intelligence continues to evolve, the race to create increasingly larger language models has revealed a new drawback: scale does not always equal accuracy. Large Language Models (LLMs) are trained to handle nearly every kind of task, but their broad generalization often leads to factual errors, hallucinations, and overall reduced reliability. Small Language Models (SLMs), by contrast, demonstrate that smaller systems can outperform their larger counterparts on specialized tasks. This presentation explores how overgeneralization affects the accuracy of large models, why specialization enhances performance, and how the AI community might benefit from a shift toward smaller models. Ultimately, the goal is to show that the future of AI may depend not on building bigger systems, but on building smaller, more purposeful ones.
11:40
James Hart (UT Martin, United States) Mehmuna Haque (Southeast Missouri State University, United States) Robert Lowe (UT Martin, United States)
FlyWire Project Data
ABSTRACT. The complete mapping of neural connections, or connectomes, provides an unprecedented foundation for bridging biological and artificial intelligence. The FlyWire project has produced a detailed connectome of the Drosophila melanogaster brain, which we use as the basis for exploring biologically inspired neural architectures. In this project, we aim to recreate parts—if not the entirety—of the fly connectome and translate them into computational neural networks. Using the Keras deep learning framework, we implement recurrent and feedforward models that mirror the connectivity patterns observed in FlyWire data. Our objective is to test whether architectures derived from real biological networks can yield improved performance or novel properties compared to conventional artificial neural networks. This work not only advances the understanding of how connectomic data can be computationally leveraged, but also highlights the potential for biological blueprints to guide the design of more efficient and adaptive machine learning systems.
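One plausible way to pin a Keras layer's connectivity to a connectome (an illustrative assumption, not necessarily the project's approach) is to multiply the kernel by a fixed 0/1 adjacency mask so that absent synapses stay absent during training:

    import numpy as np
    import tensorflow as tf

    class MaskedDense(tf.keras.layers.Layer):
        """Dense layer whose weights are zeroed wherever the connectome has no edge."""
        def __init__(self, mask):
            super().__init__()
            self.mask = tf.constant(mask, dtype=tf.float32)  # (in, out) 0/1 matrix

        def build(self, input_shape):
            self.kernel = self.add_weight(
                shape=self.mask.shape, initializer="glorot_uniform", trainable=True)

        def call(self, x):
            return tf.nn.relu(x @ (self.kernel * self.mask))  # masked connectivity

    # Toy adjacency standing in for a FlyWire-derived submatrix.
    adj = (np.random.default_rng(0).random((64, 32)) < 0.1).astype(np.float32)
    model = tf.keras.Sequential([tf.keras.Input(shape=(64,)),
                                 MaskedDense(adj),
                                 tf.keras.layers.Dense(10)])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))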
Autumn Moncrief (University of Virginia at Wise, United States)
Leveraging Real-World Randomness for Improved Computer Security and Applications
ABSTRACT. The real world is full of possibilities and endless combinations - true randomness - something that computers lack. Computers need the ability to be random, and while they can statistically produce endless combinations, they are not random in nature. Computers are pseudo-random, which means that where randomness matters, such as in security, the computer is vulnerable because its output is predictable. How do we get around this?
In this research I plan to investigate different ways the real world can introduce randomness into computers for programs, security, games, and more, and to evaluate how beneficial introducing it can be.
This presentation will examine different tools for getting around pseudo-randomness and how they can be beneficial moving forward.
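The gap described above is easy to demonstrate in Python: a seeded pseudo-random generator is fully reproducible, while the operating system's entropy pool (fed in part by real-world noise) is not predictable from within the program:

    import random
    import secrets

    prng_a = random.Random(42)
    prng_b = random.Random(42)
    print(prng_a.random() == prng_b.random())  # True: same seed, same stream

    print(secrets.token_hex(16))  # drawn from the OS CSPRNG; differs on every run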
11:00
Mathews Tomat (Samford University, United States) Brian Toone (Samford University, United States)
Real Time Vehicle Detection with YOLO: Evaluating Speed, Accuracy and Recall
ABSTRACT. In the following research on neural networks and artificial intelligence, the reliability, accuracy, and speed of the YOLO (You Only Look Once) AI image processing model are explored to determine its usefulness in a practical setting. Using over 4,000 training images, the program was modified and optimized for real-time vehicle detection under varying light and weather conditions. The goal is ultimately to measure the recall, accuracy, and speed when detecting vehicles traveling alongside the camera that captures the photos. Irrelevant vehicles, located farther away from the camera or on the opposite side of the road, were discarded. The top speeds achieved were in the 30 FPS range, and accuracy reached well into the 95% range. These results indicate strong potential for real-world deployment in traffic situations.
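A hedged sketch of such an inference loop with the ultralytics package (the abstract does not name the YOLO version or tooling, so the weights file, video source, and class filter here are illustrative choices):

    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                 # pretrained weights; COCO class 2 = car
    cap = cv2.VideoCapture("ride_video.mp4")   # hypothetical input footage

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model.predict(frame, classes=[2, 3, 5, 7], conf=0.5,
                                verbose=False)  # car, motorcycle, bus, truck
        annotated = results[0].plot()           # draw boxes for inspection
        cv2.imshow("vehicles", annotated)
        if cv2.waitKey(1) == 27:                # Esc to quit
            break
    cap.release()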
ABSTRACT. Agentic AI systems have the ability to plan, make decisions, learn from their environment, and execute tasks. These systems can be integrated into the payment processing and finance industries in the form of smart fraud detection systems, investment insight agents, and autonomous payment bots. However, the use of agentic AI in these settings poses challenges in maintaining Payment Card Industry Data Security Standard (PCI DSS) compliance and securing user data. This review, conducted alongside Leojai Hibbert under Dr. Sudip Mittal and Dr. Subash Neupane as part of the Cybersecurity in Emerging Technologies REU site at Mississippi State University, explores current research on agentic systems and presents ways to develop more compliant architectures. We explore suggested compliance solutions through a set of academic literature and map these solutions to overall PCI DSS goals such as maintaining secure networks, cardholder data protection, system testing, and access control measures. Additionally, the review addresses emerging ideas such as explainable AI (XAI) systems for auditing, robust encryption for data handled by agents, and governance and human oversight solutions. We aim to outline further research directions and draw attention to the potential security and compliance risks associated with agentic systems and their implementations within payment environments. We argue that agentic-specific vulnerabilities should be addressed beyond high-level compliance frameworks in order to realize the benefits of these systems.
11:40
Md. Nurullah (Columbus State University, United States) Rania Hodhod (Columbus State University, United States) Gaurob Saha (Columbus State University, United States)
Interpretable AI for Multi-Label Plant Disease Diagnosis
ABSTRACT. Plant diseases impact crop yield, quality, and overall agricultural productivity, posing a serious danger to global food security. Plant disease diagnosis has historically depended on laborious visual examinations by specialists, which can result in errors. Through the examination of leaf images, machine learning (ML) and artificial intelligence (AI), in particular Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), provide a quicker, automated method of identifying plant illnesses. But because they are opaque, these models are often termed "black box" models, which reduces confidence in their predictions. According to our research, the use of Explainable AI (XAI) methods such as Grad-CAM, Integrated Gradients, and LIME greatly enhances model interpretability and helps practitioners recognize the fundamental signs of plant illnesses. In addition to advancing the field of plant disease identification, this work presents a viewpoint on enhancing AI transparency in practical agricultural applications by utilizing explainable AI methods. Our suggested models perform better than previous studies on the same dataset, with training accuracies of 100.00% for ViT, 96.88% for EfficientNetB7, 93.75% for EfficientNetB0, and 87.50% for ResNet50, and corresponding validation accuracies of 96.39% for ViT, 86.98% for EfficientNetB7, and 82.00% for EfficientNetB0. This shows a significant boost in model performance while preserving transparency and credibility through interpretable and trustworthy decision-making.
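For reference, Grad-CAM follows a standard recipe for Keras CNNs; the sketch below is that generic recipe, not the paper's code, and last_conv must name the model's final convolutional layer:

    import tensorflow as tf

    def grad_cam(model, image, last_conv, class_idx):
        """image: (1, H, W, 3) preprocessed leaf photo; returns an HxW heatmap."""
        grad_model = tf.keras.Model(
            model.input, [model.get_layer(last_conv).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image)
            score = preds[:, class_idx]                  # target disease class
        grads = tape.gradient(score, conv_out)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))  # per-channel importance
        cam = tf.einsum("hwc,c->hw", conv_out[0], weights)
        cam = tf.nn.relu(cam)                            # keep positive evidence only
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()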
Madison Gardner (Western Kentucky University, United States) Jacob Kennedy (Western Kentucky University, United States) Isaac Gentry (Western Kentucky University, United States) Michael Galloway (Western Kentucky University, United States) Gregory Arbuckle (Western Kentucky University, United States) A K M Foysal Ahmed (Western Kentucky University, United States)
A Real-Time Cyber-Physical Framework for Predictive Maintenance in Smart Manufacturing
ABSTRACT. Smart manufacturing is a cornerstone of Industry 4.0 (4IR). It leverages AI, the Industrial Internet of Things (IIoT), and automation to transform production efficiency, product quality, and supply-chain agility. However, manufacturing systems still face costly unplanned downtime and underperformance due to static models, siloed sensor streams, and delayed analytics. Addressing these challenges requires a new approach of interconnected systems that combine rapid, edge-level intelligence with robust, scalable architectures to develop foundational advances in computer and information science that enable the next generation of intelligent, AI-driven, distributed systems.
11:00
Eric Webb (Nova Southeastern University, United States) Gregory Simco (Nova Southeastern University, United States) Francisco Mitropoulos (Nova Southeastern University, United States) Sumitra Mukherjee (Nova Southeastern University, United States) Michael Lehrfeld (East Tennessee State University, United States)
Implementing RSA Accumulators for Asynchronous and Permissionless Reliable Broadcasting
ABSTRACT. Asynchronous consensus protocols are critical for decentralized and trustless environments such as decentralized finance, supply chains, and voting systems. These protocols avoid centralized authority and timing assumptions, improving resilience and security. As networks scale, communication overhead becomes a major bottleneck that limits performance. The Aleph protocol is a notable example that offers both asynchronous and permissionless Byzantine Fault Tolerance. Unlike many prior designs, Aleph does not depend on a trusted dealer or fixed membership, making it well suited for open blockchain systems. Aleph’s design advances decentralization and security in the blockchain trilemma, but at the cost of higher communication complexity, hindering scalability. The Aleph consensus relies on a Chain Reliable Broadcast protocol (ch-RBC) that suffers from quadratic communication overhead in large networks. This study enhances ch-RBC by replacing its Merkle tree-based transaction validation with Rivest–Shamir–Adleman (RSA) accumulators. RSA accumulators provide compact, constant-sized proofs that can be batched and parallelized, thus reducing the protocol’s complexity from O(Tr + N² log N) to O(Tr + N²), where T and r denote the number of transactions and rounds, respectively. This modification lowers bandwidth consumption and improves scalability while preserving security guarantees. In this study both Merkle and RSA based versions of ch-RBC were implemented in Rust and deployed on AWS EC2 instances using the AWS CDK. Experiments were scaled from 5 to 104 nodes with batch sizes up to 1024 transactions per round. Key metrics included throughput, latency, communication overhead, and resource utilization. Results demonstrated that RSA accumulators significantly improve scalability as the network increases, showing promise for future asynchronous and permissionless consensus.
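The constant-size proofs at the heart of this approach can be illustrated with a toy RSA accumulator (real deployments use a large modulus of unknown factorization and map transactions to primes; both are faked here for brevity):

    N = 3233  # toy modulus (61 * 53): insecure, demo only
    g = 2     # public base

    elements = [3, 5, 7]     # transactions mapped to primes
    acc = g
    for p in elements:
        acc = pow(acc, p, N)  # accumulate: g^(3*5*7) mod N

    # Witness for 5: accumulate everything except 5.
    wit = g
    for p in elements:
        if p != 5:
            wit = pow(wit, p, N)

    # Verify: raising the witness to the element must reproduce the accumulator.
    assert pow(wit, 5, N) == acc
    print("membership of 5 verified with a single group element")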
Service-Learning as a High-Impact Practice: Fostering Identity, Motivation, and Success in First-Year Computer Science, Cybersecurity, and Mathematics Majors
ABSTRACT. Service-learning is a high-impact practice associated with gains in engagement and persistence, yet it remains underutilized in early STEM curricula. This paper reports on a pilot service-learning initiative embedded in a freshman seminar for first-year Computer Science, Cybersecurity, and Mathematics majors at a regional public university. Students completed a one-hour, professionally relevant shadowing experience with the campus Information Technology Services help desk, client services, and related units, followed by structured reflection. Using an exploratory qualitative case study design, we analyzed reflective essays revealing evidence of shifts in professional identity, computing self-efficacy, and intentions to persist. Themes indicated that the brief experience helped students appraise career fit, expand their understanding of computing work beyond programming, and translate observation into concrete next steps. While bounded by a single site and short duration, the results suggest brief service-learning experiences can function as a scalable, resource-efficient touchpoint that supports professional identity development, motivation, and success for first-year students in technical majors.
TealPlay: A Media Center for Experiential Learning
ABSTRACT. TealPlay is an MIT-licensed open source media center application that is designed to be accessible to lower-level undergraduate computing students. TealPlay provides an opportunity for these students to work on a large program early in their academic careers, instead of focusing exclusively on small example problems, which could build student confidence and improve motivation and satisfaction. This application is designed to solve a pedagogical chicken-and-egg problem in which students could benefit from experience with a larger application, but larger applications tend to be inaccessible without experience. TealPlay can provide opportunities to practice communications, project management, tool use, and testing skills.
CompileIt Updates: Supporting UT Single Sign-On Migration
ABSTRACT. CompileIt is a Firebase-hosted Angular web application developed by Bob Bradley (with early contributions from Kurt Wesner) nearly a decade ago. It enables students in UT Martin’s introductory Python and C++ programming courses to write, compile, test, and submit assignments entirely online. A key feature of the platform is its integration with both UT Martin’s single sign-on (SSO) system and the Canvas learning management system.
Starting Fall 2025, UT Martin ITS began the process of provisioning all student email accounts in Microsoft Exchange instead of Google Workspace. Furthermore, an initiative from the UT System prompted ITS to configure its SSO environment to be able to use Microsoft Azure authentication for its campus services. Although Firebase supports multiple authentication providers, adapting CompileIt to work with both Google and Microsoft sign-ons required significant updates and coordination with university IT systems.
This presentation describes the technical and architectural changes needed to support dual SSO integration in CompileIt. The process involved collaboration with Steven Robertson in UT Martin’s ITS department and highlighted several design considerations when maintaining long-lived academic web applications. Time permitting, additional updates to the CompileIt platform will also be demonstrated.
13:20
Ken Nguyen (Clayton State University, Morrow, GA 30260, United States) Muhammad Rahman (Clayton State University, Morrow, GA 30260, United States) Xiangdong An (Clayton State University, Morrow, GA 30260, United States)
Vibe Programming vs. Human Developers: Why Software Engineers Still Matter in the Age of AI
ABSTRACT. The fields of computer science (CS) and information technology (IT) are at a critical juncture. A confluence of economic factors has led to a widely reported downturn in tech employment, while some universities are seeing their first dip in CS enrollments after a decade of growth. This has led some institutions to consider program cutbacks. Simultaneously, the proliferation of generative AI has fueled a narrative suggesting the obsolescence of traditional software development roles. In this research we look at the potential impact of AI and "vibe programming" on software development. We argue that conclusions and actions to undermine human software developers are premature and strategically flawed. We present evidence that the current downturn is a cyclical correction. Furthermore, we contend that while AI and "vibe programming" offer benefits for rapid prototyping, their inherent limitations (highlighted by high-profile system failures, catastrophic data loss incidents, low pilot success rates, and an inability to handle long-term maintenance) will drive a stronger demand for professionals with deep, foundational CS and IT expertise. Citing overwhelming labor market data and national strategic interests, we conclude that now is the time for expansion, not contraction, of CS and IT education.
13:40
Karen Carter (University of Virginia at Wise, United States)
Global Engagement in AI & ML Curricula: A Research Framework
ABSTRACT. This Commonwealth Cybersecurity Initiative (CCI)-funded, continuing exploratory research paper centers on UVA-Wise and its five sister international institutions. A background on the "summers and winters" of Artificial Intelligence (AI) and Machine Learning (ML) informs this research, followed by a discussion of the research framework used to explore how education, specifically higher education, is struggling to embrace AI/ML concepts within curricula, and finally a review of the ethical concerns presented when preparing students for experiential learning leading to workforce entry.
14:00
Mir Hasan (Austin Peay State University, United States) Joseph Elarde (Austin Peay State University, United States) Barry Bruster (Austin Peay State University, United States)
AI-Assisted Literature Review: Tools and Techniques
ABSTRACT. With the growing volume of scholarly publications, conducting a thorough literature review has become increasingly time-consuming and challenging. In recent years, artificial intelligence (AI)–powered tools have emerged to help researchers locate, organize, and synthesize relevant studies more efficiently. This session introduces participants to the concept of AI-assisted literature reviews, focusing on practical tools and techniques that can enhance research productivity and accuracy. The talk will provide an overview of widely used AI platforms that can identify key themes, summarize findings, and uncover emerging trends in a research area.
ABSTRACT. This paper presents a comprehensive taxonomic framework for analyzing beamforming × ML research through automated ArXiv literature mining. We introduce a dual-domain tagging system that distinguishes between what we know and how we figured it out, i.e., algorithmic techniques versus learning methodologies, enabling systematic identification of research gaps and trends. Our analysis of 512+ research papers spanning 2008-2025 reveals significant growth in supervised learning applications to beamforming, with neural networks and reinforcement learning showing emergent prominence. We previously produced a paper on this topic that explored AI variation only and treated beamforming (BF) as a monolithic topic. Our platform provides researchers with tools for systematic literature analysis, trend identification, and cross-domain research opportunity discovery, with the capability to rerun our code as the field advances to obtain up-to-the-minute results. Key findings include the dominance of adaptive beamforming paradigms in ML applications and the emergence of deep learning techniques in massive MIMO and millimeter-wave systems.
13:20
Rahul Raj (Columbus State University, United States) Morgan Brown (Columbus State University, United States) Chandler Carabajal (Columbus State University, United States) Luka Wilmink (Columbus State University, United States)
Zero Trust for Non-Terrestrial Systems
ABSTRACT. Introduction
Cyber threats in the Non-Terrestrial Network (NTN) environment have shown that traditional perimeter-based security models are insufficient for specialized missions. NTNs provide a critical service enabling global connectivity and are exceptional targets for nation-state cyber actors. To establish a proper network security posture, we are collaborating with INSuRE Research to integrate the Zero Trust framework. Our investigation focuses on identifying gaps in existing protocols and surveying an emerging class of Low Earth Orbit (LEO) NTN platforms, CubeSats. In our study, we have accumulated literature reviews of essential protocols within space communications, CubeSat architecture, and case studies of successfully launched CubeSats.
Space Packet Protocol
To gain a better understanding of how space communications take place, we are investigating the Space Packet Protocol (SPP) and the various recommended protocols that are chained to it for the sake of security enhancement. This protocol is designed to set standards for the efficient transfer of data under the technological and logistical constraints of non-terrestrial missions. We are actively analyzing both the Space Packet Protocol’s structure and the potential security gaps and attack vectors left open by the security protocols that it works alongside. By taking a top-down approach to researching space communications recommendations, we hope to properly evaluate these protocols for their potential toward Zero Trust adaptations.
Space Data Link Security Protocol
The Space Data Link Security Protocol (SDLSP) is crucial for securing space communications. SDLSP enhances packet security by adding an additional header and trailer. Our research will involve analyzing the protocol in-depth, focusing on key management strategies, and its ability to prevent a variety of potential cyber-attacks. We will also analyze its software implementation and evaluate how SDLSP aligns with Zero Trust principles to improve overall security.
Core Flight System (cFS)
The Core Flight System (cFS) is the framework NASA deploys in its CubeSats. It has multiple constituent parts but is, in effect, a process that emulates an operating system using NASA’s Operating System Abstraction Layer (OSAL). With this system, we can test how Zero Trust practices could be properly implemented in mission-critical systems. Our team is currently setting up a development environment where preexisting satellite software can be tested and new software can be written if deemed necessary. After finalizing the setup, we will perform a gap analysis of existing security measures in the Core Flight System.
Advancing Building Energy Consumption Forecasting through Time Series Clustering
ABSTRACT. Time series forecasting has advanced considerably in recent years, as has the need for its application in energy conservation, cost reduction, and environmental sustainability. This growing demand led to the development of the Building Fault Detection (BFD) framework, a multi-step system designed for continuous monitoring, forecasting, and fault detection across multiple buildings. The dataset used in this framework includes four primary components: energy data, weather data, occupancy data, and building metadata. Every building is pre-categorized in the building metadata, and an Extreme Gradient Boosting (XGBoost) model is trained for each category. Model validation using the Coefficient of Determination (R²) showed high general accuracy (R² > 0.8), confirming the efficiency of the framework in most cases. However, a few buildings within the same category underperformed (R² < 0.5), indicating that categorical grouping by definition alone does not account for the heterogeneity in temporal and behavioral building energy consumption.
To address this limitation, the present research integrates time-series clustering algorithms, namely Time-Series K-Means (TSKMeans) and Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN), to re-cluster the buildings according to their energy use behavior rather than their assumed operational types. This time-series clustering approach allows for data-driven reclassification that recognizes concealed patterns in temporal consumption behaviors, operating schedules, and environmental responsiveness. By grouping buildings by consumption profiles instead of structural or functional typologies, the system achieves a more detailed representation of energy behavior, with clusters that better reflect real-world operating distinctions. The research is in its evaluation phase, which focuses on quantifying the influence of these behavioral clusters on model stability and prediction accuracy. Early tests suggest that buildings within a cluster demonstrate similar energy consumption patterns. Ultimately, we hope to extend the existing BFD framework into one that continuously embeds behavioral intelligence, adaptively reacts to changing operating conditions, and facilitates proactive energy management by delivering greater accuracy in fault detection and forecasting fidelity.
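A hedged sketch of the re-clustering step with tslearn's TimeSeriesKMeans (the synthetic load profiles, cluster count, and DTW metric choice are assumptions, not the study's configuration):

    import numpy as np
    from tslearn.clustering import TimeSeriesKMeans
    from tslearn.preprocessing import TimeSeriesScalerMeanVariance

    rng = np.random.default_rng(0)
    profiles = rng.random((40, 168, 1))  # 40 buildings x one week of hourly kWh

    X = TimeSeriesScalerMeanVariance().fit_transform(profiles)  # z-normalize each series
    km = TimeSeriesKMeans(n_clusters=4, metric="dtw", random_state=0)
    labels = km.fit_predict(X)

    # Each behavioral cluster would then get its own XGBoost forecaster.
    for c in range(4):
        print("cluster %d: %d buildings" % (c, np.sum(labels == c)))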
14:00
Amr Hamdy (Columbus State University, United States) Sergio Hernandez (Columbus State University, United States) Danish Rao (Columbus State University, United States) Andrew Massey (Columbus State University, United States)
The Intelligent Robotic Companion
ABSTRACT. This project presents an intelligent robotic companion powered by machine learning and multimodal sensing technologies, designed to perceive, understand, and respond to human interaction in real time. The system integrates classification algorithms for visual perception, sensor fusion for environmental awareness, and adaptive decision-making algorithms based on reinforcement learning to enable context-sensitive responses. By demonstrating personalized and adaptive interaction, the platform illustrates how AI-driven robotics can enhance human–robot relationships across diverse embodiments, highlighting the potential of intelligent systems to advance the development of socially aware, interactive, and autonomous technologies.
14:20
Bernice Santana (CSU Student Chapter of ACM, United States) Adan Gutierrez (CSU Student Chapter of ACM, United States) Muhammad Rahman (Advisor of CSU Student Chapter of ACM, United States)
Disaster Mapping Enhanced with AI
ABSTRACT. The integration of artificial intelligence into disaster mapping represents a critical advancement in modern emergency management. AI-driven mapping holds immense potential to improve preparedness, response, and recovery efforts during natural or man-made disasters by enabling faster and more accurate situational awareness. Communities across the globe, particularly those with limited infrastructure, face major challenges in disaster response due to delays in accurate mapping and resource allocation. Traditional methods are often too slow or lack precision. The complexity of managing real-time data and predicting hazard impact zones cannot be adequately addressed using outdated approaches, highlighting the urgent need for AI-enhanced solutions in certain areas. This project explores the implementation of machine learning and computer vision models for processing satellite images, aerial photography, and sensor data. The approach involves training AI systems to automatically detect damage zones, predict risk areas, and classify infrastructure vulnerabilities. Studies have shown that AI-based disaster mapping significantly reduces response time and improves the accuracy of information delivered to first responders. Our research shows that artificial intelligence has the capacity to revolutionize disaster mapping by providing scalable, adaptable, real-time solutions. The outcome suggests that implementing AI-driven systems could strengthen community resilience and minimize disaster-related losses. This makes AI-driven disaster mapping a crucial component of future emergency management frameworks.
Comparative Evaluation of Large Language Models for Cybersecurity Code Generation: GPT-4o, Claude-3, and Gemini
ABSTRACT. Software code generation, including for cybersecurity applications, is increasingly being done with the help of Large Language Models (LLMs). Whether these models can reliably produce functional, secure code remains an open question, however. This paper compares three of the most well-known LLMs, GPT-4o (OpenAI), Claude-3 (Anthropic), and Gemini (Google DeepMind), on ten security-related coding prompts. The output generated by each model was executed, and a pass or fail was recorded. The research shows dramatic discrepancies in performance, with Gemini being the most dependable and Claude-3 the worst performer. This paper provides an overview of the state of the art in LLM-based, security-focused software development coding tools.
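A minimal sketch of the kind of pass/fail harness the paper describes, assuming each model's generated solution has already been saved to its own Python file (the directory layout and the 30-second budget are assumptions, not the paper's setup):

    import subprocess
    from pathlib import Path

    # Assumed layout: outputs/<model>/<prompt>.py, e.g. outputs/gpt-4o/prompt_03.py
    results = {}
    for snippet in sorted(Path("outputs").glob("*/*.py")):
        model = snippet.parent.name
        try:
            # A snippet "passes" if it runs to completion, error-free, in 30 s
            proc = subprocess.run(["python", str(snippet)],
                                  capture_output=True, timeout=30)
            passed = proc.returncode == 0
        except subprocess.TimeoutExpired:
            passed = False
        results.setdefault(model, []).append(passed)

    for model, outcomes in sorted(results.items()):
        print(f"{model}: {sum(outcomes)}/{len(outcomes)} prompts passed")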
Akshith Nukala (Columbus State University, United States) Rania Hodhod (Columbus State University, United States) Vineetha Bandla (Columbus State University, United States)
Strategic AIML Integration for Volunteer Management Mobile Application
ABSTRACT. Artificial Intelligence and Machine Learning (AIML) have revolutionized the way the world works, from chat assistants in mobile phones to autonomous navigation in cars, in personal life as well as the business world. More recently, AI has advanced to the point of serving as a powerful programming agent and assistant, as well as handling various data-driven applications such as image categorization and chatbots. Nevertheless, with the hype producing numerous ready-to-integrate AI solutions, the selection of an AI solution and an efficient deployment strategy are important factors affecting cost and system reliability.
This case study presents our approach to integrating AIML technologies into a tailored cross-platform mobile application designed to assist non-governmental organizations with volunteer event coordination and communication. Primary functionalities of the target application include event management, collaborative discussion forums, and administrator blogs.
The AIML applications involved in this case study are grouped into three main categories:
Content Intelligence: LLMs generate and refine content such as event descriptions and outreach and engagement emails, enabling healthy communication. Additionally, lightweight ML applications that summarize and tag event details and recommend similar events boost community engagement and active participation.
Community Moderation and Health: Conversations are filtered in real time for violence and abuse, proactively fostering a safe conversational space. This is achieved through sentiment analysis and media content analysis, with discussion threads assessed to quickly gauge volunteers' emotional state and disposition (a minimal sketch follows this list).
Agentic Automation and Personalization: Agentic AI enhances operational effectiveness and streamlines administrative interaction by suggesting curated actions that perform tasks in a single click, such as automating the monthly newsletter, blog creation, and so on.
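For the moderation gate, an off-the-shelf sentiment model can stand in for the classifiers described above (the model choice and the 0.95 threshold are illustrative assumptions, not the application's actual stack):

    from transformers import pipeline

    # Default sentiment model as a stand-in for the moderation classifiers
    classifier = pipeline("sentiment-analysis")

    def should_hold(message: str, threshold: float = 0.95) -> bool:
        """Return True if the message should be held for moderator review."""
        result = classifier(message)[0]
        return result["label"] == "NEGATIVE" and result["score"] > threshold

    if should_hold("This event was a disgrace and you should be ashamed."):
        print("Message flagged for moderator review")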
This study systematically evaluates the major solutions from cloud-platform APIs and open-source alternatives, analyzing their trade-offs, particularly cost and functionality, and justifying the chosen solution.
Constraint-Aware Counterfactual Explanations: A Human-Centric Framework for Transparent and Actionable Graduate Admission Decisions
ABSTRACT. The increasing use of "black box" machine learning models in important decisions such as graduate admission, medicine, and law raises critical concerns about fairness, trust, transparency, and interpretability. These models provide accurate results but fail to advise applicants who have concerns about their decision. This study addresses these deficiencies by presenting the Constraint-Aware Counterfactual Explanations (CACE) framework, an innovative system that transforms a predictive model into an interactive, intelligent advisor. Our approach begins with a rigorous hyperparameter optimization pipeline that uses the Optuna framework to determine the best predictive engine. We then conduct a formal algorithmic fairness audit, which provides the empirical grounding for an advanced XAI system. The main novelty is the architectural design of the CACE framework's intelligent triage system. This system assesses the applicant's case and directs them to one of three specialized explanation modules: (1) a SHAP-based Explanatory Report for admitted candidates, highlighting the key factors that contributed to their success; (2) SHAP-based Foundational Advice for clear rejections, identifying the primary areas of weakness; and (3) a Counterfactual-based Prescriptive Report for borderline cases, which generates diverse, realistic, and actionable pathways to achieve admission. By successfully demonstrating all three counseling paths, this work presents a complete, human-centric blueprint for building more ethical and empowering AI. It moves beyond standard feature-importance explanations (such as PDP and PFI) to provide the transparent, actionable counsel that is essential to promoting trust and equity in algorithmic decision-making.
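A minimal sketch of the triage routing, assuming a fitted tree-based classifier and illustrative probability thresholds (the actual CACE decision boundaries and module internals are not reproduced here):

    import numpy as np
    import shap

    ADMIT_T, BORDERLINE_T = 0.70, 0.40   # assumed thresholds, for illustration

    def triage(model, x: np.ndarray):
        """Route one applicant (1-D feature vector x) to an explanation module."""
        p = model.predict_proba(x.reshape(1, -1))[0, 1]
        explainer = shap.TreeExplainer(model)        # assumes a tree-based engine
        contributions = explainer.shap_values(x.reshape(1, -1))
        if p >= ADMIT_T:                             # admitted: success factors
            return "shap_explanatory_report", contributions
        if p >= BORDERLINE_T:                        # borderline case
            # A counterfactual generator (e.g., the DiCE library) would propose
            # realistic feature changes that flip the prediction to 'admit'.
            return "counterfactual_prescriptive_report", contributions
        return "shap_foundational_advice", contributions   # clear rejection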
PlantBase: A Knowledge-Based System for Gardening Success
ABSTRACT. Successful gardening requires extensive knowledge of plant care, which can take years to acquire. Here we introduce PlantBase, an application designed to provide users, experienced or not, with the information they may need to care for and discover different plants. PlantBase features a robust database that outputs tailored care plans and information based on user-defined criteria such as seasonality, plant type, and difficulty level. Backed by a substantial amount of data, PlantBase is able to filter and query the results that best fit the user's needs. The system integrates AI image recognition to identify subjects and then conducts a health analysis to detect issues such as diseases and pests, providing actionable treatment suggestions. Users are also able to track and log the progress of their plants, with notifications to help them stay consistent and in tune with their plants. All in all, PlantBase is a useful tool aimed at promoting agriculture and making information on gardening easily accessible to everyone.
Sarah Thompson (Western Kentucky University, United States) Kelly Miller (Western Kentucky University, United States) Emma Simpson (Western Kentucky University, United States) Lisa Hendricks (Western Kentucky University, United States)
VR Climate Science Simulation: Extreme Heat
ABSTRACT. A significant challenge in climate science modeling is visualizing data in a way that is both scientifically accurate and accessible to a broad audience. By leveraging virtual reality’s (VR’s) immersive capabilities, this project will enable users to experience climate scenarios firsthand rather than relying solely on traditional 2D maps or numerical models. The Climate Science Modeling project will feature an interactive 1:1 replica of Western Kentucky University's campus, which users can explore to learn how different climate scenarios affect WKU’s campus. The data needed to create these climate scenarios will be pulled from various climate science case studies and repositories. Climate scenarios will include extreme heat, flooding as a result of rainfall, and snow and ice. In each of these scenarios, users will be able to interact with WKU’s campus to discover the safe and unsafe places to be on campus, and will be able to see a visual simulation of the event. This VR environment will serve as a long-term educational resource, fostering interdisciplinary collaboration between climate scientists, engineers, urban planners, and emergency management professionals.
Mustafa Atici (Western Kentucky University, United States) Ferhan Atici (Western Kentucky University, United States) Olusegun Adebayo (Western Kentucky University, United States)
Generalized Master Theorem in Analysis of Divide-and-Conquer Algorithms
ABSTRACT. The complexity of a divide-and-conquer algorithm is typically analyzed using recurrence relations. A common form of recurrence relation for such algorithms is
T(n)=aT(n/b)+f(n).
Here T(n) is the time complexity of a given problem with input size n. The integer a is the number of subproblems created in the recursion, and the input size of each subproblem is n/b. The function f(n) is the “cost” of dividing the problem into subproblems and combining their solutions to obtain the solution of the given problem.
The Master Theorem determines the time complexity of a given problem by comparing f(n) with n^(log_b a). There are three cases, in which f(n) is asymptotically less than, equal to, or greater than n^(log_b a).
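For reference, the three cases take the standard form (with the usual regularity condition in the third case):

    T(n) = \begin{cases}
      \Theta\left(n^{\log_b a}\right) & \text{if } f(n) = O\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0,\\
      \Theta\left(n^{\log_b a} \log n\right) & \text{if } f(n) = \Theta\left(n^{\log_b a}\right),\\
      \Theta\left(f(n)\right) & \text{if } f(n) = \Omega\left(n^{\log_b a + \varepsilon}\right) \text{ for some } \varepsilon > 0 \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1.
    \end{cases}

For example, merge sort's recurrence T(n) = 2T(n/2) + Θ(n) falls into the second case, giving T(n) = Θ(n log n).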
There are two main constraints in the Master Theorem. The first is that the given problem must be divided into equal-sized subproblems, that is, a subproblems each with input size n/b for a given problem with input size n. The second is that the divide-and-combine cost f(n) must be polynomial. In this work, we will focus on the following recurrence relation and its solution:
T(n)=a_1 T(n/b)+a_2 T(n/b^2)+f(n)
As in the Master Theorem, we will determine solutions for T(n) based on the relations among a_1, a_2, b, and the function f(n).
Generative AI, Digital Technology and its Impact on Education and Learning: Implications for Studying Computer Science
ABSTRACT. Great advances in AI over the last 20 years have made it available to the general public. It is in much of our software and hardware, and it is a buzzword that almost everyone has heard. There is no reason that it cannot be a useful tool. But like most tools, it cannot be a crutch on which we place absolute trust. How does its use in K-12 impact problem-solving skills? And how does the use of digital learning technology from K-12 onward impact problem-solving skills? I will explore some of the actual and likely impacts of the use of generative AI and digital technology on education and learning. We will look at some best practices, as well as places it should not be used. Some current K-12, college, and university uses will also be explored.
13:20
Jianhua Yang (Columbus State University, United States)
Stepping-stone Intrusion Detection and its Development Trend
ABSTRACT. Cyber threats emerge as more and more people use the Internet. One of the most widespread and critical threats is stepping-stone intrusion. Unlike other cyber-attacks, most computers under stepping-stone intrusion have no idea they are actually under attack. Stepping-stone intrusion detection is thus critical to securing our Internet infrastructure. In this abstract, we summarize typical approaches developed since 1995 to detect stepping-stone intrusion. To make them easier to understand, we divide the detection approaches into two categories: HSSI (host-based stepping-stone intrusion) detection and NSSI (network-based stepping-stone intrusion) detection. Among HSSI detection approaches, we present content thumbprints, time thumbprints, number of packets, packet random-walk processes, and cross-over packets. Among NSSI detection approaches, we primarily focus on Yung's approach, the step-function approach, clustering-partitioning, K-means data mining, and the RTT distribution of network traffic. We finally discuss future trends in stepping-stone intrusion detection. One important trend is upstream detection, because it is a significant way to lower false-negative detection errors.
In recent years, one clear trend in the development of stepping-stone intrusion detection has been the move from downstream detection to upstream detection. Upstream detection estimates the length of the connection chain from the sensor to the attacker. Progress in upstream detection would allow us to detect stepping-stone intrusion while not only reducing false-negative detection errors but also protecting the victim side, since in this setting the sensor is the victim. Most traditional approaches proposed for NSSI detection do not work for upstream detection, and little progress has been made on upstream stepping-stone intrusion detection even though much effort has been put into this open research problem.
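As a rough illustration of the clustering idea behind several of these approaches, the RTTs of matched send/echo packet pairs tend to form one level per downstream connection, so counting distinct RTT levels estimates the chain length (this sketch and its gap parameter are a simplification for illustration, not any of the cited algorithms):

    def estimate_chain_length(rtts, gap=0.01):
        """Count distinct RTT levels among matched send/echo packet pairs.

        rtts: round-trip times in seconds; gap: assumed minimum separation
        (in seconds) between the RTT levels of successive connections.
        """
        levels, prev = 0, None
        for rtt in sorted(rtts):
            if prev is None or rtt - prev > gap:
                levels += 1        # a new RTT level => one more connection
            prev = rtt
        return levels

    # Two tight RTT levels near 40 ms and 120 ms => estimated chain length 2
    print(estimate_chain_length([0.039, 0.041, 0.040, 0.118, 0.122, 0.120]))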
13:40
Qing Wang (The University of Tennessee at Martin, United States)
The VC-Dimension of Visibility on the One-Sided Guarding Terrains
ABSTRACT. Visibility in computational geometry describes the relationship between points or objects in a geometric space, specifically whether one point or object can “see”, or have an unobstructed line of sight to, another. This concept is crucial for solving various geometric problems, including path-finding and motion planning, and has broad applications in fields like robotics and surveillance.
An interesting measure of the complexity of a set system is the notion of VC-dimension. In this paper, we show that the VC-dimension of visibility on terrains where guards can see either only to their left or only to their right is exactly 3. We give a lower-bound construction and prove that shattering 4 points on such a terrain is not possible.
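For context, shattering and VC-dimension carry their standard definitions, with the set system here consisting of the visibility regions of the terrain's points:

    S \subseteq X \text{ is shattered by } \mathcal{R} \subseteq 2^{X}
      \iff \{\, S \cap R : R \in \mathcal{R} \,\} = 2^{S},
    \qquad
    \mathrm{VCdim}(\mathcal{R}) = \max \{\, |S| : S \text{ is shattered by } \mathcal{R} \,\}.

Thus showing the VC-dimension is exactly 3 requires exhibiting 3 points that are shattered and proving that no 4 points can be.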
Teaching Operating Systems with Impactful Programming & Soft Skills
ABSTRACT. I have been teaching an Operating Systems course for over 15 years. I have also attended many conferences and workshops on best practices for teaching Operating Systems in undergraduate-level courses. Through all these years I have been experimenting to find the happy crossroads between helping students understand the major components of an operating system and showing how those components will impact them as they move into their careers. So often OS classes are taught with the major focus on the algorithms used in an OS and on having students program an OS to understand how those algorithms are implemented. I have heard from many faculty that it seems to be the best programmers who succeed in this class, while the weaker student programmers are left frustrated and often miss the real learning goals of the course.
I have presented before on how I work into my courses many of the “soft skills” that the tech world wants our students to bring to the workplace, and I have continued to develop those in the current iteration of my OS course. I have combined team skills, research, and programming into the course, but the programming is focused on using the OS and the assets that most students will interact with in their futures. In this presentation I will describe what I have been attempting and what impact it has had on students so far.
At this meeting, we will discuss what worked, what didn't work, and how we can improve our conference for next year. All conference officers will attend. Professionals are encouraged to attend. For students, this meeting is optional.
At dinner, we will socialize, have an amazing meal, and celebrate our students who presented at this year's conference. Prizes will be awarded to the top presentations and posters of the day.