ACM-MIDSE-2023: ACM MIDSOUTHEAST
PROGRAM FOR FRIDAY, NOVEMBER 17TH


08:00-09:00 Session 1: Keynote Talk
Location: Azalea
08:00
Distributed Analysis of Wireless at Nextscale

ABSTRACT. Improving radio frequency (RF) technologies (e.g., mobile phones, radar, satellites, IoT) requires scientists and engineers to discern and exploit relevant physics. Computational modeling can provide these key insights in cases where necessary experiments are intractable and pen-and-paper calculations are inadequate. However, the computational cost of these models often requires a compromise: one must reduce either the complexity of the scenario or the fidelity of the simulation, as full-resolution simulation of interacting systems can tax even the most capable supercomputers. The recent achievement of exascale computing and the near-term industry goal of zettascale suggest the importance of reassessing what is possible with physics-based simulation, particularly for RF applications. Consequently, the Cosmic Team at Oak Ridge National Laboratory is developing Distributed Analysis of Wireless at Nextscale (Cosmic DAWN), a collection of tools enabling scalable design space exploration and physics-based simulation of RF systems. In this talk, I introduce Cosmic DAWN at a high-level via notional use-cases. I characterize where we are currently and where we intend to go in the near term. Finally, due to the wealth of concepts used under the hood (from, e.g., high-performance computing, applied mathematics, electrical engineering, digital signal processing, AI/ML, and more), I’ll identify research and development opportunities that would benefit the community as it ventures toward nextscale.

09:15-10:35 Session 2A: Professional Presentations
Location: Highlander I
09:15
Game Controller Design Alternatives for Non-Traditional Applications – Proof of Concept

ABSTRACT. Game controllers, ranging from a potentiometer-based paddle game controller used for gaming in the 1970s (Retrogame Deconstruction Zone 2019) to the recently released PlayStation VR2 (Sony Interactive Entertainment LLC 2023), are merely interfaces to a computing platform. They digitize physical phenomena to be used as input to the system and receive digital signals to generate feedback to the user. Contemporary controllers read inputs including joystick position, button or trigger force, trackpad touch coordinates, and three-dimensional gyroscope and accelerometer readings while providing feedback in the form of vibrations, clicks, increased button or trigger resistance, and video and audio output.

These dynamic user interfaces suggest applications beyond entertainment. A controller could act as an interface to applications such as physical therapy (Yuan, et al. 2020), mental health therapy (Holmes, et al. 2009), educational games to increase motivation and attention in students with learning disabilities (Garcia-Redondo, et al. 2019), and games for the visually or physically disabled (Swaminathan, et al. 2018). In some instances, applications could require specially designed controllers that might monitor and provide feedback for a medical device like a hinged knee brace, mimic a device like a peripheral intravenous catheter (PIVC) for training purposes, or attach to a device like a walker to record patient progress.

This work examines four alternatives to consider when developing a controller-based interface for a non-traditional application: building a custom device from components, interfacing via Bluetooth or USB to a commercial controller, writing an application for a smart watch, or interfacing via Bluetooth to an active pen/stylus. It compares these alternatives based on criteria identified as crucial to the development of prototypes and adaptability to a classroom laboratory setting.

References

Garcia-Redondo, Patricia, Trinidad Garcia, Debora Areces, Jose Carlos Nunez, and Celestino Rodriguez. 2019. "Serious Games and Their Effect Improving Attention in Students with Learning Disabilities." International Journal of Environmental Research and Public Health 16 (14). https://www.mdpi.com/1660-4601/16/14/2480.

Holmes, Emily A., Ella L. James, Thomas Coode-Bate, and Catherine Deeprose. 2009. "Can Playing the Computer Game “Tetris” Reduce the Build-Up of Flashbacks for Trauma? A Proposal from Cognitive Science." Plos One. https://doi.org/10.1371/journal.pone.0004153.

Retrogame Deconstruction Zone. 2019. "Digression: The Paddle Controller." Retrogame Deconstruction Zone. August 26. https://www.retrogamedeconstructionzone.com/2019/08/digression-paddle-controller.html.

Sony Interactive Entertainment LLC. 2023. "PlayStation VR2 tech specs." PlayStation Official Site. https://www.playstation.com/en-us/ps-vr2/ps-vr2-tech-specs/.

Swaminathan, Manohar, Sujeath Pareddy, Tanuja Sunil Sawant, and Shubi Agarwal. 2018. "Video Gaming for the Vision Impaired." ASSETS '18: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. New York, NY. 465-467. https://doi.org/10.1145/3234695.3241025.

Yuan, Rey-Yue, Shih-Ching Chen, Chih-Wei Peng, Yen-Nung Lin, Yu-Tai Chang, and Chien-Hung Lai. 2020. "Effects of interactive video-game-based exercise on balance in older adults with mild-to-moderate Parkinson's disease." J Neuroeng Rehabil. 17 (1). https://pubmed.ncbi.nlm.nih.gov/32660512/.

09:35
Exploring the Impact of Lesson Summaries for Data-Driven Teaching Practices

ABSTRACT. Originally developed to improve student engagement in the face of communication barriers, lesson summaries are a simple tool with many benefits. Over five semesters, students in 14 computer science courses completed brief post-lesson forms with self-assessment and summarization components. In addition to promoting engagement, the activity helped students clarify their understanding and reinforce their comprehension skills. Reflecting on the lesson, students noted any areas of confusion and follow-up tasks. The summaries served as a valuable aid for communication as well as exam preparation. For the instructor, immediate access to student comprehension data provides a holistic view of the class’s understanding. This enabled targeted adjustments and clarification of any misconceptions in real time. Post-semester analysis helped to identify the content areas requiring the most attention and guided recommendations for continuous improvement in courses assessed for accreditation, e.g. ABET. Collected over multiple semesters, the data revealed trends that informed course design and development. Furthermore, the summaries may supplement traditional student evaluations of teaching (SETs). Whereas SETs are collected only once and may suffer from low response rates, the summaries capture student perception of understanding over the entire semester and from all students. Overall, the lesson summaries foster student engagement, provide valuable insights for instructors, and support data-driven, reflective teaching practices.

09:55
Efficacy and efficiency of academic dishonesty using generative artificial intelligence

ABSTRACT. The emergence of robust, human-like generative artificial intelligence (AI) has raised important questions for higher education. Can generative AI be used for academic dishonesty? Can generated text be readily identified by instructors? Over the summer of 2023, an instructor used ChatGPT 3.5 and 4 to emulate human responses across four online courses at a rural community college. The results of the study showed AI-generated text to be exceedingly impressive, but it may not (yet) be a one-size-fits-all tool for cheating in college classes.

10:15
Strengthening the Undergraduate Computer Science Curriculum at Tennessee State University through Robotics: Fostering Diversity, Interest, and Retention in STEM

ABSTRACT. The rapid advancement of technology in the 21st century has revolutionized the way we live and interact. In this era, the pursuit of education in computing technology plays a crucial role in addressing the intrinsic need to improve our lives through technological advancements. However, the field of computer science faces challenges as many students enter programs without a clear understanding of what computer science entails and lacking the necessary problem-solving skills required in STEM education. This study aims to strengthen the undergraduate computer science curriculum at Tennessee State University (TSU) by infusing robotics, with a focus on fostering diversity, generating interest, and improving retention in STEM fields. By incorporating robotics into the curriculum, students will have the opportunity to learn programming fundamentals in an engaging and hands-on manner. The objective is to promote and propel students towards better opportunities in careers, research, and education. The main advantage of integrating robotics into the curriculum is that it aligns with the mission of the study, emphasizing the cultivation of problem-solving skills while providing students with practical experience in robotics, artificial intelligence (AI), and machine learning. These areas are highly sought-after in today's market, necessitating the preparation and success of students in these domains. By gaining experience in robotics, students also have the opportunity to expand their knowledge and awareness of robotic research, including coding equality and biases. Furthermore, the inclusion of robotics in the beginner programming course curriculum can aid in recruitment and retention efforts at the university. Overall, this study aims to have a circular impact on students entering, progressing through, and graduating with a degree in computer science from TSU. By equipping students with the necessary technological skills, problem-solving abilities, and research insights, the study aims to prepare students for success in their chosen careers and contribute to the broader field of computer science.

09:15-10:35 Session 2B: Undergraduate Presentations
Location: Azalea
09:15
An Effective Approach for Stepping-stone Intrusion Detection Resistant to Intruders’ Chaff-Perturbation via Packet Crossover

ABSTRACT. Today's intruders usually send attacking commands to a target system through several stepping-stone hosts in order to reduce the chance of being detected. With stepping-stone intrusion (SSI), the intruder's identity is hidden behind a long interactive chain of hosts and is very hard to detect. An effective approach for SSI detection (SSID) is to estimate the length of the chain; this type of method is called network-based SSID. Most existing network-based SSID methods work effectively only when intruders' session manipulation is not present: they are either unable to resist intruders' chaff-perturbation or have very limited capability to resist an attacker's session manipulation. This paper develops a novel network-based SSID algorithm that resists intruders' chaff-perturbation by using packet crossover. Our proposed SSID algorithm is simple and easy to implement, as the number of packet crossovers can be easily computed. We provide rigorous technical proofs to verify the correctness of the proposed algorithm. Experimental results show that our algorithm works effectively against intruders' chaff-perturbation at chaff rates of up to 50%.
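
The abstract does not give the detection algorithm itself, but the quantity it relies on is easy to illustrate. A minimal sketch, assuming a "crossover" means a request packet sent before the echo of the previous request has returned (the paper's precise definition may differ):

    # Hypothetical sketch: counting packet crossovers in an interactive session.
    def count_crossovers(send_times, echo_times):
        """send_times[i] and echo_times[i] are capture timestamps (seconds)
        of the i-th request packet and its matching echo packet."""
        crossovers = 0
        for i in range(len(send_times) - 1):
            if send_times[i + 1] < echo_times[i]:
                crossovers += 1
        return crossovers

    # Example: the 2nd request (t=0.30) leaves before the 1st echo (t=0.35) returns.
    sends = [0.00, 0.30, 0.90]
    echoes = [0.35, 0.60, 1.20]
    print(count_crossovers(sends, echoes))  # -> 1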

09:35
INTEGRATED FRAMEWORK FOR PROACTIVE DATA LOSS PREVENTION AND AUTO DATA RECOVERY

ABSTRACT. With the proliferation of interconnected systems and the exponential growth of data, organizations across various sectors face mounting challenges in protecting their valuable information. Organizations face increasingly sophisticated cyber threats in the rapidly evolving digital landscape that often result in significant data breaches and loss. Data loss prevention (DLP) and data recovery (DR) are critical components of a robust cyber security framework, aiming to protect sensitive information and ensure its availability. Over the years, we have seen DLP and DR implemented as two separate entities. This research paper investigates and proposes innovative approaches to enhance data loss prevention and recovery mechanisms, considering emerging technologies, evolving threat landscapes, and regulatory requirements. The paper proposes an approach in which DLP and DR are merged into one integrated framework with the aid of artificial intelligence tools and software. By addressing the gaps in existing solutions, this research aims to contribute to the advancement of cyber security practices in safeguarding valuable data assets.

09:55
QR Code Tip Lines Using TCP/IP Network for Assisting Police with Information from the Public

ABSTRACT. A Quick Response (QR) code is a two-dimensional bar code that can be scanned by QR code scanner apps on mobile devices to access a web locator. In this paper, we explain how QR codes are implemented in a crime-reporting tip line for the Atlanta police department. We also discuss how the QR code tip line uses TCP/IP networking protocols at the network layer to function properly. The application of these protocols, along with the police department's support for QR code implementation, shows promise for adoption throughout the nation.
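
As an illustration of the front end of such a system (not the department's implementation; the URL is a placeholder), a tip-line address can be encoded as a QR image with the third-party qrcode package (pip install qrcode[pil]); scanning the code resolves the URL, and the resulting HTTP request then travels over the TCP/IP stack discussed in the paper:

    import qrcode

    # Encode a hypothetical tip-line URL; scanning it opens the reporting page.
    url = "https://example.org/tipline"      # placeholder, not the real tip line
    img = qrcode.make(url)                   # returns a PIL image of the QR code
    img.save("tipline_qr.png")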

10:15
Exploring the Boundless Realm of Digital Realities in Gamification

ABSTRACT. The metaverse concept has ignited individual and industrial imaginations, presenting a vision of an interconnected digital realm that transcends physical confines. This research focuses on the transformative potential of gamification within the metaverse. Gamification is a crucial element that has the potential to redefine how people interact with this digital reality and revolutionize various aspects of our lives. We delve into the technological foundations of the metaverse (virtual reality, augmented reality, blockchain, and artificial intelligence), with a specific lens on how these technologies intersect with gamification. Combining gamification principles with these technological pillars has the potential to reshape industries, redefine human interaction, and revolutionize entertainment, education, and commerce within the metaverse. Furthermore, we scrutinize gamification's challenges and ethical considerations in this context, including privacy, security, and the digital divide; navigating these issues is essential as this research embarks on the journey toward a more gamified metaverse. This research offers valuable insights into the synergies between gamification and the metaverse, emphasizing the need for further research and development in this rapidly evolving field. Our exploration lays the foundation for harnessing the full potential of gamification in the digital realities of the metaverse.

09:15-10:35 Session 2C: Undergraduate Presentations
Location: Dogwood I
09:15
Revitalizing Solar Insights: A Dashboard for West Tennessee Solar Farm

ABSTRACT. This project is an interactive dashboard for displaying solar irradiance data collected at a photovoltaic power station. Given a recent push by the University of Tennessee Research Foundation toward revitalizing its use, the West Tennessee Solar Farm will serve as a template. This location is of particular interest due to its proximity to Blue Oval City (the site of the new Ford manufacturing plant near Stanton, TN). With the farm's existing dashboard non-functional, there is demand for a solution, which we will achieve through MySQL, Python, the Google Drive API, R-Shiny, Shinyapps.io, and Google Cloud Console.

MySQL serves as our data hub, efficiently organizing solar energy data by sensor location. Python, coupled with the Google Drive API, simulates real-time data collection. The core of the project is an R-Shiny dashboard offering real-time data visualization, interactive maps, detailed sensor information, and access to historical data and analysis, with users able to select their desired time frames. Shinyapps.io hosts the dashboard, ensuring accessibility across diverse platforms, such as web browsers and all major operating systems, and promoting widespread availability. To further fortify data security and enhance user convenience, Google Cloud Console safeguards our API information. The dashboard also incorporates an export function, enabling users to extract data, and an easy-to-use webpage makes valuable solar irradiance data easily accessible to a diverse audience. This project aims to provide researchers, policymakers, and the public with real-time insights into solar irradiance data at the West Tennessee Solar Farm, supporting sustainable energy solutions.
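
As a minimal sketch of the ingestion side of such a stack (the schema, credentials, and file name are placeholders, not the project's configuration), a batch of sensor readings can be loaded into the MySQL hub with pandas and SQLAlchemy:

    # Minimal sketch: load a batch of irradiance readings into MySQL.
    # pip install pandas sqlalchemy pymysql
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("mysql+pymysql://user:password@localhost/solar_farm")
    # Assumed columns: sensor_id, timestamp, irradiance_wm2
    readings = pd.read_csv("irradiance_batch.csv")
    readings.to_sql("readings", engine, if_exists="append", index=False)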

09:35
Bullet Blitz

ABSTRACT. Bullet Blitz is a multiplayer first-person shooter (FPS) game based on classical shooters from the past. In the game, players assume the roles of intelligent rodents armed with an array of weapons. This project is built on Unreal Engine 5 using assets made in Blender and Ultimate Doom Builder. Bullet Blitz is meant to engage players in a fast-paced FPS game with an emphasis on fast movement to keep players in fights with minimal downtime. Players fight each other or artificially intelligent (AI) enemies on different kinds of maps, find weapons, and eliminate opponents to score points.

Bullet Blitz has networking capabilities for multiplayer interactions, reliable damage handling, and efficient spawning systems to keep the action flowing smoothly. Multiple kinds of maps will be available to play on, as well as various environments in those maps that require the player to use the movement systems in order to gain an advantage over other players. The weapons are balanced into two groups, primaries and power weapons. While primaries are common and the player can equip two, power weapons are much stronger, but are combined with the map's environment, making them only accessible via the movement system.

09:55
Experimentation with AI-generated Malware Behavior and Various Detection Techniques

ABSTRACT. In the ever-evolving cybersecurity landscape, malware generated by text-based artificial intelligence (AI) poses a formidable challenge. As AI-generated attacks become harder to detect, traditional security systems risk missing them entirely. AI-generated malware is notable for its capacity for polymorphic transformation and adaptive behavior. Developing effective detection mechanisms against AI-generated threats is essential to protecting digital systems and data. Sandboxing environments and emulation tools serve as arenas for simulating real-world scenarios, letting us observe how malware interacts with its environment.

Our project, a work in progress, delves into the analysis of AI-generated malware, employing a comprehensive array of detection methods. This study began with creating a System32 virus (an AI-generated malware sample) and a sandbox environment. We plan to observe, analyze, and apply different detection methods through the use of a virtual machine. Using machine learning algorithms, the research aims to develop models capable of discerning patterns within code structures to identify these malicious entities. The goal is to fortify traditional signature-based detection with behavioral insights, creating a robust defense mechanism against AI-generated threats.

Ultimately, this approach aims to measure how accurately traditional and AI-based detection methods identify AI-generated malware.

10:15
Eye Movement Desensitization and Reprocessing Therapy in Virtual Reality: Proof of Concept

ABSTRACT. This research focuses on the integration of Eye Movement Desensitization and Reprocessing therapy (EMDR) and Virtual Reality (VR). EMDR is a therapeutic approach using bilateral stimulation to process distressing memories, emotions, and experiences. It is widely employed for conditions like PTSD, anxiety, and depression. The study involves the development of a proof-of-concept virtual reality tool tailored for EMDR therapy sessions. The tool comprises a bilaterally moving sphere within the user's VR environment, controllable by the therapist through a graphical interface on a computer. The therapist can dynamically adjust the sphere's color, speed, and sound to enhance the therapeutic process. The study's findings affirm the feasibility of creating a VR tool that supports therapists in conducting effective EMDR therapy sessions.
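
The bilateral stimulus itself comes down to simple oscillation math. A minimal, engine-agnostic sketch (the amplitude and speed are illustrative parameters, not values from the study):

    # Sketch: horizontal position of the sphere oscillating between the
    # user's left and right visual field; the therapist would adjust
    # amplitude_m and cycles_per_second from the control interface.
    import math

    def sphere_x(t, amplitude_m=0.5, cycles_per_second=1.0):
        """Horizontal offset of the sphere at time t (seconds)."""
        return amplitude_m * math.sin(2 * math.pi * cycles_per_second * t)

    for step in range(5):
        t = step * 0.25
        print(f"t={t:.2f}s  x={sphere_x(t):+.3f} m")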

09:15-10:35 Session 2D: Peer-Reviewed Presentations
Location: Dogwood II
09:15
A Polynomial Algorithm to Compute Geodetic Ratio of a Graph

ABSTRACT. For a given simple graph $G$, the geodetic number was defined in the late 1990s, and no polynomial algorithm is known for computing the geodetic number of a given graph $G$. In this study, we introduce a new graph-theoretic parameter called the {\it geodetic ratio} ($gr$), derived from the geodetic number, and give a polynomial algorithm to compute $gr(G)$ for a given graph $G$.
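
For reference, the standard definitions behind the geodetic number (the abstract does not spell out the definition of $gr$ itself): for $u, v \in V(G)$, the interval $I[u,v]$ is the set of all vertices lying on some shortest $u$-$v$ path, and for $S \subseteq V(G)$, $I[S] = \bigcup_{u,v \in S} I[u,v]$. A set $S$ is geodetic if $I[S] = V(G)$, and the geodetic number is
\[ g(G) = \min\{\, |S| : S \subseteq V(G),\ I[S] = V(G) \,\}. \]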

09:35
Cybersecurity Threat within Southwest Virginia's Agriculture

ABSTRACT. This exploratory research sought to gain insight into rural Southwest Virginia (SWVA) agricultural producers' awareness and utilization of smart technologies, and to determine training needs related to access to services, personal information privacy (PIP), proprietary information safeguards, and IP address protection. Agriculture is recognized as one of the 16 critical infrastructure sectors important to both the U.S. economy and national security. SWVA farmers have traditionally applied generational farming methods but are now transitioning to smart technology in an effort to increase crop and herd production quality. Along with the increased quality, however, the risk of a cyberattack also increases: attackers could exploit smart-agriculture vulnerabilities to access sensitive data, steal resources, and destroy equipment. Increased awareness of, and access to, smart technology training will provide a long-term utilization model for education, training, and quality agricultural production for rural producers. Findings noted a lack of training opportunities for producers within the SWVA region. While some general training has been made available to FSA agency personnel, cybersecurity training is not being provided to local producers. Thus, a lack of awareness of the critical steps needed to protect PIP, proprietary information, and IP addresses is an issue of vital concern.

09:55
Smart building energy consumption prediction deployed on the cloud using Spark and XGBoost

ABSTRACT. This research focuses on employing the XGBoost algorithm to predict energy consumption across more than 50 smart facilities. The dataset was limited to climatic readings and energy consumption readings in kWh, initially spanning 18 months, and the challenge lay in identifying and modeling intricate non-linear relationships. To address this, the XGBoost algorithm was adopted, configured with lag features to model the temporal nature of the problem, and trained to predict consumption over 48-hour windows. The standout feature of our system is its live nature: continuously ingesting new data, it grows every day, improving its predictive power at every retraining interval. Operationalized to scale on the cloud, this work underscores the practical utility of machine learning, specifically XGBoost, in real-time energy management scenarios with specific data constraints, while highlighting the potential obstacles introduced by a limited dataset.
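
A hedged sketch of the lag-feature setup described here (column names, lag choices, and hyperparameters are illustrative assumptions, not the paper's configuration):

    # Sketch: lag features plus a climatic feature feeding an XGBoost regressor.
    # pip install pandas xgboost
    import pandas as pd
    from xgboost import XGBRegressor

    df = pd.read_csv("consumption.csv", parse_dates=["timestamp"])
    df = df.sort_values("timestamp")

    # Lag features give the tree model a view of recent consumption history
    # (hourly readings assumed).
    for lag in (1, 24, 48):
        df[f"kwh_lag_{lag}"] = df["kwh"].shift(lag)
    df["target"] = df["kwh"].shift(-48)       # predict 48 hours ahead
    df = df.dropna()

    features = [c for c in df.columns if c.startswith("kwh_lag_")] + ["temperature_c"]
    model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=6)
    model.fit(df[features], df["target"])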

10:15
Enhancing Chest X-ray Analysis with ACGAN: Generating Diverse and Privacy-Preserving Medical Images
PRESENTER: Khem Poudel

ABSTRACT. This study presents an Auxiliary Classifier Generative Adversarial Network (ACGAN) tailored for enriching the realm of chest X-ray diagnostics. The ACGAN architecture is devised to create synthetic chest X-ray images resembling real cases, addressing the scarcity of diverse medical imaging datasets while upholding patient privacy. By focusing on the diagnosis of lung diseases, specifically pneumonia, the model introduces a conditional framework within the GAN setup. The discriminator not only differentiates between authentic and synthesized medical images but also identifies the pathological conditions present in the generated images. A key innovation in this approach is its capacity to mitigate common GAN-related issues, including training difficulties and mode collapse. The ACGAN's dual functionality not only enhances the generation of authentic-like medical images but also improves the accuracy of deep convolutional neural networks in diagnosing lung diseases. The research demonstrates the potential of ACGAN in simulating a variety of normal and pathological chest X-ray images, laying a foundation for augmenting limited medical imaging datasets and bolstering the efficacy of diagnostic systems for lung diseases.
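
A hedged PyTorch sketch of the dual-head discriminator idea described here (layer sizes, input resolution, and the two-class setup are illustrative, not the authors' architecture):

    # Sketch: one discriminator body with two heads, one scoring real vs.
    # synthetic and one predicting the pathology class (e.g., normal vs.
    # pneumonia), which is the "dual functionality" of an ACGAN.
    import torch
    import torch.nn as nn

    class ACGANDiscriminator(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Flatten(),
            )
            feat = 64 * 16 * 16                         # for 64x64 grayscale inputs
            self.adv_head = nn.Linear(feat, 1)          # real vs. generated
            self.cls_head = nn.Linear(feat, n_classes)  # auxiliary class label

        def forward(self, x):
            h = self.body(x)
            return self.adv_head(h), self.cls_head(h)

    d = ACGANDiscriminator()
    adv, cls = d(torch.randn(4, 1, 64, 64))   # batch of stand-in chest X-rays
    print(adv.shape, cls.shape)               # torch.Size([4, 1]) torch.Size([4, 2])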

10:40-12:00 Session 3A: Professional Presentations
Location: Highlander I
10:40
The Role of a Computer Science Definition in Teaching an Introductory Course

ABSTRACT. Samford University offers a one semester course, Introduction to Computer Science, for students who are not majoring in computer science. This is a service course for the university that can also fulfill a General Education science requirement for nonmajors. Most students come to this course with little knowledge of computing, beyond what they know of their laptop, tablet, or smartphone. As a result, students need an appropriate definition of computer science that helps them to connect the topics in this broad introductory course and to identify how computing is affecting their daily lives and the world around them.

In this talk, I will discuss how I use a particular definition of computer science in this course as a conceptual framework to introduce various computer science topics and tie them together. I’ll talk about how this definition reinforces the objectives of the course. I will also present ideas for how it might be useful as an assessment tool to gauge student comprehension of course material. Finally, I will discuss how this might be a useful tool in other computer science courses.

11:00
Systematic Review of Empathy in Technology
PRESENTER: Kriti Chauhan

ABSTRACT. Various societal, ethical, and human elements are involved in developing, using, and supporting computer systems. Empathy, an ability “to perceive the internal frame of reference of another with accuracy”, is one of those elements. Researchers have studied empathy in several disciplines, and numerous discipline-specific empathic models exist in the research literature.

In computer science and information technology, empathy has been, and continues to be, a research focus in many different contexts. The research field of empathic computing, a paradigm that enables a system to understand human states and feelings, has been growing for almost two decades. It aims to use technology to create deeper shared understanding or empathy between people or people and computing systems. Empathic design, a user-centered design approach for creating computing solutions, impacts computer system development. Empathy plays an important role in online communications, between people, and between people and computer systems, such as AI (Artificial Intelligence) chatbots, other generative AI, and immersive technologies. AI, robotics, IoT (Internet of Things) devices, and immersive technologies all involve connections to humans that can be investigated and improved through the lens of empathy and, in turn, influence empathy in people. Researchers can use computer systems to train and encourage empathy. Consequently, there are emerging discussions around the ethics of using such technology.

This research aims to summarize the current state of empathy definitions, models, implementations, and usages in computer science. To that end, we will systematically review empathy under the lens of computer science, followed by a thematic analysis of the identified research articles. I hope to compare, contrast, and highlight seminal studies and, ultimately, identify gaps in the literature for future studies.

11:20
Designing a Software Component to Handle Course Scheduling Constraints

ABSTRACT. The course scheduling administrator of a university department has a difficult task when creating the course schedule. This difficulty is caused by the many parameters that must be considered: the courses to be offered, the available instructors, the available time periods, and the available locations. In addition, there are hard and soft constraints that affect how these inputs can be combined. Often, problems with the schedule manifest themselves later in the term, when they are very difficult and costly to fix. In this presentation, I will present a design for a software component that takes an existing course schedule and its constraints and then reports any violations of the constraints.
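
A hedged sketch of the reporting idea (the presented design is not public; the data model is assumed): given a schedule, flag hard-constraint violations such as a room or an instructor double-booked in the same time slot.

    from collections import defaultdict

    schedule = [
        {"course": "CS101", "instructor": "Smith", "slot": "MWF 9:00", "room": "H-101"},
        {"course": "CS240", "instructor": "Smith", "slot": "MWF 9:00", "room": "H-202"},
        {"course": "CS310", "instructor": "Jones", "slot": "MWF 9:00", "room": "H-202"},
    ]

    def report_conflicts(schedule, key):
        """Group courses by (key, slot) and report any pair sharing both."""
        seen = defaultdict(list)
        for entry in schedule:
            seen[(entry[key], entry["slot"])].append(entry["course"])
        return {k: v for k, v in seen.items() if len(v) > 1}

    print(report_conflicts(schedule, "instructor"))  # Smith booked twice at MWF 9:00
    print(report_conflicts(schedule, "room"))        # H-202 booked twice at MWF 9:00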

11:40
3 Laws are not enough: Developing an AI Acceptable Use Policy

ABSTRACT. When Isaac Asimov developed his Three Laws of Robotics, he assumed the laws were obvious and that humans would follow them as well. In the age of generative AI, with the many ways AI has been woven into our lives, it is important to look at how it should and should not be used. Protected information, copyrighted information, and even academic integrity can be greatly affected by its use. It is important for any institution to determine what is acceptable and what is not. This talk presents the elements we deemed to be important parts of such a policy or set of policies.

10:40-12:00 Session 3B: Undergraduate Presentations
Location: Azalea
10:40
Development of an automated bat box detection system to assess use in response to microclimate

ABSTRACT. Many bat species use man-made structures for nightly roosts due to habitat loss from human disturbance. Recent research suggests that these bat houses often overheat during the summer due to size, box placement, and over-crowding, ultimately leading to mortality in vulnerable bat species. Though we have begun research to define both the temperature limits of bats and which bat box designs can maintain temperatures within those limits, we have yet to assess how bats respond to the temperature of bat boxes. We developed an inexpensive detection system and datalogger that can be deployed on commonly used bat boxes to record the exact time a bat enters and exits a bat box. Each system consists of two passive infrared (PIR) sensors connected to a microcontroller (ATMega328p) that records the time when a PIR sensor detects a change in infrared radiation, e.g., when a bat enters or exits a box. By placing the two PIR sensors a specified distance apart, we can determine whether the bat is entering (triggering the lower PIR sensor before the upper PIR sensor) or exiting (triggering the upper PIR sensor before the lower PIR sensor) the bat box. Each microcontroller references a real-time clock and is connected to two thermocouples (one internal, one external) to record temperature whenever a PIR sensor is triggered. To reduce the battery-related maintenance requirements of the detection system and datalogger and to avoid requiring a constant power source, steps were taken to minimize power draw and extend operational life, including tuning microcontroller characteristics, reducing the operational voltage, and depowering electronic components when not in use. We aim to deploy these systems on publicly owned bat boxes to assess the response of bats to bat box temperature.
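
The direction-inference rule is stated in the abstract and is easy to see in code. A sketch in Python rather than the ATMega328p firmware (the timestamps are illustrative):

    # Whichever PIR sensor triggers first tells us whether the bat is
    # entering (lower first) or exiting (upper first) the box.
    def classify_event(lower_trigger_time, upper_trigger_time):
        """Times are seconds since midnight from the real-time clock."""
        if lower_trigger_time < upper_trigger_time:
            return "entry"   # lower sensor fired first: bat moving up into the box
        return "exit"        # upper sensor fired first: bat moving down and out

    print(classify_event(79201.10, 79201.45))  # entry
    print(classify_event(79330.80, 79330.20))  # exit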

11:00
The Tale of Adlez

ABSTRACT. The Tale of Adlez is a two-dimensional, top-down, single-player action-adventure video game in the style of classic video games from the late 1980s and early 1990s. Our main source of inspiration for this project is The Legend of Zelda series, and our primary goal is to provide players with a similar gaming experience. This project was created using the Unity game engine along with C#. The Tale of Adlez follows an unnamed hero as they embark on a quest to collect three magical artifacts scattered across the game’s fantasy-inspired world. These artifacts are required to progress to the final area where the player faces off against the main antagonist.

The Tale of Adlez contains enemy AI, a combat system, an interactive game world, an inventory system, and a shop system where the player can purchase items with in-game currency collected during their adventure. The game world spans many biomes, including deserts, forests, and dungeons, which have all been created using free-to-use game graphics purchased online. The Tale of Adlez also contains NPCs scattered across the world that the player can dialogue with, multiple weapons, and a variety of enemies with differing attack and movement behaviors. The Tale of Adlez was designed to become increasingly difficult as the player progresses through the story to better emulate the style of the classic video games it draws inspiration from.

11:20
ParkSense

ABSTRACT. Insufficient parking spaces on our campus, exacerbated by a new building occupying a large section of a previous lot, have forced students to park farther away. Consequently, this has led to increased travel times to classes, resulting in disruptions and attendance issues.

To address this challenge, we present an IoT solution utilizing OpenCV, YOLOv8, a Raspberry Pi, and a camera module. Our custom-built object detection software, leveraging OpenCV’s real-time computer vision and YOLOv8’s deep learning capabilities, focuses on identifying available and occupied parking spots.

The project’s scope involves real-time parking availability detection using computer vision and a camera stream. The software, trained on a tailored dataset for precision, runs on a Raspberry Pi, handling data collection, calculations, and pattern recognition results.
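
As a hedged sketch of this detection step (not the team's code; their weights are custom-trained on a tailored dataset), the ultralytics package can run a YOLOv8 model on a single camera frame:

    # Sketch: YOLOv8 inference on one frame; a real deployment would use the
    # custom-trained weights and map detections onto marked parking-spot regions.
    # pip install ultralytics opencv-python
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")            # placeholder pretrained weights
    frame = cv2.imread("lot_frame.jpg")   # one frame from the camera stream
    results = model(frame)[0]
    print(f"{len(results.boxes)} objects detected in this frame")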

Users can access a website interface for accurate, periodic updates on parking spot availability. While the technology’s implementation is the primary focus, future iterations may explore additional features, such as insights into optimal parking times. However, we acknowledge the speculative nature of this aspect and emphasize delivering the core functionality of real-time parking detection.

11:40
Gardener’s Best Friend

ABSTRACT. Gardener's Best Friend is a mobile app for people who want to maintain a journal tracking the well-being of their plants. Within this digital journal, gardeners can meticulously record plant-related information, including health status, watering schedules, and sunlight preferences. The primary goal of this app is to help users maintain their plants' schedule, strengthen their connection with their gardens, and enhance their gardening knowledge by capturing progress photos and documenting their plant journey in a personal journal. Gardener's Best Friend aims to enhance the interaction that gardeners have with their plants, providing users with tools to improve the overall wellness of their gardens.

The app offers further information about each of the user's plants, such as its known preferences regarding sunlight exposure, retrieved from a comprehensive database. The database should give the user some insight into their plants' specific needs, ensuring that users can optimize their gardens' care routine. The digital journal even employs a notification system, reminding the user when they need to water their plants based on the user-provided watering schedule. In addition to these features, the app utilizes user location and weather conditions to determine whether watering is necessary on a particular day. Overall, Gardener's Best Friend intends to cater to novice and expert gardeners alike who want to keep records of their plants while receiving guidance on how to take care of them.
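
A tiny sketch of the watering decision described above; the rainfall threshold and inputs are assumptions for illustration, not the app's actual rule:

    def needs_watering(days_since_watered, schedule_days, rain_last_24h_mm):
        """Combine the user's schedule with recent weather."""
        if rain_last_24h_mm >= 5.0:      # assumed: recent rain counts as watering
            return False
        return days_since_watered >= schedule_days

    print(needs_watering(3, 3, 0.0))   # True: due per schedule, no rain
    print(needs_watering(3, 3, 8.2))   # False: recent rain covers it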

10:40-12:00 Session 3C: Undergraduate Presentations
Location: Dogwood I
10:40
Unleashing CUDA: Turbocharging Parallel Computing

ABSTRACT. CUDA (Compute Unified Device Architecture) occupies a central role in our exploration of high-performance computing, focusing specifically on NVIDIA's CUDA framework. This paper explores the intricacies of parallel computing with the intent to motivate and support developers, researchers, enthusiasts, and engineers in adopting CUDA for its adaptability and strength. By examining CUDA's foundational principles, the essentials of parallel programming, and its rich toolkit, our objective is to clarify how this technology can accelerate scientific discoveries, empower AI innovations, and enhance computational performance across diverse domains. We aim to encourage others to discover and leverage the potential of parallel computing, enabling them to expand the horizons of what is attainable within their own projects. The paper commences with an introduction to CUDA, providing an overview of its core concepts, capabilities, advantages, versatility, real-world applications, potential, and recent advancements in CUDA technology.

11:00
Understanding Random Number Generators

ABSTRACT. Random number generators play a crucial role in modern computing with a wide range of applications including cryptography, authentication and so on. It is therefore essential to understand how these random numbers are generated and what security guarantees they can provide within their implementation.

Most random number generators are deterministic random number generators (DRNGs), because true random number generators (TRNGs) are too slow for direct practical application within common systems. DRNGs rely on the input of entropy to produce outputs indistinguishable from random. Random number generators collect entropy, a measure of the randomness within a closed system, through entropy extractors. Entropy extractors may collect entropy from various hardware sources such as ring oscillators, device interrupts, and other hardware events. These sources can be physical (sources with known behavior, often designed specifically for entropy collection) or non-physical (sources with unpredictable behavior that are difficult to test and model). Once the entropy is collected, it is processed by conditioning functions that redistribute entropy among bits to improve the strength and quality of the random numbers. The National Institute of Standards and Technology (NIST), in its publication NIST SP 800-90B [2], has vetted some conditioning functions, classifying them as well known and secure. These include HMAC, CMAC, and CBC-MAC: HMAC is a hash-based message authentication code, CMAC is a block-cipher-based message authentication code, and CBC-MAC is a Cipher Block Chaining message authentication code. The output from a conditioning function is limited by the width of the conditioning function, which is smaller than or equal to the input size. Hence, conditioning functions intake larger inputs and produce smaller, higher-quality bits. They are incapable of adding additional entropy and can even lose entropy in the process.

Through this project we aim to better understand the process of random number generation, focusing on entropy generation and collection, by examining ring oscillators as a source of entropy and comparing the behavior of vetted and unvetted conditioning functions with varying inputs. We will construct a ring oscillator from scratch and collect the jitter entropy it produces within repeating cycles, which we will analyze using the entropy estimation algorithms detailed in the NIST documentation. We will compare this construct to Stephan Mueller's jitter entropy program, an efficient entropy source designed to function within any system environment and provide quality entropy [1], and establish the sampling needed from our construct to match the quality of the program's outputs. These extractors will then be used as the input source in testing vetted and unvetted conditioning functions, using existing Python libraries to process the output for better entropy distribution between bits. Through these experiments, we will better understand the components of DRNGs and which properties enable high-quality random number generation. By the project's conclusion, the different classifications and desirable traits of random number generators will be defined, allowing informed consideration of the security and robustness of RNG implementations when selecting systems and libraries that rely on them.
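
An illustrative sketch of a vetted conditioning function in use: HMAC (one of the NIST SP 800-90B vetted constructions) compresses a large block of raw samples into a fixed-width, better-distributed output. The key and the sample source here are placeholders, not part of a real entropy design:

    import hashlib
    import hmac
    import os

    raw_samples = os.urandom(4096)        # stand-in for raw jitter measurements
    key = b"\x00" * 32                    # conditioning key (placeholder)

    # HMAC-SHA-256 conditions 4096 raw bytes down to 32 output bytes;
    # it cannot add entropy, only concentrate what the samples contain.
    conditioned = hmac.new(key, raw_samples, hashlib.sha256).digest()
    print(len(raw_samples), "raw bytes ->", len(conditioned), "conditioned bytes")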

References

[1] S. Mueller, "smuellerDD/jitterentropy-library: Jitter entropy library," GitHub, https://github.com/smuellerDD/jitterentropy-library (accessed Oct. 13, 2023).

[2] M. Turan, E. Barker, J. Kelsey, K. McKay, M. Baish, and M. Boyle, Recommendation for the Entropy Sources Used for Random Bit Generation, NIST SP 800-90B, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-90B.pdf (accessed Oct. 13, 2023).

11:20
Autonomous Fertilizing Robot

ABSTRACT. Two students pursuing research at Georgia State University - Perimeter College are researching how to make an autonomous soil testing and fertilizing robot. The purpose of this robot is to create a system that can spread fertilizer according to the level of nutrients in the soil. The robot tests the Nitrogen, Phosphorus, and Potassium (NPK) levels in the soil, determines how much more NPK is needed based on the values already present, and dispenses that amount using a drop spreader over a 5-foot course. An Arduino and a control algorithm drive the stepper motors, the movement of the probe, and the dispenser that spreads the needed amount of fertilizer. The robot has a two-layered design, with the fertilizer held on the upper layer; it is weighed and dispensed using the control algorithm, then deposited into the drop spreader on the lower level. The drop spreader spins when the robot moves, so fertilizer is spread across the interval at a constant rate. The interval needed for each fertilization round was determined from the soil density. The NPK sensor is attached to a Cartesian robot arm that probes the soil. Because this sensor adds significant weight to the system, the students determined the optimal wheel radius to support the weight of the robot. This robot can be built upon in the future to include fully autonomous motion and be used with different plants in community gardens and lawns.
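
A sketch of the dosing idea described above; the target levels and the mg/kg-to-grams conversion are illustrative assumptions, not the students' calibrated values:

    # Compute how much of each nutrient to dispense from a soil reading.
    TARGETS_MG_PER_KG = {"N": 50, "P": 25, "K": 40}   # assumed target soil levels

    def fertilizer_dose(measured_mg_per_kg, soil_mass_kg=10.0):
        """Grams of each nutrient to dispense over the course segment."""
        dose = {}
        for nutrient, target in TARGETS_MG_PER_KG.items():
            deficit = max(0.0, target - measured_mg_per_kg[nutrient])
            dose[nutrient] = deficit * soil_mass_kg / 1000.0   # mg/kg -> g
        return dose

    print(fertilizer_dose({"N": 32, "P": 25, "K": 10}))  # only N and K need topping up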

10:40-12:00 Session 3D: Peer-Reviewed Presentations
Location: Dogwood II
10:40
Supporting infrastructure and devops for smart building fault detection
PRESENTER: Kaleb Horvath

ABSTRACT. We have designed a smart building fault detection (BFD) system on Microsoft Azure's Databricks workspace, a high-performance computing (HPC) environment for big-data-intensive applications powered by Apache Spark. Thanks to Databricks' built-in scheduling interface, a continuous pipeline of real-time ingestion, integration, cleaning, and analytics workflows capable of energy consumption prediction and anomaly detection was implemented and deployed in the cloud. Seamless interaction between our workspace and Azure Data Lake Storage (ADLS) allowed for secure and automated initial ingestion of raw data provided by a third party via the Secure File Transfer Protocol (SFTP) and the Azure Blob File-system Secure (ABFSS) protocol drivers. With PySpark, a powerful Python binding to the Apache Spark distributed computing framework, these actions were coded into collaborative notebooks and chained into the aforementioned pipeline. Through intensive study of the API documentation for PySpark and the other libraries provided by the Databricks runtime, the pipeline was successfully managed and configured throughout the lifetime of the project and continues to meet our needs in deployment. In this paper, we present details of the underlying technology stack of our pipeline and enumerate some of the configuration steps required to maintain and develop this big-data analytics application in the cloud.
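
A hedged sketch of one ingestion step in such a pipeline; the container, storage account, columns, and table name are placeholders, not the project's configuration. It assumes a Databricks notebook, where the `spark` session is provided:

    # Read raw CSVs from ADLS over ABFSS, do a basic clean, and persist a table.
    raw_path = "abfss://raw@storageaccount.dfs.core.windows.net/building_data/"

    df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv(raw_path))

    clean = (df.dropna(subset=["timestamp", "kwh"])
               .dropDuplicates(["building_id", "timestamp"]))
    clean.write.mode("append").saveAsTable("bfd.readings_clean")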

11:00
Generative Literary Analysis Using a Multi-Stage Natural Language Processing Pipeline

ABSTRACT. Procedural computational literary interpretation was performed on novels through an automated pipeline. Several books were analyzed using a hybrid of sentiment analysis, character-based clustering, plotting, and network analysis. The quantitative analysis was supplemented with generative AI for data extraction and manipulation. The pipeline's output is an interactive Flask application.
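
A hedged sketch of one pipeline stage (not the authors' code): scoring a passage's sentences with NLTK's VADER sentiment analyzer, the kind of per-sentence signal such a pipeline would aggregate and plot:

    # pip install nltk; the downloads are a one-time setup step.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    nltk.download("punkt", quiet=True)

    text = "The storm broke at last. Elizabeth smiled, certain the worst was over."
    sia = SentimentIntensityAnalyzer()
    for sentence in nltk.sent_tokenize(text):
        print(sia.polarity_scores(sentence)["compound"], sentence)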

11:20
Designing an energy fault detection pipeline for smart buildings

ABSTRACT. Collecting and analyzing large sets of data is becoming massively important in all industries. The manner in which data is stored and interpreted guides multiple decisions and priorities for any given organization. The potential insight gained by the correct interpre- tation of raw data should never be overlooked, for it can make the difference between leaving or finding inadequacies that affect the entire organization. Sometimes, the more data an organization has access to, the more unaware it becomes of the shortcomings otherwise made clear by reviewing the data—there is too much to process. Consider the following: power consumption is fairly informative regarding the nature of building functioning. Human- ity’s increasing energy demands are necessitating more deliberate mechanisms for monitoring energy consumption: deployment of Energy Incident Management (EIM). Critical to this is the inges- tion, preparation, and analysis of raw power consumption data, which classifies readings for users and/or provides more sophisti- cated analysis at some stage in the EIM pipeline. This system must scale. We have been presented with the challenge of analyzing raw power consumption data for a site consisting of over 80 smart buildings. With our system, engineers will be better able to monitor and respond to potential energy faults and gain clearing insight in per-building trends, which will ideally reduce energy usage and increase savings.

11:40
Replacing Regular Expressions in Autograder Feedback

ABSTRACT. Autograders are very useful for faculty and students, but both are frustrated when quality submissions receive zero points. Regular expressions help to increase the evaluation flexibility, but students (especially introductory students) have difficulty in understanding them and correcting their mistakes. Here, we describe our refined feedback tool that provides examples and explanations for missing terms (instead of regular expressions) in automated feedback. We present implementations in Java and Python that will work with almost any grading pipeline. Both implementations are available at https://github.com/hyrumcarroll/RefinedFeedback.
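
As an illustration of the idea (not necessarily the RefinedFeedback API; see the repository above for the actual implementations), feedback for a missing term can pair an example with a plain-language explanation instead of exposing a regular expression:

    # Each required term maps to a hypothetical (example, explanation) pair.
    REQUIRED_TERMS = {
        "average": ("Your output should report the average, e.g. 'average: 42.0'",
                    "We looked for the word 'average' followed by the computed value."),
    }

    def feedback(student_output):
        notes = []
        for term, (example, explanation) in REQUIRED_TERMS.items():
            if term not in student_output.lower():
                notes.append(f"Missing '{term}'. Example: {example} ({explanation})")
        return notes or ["All required terms found."]

    print("\n".join(feedback("mean = 42.0")))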

13:00-14:40 Session 4A: Professional Presentations
Location: Highlander I
13:00
Visibility Graph of Polygons

ABSTRACT. Geometric covering problems have been a focus of research for decades. Most variants of the problem are NP-hard, and therefore most research on geometric set cover focuses on designing polynomial-time approximation algorithms whose approximation ratio is as good as possible. This problem is classically referred to as the art gallery problem as an art gallery can be modeled as a polygon and the points placed by an algorithm represent cameras that can “guard” the art gallery. This has been one of the most well-known problems in computational geometry for many years.

One very closely related issue which has received a lot of attention in the community is the visibility graph of a simple polygon. Two major open problems regarding visibility graphs of simple polygons are the visibility graph characterization problem and the visibility graph recognition problem. The visibility graph characterization problem seeks to define a set of properties that all visibility graphs satisfy. The visibility graph recognition problem is the following. Given a graph G, determine if there exists a simple polygon P such that G is the visibility graph of P.

13:20
Two Week Sprints - What Can Students Really Learn?

ABSTRACT. We have had in our curriculum a class we simply called “Current Topics”. This class was designed so that a faculty member could teach any current topic in the field of computer science. When I was asked to teach it in 2017, I decided to teach mobile development. This was a very hot topic at the time, and we did not offer our students the chance to learn this skill and knowledge anywhere else in our curriculum. Over the summer of 2017 I began to prepare for this class, looking at many different textbooks, tools, frameworks, languages, and platforms. As I thought about how best to present this material, I realized I did not want this course to be just a series of lectures with a small amount of programming. Students would gain more if they had a real project to work toward. At first I thought I would select some neat projects myself; then I thought maybe the students should select the app to develop. This worked fine, but the next year I received a grant to purchase VR equipment and sent an email to the full university faculty to see if anyone had a VR project they would like to try out. I received 40 project ideas that first year, and this class has become one of my favorite classes to teach. In this presentation I will cover how this course has continued to evolve and has now become our software engineering course. The course now gives students a full experience of software development, including agile, daily standups, storyboarding, customer communication, and two-week sprints. The final element of the course is students presenting before an audience of professional developers from the community. I will share the lessons learned in developing this course, along with feedback from students and from those who attend the final presentations. This talk will also cover how to assess a course like this.

13:40
Unsustainability With Robots and UAVs on Planets and Moons

ABSTRACT. Space exploration presents numerous challenges when it comes to investigating planets and moons. One of the primary constraints is the limitation of resources that can be transported from Earth. Regardless of the celestial body, nearly all of them feature rugged terrain. Rovers have traditionally been the only practical choice for conducting surveys of the surrounding environment. However, rovers are restricted to operating in relatively flat regions, even though many planets and moons predominantly consist of valleys and hills, which are areas that require extensive exploration. A viable and practical solution to overcome these limitations is the deployment of Unmanned Aerial Vehicles (UAVs), commonly referred to as drones. Researchers have explored various methods, such as plasma spray and nuclear combustion, for achieving multi-directional propulsion in space. Currently, MIT is in the experimental phase with a model designed to hover above the surface using field polarization. However, it’s essential to acknowledge that this technology has its own set of limitations, encompassing both environmental and technical aspects. Our research involves an in-depth examination of these models, assessing their advantages and disadvantages, and exploring and testing the feasibility of various theories and techniques.

14:00
A Flexible and Broad Operating System Project

ABSTRACT. A course in operating systems is uniquely challenging to plan and execute effectively. One likely reason is that operating systems are complex, and class time is precious. Traditional approaches vary in how they balance theory and practice, with the typical course focusing on the development of a small-scale operating system project. One disadvantage of this approach is that it tends to focus deeply on one type of operating system architecture and does not provide adequate breadth of exploration. This is a presentation of a project and course approach that seeks to correct this problem. Students study an existing monolithic kernel while producing their own microkernel system by adapting and redesigning code. The system is flexible enough to provoke discussion of both Unix-like systems and the Microsoft Windows system. Finally, while fully successful students will see and edit virtually all of the system, each assignment is a stand-alone assignment that invites the student to write and test a specific aspect of the system, with all other parts provided as precompiled modules. Each assignment provides a module which is seen and tested as part of the larger enclosing system, with experiments run on a complete system in each assignment.

14:20
Text-based Animation to Teach Introductory Computer Science Courses

ABSTRACT. We applied a pedagogical approach in which text-based animations of stick figures are used to teach programming concepts in introductory computer science courses. The use of graphics in introductory programming courses is not new, but existing graphics animations mostly rely on visual programming environments that require drag-and-drop programming. Although these graphics are fun, students often do not understand the programming logic behind the animations, and the tools provide a complex environment that becomes burdensome for beginners to learn. In our approach we do not use any such graphics environment. Students drew stick figures (such as humans, birds, and animals) by writing simple programming language instructions, then wrote code to move the figures across a text-based command window or terminal screen. Text-based stick figures are drawn by printing letters of the alphabet and special characters such as underscores, dashes, forward slashes, back slashes, periods, tildes, parentheses, brackets, and braces. Programming assignments and projects were created for various animation tasks to introduce and reinforce basic programming language constructs. Through the creation of animations, students learned to create programming logic involving decision structures, repetition, arrays, recursion, and more. Besides teaching programming concepts, this approach generated enthusiasm and a love of programming among the learners.
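
A minimal sketch of the approach described above: a stick figure drawn with ordinary characters is “moved” across the terminal by reprinting it with a growing left margin, exercising loops, strings, and output formatting (the figure and timing are illustrative):

    import time

    FIGURE = [" O ", "/|\\", "/ \\"]          # head, arms, legs in plain characters

    for offset in range(0, 20, 2):            # repetition construct drives the motion
        print("\033[2J\033[H", end="")        # ANSI: clear screen, cursor to home
        for row in FIGURE:                    # nested loop over the figure's rows
            print(" " * offset + row)
        time.sleep(0.2)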

13:00-14:40 Session 4B: Graduate Student Presentations
Location: Azalea
13:00
Remote Surveillance for Soldier and Civilian Recognition

ABSTRACT. A remote surveillance system for real-time detection and classification of individuals in live video feeds has been developed. The methodology is based on a YOLO (You Only Look Once) model, which combines computer vision techniques with deep learning, for reliable detection of civilians and soldiers. The system generates a message that can be used for swift decision-making in security, military, and disaster response contexts. By automating the initial classification of civilian and non-civilian persons, this system can be used to enhance situational awareness, reduce the cognitive load on humans, and advance remote surveillance capabilities. The system, trained on a dataset of more than 5,000 images and video frames, has been tested on video streams containing civilians, soldiers, or both, in various outdoor situations and daytime lighting conditions. The results obtained so far show a mean average precision (mAP) of more than 0.8.

13:20
Enhancing Low-Code Development Platforms for Error Detection and Test Expression by End-Users

ABSTRACT. Low-code development allows end-users to develop software that meets their requirements without in-depth knowledge of traditional programming languages (e.g., Java, C++, Python). Although low-code development ideas have been around for a decade, the emergence of low-code development tools for industry adoption and associated research investigations has surged recently. However, aside from the base functionality of the low-code development platforms, little attention has been paid to the support for testing low-code software developed by end-users. Low-code development platforms still need more support and research into software testing concepts, including: end-user specification of test cases, end-user understanding of the test results, and testing the interaction between modules and workflows. End-users who use low-code development platforms need the ability to find bugs while they address quality assurance from a global view of their product. All of the testing support also needs to be offered at a level of abstraction appropriate for end-users who are not trained as programmers. This presentation introduces our work with Bubble.io, a low-code platform, to explain the difficulties in supporting software testing with existing low-code platforms. We describe our challenges and initial solutions in testing to help end-users better understand their errors so that they can make targeted changes.

13:40
The Application of Deep Learning to Diagnose Plant Diseases

ABSTRACT. Diagnosing plant diseases plays a critical role in promoting sustainable agriculture and ensuring food security, and early, frequent disease detection is of utmost importance in this context. Traditional approaches for identifying plant diseases rely on human experts visually inspecting and analyzing leaf symptoms, a process that is both time-consuming and prone to errors. Delayed disease detection can lead to significant harm for individual farmers or even catastrophic consequences for entire forests. In recent years, deep learning techniques, particularly Convolutional Neural Networks (CNNs), have demonstrated remarkable potential in tasks such as plant disease identification through image recognition. This study introduces a CNN-based methodology for detecting and classifying diseased plant leaves versus healthy ones.

In our experiments, we developed two distinct models: a binary classification model and a multiclass classification model. The binary model distinguishes between diseased and healthy plant images, while the multiclass model not only identifies diseased images but also names the disease along with its probability. We sourced our dataset from Kaggle, a publicly available platform, as collecting data directly from trees in the field is a time-intensive endeavor. The dataset comprises 4,236 images: 2,107 of diseased plants and 2,129 of healthy ones. To manage this substantial dataset, we employed the TensorFlow dataset API, which allows us to process datasets that may not fit entirely in memory and operates on the concepts of Datasets and Iterators. Training used 70% of the data, with the remaining 30% allocated to validation to ensure an impartial evaluation of model performance during hyperparameter tuning. Our model incorporates flatten, max-pooling, dropout, and dense layers. We tested optimizers including Adam, SGD, and Adamax, and varied the number of training epochs among 15, 20, 25, and 30.

The binary classification model achieved an accuracy exceeding 85%, showcasing the potential of deep learning for diagnosing diseased plants. The multiclass model, while achieving slightly lower accuracy, proved its capacity to handle large volumes of data and provide disease identifications. We are currently working to raise accuracy further by developing a hybrid model that trains the neural network using genetic algorithms, aiming for even more precise predictions in plant disease diagnosis.
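The following condensed sketch shows how the binary model described above could be assembled with the TensorFlow dataset API and Keras; the directory layout, image size, and layer widths are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

# tf.data pipeline: stream images from disk with a 70/30 train/validation split.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "plant_images/", validation_split=0.3, subset="training",
    seed=42, image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "plant_images/", validation_split=0.3, subset="validation",
    seed=42, image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),                  # max-pooling layer
    tf.keras.layers.Flatten(),                       # flatten layer
    tf.keras.layers.Dropout(0.5),                    # dropout layer
    tf.keras.layers.Dense(64, activation="relu"),    # dense layer
    tf.keras.layers.Dense(1, activation="sigmoid"),  # diseased vs. healthy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
```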

14:00
Analyzing What Entrepreneurs Actually Do Using Artificial Intelligence

ABSTRACT. The literature on entrepreneurs describes individuals and the activities they perform in the creation of new businesses. Historically, the literature has extolled the virtues of writing a business plan, and the leading textbooks all have sections on the topic. However, recent observations and academic studies suggest that entrepreneurs do not actually do all of the things previously attributed to them. In this project, we use artificial intelligence to evaluate written feedback from interviews with 100 entrepreneurs to analyze three research questions: What factors or people influenced the decision to start a new business? What lessons did entrepreneurs learn from starting a new business? And which factors impacted the growth and survival of the business? This study aims to provide empirical evidence that fills the gap in our understanding of what entrepreneurs actually do before starting a business and once it is in operation. At the core of the project is the creation of a machine-learning model to conduct content analysis.

The project will use natural language processing (NLP) to process the large amount of written feedback from the entrepreneurs. With NLP, we can build computer models that extract meaning from this unstructured and somewhat open-ended text, for example through summarization and pattern discovery, and cluster similar elements together by content to give a high-level overview of the information gathered. One promising NLP technique is topic modeling: by performing contextual analysis on the text, we can determine which topics are common and, potentially combined with labeling of the different elements of the written feedback, identify the factors and lessons from our research questions and their importance to what entrepreneurs do. In this way, machine learning can be used to answer the three research questions.
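For illustration, a topic-modeling pass of the kind described could be sketched with scikit-learn's LDA implementation as follows; the sample responses are invented placeholders, not data from the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder interview responses (not actual study data).
responses = [
    "A mentor encouraged me to start the business",
    "Cash flow problems were the hardest lesson in our first year",
    "Customer demand drove our growth and survival",
]

# Build a document-term matrix, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

# Fit LDA to discover latent topics across the responses.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Show the top words in each discovered topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```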

14:20
SQL, NoSQL, or NewSQL: Choosing the Right Database Is No Longer a Simple Query

ABSTRACT. In an increasingly data-dependent industry, databases are a crucial component of most computer systems today. As the demand for efficient data management grows with the surge of big data, businesses face the challenge of selecting the most suitable Database Management System (DBMS). This paper delves into the pivotal decision between Relational Database Management Systems (RDBMS) based on the Structured Query Language (SQL) and Non-Relational Database Management Systems (NRDBMS), commonly known as NoSQL. Additionally, it introduces the emerging paradigm of New Structured Query Language (NewSQL) systems. The choice of database technology hinges on specific requirements: SQL databases excel in structured, consistent, and highly available data scenarios, while NoSQL databases shine in unstructured data environments, prioritizing scalability and flexibility. NewSQL databases present a promising alternative, merging the strengths of both paradigms. Future trends indicate NewSQL's increasing prevalence in organizations dealing with data-intensive applications. Ultimately, the decision lies with the database administrator, considering the organization's goals and evolving industry trends.
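As a toy illustration of the paradigm difference discussed above (not drawn from the paper), the snippet below stores the same record relationally with Python's built-in sqlite3 module and as a schema-free document:

```python
import sqlite3
import json

# SQL: data must fit a declared schema before insertion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
             ("Ada", "ada@example.com"))
print(conn.execute("SELECT name, email FROM users").fetchall())

# NoSQL-style document: each record can carry its own, differing structure.
doc = {"name": "Ada", "email": "ada@example.com",
       "preferences": {"theme": "dark"}}  # nested field needs no schema change
print(json.dumps(doc))
```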

13:00-14:40 Session 4C: Poster Presentations
Location: Dogwood I
Empowering AI’s Trust and Big Data Resources in the Healthcare Industry with Blockchain
PRESENTER: Samir Poudel

ABSTRACT. AI applications are being deployed rapidly in the workplace, health care, finance, transportation, education, and social media platforms. However, there is very little transparency about the algorithms or the data used in these AI applications. Society increasingly learns from unvalidated social media rather than from credible news sources, believing these sources are truthful as it devours their data and information; to compound the issue, humans pass these cultural biases and processes to the following generations. It is concerning that there is a big disconnect between the big data technologies responsible for managing massive electronic health records (EHRs) in the healthcare industry, the validity of the AI models deployed in the day-to-day lives of citizens, and the blockchain technology that can be employed to secure and validate big data resources and AI models. These significant developments in the industrial evolution are progressing rapidly without any synergy. To our knowledge, little effort has been made to fuse these parallel developments into a trustworthy economic engine that allows the safe evolution of the expanding world of digital healthcare for the progress of human culture, social transformation, and next-generation industrial evolution. We believe the first building block is the integration of AI algorithms and the blockchain platform to improve transparency and trustworthiness. As a baby step in this direction, this poster describes a new approach to integrating AI algorithms, healthcare-related big data resources, and the blockchain platform for the research and development of trustworthy, privacy-preserving applications for validating and tracking the development of the digital healthcare industry.

A Deep Learning Approach for Decision Support in Efficient 3D Object Manufacturing Method Selection

ABSTRACT. The global auto parts manufacturing market is expected to expand at a CAGR (Compound Annual Growth Rate) of 7.2% between 2023 and 2030, reaching a value of $1,185.95 billion by 2030. Diverse manufacturing methods, including injection molding, waterjet cutting, laser cutting, CNC machining, and 3D printing, are employed across industries. Certain objects may be produced more efficiently with a particular manufacturing process, whereas others might require different techniques. The choice of a production method for a new 3D object is influenced by several factors, including its shape and the materials used. In current practice, manufacturers consider existing similar objects when selecting the best manufacturing method. However, manually identifying similar existing objects can be time-consuming due to the large variety of data classes. To address this problem, we propose a deep learning-based clustering framework comprising a feature extractor based on a multiview multi-slice CNN (Convolutional Neural Network) and a robust clustering algorithm. The feature extractor identifies key features from multiple slices captured from different viewpoints of the 3D object. To cluster these features effectively, we use the well-established DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm, which adapts to varying data densities and eliminates the need to specify a fixed number of clusters, addressing the uncertainty about the potential number of clusters. We believe this research will contribute significantly to improving design-to-manufacturing workflows and fostering long-term success in the industry.
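A brief sketch of the clustering stage, assuming scikit-learn's DBSCAN; the feature vectors, which in the proposed framework would come from the multiview multi-slice CNN, are stubbed out here with random data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))  # stand-in for CNN-extracted features

# DBSCAN needs no preset cluster count; eps and min_samples control density.
clustering = DBSCAN(eps=3.0, min_samples=5).fit(features)
labels = clustering.labels_  # label -1 marks noise points

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"Found {n_clusters} clusters; {np.sum(labels == -1)} noise points")
```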

Identifying Colocation Patterns at Local and Global Scales Using the Global Terrorism Dataset

ABSTRACT. A spatial colocation pattern refers to a subset of spatial features commonly found near one another within a geographical area; examples include symbiotic relationships between species, such as the Nile crocodile and the Egyptian plover. Colocation pattern mining plays a crucial role in fields like epidemiology, for identifying relationships between diseases and environmental factors, and crime analysis, for finding links between crime event types and potential crime generators. Existing work in colocation mining primarily addresses computational challenges, such as the exponential growth in candidate patterns as the number of spatial features increases and the expense of spatial neighborhood relationship checks over large numbers of feature instances. However, very few works address the challenge that the interestingness of a colocation pattern varies across regions. Interesting colocation patterns are context-dependent and can be influenced by a range of factors, including local context and cultural differences; for instance, a candidate colocation pattern that is interesting in Japan may not be interesting in the USA. In this paper, we address the challenge of identifying colocation patterns at both local (city, country) and global scales, where the spatial neighborhood relationship criteria used to identify interesting patterns may vary from one region to another. To this end, we propose a novel colocation mining algorithm that incorporates a spatial clustering algorithm (Uber H3) to identify diverse clusters together with an adaptive, spatial-neighborhood-relationship-based colocation mining algorithm. We plan to evaluate our approach on the Global Terrorism Dataset, which includes terrorist attack events from around the world.
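For illustration, binning event locations into H3 cells, the first step before any region-aware colocation check, might look like the following sketch using the h3-py bindings (v3 API; v4 renames geo_to_h3 to latlng_to_cell). The coordinates and feature types are invented examples, not entries from the Global Terrorism Dataset.

```python
import h3
from collections import defaultdict

# Invented (feature type, lat, lng) events for demonstration only.
events = [
    ("bombing", 35.68, 139.69),
    ("armed_assault", 35.69, 139.70),
    ("bombing", 40.71, -74.01),
]

# Assign each event to a hexagonal H3 cell at resolution 7 (~5 km^2 cells).
cells = defaultdict(list)
for feature, lat, lng in events:
    cell = h3.geo_to_h3(lat, lng, 7)
    cells[cell].append(feature)

# Features co-occurring in the same cell are candidate local colocations.
for cell, feats in cells.items():
    print(cell, feats)
```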

Utilizing Neural Networks to Predict Water Temperatures in a Thermal Refuge

ABSTRACT. Cold-stunning syndrome is a condition that affects cold-blooded aquatic life when water temperatures drop; cold-stunned animals become lethargic and eventually unable to swim. The Laguna Madre is a shallow body of water along the southern Texas coast of the Gulf of Mexico where endangered sea turtles and various fish species reside. During strong cold fronts, water temperatures can drop rapidly and leave sea turtles and fishes cold-stunned; without intervention, many stunned turtles grow ill or even die. In 2021, 13,000 sea turtles experienced cold-stunning syndrome, and it is estimated that 80% of them died. During cold-stunning events, marine life often seeks warmer waters. These warmer waters, often referred to as thermal refuge areas, provide marine life with protection against cold stunning; however, they can also become prime targets for fishermen who may take advantage of the accumulated marine life that is vulnerable to capture during such events. New Texas regulations now protect marine life in these thermal refuges. Stakeholders within the Coastal Bend community have communicated a need to verify thermal refuge locations along the southern Texas coast in order to improve responses during cold-stunning events. This research proposes using a machine learning model to nowcast water temperatures in the canals of the Laguna Madre, a possible thermal refuge where real-time measurements do not exist. The proposed model takes its inputs from the Laguna Madre itself, for which there are not only real-time measurements but also models that forecast water temperature several days in advance. We also tested several loss functions to optimize performance; further analysis found that a weighted Mean Absolute Percentage Error (MAPE) loss function improved water temperature predictions below 15°C with minimal impact on the overall performance of the model. The model is also capable of forecasting water temperatures in the Laguna Madre canals by replacing measured inputs from the Texas Coastal Ocean Observation Network (TCOON) with forecasted inputs from the Conrad Blucher Institute operational AI model and air temperature predictions from the National Digital Forecast Database.
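A hedged sketch of such a weighted MAPE loss in Keras appears below; the weighting factor and the exact form are illustrative assumptions, not the authors' published loss.

```python
import tensorflow as tf

def weighted_mape(y_true, y_pred, cold_weight=4.0, threshold=15.0):
    """MAPE with extra weight on cold-water (< threshold °C) targets.

    cold_weight=4.0 is an assumed value for illustration only.
    """
    pct_error = tf.abs((y_true - y_pred) / tf.maximum(tf.abs(y_true), 1e-6))
    weights = tf.where(y_true < threshold, cold_weight, 1.0)
    return tf.reduce_mean(weights * pct_error) * 100.0

# Usage with a compiled Keras model:
# model.compile(optimizer="adam", loss=weighted_mape)
```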

13:00-14:40 Session 4D: Peer-Reviewed Presentations
Location: Dogwood II
13:00
The CTEEAM Process: A Pedagogical Approach to Digital Forensics Evidence Management and Bias Mitigation

ABSTRACT. This paper introduces the CTEEAM process, a novel methodological approach designed to enhance digital forensic investigations within an educational context. CTEEAM, an acronym for Critical Thinking, Ethical Evidence Analysis, and Management, presents a streamlined, efficient, and systematic framework for student investigators to analyze digital evidence. Grounded in the fusion of traditional forensic principles and modern digital tools, CTEEAM promotes consistency, thoroughness, and objectivity in investigations. Drawing upon the rich discourse in digital forensics, the paper presents the CTEEAM process as an approach to improving investigative methodologies for students. The paper concludes with recommendations for future research, including methodological refinement and case studies.

13:20
Smart Cars and Image Classification Using SVM

ABSTRACT. This research work presents a traffic sign recognition system designed for the visual feed of an in-car camera during the operation of an autonomous smart vehicle. The main goal is to use the car's onboard camera to precisely recognize and classify traffic signs, such as stop and speed limit signs. Our approach makes use of traditional feature extraction techniques that are well suited for real-time processing within the constraints of an in-car camera. We utilize Histogram of Oriented Gradients (HOG), color histogram, and texture features: HOG features provide vital information about the shape and structural patterns of road signs, while color histograms capture the distribution of colors within the images, allowing us to differentiate signs by their unique color characteristics. The resulting multi-modal feature vectors serve as input to a Support Vector Machine (SVM) classifier, selected for its ability to construct optimal decision boundaries in multi-dimensional feature spaces and provide precise sign classification. Preliminary results from this work will be presented at the conference.
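An illustrative sketch of this feature-extraction-plus-SVM pipeline, assuming scikit-image and scikit-learn; the images and labels are random stand-ins for a labeled traffic-sign dataset:

```python
import numpy as np
from skimage.feature import hog
from skimage.color import rgb2gray
from sklearn.svm import SVC

def extract_features(image):
    """Concatenate shape (HOG) and color (histogram) descriptors."""
    hog_vec = hog(rgb2gray(image), pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    color_hist, _ = np.histogram(image, bins=32, range=(0.0, 1.0))
    return np.concatenate([hog_vec, color_hist])

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64, 3))  # stand-in for cropped sign images
labels = rng.integers(0, 2, size=20)  # e.g., 0 = stop, 1 = speed limit

# Multi-modal feature vectors feed an SVM that learns decision boundaries.
X = np.array([extract_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```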

13:40
Navigating the Frontier: AI in Cybersecurity

ABSTRACT. The integration of artificial intelligence (AI) into the realm of cybersecurity has ushered in a new era of innovation and transformation. AI technologies promise to enhance threat detection, response, and overall security posture. However, this dynamic landscape is marked by a duality of risks and benefits that necessitate comprehensive examination. This research paper endeavors to provide a balanced exploration of the multifaceted relationship between AI and cybersecurity.

The primary goal of this study is to shed light on the interplay of risks and benefits that accompany the deployment of AI in the cybersecurity ecosystem. By conducting a thorough review of the literature and examining real-world use cases, this research brings into focus the positive and negative aspects of AI's integration. On one hand, AI technologies are instrumental in accelerating threat detection. Machine learning algorithms can swiftly analyze vast datasets, discerning subtle anomalies that evade traditional security measures. The rapid response capabilities of AI systems help thwart cyberattacks in real time, bolstering an organization's defense against evolving threats. Moreover, AI augments the cybersecurity workforce, automating routine tasks and enabling security professionals to focus on strategic initiatives. This shift from manual to cognitive work empowers security teams with improved efficiency and the ability to address complex challenges.

However, the introduction of AI also brings risks into the cybersecurity domain. Adversarial attacks, where threat actors manipulate AI models, pose a significant threat. Ensuring the robustness of AI systems against adversarial manipulation requires constant vigilance and model refinement. Ethical considerations emerge, especially regarding privacy and data protection. The collection and analysis of sensitive data for AI-driven threat assessment demand rigorous ethical standards and compliance with evolving regulations.

Bias in AI decision-making is another critical concern. Biases within AI algorithms can inadvertently discriminate against certain groups or produce inaccurate threat assessments. Eliminating bias in AI algorithms is an ongoing challenge, requiring proactive measures to promote fairness and equity.

In conclusion, this research underscores the dual nature of AI in cybersecurity. While AI offers substantial benefits in threat detection, response, and efficiency, it also introduces risks in terms of adversarial attacks, ethical concerns, and bias. By comprehensively understanding these dynamics, cybersecurity professionals can harness the full potential of AI while safeguarding against its associated risks.

16:30-17:00 Business Meeting
Location: Highlander I

Come to the business meeting to learn about the budget for the conference and discuss concerns. All professionals are encouraged to attend; students may also attend.
19:00-21:00 Awards Banquet
Location: Azalea

At our awards banquet, we gather for a big dinner and celebrate the accomplishments of our students.