ACM-MIDSE-2021: ACM MID-SOUTHEAST CHAPTER FALL 2021 CONFERENCE
PROGRAM FOR FRIDAY, NOVEMBER 12TH


08:10-09:00 Session 1

Keynote Address by Dr. Jack Dongarra, University of Tennessee

Location: Azalea
09:15-10:35 Session 2A

Professional Speakers - Session Chair: Mir Hasan

Location: Highlander II
09:15
The Hackable Clock Project

ABSTRACT. Nominally, students learn to program computers by first being taught a programming language that provides formal abstractions and structures that can be used to reason about what a computer is "doing" or even "thinking". As learners gain experience, they naturally construct a mental model of how the computer operates based on their interaction with it through the abstractions provided. Teachers can use pedagogies that leverage these mental models called "notional machines" to intentionally guide students' conceptual knowledge of how machines work. As computer science educators, we believe that using a guided hardware-based activity where students create software to directly manipulate hardware will move their notional machine conception closer to reality.

In the 2021 fall semester, we designed and fabricated kits that included a printed circuit board and all the components necessary to assemble a digital alarm clock. The project was organized as an extracurricular activity through our student chapter of ACM. Twenty-one students started the project in August. In weekly meetings, we used component datasheets to guide the students through the process of creating the clock software, including the required hardware drivers. The project featured real-world experience with multi-core programming, semaphores, bit-banging, and most importantly, it stripped away many of the common abstractions that normally isolate programmers from their hardware. We gathered data from participants throughout the project using surveys to identify and study any evolution in the complexity of their mental models. In this talk, we will discuss the project materials, software architecture, and our preliminary conclusions regarding our hypothesis that working directly with hardware will have a positive impact on students' notional machines.

09:35
Can deep learning hit a moving target? A closer look at deep learning for TBI research

ABSTRACT. Despite subject-level variation, regularity exists in healthy brain development. Yet the concept of brain development introduces uncertainty and further complexity to studies of traumatic brain injury (TBI) in the pediatric population. The time-varying association of TBI-induced pathophysiology with neuropsychological outcomes is evidence of this added complexity. To unravel these complexities, approaches based on brain network modeling offer valuable perspectives on brain dynamics after TBI. Deep learning has recently gained an increasing role in medical investigations. This talk will discuss the complexity of developmental TBI and highlight the current promise and potential of deep learning for TBI research and discovery.

09:55
doqmnt: Lowering Barriers for Better Documentation

ABSTRACT. A common difficulty in introductory programming classes is the creation of helpful, clear, and consistent documentation of code. A poor grasp of proper style in earlier classes can lead to more problems in later courses, as well as professionally after graduation. To encourage students to build proper comment style early on, we have introduced doqmnt, an Emacs library for semi-automatic comment generation. doqmnt autogenerates Doxygen- and Javadoc-compliant skeletons by parsing prototypes and built-ins and prompting users for information as needed. doqmnt is still a very new project, initially developed in 2021 for fall-semester deployment and testing. Early results show promise, with the majority of students in the relevant sections providing better-structured and more complete documentation in their programming assignments.

10:15
Teaching Data Representation using Real-World Applications

ABSTRACT. As computer science educators, we feel strongly about the importance of student learning surrounding data representation and storage. Specifically, students should understand number systems and computing-related concepts such as encoding schemes, range, precision, compression, overflow, and round-off error. Students are often resistant when presented with these ideas and struggle to find the practical relevance. To address this, we created a 6-module, 1-week lesson using real-world applications. We began with a practical introduction to the binary number system which showed students how to conserve candles on birthday cakes using binary counting. Next, we reinforced that concept and introduced binary encoding by examining a secret message scheme similar to the one encoded on the Mars 2020 lander parachute. Efficient schemes such as Huffman encoding helped students understand the ideas involved in data compression when examining Morse code. An introduction to the hexadecimal number system showed students that binary numbers can be used to represent not only characters, but also multimedia data such as colors and images. Students put this knowledge to work to create a simple web page color scheme. A discussion of the famous Y2K event helped students learn the concept of overflow and the importance of choosing an appropriate date representation scheme. This led into a related discussion of Y2K38, a similar event on the horizon concerning UNIX time. Related to the notion of fixed storage size is the concept of round-off error. A discussion of a fatal 1991 failure of the Patriot missile defense system, which occurred because of a fixed-point round-off error, illustrated the potentially serious consequences of poor data storage design decisions. We gave students a test/survey both before and after the lesson. Results show that both student perceptions about these concepts, as well as their skills regarding number system computations, improved after the lesson.
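To make the binary-candles idea concrete, here is a minimal Python sketch (illustrative only, not the lesson's actual materials): treating each candle as a bit lets n candles represent any age up to 2^n - 1.

```python
# Binary birthday candles: n candles can represent ages 0..2**n - 1
# (a lit candle is a 1 bit), versus one candle per year of age.

def candles_needed(age: int) -> int:
    """Number of candles (bits) needed to show `age` in binary."""
    return max(1, age.bit_length())

age = 21
print(f"Age {age} in binary: {age:b}")               # 10101 -> 5 candles, 3 lit
print(f"Candles needed: {candles_needed(age)} instead of {age}")
```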

09:15-10:35 Session 2B

Graduate Speakers - Session Chair: James Church

Location: Azalea
09:15
Clustering Theses and Dissertations by Relevance

ABSTRACT. The Internet is the most powerful tool that researchers use to find online resources that can benefit their research work. Although a huge number of research papers can be found in online journals and conference proceedings, these papers may be short and consequently terse or lacking in explanation at times. Thus, researchers may be inclined to read full theses or dissertations. However, finding relevant theses can be difficult as theses can be very long and might include references to non-pertinent or now-irrelevant works. Our approach involves clustering theses based on relevance using existing natural language processing techniques. The documents are first mapped to a TF-IDF (term frequency-inverse document frequency) matrix, from which we determine the difference in orientation of the documents as vectors via their normalized dot products. This measure is called cosine similarity and is better than the Euclidean distance between the documents, as Euclidean distance can be artificially large for documents of different sizes, even if they are “about” the same things. Thus, by comparing the orientation of vectors via their cosine similarity instead, two documents that have the same relative frequencies of vocabulary will be collinear and thus have no distance between them. Our work involves clustering the theses with respect to one another, such that each class represents an area of study. This should facilitate the extraction of future works listed in the theses in order to help new researchers to land on new topics for their research projects. This work is part of a larger project, aimed at creating not only convenient means by which researchers can find relevant theses that cover research works of interest, but also helping new researchers to identify and connect with other researchers who do work in the same area. Our presentation will include a demonstration of the achieved results.
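A minimal sketch of the described pipeline using scikit-learn (the toy documents and cluster count are illustrative assumptions, not the authors' data):

```python
# Map documents to TF-IDF vectors, compare them with cosine similarity,
# and cluster them into areas of study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

theses = [
    "deep learning for image classification",
    "convolutional networks and image recognition",
    "database query optimization techniques",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(theses)
print(cosine_similarity(tfidf))          # pairwise cosine similarities

labels = KMeans(n_clusters=2, n_init=10).fit_predict(tfidf)
print(labels)                            # cluster label per thesis
```

Because each TF-IDF vector is normalized before comparison, two documents with the same relative vocabulary frequencies score a cosine similarity of 1 regardless of their lengths.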

09:35
Tracking Encounters while Respecting Privacy

ABSTRACT. The COVID19 outbreak in 2019 left humankind in a state of chaos and distress. An airborne virus that claimed many lives kept health officials on their toes as they tried to curb its spread. It was of utmost importance that people were made aware of the consequences of not wearing masks and of how the virus is transmitted. Even with the best preventative measures, one could not be complacent, and knowing whether anyone nearby was infected was crucial. Our project focuses on building an effective way to notify individuals when someone around them becomes infected, prompting them to take a COVID19 test as a precaution, while maintaining the privacy of the infected individual. The basic outline of our project, “Tracking encounters while respecting privacy,” is to store encounter IDs on a Bluetooth dongle whenever two such dongles come into close proximity, i.e., within 6 ft. Encounter IDs are regenerated every minute to protect individual privacy. The dongle can later be connected to the user’s laptop, where the stored encounter IDs are transferred, under the user’s credentials, to basic tracking software we are developing. The IDs are then sent to a central server, where they are stored and cross-referenced so that users can be notified when someone in their vicinity tests positive. We are implementing ring signatures so that the day’s encounter IDs fetched from a dongle can be sent to the server without compromising user privacy.
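As an illustrative sketch of the per-minute ID rotation (a hypothetical construction; the dongles' actual firmware scheme is not specified in the abstract):

```python
# Derive a fresh, unlinkable encounter ID each minute from a secret that
# never leaves the device. Two dongles in proximity exchange and store
# these IDs rather than any stable identifier.
import hashlib, secrets, time

def current_encounter_id(device_secret: bytes) -> bytes:
    minute = int(time.time()) // 60                  # rotates every minute
    return hashlib.sha256(
        device_secret + minute.to_bytes(8, "big")).digest()[:16]

device_secret = secrets.token_bytes(32)              # stays on the dongle
print(current_encounter_id(device_secret).hex())
```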

09:55
Distributed Smart Embedded Vision for IoT Applications

ABSTRACT. Recent advances in both Artificial Intelligence (AI) and the Internet of Things (IoT) make it possible to implement surveillance systems that detect suspicious objects and recognize human faces automatically. In existing surveillance systems, the vision sensor is directly wired to the microcontroller, which exposes the whole system to an unsecured environment. In the proposed system, a Raspberry Pi (RPi) along with a portable Arducam IoTai is installed for security reasons. The Arducam IoTai is based on the ESP-32S module and makes it possible to capture and process a real-time image stream. The designed system uses the RPi as a computing server, which receives captured images from the Arducam IoTai wirelessly and executes object detection and face recognition. In the proposed system, multiple RPis are deployed in a distributed manner and communicate over Ethernet, where each RPi can serve as a controller and share captured images with another RPi in real time. We use the Python programming language to implement machine learning algorithms and a cascade classifier face detection algorithm with the open-source OpenCV computer vision and machine learning library, which can detect human faces and unusual objects with high performance. We also use a set of Python classes (imagezmq) that transport live video streams from one controller to another in a distributed image processing network. The proposed system is cost-efficient and performs well, making it suitable for home security, industrial wireless control, and other IoT applications.
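A sketch of the RPi-side receiver under stated assumptions (imagezmq for transport and OpenCV's Haar cascade detector; detector parameters are illustrative):

```python
# Receive frames over the network with imagezmq, then run OpenCV's
# Haar cascade face detector on each frame.
import cv2
import imagezmq

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image_hub = imagezmq.ImageHub()          # listens on tcp://*:5555 by default
while True:
    sender_name, frame = image_hub.recv_image()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{sender_name}: {len(faces)} face(s) detected")
    image_hub.send_reply(b"OK")          # unblock the sending controller
```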

10:15
Protected Spreadsheet Containers for Smart Manufacturing

ABSTRACT. In the emerging area of smart manufacturing, researchers and businesses are trying to find ways to secure data in transit and at rest. This may include data exchanged between automation equipment, a Computerized Maintenance Management System (CMMS), or a Supervisory Control and Data Acquisition (SCADA) system. We propose a PROtected Secure SPrEadsheet Container with Data (PROSPECD) for smart manufacturing. PROSPECD is an encrypted spreadsheet file that stores data and access control policies, encrypted with different keys generated on-the-fly. With this solution, we generate AES-256 CBC keys from a hash of our authentication system's private key, the worksheet name, and the hash of our metadata worksheet. PROSPECD supports both Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC). With RBAC, we grant roles read or write access to individual or shared worksheets and to individual attributes (columns) inside worksheets. PROSPECD contains four datasets, each in its own worksheet: asset inventory, administrative documents, maintenance data, and supplier information. We also allow for advanced ABAC, in which our web interface detects which browser and Operating System (OS) a user is running; if either is out of date, we withhold certain information from the user.

To reduce the possibility of data leakage, we do not allow a user to encrypt information that is irrelevant to the worksheet they have write access to. For example, if an administrator tries to enter personal information about sales managers and maintenance staff into the asset inventory worksheet instead of a sheet with administrative data, we block the entry and raise an alert. We do this by searching entries against a predefined dictionary of words. To summarize, PROSPECD provides data protection in transit and at rest, as well as fine-grained RBAC and ABAC.
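A minimal sketch of the on-the-fly key derivation described above (function and input names are illustrative; the paper's exact construction may differ):

```python
# Derive a 256-bit AES-CBC key by hashing the authentication key hash,
# the worksheet name, and the metadata-worksheet hash together.
import os, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_worksheet_key(auth_key_hash: bytes, sheet_name: str,
                         metadata_hash: bytes) -> bytes:
    return hashlib.sha256(
        auth_key_hash + sheet_name.encode() + metadata_hash).digest()

key = derive_worksheet_key(b"\x01" * 32, "asset_inventory", b"\x02" * 32)
iv = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(b"sixteen byte msg") + enc.finalize()  # demo is block-aligned
```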

09:15-10:35 Session 2C

Undergraduate Speakers - Session Chair: David R. Luginbuhl

Location: Dogwood I
09:15
COVID19 Chest Xray Classification Through Deep Learning in Google Colab

ABSTRACT. CT scans and X-rays have long been used as diagnostic tools to see the internal structure of the body. Chest CT scans and X-rays play an important role in the diagnosis of COVID19 during this pandemic. It is difficult for radiologists to quickly and accurately process large volumes of images to distinguish COVID19 cases from normal cases or other respiratory infections. In this study, we aim to use deep neural networks to identify the presence of COVID19 disease. We use convolutional neural networks (CNN) to extract image features and perform classification. Colaboratory (Colab) is a hosted Jupyter notebook environment for machine learning. We use the TensorFlow framework in Colab to build our CNN model. We use 290 images (145 COVID19 and 145 normal) to train the neural network. The architecture that yields the best performance for this dataset is as follows: four blocks of layers (including the input layer) plus a dense output layer, where each block has two Conv2D layers and one max-pooling layer. Although the training dataset is small, we obtained impressive classification accuracy on 58 test images. For future work, we will use the deep neural network to classify COVID19 against other respiratory infectious diseases.
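A sketch of the described architecture in TensorFlow/Keras (filter counts and image size are assumptions; the abstract specifies only the block structure):

```python
# Four blocks, each with two Conv2D layers and one MaxPooling2D layer,
# followed by a dense output layer for COVID19-vs-normal classification.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(150, 150, 1)))      # grayscale chest X-rays
for filters in (32, 64, 128, 128):                  # four conv blocks
    model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
    model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
    model.add(layers.MaxPooling2D())
model.add(layers.Flatten())
model.add(layers.Dense(1, activation="sigmoid"))    # COVID19 vs. normal
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```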

09:35
Robot-ouille: The In-Home Cooking Assistant that Serves Independence with Food

ABSTRACT. With age bringing natural declines in mobility and cognition, the population of elderly people, especially those with disabilities, is growing, raising the need for independence in the home. This paper presents a simulation study of a chef robot system. Robot-ouille is designed with inclusivity and disability in mind, giving users customizable options, such as ingredient delivery for meals, so that it can accommodate a wide range of disabilities. An optional connection to a meal delivery service such as HelloFresh allows users to choose their meals based on the displayed ingredients and nutrition facts in case of a restrictive diet. The size of the robot matters: it must not hinder mobility around the system, and it must fit in homes of various sizes and on stoves of many sizes. Typically, assistive robots for cooking are built for a single specialty; in contrast, the proposed system is designed for widespread usage. The goal is to ensure that people with paraplegia and other differently abled users can make their meals confidently while remaining independent, practicing fine motor control through use, adopting healthier gastronomy, and staying aware of important upcoming events in the cooking scene and their real-life calendar. A key feature of our design is an interface that builds trust between the user and the robot and allows interaction. Within the simulation, the user can interact with the system's interface through four means: a touch-screen application interface, verbal/auditory communication, visual cues, and vibration cues, ensuring precision in the cooking process.

09:55
Self-driving Car Using Finch 2 Robot

ABSTRACT. The robot's task is to traverse a predefined path, which in our experiment is 25 feet long with 4 sharp turns, and then self-park in a free spot before ending its job. A computer program in Python was developed to accomplish this task. During the trip, if the robot detects an obstacle, it stops and flashes its front red light five (5) times to signal a potentially hazardous or unexpected condition. If the obstacle remains in place for more than a specified time, the robot concludes that the path is blocked and tries to find an alternative route. There are three routes to the destination, and all three are implemented in the program so the robot can find the next least-cost route to bypass the blocked one.
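The stop-flash-reroute behavior can be sketched as follows; the hardware helpers are stubs with hypothetical names, standing in for actual Finch 2 API calls rather than reproducing them:

```python
import time

# --- Stubs standing in for Finch 2 hardware calls (hypothetical names) ---
def drive_segment(leg): print(f"driving {leg}")
def obstacle_ahead() -> bool: return False
def stop_robot(): print("stop")
def flash_front_light(color): print(f"flash {color}")

ROUTES = {"A": ["leg1", "leg2"], "B": ["leg3"], "C": ["leg4", "leg5"]}
ROUTE_COST = {"A": 25, "B": 32, "C": 40}     # feet; illustrative
BLOCK_TIMEOUT = 10                           # seconds before rerouting

def follow_route(legs) -> bool:
    for leg in legs:
        drive_segment(leg)
        if obstacle_ahead():
            stop_robot()
            for _ in range(5):
                flash_front_light("red")     # signal a potential hazard
            start = time.time()
            while obstacle_ahead():
                if time.time() - start > BLOCK_TIMEOUT:
                    return False             # path blocked; try next route
                time.sleep(0.5)
    return True

for name in sorted(ROUTES, key=ROUTE_COST.get):  # least-cost route first
    if follow_route(ROUTES[name]):
        break
```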

Even though the robot was able to complete its task, its accuracy and efficiency in handling unexpected events are still limited. After a number of test runs, we found that the robot's direction angle is compromised if the driving surface is not clean and smooth. Dust and small particles can attach to the wheels, making them unbalanced and leading to lane drifting and diagonal motion. This is similar to rough and uneven road conditions in the real world. To overcome this problem, we tried using the compass function to control the robot's angle; unfortunately, the accuracy of the compass did not meet our expectations. The results of this research indicate that the operation of a reliable and efficient self-driving vehicle depends heavily on having a wide variety of sensors on the vehicle, coupled with programs that intelligently use the data these sensors generate.

Faculty Advisor/Mentor: Dr. Masoud Naghedolfeizi (Feizi), feizim@fvsu.edu

10:15
Aerial Crane Operation Using Drone Swarming Technology

ABSTRACT. Object delivery using multiple drones, aircraft, or helicopters with human pilots is considered very risky and challenging, since the pilots must sustain a high level of coordination. A computer-controlled system, however, can achieve that coordination in a safer and more reliable manner.

In this research, two small programmable drones were employed to work collaboratively as an aerial crane for object deliveries. Drone swarm programming was used to develop a Python program to achieve the research objectives. The flight path ran from a ground-level location to a predetermined location above the ground. The cargo net is attached to the bellies of the drones with flexible rubber bands to reduce instability during takeoff, flight, and landing. The drones are set approximately four feet apart from each other and take off simultaneously when the program executes.

The entire flight path has been implemented in the program. Approximately halfway through the flight, the drones must make a right-angle turn toward the destination. This requires accurate coordination between the two drones, since one drone must fly a 90-degree turning arc while the other holds its position. Once the object has been delivered, the drones return to their starting locations along the same path in the opposite direction. After the first successful round-trip flight, the same flight was run 10 times to gauge the reliability of the aerial crane operation. Out of the 10 runs, 7 were successful and 3 resulted in crashes. The main causes of these crashes appear to be lost UDP packets and sensor malfunctions. The results of this research indicate that sensor inaccuracies can disrupt the coordination of the drones.
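The abstract does not name the drone platform, but the UDP detail suggests command-over-UDP control. A heavily hedged sketch, assuming Tello-style text commands on port 8889 (every command string and IP below is an assumption):

```python
# Broadcast synchronized commands to two drones over UDP; packets can
# be lost, which matches the crash cause the authors observed.
import socket, time

DRONES = [("192.168.10.1", 8889), ("192.168.10.2", 8889)]   # illustrative IPs
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def broadcast(cmd: str):
    for addr in DRONES:
        sock.sendto(cmd.encode(), addr)

broadcast("command")                 # enter SDK mode
broadcast("takeoff")                 # simultaneous takeoff
time.sleep(5)
broadcast("forward 100")             # first half of the flight path
time.sleep(5)
sock.sendto(b"cw 90", DRONES[0])     # one drone flies the 90-degree turn
time.sleep(3)                        # ...while the other holds position
broadcast("land")
```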

Research advisor: Dr. Masoud Naghedolfeizi

09:15-10:35 Session 2D

Undergraduate Speakers - Session Chair: Ziwei Ma

Location: Dogwood II
09:15
Performance Profiling and Load Balancing in Bioinformatics Clouds

ABSTRACT. Load balancing is the process of distributing job requests among servers. Many load balancing algorithms, such as round robin, weighted round robin, and least connections, can be used to schedule bioinformatics tools. Cloud computing can be combined with bioinformatics to create a BioCloud program, and a dynamic load balancing algorithm is needed to properly distribute bioinformatics jobs. In this experiment, the FastQC job was used for all test cases. Four algorithms were designed to distribute this one type of job, based respectively on system load, %CPU, free RAM, and round robin. The results show that the FastQC job is CPU intensive but not RAM intensive, and the most efficient algorithm was %CPU.
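A sketch of the winning %CPU policy (the metric probe is a stub; a real deployment would poll each node, e.g. with a small psutil-based agent):

```python
# Dispatch each FastQC job to the server reporting the lowest CPU use.
import random

SERVERS = ["node1", "node2", "node3"]

def cpu_percent(server: str) -> float:
    return random.uniform(0, 100)            # stub: replace with a real probe

def pick_server() -> str:
    return min(SERVERS, key=cpu_percent)

for job in ["sample1.fastq", "sample2.fastq"]:
    print(f"dispatching FastQC on {job} to {pick_server()}")
```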

09:35
Investigation of the Performance of Machine and Deep Learning Techniques on Kentucky Motorcycle Crash Data

ABSTRACT. This study aims to apply machine learning and deep learning techniques, including the random forest classifier, principal component analysis, and neural networks, in order to analyze the factors affecting motorcycle crash severity outcomes in the state of Kentucky. Severe motorcycle crashes are defined as crashes resulting in either serious injury or fatality. Recent five-year motorcycle crash data (2015 to 2019) from the Kentucky State Police collision database were used for the analysis. Crash data in 2020 were omitted due to the potential confounding effect of the COVID-19 pandemic on the results. The random forest classifier was applied to rank each feature's importance in influencing the severity outcomes of motorcycle crashes. The principal component analysis produced composite features that were constructed from a subset of the most important features (as determined by the random forest classifier). These composite features were then passed into a neural network to predict whether or not a crash was severe. The neural network demonstrated that driver-related (e.g., age), vehicle-related (e.g., type of vehicle), and environmental-related factors (e.g., lighting and weather conditions) could successfully predict the motorcycle crash severity with a high degree of accuracy. This study demonstrates that machine learning and deep learning techniques are able to achieve high performance in predicting the injury severity outcomes of motorcycle crashes and suggests that they be applied to other traffic data in order to create more informed traffic safety legislation.
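A minimal scikit-learn sketch of that pipeline, using synthetic stand-in data (feature counts, the importance cutoff, and the network size are all illustrative assumptions):

```python
# Rank features with a random forest, keep the most important ones,
# compress them with PCA, and classify severity with a neural network.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # stand-in crash features
y = (X[:, 0] + X[:, 3] > 0).astype(int)      # stand-in severe/non-severe label

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:8]    # 8 best features

Z = PCA(n_components=4).fit_transform(X[:, top])       # composite features
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(Z, y)
print("training accuracy:", nn.score(Z, y))
```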

09:55
Can We Trust Neural Networks? An Analysis of Neural Network Uncertainty by the Learned Feature Space

ABSTRACT. In the past decade, neural networks have shown promising results in various computer vision tasks, including facial recognition, autonomous driving, and disease diagnosis. Unlike conventional machine learning approaches that use handcrafted features for decision making, neural networks use features learned directly from the training data, making them more precise and relevant to their specific task. The performance of neural networks for computer vision tasks is also fairly stable: given two neural networks with the same architecture and the same training setting, the accuracies of the two models are usually very similar. However, due to the complex nature of neural networks (a model can easily have tens of millions of trainable parameters), they are typically regarded as black-box models. In this work, we attempt to open the black box and evaluate the learned feature space of neural networks. We trained a total of 12 models, spanning both convolutional neural networks (CNN) and Vision Transformers (ViT), with three subtypes: fixed-feature extractors, fine-tuned neural networks, and fine-tuned neural networks with fixed seeding, with a pair of models for each unique combination. We then analyzed the differences in the learned feature spaces of each pair using model interpretability algorithms. Our results demonstrate that the learned feature spaces of different models are usually extremely inconsistent, even with the same training setting, and that this inconsistency is related to model complexity and depth. A comparison between conventional machine learning techniques, such as SVM, and neural networks indicates that conventional techniques are more consistent but have significantly lower performance. The inconsistent feature spaces of neural networks indicate that neural networks can use different decision-making criteria for the same task. Guiding neural networks to use a specific set of features may therefore help further improve their performance and consistency.
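One common way to quantify feature-space (in)consistency between two trained models is linear Centered Kernel Alignment (CKA) on their penultimate activations; this is shown as an illustrative measure, not necessarily the interpretability algorithm the study used:

```python
# Linear CKA: 1.0 for identical feature spaces, near 0 for unrelated ones.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X, Y: (n_samples, n_features) activations from two models."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") *
                   np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 64))
print(linear_cka(a, a))                             # identical -> 1.0
print(linear_cka(a, rng.normal(size=(500, 64))))    # unrelated -> near 0
```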

10:40-12:00 Session 3A

Professional Speakers - Session Chair: Saeid Samadidana

Location: Highlander II
10:40
Does ABET Accreditation Matter for a Computer Science Program?

ABSTRACT. Many institutions of higher education obtain program-level accreditation for some or all of their academic programs to assure prospective students and other stakeholders that their degree programs are rigorous and nationally/internationally recognized for quality education.

In the U.S.A, the Engineering Accreditation Commission (EAC) and Computing Accreditation Commission (CAC) of ABET are responsible for the program-level accreditation of engineering and computing disciplines, respectively. The ABET accreditation process for baccalaureate and associate degree programs is based on eight (8) general criteria. These are: Students, Program Educational Objectives, Student Outcomes, Continuous Improvement, Curriculum, Faculty, Facilities, and Institutional Support.

Nearly all engineering and engineering technology programs in the U.S. have ABET accreditation. Graduates of ABET-accredited engineering programs generally have an easier time obtaining professional licensure (the Professional Engineer designation) and finding well-paying engineering jobs. However, accreditation is optional in the field of computing, and as a result many computer science programs choose not to pursue ABET accreditation. For example, of the 978 computer science programs in the nation, only 284 are ABET accredited; among the top 25 computer science programs, only 13 have ABET accreditation; and in the southeast region, there are only about 47 ABET-accredited computer science programs. Perhaps the main contributing factor to this relatively low accreditation rate is the fact that professional licensure does not exist for computer science.

This paper presents an overview of ABET accreditation for computer science programs and discusses the factors that lead programs to seek, or not seek, accreditation.

11:00
Brushing the Dust Off: A Modern Approach to Technique Re-evaluation

ABSTRACT. In modern computing, there is a disturbing trend in which older techniques are abandoned and forgotten because of some inability or difficulty surrounding their use. Some of these techniques can find fresh life in newer state-spaces similar to their original development environment. It is this practice of revisiting older techniques, extending them, and creating newer, more robust uses for them that allows for advancement in many fields that have stagnated in execution and assessment. Examples of older techniques seeing new usage include but are not limited to:

• Survival Analysis
• Neural Networks
• Natural Language Processing

These three techniques illustrate a troubling pattern in computer science research and professional environments: if a technique is less useful or encounters a difficulty in execution or operation, it is abandoned until a much later point, when it is revisited and re-evaluated. Breaking this cycle is critically important moving forward.

11:20
Language Choice for First Programming Course, Theory and Anecdotal Experiences

ABSTRACT. It can be argued that the first programming course that Computer Science students take is the most important course in the curriculum. Many students who take this course fail. Most who fail move to another major. With the demand for computer science students continuing to rise, it seems prudent to ask whether there might be a way to lay the groundwork for more students to succeed.

There are some factors about our students that we cannot change. High school education, for the most part, seems to emphasize fact retention over the creative thinking needed to be a programmer. What steps can we take to make this transition easier?

The language chosen for the first programming course has the potential to affect success rates. Until last fall, the University of Virginia’s College at Wise started students off with a course in C++. Failure rates were around 50%. We then switched to Python as the first language. I am in the second year of teaching this new course.

In my presentation I plan to discuss the results so far of this switch. I will also consider possibly unforeseen negative consequences. I will conclude with some pedagogical techniques that I am trying out in an effort to improve student outcomes.

11:40
Supervisory Control and Data Protection in Cyber-Physical Systems

ABSTRACT. It is essential to protect data in Cyber-Physical Systems (CPS) because they are widely used in critical infrastructures such as oil refineries, power grids, etc. With the requirement to support remote control, which is highly desirable in many CPS, the data protection task becomes more complicated and more important. Data exchange mechanisms must provide data confidentiality and integrity guarantees.

Some CPS still use old-generation hardware and communication protocols that were designed with no cybersecurity in mind. This creates opportunities for attackers to inject malicious data packets into CPS communication channels. Malicious commands may result in process failures, financial losses, and even threats to human lives.

Since it may be expensive for businesses and organizations to replace old-generation hardware and software, it is highly desirable to design and implement data protection solutions on top of existing CPS infrastructures. The general approach is to deliver plaintext sensor data to a computationally powerful node, a Secure Data Gateway (SDG), where the traffic is encrypted and digitally signed. The SDG must be located as close to the sensor as possible to reduce the attack surface; for example, an ESP32 board can encrypt and sign sensor data traffic. To reduce energy consumption, lightweight encryption implementations, such as Tiny AES, can be used. As an extra data protection layer, the SDG can inject fake sensor data packets into the communication channel, adding confusion and protecting against cyber attacks that rely on timing.

Supervisory Control and Data Acquisition (SCADA) systems can be used for sensor data collection, archiving and visualization, as well as for sending remote commands. Modern communication protocols, such as MQTT, OPC UA and Modbus TCP, supported by many SCADA systems, provide data delivery guarantees and can be used in combination with secure data containers, data encryption protocols and digital signatures.
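A hedged sketch of an SDG-style publisher (broker address, topic, key handling, and the AES-CBC-plus-HMAC construction are illustrative assumptions, not the paper's prescribed design):

```python
# Encrypt a sensor reading with AES-CBC, append an HMAC tag for
# integrity, and publish it over MQTT (paho-mqtt 1.x style client).
import os, hmac, hashlib
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
import paho.mqtt.client as mqtt

ENC_KEY, MAC_KEY = os.urandom(32), os.urandom(32)   # pre-shared in practice

def protect(reading: bytes) -> bytes:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    data = padder.update(reading) + padder.finalize()
    enc = Cipher(algorithms.AES(ENC_KEY), modes.CBC(iv)).encryptor()
    ct = iv + enc.update(data) + enc.finalize()
    return ct + hmac.new(MAC_KEY, ct, hashlib.sha256).digest()

client = mqtt.Client()
client.connect("scada.example.local", 1883)         # illustrative broker
client.publish("plant/sensor1", protect(b"temp=73.4"))
```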

10:40-12:00 Session 3B

Graduate Speakers - Session Chair: Shamin Khan

Location: Azalea
10:40
Creating Direct Sensing Data to Enable Flood System Land-Cover

ABSTRACT. Despite rapid technological development in flood detection and monitoring systems, their cost and operational demands have historically made it impractical to invest in a large-footprint direct sensing system. The major assets and resources that are exposed and vulnerable to flooding are being outfitted with sensing systems and are included in modeling and simulations. Nevertheless, insufficient flood data is being collected to form a good land-cover flood system that can monitor, forecast, warn, and support analysis and decision-making for different stakeholders and investors. To address this need, we began research on the design and testing of a cost-effective direct flood sensing system. The goals of the system are to be cost-effective in deployment and operation, requiring minimal to no hardware maintenance; to be configurable for large-scale deployment; and to supply data to software tools or service platforms for decision making. The current prototype system uses retail microcontrollers and a wide variety of sensors to create a custom control system. The use cases for transportation infrastructure entail providing accurate water levels at critical passing points. Such points have been identified for the University of Alabama campus in Tuscaloosa, the city of Tuscaloosa itself, the city of Mobile, and the highway system of the State of Alabama. This information can greatly reduce flash-flooding-related deaths from vehicles attempting to traverse flooded roads. The initial results form the basis for current technological enhancements and for future controlled field tests. This article presents the accomplishments of the ongoing project, its current research focus, and future steps in development and implementation. Lessons learned related to the business process, the development of the technology, and its deployment to the field are described.

11:00
We Are Peace Keepers! In Cyber Space…

ABSTRACT. Today in our nation, almost half a million cybersecurity jobs remain unfilled because of a lack of qualified individuals. With modern lives increasingly dependent on technology, it is absolutely essential that we as a nation are prepared to defend our experiences with technology by being cyber conscious and by contributing to building an effective, strong, and diverse cybersecurity workforce that can help keep peace in our cyber space.

As students affiliated with the Cybersecurity Education, Research and Outreach Center (CEROC) at Tennessee Tech University, we are on a mission to promote public awareness in cyber and empower ourselves to be cyber defenders of tomorrow. In this talk, we will present how we are preparing ourselves as future peacekeepers through various education, research, and outreach efforts, not just for our own readiness but also to serve our community.

11:20
An Implementation of Extended Reality Technology for Field-specific, Manufacturing Education and Training

ABSTRACT. Extended Reality technology has been used as a means of providing entertainment and enhanced life experiences for several years. One such enhanced life experience relates to industry, specifically manufacturing processes. Augmented Reality has given employees ways of seeing data overlays and other visualizations surrounding equipment and manufacturing functions. Interestingly, one overlap between this area and entertainment is the rise of simulator games. While popular, these games are very generic (generalized applications) and sometimes more arcade-like in “look-and-feel” (reduced realism for increased entertainment value). Companies like Microsoft and Google provide Mixed Reality solutions for businesses, which resolve the issues of generic applications and realism. However, these solutions can be costly, requiring specific computing devices, peripherals, and configurations. For this project, I propose the design and implementation of an inexpensive, platform-independent, near-realistic, XR-based simulator framework. This framework would be specific to a manufacturer, using 3D models of objects and equipment found in specific contexts within the manufacturer’s facilities. Ideally, it would promote quicker turnaround and an easier means of creating new simulators and scenarios for training and educating the manufacturer’s employees on their processes. Furthermore, simulators built within the framework would help provide a safer, controllable environment for learning the manufacturing processes practiced by the manufacturer.

11:40
Improving Anomaly Detection on Smart Grid with Dynamic Time Warping

ABSTRACT. Anomaly detection has gained importance as cyber-attacks against critical infrastructure have surged. Improving the speed of anomaly detection is one way to counteract these attacks, since anomalous activity is identified more rapidly. In this work, the goal is to improve the speed and accuracy of anomaly detection by using overlapping electrical measurements from a smart grid. We found 349 times more anomalies when considering overlapping measurements instead of just one of them. Although prior work considered fusing measurements, overlapping electrical measurements have not previously been used for anomaly detection. By applying Dynamic Time Warping to align the overlapping measurements, we uncovered this increase in anomalies.
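For reference, a compact dynamic-programming implementation of Dynamic Time Warping, the alignment measure named above (the signals here are synthetic, not grid data):

```python
# DTW distance stays small for similar signals that are out of phase,
# which makes it useful for aligning overlapping measurements.
import numpy as np

def dtw_distance(a, b) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0, 2 * np.pi, 50))
y = np.sin(np.linspace(0.3, 2 * np.pi + 0.3, 50))   # same event, shifted
print(dtw_distance(x, y))                           # small despite the shift
```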

10:40-12:00 Session 3C

Undergraduate Speakers - Session Chair: Saman Sargolzaei

Location: Dogwood I
10:40
Campus Mania

ABSTRACT. Our project is a 2D top-down map-traversal game akin to Pac-Man, styled to represent a student spending time studying and collecting knowledge while navigating away from distractions that would waste the student’s precious time. We used the Unity Editor to create the scenes for the game and Visual Studio to write scripting files for the player’s movement and the AI for the distracting enemies. A variety of school-themed maps can be chosen, and the difficulty of the game depends on which AI script is chosen to control the enemy movements. The purpose of the project is to test various AI pathfinding scripts on the enemies, both to exercise our knowledge of AI methods and to see which implementations are the most fair, enjoyable, or interesting to the player; which offer more of a challenge while remaining playable; and which are absolutely cruel and impossible to play against.
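For illustration, one of the simplest chase policies such a script can implement is breadth-first search toward the player; this sketch is in Python for brevity, though the project's actual enemy scripts are Unity/C#:

```python
# BFS over the map grid: return the enemy's first move along a shortest
# path to the player.
from collections import deque

GRID = ["#########",
        "#.......#",
        "#.###.#.#",
        "#.......#",
        "#########"]

def next_step(start, goal):
    queue, seen = deque([(start, None)]), {start}
    while queue:
        (r, c), first = queue.popleft()
        if (r, c) == goal:
            return first
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if GRID[nxt[0]][nxt[1]] != "#" and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, first or nxt))   # remember the first move
    return None

print(next_step((1, 1), (3, 7)))   # enemy's next cell toward the player
```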

11:00
Mark the Mighty

ABSTRACT. The purpose of this project is to create a roguelike combat game with a puzzle to venture through. The game was inspired by the older Zelda games with the pixelated look that games typically no longer have, and we have brought that look back. We used Google Firebase to make this a web-based game that many people can play freely. We used Unity as the game engine for all of the map design and scripting. One of the main components of the game is a main town that operates as a game hub, where players can buy perks to help them in combat. In combat, there are opponents to fight, and the player takes damage depending on how many hits the opponent lands. Other features include traps that will stun or kill the character depending on how many traps the character passes through. There is one playable character, named Mark, the protagonist we created. A high-score feature lets the player see their current score and their highest score. Another main aspect of the game is a maze that gives the player two different paths to return to the town. If the player finds their way through the maze back to the town, they will be able to play through the game again at that point.

11:20
A COVID-19 Analysis

ABSTRACT. The COVID-19 virus has rapidly spread across the world and has been a driving factor of changes to daily life, especially in the United States. The COVID-19 virus is the most prominent pandemic in our lifetime. Other countries, such as Taiwan, have been able to keep cases at exceptionally low levels. However, the United States has faltered in its response to COVID-19. Considering that we are in the digital age, mankind’s ability to respond to such pressures has greatly improved since previous pandemics; the wealth of data available during this pandemic plays an important role in our ability to analyze our response to the situation. Through data science we can derive observations regarding the most effective practices to stop the spread of this, and future, outbreaks.

We will be analyzing data at the county level to find what variables and factors are most significant in the spread of COVID-19. Using a Python-based ETL (Extract, Transform, Load) package, we will extract data from the CDC and other sources, clean said data, and load the cleaned county-level data into a SQL Server database. This database will be updated weekly, pulling the most recent data possible, and will update the PowerBI dashboard to show new results. We will perform exploratory data analysis to find trends and anomalies in the dataset using Python and SQL. This will act as a foundation for our findings and further analysis.
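A hedged sketch of that ETL flow with pandas and SQLAlchemy (the URL, column names, table, and connection string are placeholders, not the project's actual configuration):

```python
# Extract a county-level CSV, clean it, and load it into SQL Server.
import pandas as pd
from sqlalchemy import create_engine

df = pd.read_csv("https://example.org/covid_county_data.csv")   # extract
df = df.dropna(subset=["fips"]).rename(columns=str.lower)       # transform
df["date"] = pd.to_datetime(df["date"])

engine = create_engine(
    "mssql+pyodbc://user:pass@server/covid"
    "?driver=ODBC+Driver+17+for+SQL+Server")
df.to_sql("county_cases", engine, if_exists="replace", index=False)  # load
```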

Following preliminary analysis, we create a machine-learning model to predict the spread and potential hotspots of COVID-19. To showcase our findings, we will use a PowerBI dashboard that lets users interact with our data and findings. By leveraging data analysis, machine learning, and data visualization, we aim to provide insight into which counties were most affected by the pandemic.

11:40
Learning Algorithms on the Web

ABSTRACT. Algorithms can be complex and difficult to understand. To mitigate the confusion, we decided to create an application that makes them as clear as possible with visualizations, explanations, and a live code editor with a compiler. Our goal is to give users the information needed to know why to use algorithms, when to use them, and how to implement them. We want the tool to be as widely available as possible, so we chose to build a web application.

Our application uses the MERN stack (MongoDB, Express, React, and Node), so it is responsive, interactive, and persistent. Learn Algorithms uses visualizations as a teaching tool, showing how the algorithms move data; this helps users reach a deeper understanding, and for some it may finally ‘click’. We employ Material UI to give the application a pleasing, intuitive interface on every type of screen. Our web app logs users in with OAuth and stores algorithm implementation data for each user. This gives users the functionality of logging in, learning the algorithms, and testing an implementation all within the webpage. Implementations are savable to the database; thus, users can work on code over multiple sessions and always have their previous solutions within quick access. These technologies and features culminate in an application that makes learning algorithms simple and fast.

10:40-12:00 Session 3D

Undergraduate Speakers - Session Chair: Greg Kawell

Location: Dogwood II
10:40
Smart UAV Low-Cost LiDAR Obstacle Detection Systems Through the Implementation of 3-D Point Clouds

ABSTRACT. Research in unmanned aerial vehicles (UAVs) involves implementing sensors to perform collision avoidance algorithms in different types of environments for varying missions. One sensor that has become prominent in research is a laser-based sensor called LiDAR (Light Detection and Ranging). A LiDAR calculates the distance to objects by measuring the time it takes for the laser to be reflected off the target object and returned to the sensor. A LiDAR can be used to create a 3-D point cloud map that supports obstacle detection and avoidance algorithms for autonomously navigating a UAV through environmental obstacles. This research develops a low-cost UAV obstacle detection system for different environments by implementing a single LiDAR rangefinder. The LiDAR is placed on top of the UAV to detect objects by creating a 3-D point cloud as the flight angle and altitude of the UAV are adjusted. The point cloud is built with the aid of a magnetometer and accelerometer, which supply the 3-D aspect in a Cartesian coordinate system. An obstacle detection algorithm was developed to filter through the point cloud and locate an open space for the UAV to fly through. A collision avoidance algorithm can then be generated by combining the concepts of edge detection and artificial potential field algorithms while navigating through the point cloud. The LiDAR UAV system was tested to create accurate 2-D and 3-D point clouds for use with the obstacle detection methods. Due to UAV hardware limitations, the system could only be tested indoors by flying the UAV manually rather than through autonomous flight code. Even so, it produced a successful 3-D point cloud of the indoor environment, on which the obstacle detection algorithm worked correctly.
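The core geometry can be sketched as follows (axis conventions and the sample readings are illustrative assumptions):

```python
# Convert a LiDAR range plus the UAV's yaw (magnetometer), pitch
# (accelerometer), and altitude into a Cartesian point; accumulating
# such points yields the 3-D point cloud.
import numpy as np

def lidar_point(range_m, yaw_deg, pitch_deg, altitude_m):
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    x = range_m * np.cos(pitch) * np.cos(yaw)
    y = range_m * np.cos(pitch) * np.sin(yaw)
    z = altitude_m + range_m * np.sin(pitch)
    return np.array([x, y, z])

cloud = np.array([lidar_point(r, yw, p, alt)
                  for r, yw, p, alt in [(5.0, 0, 0, 1.2), (5.2, 10, 5, 1.2)]])
print(cloud)
```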

11:00
Facial Recognition Through Deep Learning with Colab

ABSTRACT. Facial recognition is being employed for a wide range of applications, such as automated identity verification, law enforcement, and surveillance systems. Different algorithms and methods have been developed for facial recognition. Most recently, deep learning has been applied to facial recognition and achieved high performance. Most facial recognition research has been conducted in the visible spectrum. We are looking into training deep neural networks (DNN) with visible images of faces and testing the DNN with other types of images, such as thermal and polarimetric thermal images. These images represent heat measurements and are beneficial because light exposure can affect the visibility of an image. Deep learning involves large volumes of data and has particularly demanding computational needs that cannot be easily satisfied. Colab is a free Jupyter notebook environment that runs entirely in the cloud. We use the TensorFlow library in Colab to build our DNN model. A dataset of 720 visible images is loaded into the model and separated into 60 classes, each consisting of 12 slightly different images of the same person’s face. Through deep learning, the model can successfully classify each visible face with roughly 96 percent accuracy. The thermal and polarimetric datasets are then loaded into the trained model and tested against the visible model for accuracy. This test classifies images the same way as the visible images but is not nearly as accurate. To improve performance, we added thermal or polarimetric thermal images to the training dataset, which significantly increased the accuracy. Our future work is to implement pre-processing to reduce the gap between the visible and thermal domains. These results show that thermal and polarimetric thermal images can be used for facial recognition, but improvements are necessary for real-world applications.
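A sketch of the cross-modality experiment in Keras, with random arrays standing in for the real visible and thermal datasets (model size and image shapes are illustrative):

```python
# Train on visible faces only, evaluate on thermal faces, then retrain
# on a mixed set; on the real data, the mixed set raises accuracy.
import numpy as np
import tensorflow as tf

x_vis = np.random.rand(120, 32, 32, 1)            # stand-in visible faces
x_thermal = np.random.rand(120, 32, 32, 1)        # stand-in thermal faces
y = np.random.randint(0, 60, 120)                 # 60 identity classes

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 1)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(60, activation="softmax")])
model.compile("adam", "sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_vis, y, epochs=3, verbose=0)                  # visible only
print(model.evaluate(x_thermal, y, verbose=0)[1])         # cross-modal test

x_mixed = np.concatenate([x_vis, x_thermal])              # add thermal
model.fit(x_mixed, np.concatenate([y, y]), epochs=3, verbose=0)
```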

11:20
Artificial Intelligence in Gaming

ABSTRACT. Artificial intelligence has completely revolutionized the computer science industry. Not only has this technology transformed fields like travel and cybersecurity, it has changed how we view video games. With simple artificial intelligence making a two-person game like chess playable solo, there is no doubt that artificial intelligence has come very far in the last few decades. Yet the integration of artificial intelligence into video games has caused major disputes in the gaming industry: many companies must work harder to remove AI-driven programs to prevent unfairness and cheating in their games, and this has even caused some companies and game developers to lose their audience. I plan to research how these programs are developed and what makes them work. I also want to research and discuss how these computer programs can outperform a human player, basing this research on effectiveness and time efficiency in many aspects of games. I plan to present a personally programmed artificial intelligence for the classic game Pac-Man, and I will discuss my development process, the difficulties that arose, and the benefits of the program over human gameplay.

11:40
Language Choice of Program Assignments in Introductory Computer Programming Courses and the Effects on Female Student Retention

ABSTRACT. Students have a lot to learn in their first programming course, but there is more to it than learning the structure of the language. Examples for homework and projects assume previous knowledge, and they often come from areas that are mostly of interest to males. This leaves female students with more work than the professor intended. Female students also face discrimination from their peers, which leaves them doing tasks such as documentation on group projects. They end up learning less programming, creating a cycle of weakness that can in turn lead them to withdraw from computing majors.

What is needed is a rework of projects and homework to be less male-centric and more neutral. Instructors could pick programming assignments from a wider range of topics, or alternatively provide the needed background information instead of making assumptions.

I intend to do research using professional papers as well as my own experiences and interviews with other female students in computer science and software engineering. I also intend to interview a few male students to be able to see the scope of their background knowledge.

My presentation will show the unseen bias women face and how it can affect their desire to stay in the major and possible solutions to start the closure of the gender gap.

13:00-14:20 Session 4A

Professional Speakers - Session Chair: Bob Bradley

Location: Highlander II
13:00
A Collaborative Partnership between CS and Math Faculty to Develop Lessons that Teach Generalization through Writing Python Programs

ABSTRACT. Working with the Alabama State Department of Education (ALSDE) and the Alabama Math, Science, and Technology Initiative (AMSTI), a team of computer science and math faculty developed an Instructional Model (IM) that uses computer programming to explore math concepts and build generalization skills. Over the last ten years, the Collaborative Partnership to teach mathematical Reasoning using Computer PRogramming (CPR2) project has held more than 150 PD sessions and worked with over 200 K-12 teachers. The team is currently funded by the NSF to take CPR2 to 7th and 8th grade math teachers and their students. Numerous lessons across grade levels have been developed using the CPR2 IM. In this session, we will discuss the Python programming in the lessons and how writing the mini-Python programs, and exploring a concept using the programs, pushes students to generalize over the concept. We will also share results from the NSF research study.
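An example of the kind of mini-program such a lesson might use (illustrative, not taken from the CPR2 materials verbatim): students vary n, observe the output, and generalize that the sum of the first n odd numbers is always n².

```python
# Explore: is the sum of the first n odd numbers always a perfect square?
for n in range(1, 6):
    odds = [2 * k + 1 for k in range(n)]
    print(n, odds, sum(odds), n * n)   # sum(odds) matches n*n every time
```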

13:20
Designing CS Materials for Out-of-field Middle School Teachers

ABSTRACT. As K-12 schools incorporate computer science, one of the challenges they face is preparing teachers for their first CS course. In this paper, we discuss our experiences providing CS professional development for middle school teachers, using a curriculum based on our NSF INCLUDES-supported CS Makers course with micro:bit hardware.
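For context, a first-lesson-style micro:bit program in MicroPython, the kind of small example such a course might open with (illustrative only):

```python
# Immediate feedback on real hardware: buttons drive the LED display.
from microbit import *

while True:
    if button_a.is_pressed():
        display.show(Image.HAPPY)
    elif button_b.is_pressed():
        display.scroll(str(temperature()))   # built-in temperature sensor
    else:
        display.show(Image.ASLEEP)
```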

13:40
A Brief History of the Web

ABSTRACT. This talk will be a brief history of the world wide web. We will cover the birth of the web server, web browser and web page. Next we will look at the start of the dynamic web using CGI and templating. We will talk about the beginnings of CSS, JavaScript and AJAX. We will talk about the different versions of JavaScript, REST APIs and Web 2.0. We will talk about single page apps, server side rendering, progressive web apps and web assembly. Finally we will talk about Web 3.0 and the future of the web.

14:00
Developing Firebase Apps with Angular

ABSTRACT. This talk will discuss developing Firebase-hosted apps with Angular. It will include a quick background on what Firebase and Angular are. Next, I will talk about the conference registration system and the CompileIt system that I have been working on for several years now. We will see the ongoing problem of how quickly Angular and Firebase update, and the node/npm problem of having the Firebase project live within the Angular project, along with a new project layout I designed to fix it. Lastly, we will look at the new Google site and talk about sending emails from Firebase using SendGrid.

13:00-14:20 Session 4B

Graduate Speakers - Session Chair: Melissa Wiggins

Location: Azalea
13:00
Anomaly Detection in Smart Farming using Machine Learning

ABSTRACT. Smart farming is a critical infrastructure domain in which IoT devices are used to increase production efficiency and optimize crop output. The use of IoT devices increases the attack surface for cyber threats. These threats could be used to halt production, gain profit, or cause any number of negative consequences. Some attacks can produce anomalies within collected sensor data. Detection of this anomalous data, whether naturally occurring or the result of an attack or accident, is a vital part of a fully secured cyber system. Timely and accurate detection allows systems to be used online with real-time actuation. The goal of our work is to collect a large dataset of sensor data, train and test an autoencoder model, and evaluate metrics for the model. We were able to achieve 98.98% accuracy on a test set with more than 18,000 data points across 4 sensors, collected over two separate week-long spans. This work provides a baseline for anomaly detection specific to smart farming operations. The sensors we deploy provide an accurate look into how a real-life farming operation could monitor its crops, as well as how anomalies are represented within the readings of these sensors. Our autoencoder approach provides an unsupervised machine learning method that does not rely on being trained on both anomalous and normal data points. This is a valuable solution, since normal data is far more available than anomalous data. This approach allows us to train a sophisticated model without having to inject artificial anomalies for training purposes.
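A minimal sketch of the autoencoder approach (the synthetic readings, layer sizes, and percentile threshold are illustrative assumptions):

```python
# Train on normal sensor readings only, then flag readings whose
# reconstruction error exceeds a high percentile of the training error.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
normal = rng.normal(25, 1, size=(5000, 4)).astype("float32")   # 4 sensors

ae = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="relu"),       # bottleneck
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(4)])
ae.compile(optimizer="adam", loss="mse")
ae.fit(normal, normal, epochs=5, verbose=0)            # unsupervised

errors = np.mean((normal - ae.predict(normal, verbose=0)) ** 2, axis=1)
threshold = np.percentile(errors, 99)                  # tolerate sensor noise

reading = np.array([[40.0, 25.0, 25.0, 25.0]], dtype="float32")
err = np.mean((reading - ae.predict(reading, verbose=0)) ** 2)
print("anomaly!" if err > threshold else "normal")
```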

13:20
A Taxonomy of Data Poisoning Attacks: Domain, Approach and Target

ABSTRACT. The last decade has seen tremendous academic effort and industrial growth in advancing and adopting machine learning (ML) technologies. With the increased application of ML in different domains, the motivation for deceiving these models is higher than ever. Due to increasing incentives, serious security threats are being unleashed against machine learning models. These models are trained on huge volumes of data, which becomes an easy avenue for an attacker to pollute the training data. With state-of-the-art research focused on topics like open design principles, federated learning, and crowd-sourcing, the integrity of data is at even higher risk. As ML models depend on different stakeholders for obtaining datasets, there are no existing reliable automated mechanisms to verify the veracity of data from each source. This has led to questions about the trustworthiness of ML models, including modern deep learning, as these technologies can be compromised by data poisoning attacks. Fake data inserted or manipulated in training leaves backdoors in trained models, affecting the integrity and performance of those models.

This work is intended to provide a comprehensive understanding of current threats, ongoing research, and future directions of poisoning attacks across different domains. Since adversarial attacks based on data poisoning are a relatively new field, we would like to analyze ongoing work in the research community. Our work will compare representative poisoning attacks carried out against machine learning models in extended domains through different approaches. Unlike existing work, our taxonomy will not be limited to a single dimension: we will taxonomize poisoning attacks in terms of attack domain, poisoning approach, and the target models that are poisoned. We will evaluate the development of different approaches for data poisoning across each domain. Our work also aims to provide future directions for data poisoning attacks based on current limitations and challenges.

13:40
A Vision for Activity Control Security Models for Smart Ecosystems

ABSTRACT. The prevalence of technological advancement in artificial intelligence and cloud architectures is one of the greatest contributors to cyber-physical systems (CPS). The Internet of Things (IoT) allows heterogeneous devices to interconnect through dynamic networks, with or without human assistance, to build smart automated cyber systems and provide intelligent services. With this significant development, attack surfaces and security challenges have grown: unauthorized access and intruder activity can hinder system functionality and lead to big losses for both individuals and organizations. The decision to grant access to resources in a system depends on environmental conditions, attributes of entities, risk factors, and the operations performed by system entities. Access control security models enforce decision control for requested access permissions in CPS. The foremost security challenge is to build security models for particular types of cyber environments that consider fine-grained decision factors, increasing system efficiency along with flexibility in terms of security. Most existing work on access control models leaves uncovered authorization factors, such as the current state of devices, that are critical in CPS. Security models need to be dynamic, real-time, and adaptive, qualities much needed in smart and connected environments. In this work, we review state-of-the-art access control models and identify the need for “Active” models. Our aim is to work on uncovered decision factors that will improve the security of access decisions. “Activity” on running devices is one of the significant decision factors affecting secure access control. We intend to build such an “Active” security model that incorporates the notion of activity in a real-time cyber environment. We also analyze the adaptability of such models to new domains with heterogeneous use cases.
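As one possible reading of an “Active” decision factor (our illustration, not the authors' model), the sketch below adds the device's current activity to an otherwise attribute- and risk-based access decision; all names and policy rules are hypothetical.

    # Hypothetical activity-aware access decision.
    def decide_access(subject, resource, operation, environment):
        if operation not in subject.get("permitted_ops", set()):
            return False                               # entity attributes
        if environment.get("risk_level", 0.0) > 0.7:
            return False                               # risk factor
        if resource.get("activity") == "critical":
            return False                               # "active" factor: device is busy
        return True

    granted = decide_access(
        subject={"permitted_ops": {"read", "actuate"}},
        resource={"activity": "idle"},
        operation="actuate",
        environment={"risk_level": 0.2},
    )
    print(granted)  # True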

13:00-14:20 Session 4C

Undergraduate Speakers - Session Chair: Masoud Naghedolfeizi

Location: Dogwood I
13:00
Detection of Stepping-Stone Participation using Identical Packet Sequencing with Differing Encryption Algorithms

ABSTRACT. As the capability to detect network intrusion has increased, so has attackers’ ability to avoid detection. Commonly, attackers use Secure Shell (SSH) to hide their identity. SSH securely connects two hosts and encrypts their interactions. In stepping-stone attacks, one SSH connection leads to another on a different host, and so on, until the attacker becomes untraceable from a victim host. In exploring a method to detect stepping-stone attacks, we are pinned between two difficult constraints. First, we want to detect a stepping-stone attack from any “stone” in the chain from attacker to victim, which runs against our second constraint: we must do this without access to any encrypted data. This means we only have the data “surrounding” the encrypted information. While several solutions have been suggested, the work in this project builds on that of J. Yang et al.: detecting whether a host is being used as a stepping-stone by examining the lengths of encrypted packets. Their work suggested that by finding pairs of packets, one entering the host and one leaving it, both with identical encrypted length x, we could determine that a host was being used as a stepping-stone. However, their work assumed that the encryption algorithm used for both connections remained the same. Our work determines whether we can make use of the same effect across algorithms by focusing on the length sequences of incoming and outgoing packets. We expect that these sequences can be found when considering stream ciphers and comparing the length sequences coming into and out of a host. If so, we can develop detection algorithms that identify these sequences and determine whether a host is being used as a stepping-stone to access another host.
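A hedged sketch of the core comparison (same-algorithm case, for illustration only; the window size and the toy length values are assumptions):

    # Find runs of encrypted-packet lengths that appear verbatim in both the
    # incoming and outgoing streams of a host, hinting at relayed traffic.
    def find_length_matches(incoming, outgoing, window=5):
        matches = []
        for i in range(len(incoming) - window + 1):
            pattern = incoming[i:i + window]
            for j in range(len(outgoing) - window + 1):
                if outgoing[j:j + window] == pattern:
                    matches.append((i, j))
        return matches

    inc = [52, 84, 100, 52, 68, 84, 52]   # toy incoming packet lengths
    out = [36, 52, 84, 100, 52, 68, 120]  # toy outgoing packet lengths
    print(find_length_matches(inc, out))  # [(0, 1)]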

13:20
Search and Rescue with Rover Robot

ABSTRACT. The purpose of this research is to investigate the capabilities and challenges associated with search and rescue missions utilizing a vendor robot with limited sensor capabilities. A Revolution Roli Rover robot, made by EZ Robot, was utilized in this research. The robot includes two major sensors: a camera and a sonar distance sensor.

An experiment was designed to simulate a search and rescue mission by the robot. The experiment included an object made in-house to be detected by the robot based on its color, a predefined path to the object, and a delivery path to rescue the object based on image processing and face recognition technology.

To conduct the experiment, we utilized vendor software called ARC by Synthiam for prescribing the rescue path, image processing, and deep learning for face recognition. A distance of 3 meters between the rover robot and the object was set for the experiment. Through a program designed in Python and integrated within the ARC software, the rover successfully reached the object by utilizing a combination of its camera and sonar sensor. This success was achieved after numerous rounds of trial and error to correct for camera and sensor inaccuracies.

For object delivery to a specific person, we utilized the deep learning face recognition network within the software. We generated about 14,000 face images to train the network. After the network was successfully trained and tested, the rover robot was able to accurately deliver the object under ideal lighting conditions. However, we also noticed that lighting conditions could impact the accuracy of both face recognition and object detection, resulting in imprecise movements by the robot and faulty face recognition.

Mentor/Research Advisor: Dr. Masoud Naghedolfeizi
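For readers curious what color-based detection of the kind used in this project can look like, below is a hedged, generic OpenCV sketch; the actual project used Synthiam ARC, and the HSV bounds (targeting a red object) are illustrative assumptions.

    # Generic color-blob detection: return the centroid of the largest region
    # falling inside an HSV color range, or None if nothing matches.
    import cv2
    import numpy as np

    def locate_colored_object(frame_bgr, lower_hsv=(0, 120, 70), upper_hsv=(10, 255, 255)):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv,
                           np.array(lower_hsv, dtype=np.uint8),
                           np.array(upper_hsv, dtype=np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])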

13:40
Stepping-stone Intrusion Upstream Detection Using Network Traffic Round-trip Time Distribution

ABSTRACT. Network attackers who want to conceal their identity and run little risk of detection commonly employ stepping stones. Using stepping-stone intrusion (SSI), they can launch attacks through a chain of multiple machines, making them increasingly difficult to detect and trace. Determining the length of a connection chain is the best-known method of detecting SSI; however, there are currently no accurate methods for determining the length of the upstream connection. Previous methods of stepping-stone intrusion detection (SSID) were based on timestamps or packet contents to determine a relationship, but these were less effective because they were easily defeated by encryption or countermeasures. This project aims to detect SSI by determining the length of the upstream connection chain using the distribution of round-trip times (RTTs), an approach that resists known countermeasures. The algorithm is based on the RTTs of incoming and outgoing packets, using the difference between RTT distributions to determine whether SSI is present on the host and how long the upstream connection chain is. Using RTTs resists chaff perturbation, since chaff must be filtered out before reaching the end host, and applying time-jittering to packets in the network delays the responding packet by an equal amount, leaving the RTTs unchanged. We expect to evaluate this algorithm on servers across the internet and find that the difference between RTT distributions becomes more inconsistent with a long connection chain, indicating that SSI is present on the host and revealing the length of the upstream connection chain from the inconsistency of the RTT distribution.
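A minimal sketch of the comparison step (our illustration: the pairing of timestamps and the choice of a two-sample Kolmogorov-Smirnov test are assumptions, not the project's algorithm):

    # Compare two RTT samples; a significant difference between distributions
    # is treated as a possible sign of a longer upstream chain.
    from scipy import stats

    def rtts(send_times, echo_times):
        """Round-trip times from matched send/echo timestamp lists."""
        return [e - s for s, e in zip(send_times, echo_times)]

    def distributions_differ(rtt_a, rtt_b, alpha=0.05):
        _, p = stats.ks_2samp(rtt_a, rtt_b)
        return p < alpha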

14:00
Detecting Stepping Stone Intrusion Using Packet Crossover

ABSTRACT. Stepping-stone intrusion is a hacking strategy in which an attacker sends attacking commands through compromised hosts, called stepping-stones, in order to remotely access a target host. These stepping-stones form part of a long connection chain that serves as an intermediary between the target and attacker hosts, providing the attacker with increased anonymity and detection-avoidance capabilities. Long chains with three or more connections often indicate malicious activity. In a long connection chain, it is possible for the sender to transmit the next request packet before receiving the response to the previous request. In such a case, some request and response packets may cross each other somewhere along the connection chain, producing packet crossover. In this work, we first conduct network experiments to verify that the number of crossover packets is proportional to the length of a connection chain. Then, we establish a quantitative relationship between these two parameters. Our work will produce quantitative metrics that detect whether a given host is being used as a stepping-stone for intrusion by analyzing the number of crossover packets.
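The crossover condition itself is simple to state in code; the sketch below (with made-up timestamps) counts how often a request leaves before the previous response has returned.

    # Count packet crossovers: request i sent before response i-1 arrived.
    def count_crossovers(request_times, response_times):
        crossovers = 0
        for i in range(1, len(request_times)):
            if request_times[i] < response_times[i - 1]:
                crossovers += 1
        return crossovers

    # Request 2 (t=0.10) leaves before response 1 (t=0.25) returns: one crossover.
    print(count_crossovers([0.00, 0.10, 0.40], [0.25, 0.35, 0.55]))  # 1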

13:00-14:20 Session 4D

Peer-Reviewed Speakers - Session Chair: Karen G. Carter

Location: Dogwood II
13:00
An Introductory Guide To Autograders and Their Features

ABSTRACT. Autograders come in many different shapes and sizes. To help navigate the many different choices and options, we share this review of 23 autograders: Autogradr, Autolab, CodeCheck, CodeHS, codePost, CodeRunner, CodeWorkout, CodingBat, Codio, Github Classroom, GradeScope, INGInious, JDoodle Guru, Jutge.org, Mimir, Problets, QuizJet, replit Teams for Education, stepik.org, Test My Code, Vocareum, Web-Cat and zyLabs. We evaluate each autograder based on 11 important features: programming languages supported, cost, an exportable gradebook, local and/or hosted server, ability to easily add custom assignments, ability to use assignments from a provided library, strict input-output matching, flexible input-output matching, regex input-output matching, unit tests and scriptable evaluation.
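To make three of these features concrete, here is a hedged sketch of how strict, flexible, and regex input-output matching differ; it is generic, not taken from any of the tools surveyed.

    # Three styles of output matching an autograder might offer.
    import re

    def strict_match(expected, actual):
        return expected == actual                      # byte-for-byte

    def flexible_match(expected, actual):
        norm = lambda s: " ".join(s.lower().split())   # ignore case/extra whitespace
        return norm(expected) == norm(actual)

    def regex_match(pattern, actual):
        return re.fullmatch(pattern, actual) is not None

    print(strict_match("Sum: 42", "sum:  42"))         # False
    print(flexible_match("Sum: 42", "sum:  42"))       # True
    print(regex_match(r"[Ss]um:\s*42", "sum:  42"))    # True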

13:20
Towards a Comprehensive Ontology for Electronic and Physical Evidence in Cybercrime

ABSTRACT. With the rapidly increasing volume of electronic and physical evidence in cybercrime, the time to analyze and respond will continue to increase. In this study, we propose a cybercrime ontology to effectively analyze, store, and respond to evidence of cybercrime. The power of our proposed ontology lies in addressing the difficulty of associating digital and physical evidence while integrating the law, which determines who can access what. Such integration will enable the powerful inference and reasoning of a formal ontology, as it incorporates concepts from 1) digital evidence, 2) physical evidence, and 3) law (criminal, civil, and administrative).

We construct and represent our ontology using the Web Ontology Language (OWL), a highly expressive language and the most popular for representing ontologies and their concepts. We develop a prototype system of our ontology using Protégé, a free and open-source development environment for knowledge-based systems. The logic behind this implementation is to give the machine the ability to reason and infer knowledge from the evidence and law sub-ontologies, ensuring logical consistency and computing plausible relationships between the ontology's concepts to help effectively resolve cybercrimes.
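As a flavor of what such a model can look like in code (class and property names here are hypothetical, not the authors' ontology), a minimal sketch using the owlready2 Python library:

    # Tiny OWL ontology sketch: evidence classes linked to law by a property.
    from owlready2 import get_ontology, Thing, ObjectProperty

    onto = get_ontology("http://example.org/cybercrime.owl")  # hypothetical IRI

    with onto:
        class Evidence(Thing): pass
        class DigitalEvidence(Evidence): pass
        class PhysicalEvidence(Evidence): pass
        class Law(Thing): pass
        class governedBy(ObjectProperty):
            domain = [Evidence]
            range = [Law]

A reasoner (e.g., via Protégé or owlready2's sync_reasoner) can then check consistency and infer relationships between the concepts, which is the role the abstract describes.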

13:40
Teaching and Retaining the Computer Science Appalachian Student

ABSTRACT. Deciding to attend college represents a pivotal point in a student’s life, regardless of age! This can be an especially poignant decision for the Appalachian student, often evoking the image of a looking glass…who am I vs. who I want to be? Recruiting, engaging, and retaining students often prompts a struggle between academics and culture when their coexistence appears rare or even impossible.

This paper details ongoing research into how to recruit and retain computer science students within public school systems located in Lee and Wise Counties of Virginia, Bell and Harlan Counties of Kentucky, and Hancock and Claiborne Counties of Tennessee. The U.S. Census has identified these areas as Central Appalachia, and these counties comprise the main service area of the University of Virginia’s College at Wise. Potential students hailing from Central Appalachia were raised in a historically economically depressed contiguous area where access to computer science classes is limited.

14:00
CyberHero: A serious game to teach cybersecurity

ABSTRACT. Cybersecurity is a vital part of modern-day life, yet everyday users still struggle to grasp even the basics of cybersecurity. Organizations take the same approach to cybersecurity that they have with every other training topic: slideshows, presentations, pamphlets, manuals, webinars, seminars, or video productions. They do not take into account that these standard training types might be ineffective, leaving trainees overwhelmed, overloaded, confused, or bored. It is no secret that people are the weakest link in a security ecosystem, so great cybersecurity training is pertinent for all types of organizations. This research study aimed to develop an adaptive serious game that not only teaches users about cybersecurity in an engaging and personalized manner, but also measures the training’s effectiveness in a way that many existing cybersecurity serious games fail to do. By measuring improvement within the game, we can gather more meaningful data and draw sounder conclusions about the usefulness of serious games for teaching cybersecurity.

14:35-15:55 Session 5A

Professional Speakers - Session Chair: Robert Lowe

Location: Highlander II
14:35
Procedural Walking Animation for a Top-Down 2D RPG Character

ABSTRACT. In role-playing games (RPGs), players can move their characters around in the virtual world, typically by walking to a destination. In a typical walk animation, the movement of the feet does not affect the character’s position; the position is calculated from a given direction and speed applied to the main body. I will present a procedural walking animation for a top-down 2D RPG character where the position of the character is instead calculated from the positions of the feet. Each foot moves separately but is constrained by the main body to which it is attached. The animation is externalized but injected into the character, and is parameterized by direction, speed, and stride length. I will discuss the data structures, algorithms, and state machine used to represent the character and the animation.
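A stripped-down sketch of the foot-driven idea (2D tuples and illustrative constants; not the talk's actual data structures or state machine):

    # The body position is derived from the feet, not the other way around.
    def body_position(left_foot, right_foot):
        return ((left_foot[0] + right_foot[0]) / 2,
                (left_foot[1] + right_foot[1]) / 2)

    def step(foot, direction, stride):
        """Advance one foot by `stride` along a unit direction vector."""
        return (foot[0] + direction[0] * stride,
                foot[1] + direction[1] * stride)

    left, right = (0.0, 0.0), (0.5, 0.0)
    for i in range(4):                    # alternate feet; the body follows
        if i % 2 == 0:
            left = step(left, (0.0, 1.0), stride=1.0)
        else:
            right = step(right, (0.0, 1.0), stride=1.0)
        print(body_position(left, right))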

14:55
Teaching OS via Collaboration, Research and Presentations

ABSTRACT. I have been teaching an operating systems class for over 15 years. In the early days, I taught the course strictly as a lecture-based course, giving out the usual end-of-chapter assignments and creating different types of exams to assess students’ learning of the material. I used the multi-hundred-page Stallings textbook and several times tried other textbooks, each of which had far more material than could ever be covered in a single-semester undergraduate course. After attending several SIGCSE workshops on tools for teaching OS via programming, I began to develop a course using this method, but grew concerned that it might pose a huge hurdle for the weaker programmers in the department.

It was at this point that I participated in an NSF grant on Studio-Based Learning and began to redesign the whole way I taught OS. As I was beginning to experiment with new forms of assessment in the class, the ACM released its Computer Science Curricula 2013 report, which lent further support to some of the methods I was working into the course. I have now completely redesigned the course to use collaboration, research, and presentations as the major components and methods of assessment. In this talk, I will explain the method currently used, how I got here, lessons learned, and how I continue to modify this course, including how COVID affected it.

15:15
Using Chat and Teletyping for Productive Office Hours and Virtual Labs in Computer Science Education

ABSTRACT. With COVID-19, many courses have moved online. Screen-sharing tools have become the standard for communicating with students during office hours. Even though screen-sharing tools are very effective for hosting group meetings, providing effective one-to-one help can be challenging in introductory programming courses, where students need frequent help. We introduce Virtual Office Hours to improve the efficiency of tutoring and provide better access to help. The tool is based on a cloud IDE, where students can code, compile, and execute programs. When a student requests help, the tool allows the teacher to view the code in the cloud IDE without actual screen sharing. Additionally, teletyping allows the instructor to view the student’s progress in fixing the issues. A video call may be initiated by the instructor when it is easier to communicate help verbally, eliminating the need for third-party meeting software like Zoom. By maintaining a queue, students get help on a first-come, first-served basis. The demo will walk through the steps in a typical help session: when an instructor is present, a student can request help from the workbench; the instructor can start a private chat room with a single click; the content of the student’s editor and related information are synced with the instructor’s view; and the instructor can continue to watch the progress as the student teletypes in the shared editor.
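The queueing behavior is the easiest part to sketch; the following toy first-come, first-served help queue is our illustration (the real tool is built on a cloud IDE, and these names are hypothetical):

    # Toy FIFO help queue for office hours.
    from collections import deque

    class HelpQueue:
        def __init__(self):
            self._queue = deque()

        def request_help(self, student_id):
            if student_id not in self._queue:
                self._queue.append(student_id)

        def next_student(self):
            """Instructor serves whoever has waited longest."""
            return self._queue.popleft() if self._queue else None

    q = HelpQueue()
    q.request_help("alice")
    q.request_help("bob")
    print(q.next_student())  # alice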

15:35
The Search for the Ideal Sparse Tensor Storage Format

ABSTRACT. Tensors, also known as n-way arrays, are seeing increased use in machine learning, data mining, and data analytics. Tensors offer many advantages over flat data matrices and provide for many analysis techniques that can reveal latent features present in data. For example, tensors have been used to identify authorship within text documents, to represent deep learning networks, and to discover the semantic meaning of learned models. Many of these recent applications involve very high-dimensional tensors that are extremely sparse; in fact, dense storage is most of the time completely intractable.

This presentation covers some of these applications and explores the current state of the art of sparse tensor storage and operations, including a few optimizations which may lead to an ideal sparse storage format.
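As a baseline for comparison, the simplest sparse format, coordinate (COO) storage, keeps only the nonzero values and their index tuples; this generic sketch is illustrative, not a format proposed in the talk.

    # Coordinate (COO) storage for an n-way sparse tensor.
    class CooTensor:
        def __init__(self, shape):
            self.shape = shape
            self.indices = []   # one index tuple per nonzero
            self.values = []    # parallel list of values

        def insert(self, index, value):
            self.indices.append(index)
            self.values.append(value)

        def nnz(self):
            return len(self.values)

    # A 1000 x 1000 x 1000 tensor with two nonzeros stores 2 entries, not 10^9.
    t = CooTensor((1000, 1000, 1000))
    t.insert((0, 5, 9), 3.14)
    t.insert((999, 0, 42), 2.72)
    print(t.nnz())  # 2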

14:35-15:55 Session 5B

Poster Session

Location: Dogwood II
Developing a Mobile Application for Professor/Student Profile Matching

ABSTRACT. Collaboration is a paramount aspect of the academic field. A recent surge in virtual connection and collaboration has shown us the need and potential for infrastructure and platforms that let people collaborate virtually. There is no shortage of software that lets people collaborate over the internet: Google Meet, Microsoft Teams, Skype, Zoom, etc., are all great platforms for working together in groups. What we lack right now is a platform that focuses on collaboration in academia and research. University professors work on various projects and research throughout the year, and commonly need collaborators (students or other faculty members) for their projects. Nowadays, smartphones are the most accessible computing devices, opening up boundless possibilities for communication and information exchange. This work aims to create a mobile application that allows professors and students to add their profiles and research interests. The application uses AI and natural language processing techniques to match the different profiles and suggest potential collaborators to application users. The application has a centralized data server that stores people and project data; anybody with the mobile client app can then find and search for people and projects in the database. Information security will, of course, be a major priority of this project. Android Studio and Flutter are used to build the application, which will be compatible with most Android and iOS devices. A demo will be available as part of the project presentation.
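The abstract names AI and NLP but not a specific method; one plausible sketch (entirely our assumption) matches research-interest text with TF-IDF vectors and cosine similarity:

    # Match made-up profiles by TF-IDF similarity of their interest text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    profiles = {
        "prof_a": "machine learning for medical imaging and deep networks",
        "prof_b": "compilers, static analysis, and programming languages",
        "student": "interested in deep learning applied to medical images",
    }

    names = list(profiles)
    vectors = TfidfVectorizer().fit_transform(profiles.values())
    scores = cosine_similarity(vectors[names.index("student")], vectors)[0]

    # Rank potential collaborators for the student (excluding themselves).
    ranked = sorted((s, n) for n, s in zip(names, scores) if n != "student")[::-1]
    print(ranked[0][1])  # prof_a, the closest research match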

Gamification in a Robotic Training Virtual Reality Program

ABSTRACT. The use of robots in the manufacturing industry has become commonplace, and training new individuals on how to use robotic equipment carries possible damage costs. With the development of technology, virtual reality (VR) has become a viable method for robotic training since it offers a lifelike experience that replicates real-world training. Combined with head-mounted displays (HMDs), VR can realistically replicate an interactable environment. The goal of this research project is to provide more effective training on the operation of a FANUC R-2000iC/165F robot through a VR program than through standard training. Gamification is a strategy that attempts to facilitate learning by adding game elements to an activity, turning the overall learning process into a game. This strategy aligns with our goals of engaging the user more and creating a preferable method of learning. This presentation will cover my contributions to the project in terms of gamification. The robotic training VR program is under continuing development by the Western Kentucky University Extended Reality Lab team.

The Connection of STEM and Art & Design: A Visual Study of the Importance of Digital Arts in the Traditional Gallery Space

ABSTRACT. In April of 2021, eight students from varying STEAM (STEM + Art) fields were selected to form a research lab to produce AR/VR/MR programs for their collegiate community. The formation of Western Kentucky University’s XR Lab marked the beginning of a natural collaboration between STEM and Art & Design, more specifically forming a cooperative between CS, Graphic Design, UX, Game Design, Animation, and Engineering majors.

Inspired by their research in the lab, four founding XR members independently curated WKU’s first-ever augmented reality gallery, which was also the first STEAM and computer animation exhibition on campus. The overall goal of the gallery was to bring more representation to the digital arts. Because modern audiences have become desensitized to computer-based art, there is now a stigma that digital arts are not as “profound” as the fine arts. By featuring this work in a traditional gallery space, the curators intended to show that these mediums are executed just as thoughtfully and emotionally. Student and faculty works ranging from graphic design, illustration, animation, and 3D/CAD models to screen prints were considered eligible for the show. Over 25 works in total from creatives across a wide range of disciplines were collected and transformed into corresponding QR codes that guests could scan on their smart devices in the gallery space.

The gallery received enormous public support, being featured in broadcasts by Spectrum and WBKO news, as well as the College Heights Herald newspaper. An estimated 170+ attendees visited the gallery. “It raises serious questions about the future, the nature of art, and how AR can start to intersect with that world,” one attendee commented. The curators intend to hold another gallery in the future, accepting more works and opening submissions to groups in the community, specifically high schools.