BDCAT2020: 7TH IEEE/ACM INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING, APPLICATIONS AND TECHNOLOGIES
PROGRAM FOR WEDNESDAY, DECEMBER 9TH

09:30-10:30 Session 8: Big Data Analytics and Applications II
09:30
Deepfake Detection through Deep Learning

ABSTRACT. Deepfakes enable the automatic generation of fake video content, e.g. through generative adversarial networks. Deepfake technology is controversial and has wide-reaching implications for society, e.g. the biasing of elections. Much research has been devoted to developing detection methods to reduce the potential negative impact of deepfakes; the application of neural networks and deep learning is one such approach. In this paper, we consider two deepfake detection technologies, Xception and MobileNet, as classification approaches for automatically detecting deepfake videos. We utilise training and evaluation datasets from FaceForensics++ comprising four datasets, each generated with a different, popular deepfake technology. The results show high accuracy across all datasets, varying between 91% and 98% depending on the deepfake technology applied. We also developed a voting mechanism that detects fake videos by aggregating the outputs of all four methods instead of relying on a single one.
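
The voting mechanism mentioned in the abstract can be illustrated with a small sketch. Below is a minimal, hypothetical example of majority voting over per-detector labels; the labels, the aggregation rule (simple majority), and the idea of one detector per manipulation type are assumptions for illustration, not details taken from the paper.

    # Hypothetical majority-voting aggregator over fake/real labels produced
    # by several detectors (e.g. one per FaceForensics++ manipulation type).
    from collections import Counter

    def vote(predictions):
        """predictions: list of 'fake'/'real' labels, one per detector."""
        return Counter(predictions).most_common(1)[0][0]

    # Example: four detectors vote on one video
    print(vote(['fake', 'fake', 'real', 'fake']))  # -> 'fake'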

10:00
Effects of the Number of Hyperparameters on the Performance of GA-CNN

ABSTRACT. The performance of a machine learning algorithm is highly dependent on its hyperparameters. However, hyperparameter optimization is not a trivial task, as it is problem-specific. The difficulty increases with a larger number of hyperparameters, which results in a high-dimensional search space. In practice, optimization is commonly performed on only a limited set of hyperparameters, and larger numbers of hyperparameters have rarely been considered. This study investigates the role of hyperparameters by using a genetic algorithm (GA) as the main optimization method for a convolutional neural network (CNN). The novelty of this study is two-fold. Firstly, we defined 20 hyperparameters and their ranges, specifically for text classification. Secondly, we conducted experiments with different numbers of hyperparameters and different numbers of optimized hyperparameters. GA-CNN was evaluated on a disaster tweets dataset and compared to other methods, i.e., grid search, random search, TPE, TPOT, CNN, LSTM, CNN-LSTM, and BERT. The experimental results demonstrated that the proposed method outperforms the other methods. The results also showed that a larger number of hyperparameters and layer-specific hyperparameter values are indeed important.
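
As a rough illustration of GA-based hyperparameter search, the following minimal sketch evolves a population of hyperparameter settings. The search space, GA settings, and placeholder fitness function are illustrative assumptions and do not reflect the 20 hyperparameters or ranges defined in the paper.

    # Minimal genetic-algorithm sketch over a toy hyperparameter search space.
    import random

    SEARCH_SPACE = {
        'learning_rate': [1e-4, 1e-3, 1e-2],
        'filters': [32, 64, 128],
        'kernel_size': [3, 5, 7],
        'dropout': [0.1, 0.3, 0.5],
    }

    def random_individual():
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def fitness(ind):
        # Placeholder: in practice, build and train the CNN with these
        # hyperparameters and return its validation accuracy.
        return random.random()

    def crossover(a, b):
        return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

    def mutate(ind, rate=0.1):
        return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
                for k, v in ind.items()}

    def evolve(pop_size=10, generations=5):
        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:pop_size // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(evolve())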

10:30
A satellite collision avoidance system based on General Regression Neural Network

ABSTRACT. The continuous launching of new satellites and the increasing number of space missions are making space a congested environment. Collision with space debris or other satellites is now a real problem for satellites, and it is more acute in highly trafficked orbits. Mission operators and space agencies need high-accuracy collision avoidance systems for spacecraft and, in the future, autonomous or self-navigating satellites. This paper focuses on tackling the satellite collision problem by implementing a collision avoidance system using neural networks and related machine learning techniques. The primary model is based on a General Regression Neural Network (GRNN), and the secondary models are based on Artificial Neural Networks (ANN), Random Forest Regression, and Support Vector Regression. The dataset used in this paper was collected from the European Space Agency (ESA) and contains risk assessment events, i.e., conjunction data messages. The proposed collision avoidance system predicts the collision risk percentage between a target (a satellite of interest) and a chaser (space debris or another satellite) object. The predicted risk enables the target to maneuver accordingly and ultimately avoid collision with the chaser object. The GRNN algorithm uses lazy learning, which does not require iterative training, and makes predictions based on the stored training data. The training data were normalized before applying the algorithm, as the GRNN is sensitive to large deviations among input features. The GRNN model predicts the risk of collision between the target and the chaser object with a Mean Squared Error (MSE) of 11%, which is lower than the MSE of the other models and indicates that the GRNN model is the best fit for our dataset.
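
For reference, a GRNN prediction follows the standard Nadaraya-Watson form: the output is a weighted average of the stored target values, with weights given by a Gaussian kernel over the distance from the query to each stored sample. The sketch below is a minimal, generic illustration on toy, already-normalized data; the bandwidth sigma and the values are assumptions and not drawn from the ESA conjunction dataset.

    # Minimal GRNN prediction sketch (Nadaraya-Watson kernel regression).
    import numpy as np

    def grnn_predict(X_train, y_train, x_query, sigma=0.5):
        # Squared Euclidean distance from the query to every stored sample
        d2 = np.sum((X_train - x_query) ** 2, axis=1)
        weights = np.exp(-d2 / (2 * sigma ** 2))
        return np.dot(weights, y_train) / np.sum(weights)

    # Toy example with min-max normalized features (GRNN is sensitive to feature scale)
    X = np.array([[0.1, 0.2], [0.4, 0.8], [0.9, 0.5]])
    y = np.array([0.05, 0.30, 0.75])  # e.g. collision risk values
    print(grnn_predict(X, y, np.array([0.5, 0.6])))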

14:30-17:00 Session 9: Keynotes

Keynote (BDCAT): “Integrating Big Data, Data Science and Cyber Security with Applications in Internet of Transportation and Infrastructures” by Bhavani Thuraisingham (University of Texas at Dallas, US)

Break

Keynote (UCC): “AI and Science Workflow Automation” by Ewa Deelman (USC)