Adversarial Machine Learning: Difficulties in Applying Machine Learning to Existing Cybersecurity Systems

8 pagesPublished: March 9, 2020

Abstract

Machine learning is an attractive tool in many areas of computer science, allowing us to take a hands-off approach in situations where manual work was previously required. One area where machine learning has not yet been applied entirely successfully is cybersecurity. The issue is that most classical machine learning models do not consider the possibility of an adversary purposely attempting to mislead the machine learning system. If the possibility that incoming data will be deliberately crafted to mislead and break the machine learning system is ignored, these systems are useless in a cybersecurity setting. Taking this possibility into account may allow us to modify existing security systems and introduce the power of machine learning to them.
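To illustrate the kind of deliberately crafted input the abstract describes, the following is a minimal, hypothetical sketch of an evasion attack against a toy linear spam classifier. All names, weights, and features here are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: evasion attack against a toy linear spam filter.
# The weights, features, and helper names are illustrative assumptions.

def classify(weights, bias, features):
    """Linear classifier: returns True (spam) if the score is positive."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > 0

def evade(weights, bias, features, step=0.1, max_iters=100):
    """Greedily shrink the feature contributing most to the spam score
    until the classifier flips its decision (an evasion attack)."""
    adv = list(features)
    for _ in range(max_iters):
        if not classify(weights, bias, adv):
            break  # decision flipped: the perturbed input now evades
        # pick the feature with the largest positive contribution
        i = max(range(len(adv)), key=lambda j: weights[j] * adv[j])
        adv[i] -= step
    return adv

# Toy example: three features (e.g. counts of suspicious tokens)
weights, bias = [2.0, 1.0, -0.5], -1.0
spam_message = [1.0, 0.5, 0.2]
adv_message = evade(weights, bias, spam_message)
print(classify(weights, bias, spam_message))  # True: flagged as spam
print(classify(weights, bias, adv_message))   # False: evades the filter
```

A small perturbation of the input is enough to flip the decision, which is why a classifier trained without an adversary in mind can fail in a security setting.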

Keyphrases: classifier, cybersecurity, evasion attack, machine learning, poisoning attack, spam filter

In: Gordon Lee and Ying Jin (editors). Proceedings of 35th International Conference on Computers and Their Applications, vol 69, pages 40--47

BibTeX entry:
@inproceedings{CATA2020:Adversarial_Machine_Learning_Difficulties,
  author    = {Nick Rahimi and Jordan Maynor and Bidyut Gupta},
  title     = {Adversarial Machine Learning: Difficulties in Applying Machine Learning to Existing Cybersecurity Systems},
  booktitle = {Proceedings of 35th International Conference on Computers and Their Applications},
  editor    = {Gordon Lee and Ying Jin},
  series    = {EPiC Series in Computing},
  volume    = {69},
  pages     = {40--47},
  year      = {2020},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {https://easychair.org/publications/paper/XwRv},
  doi       = {10.29007/3xbb}}