Given their role in protecting networks from a wide range of security threats, intrusion detection systems are an essential component of any cybersecurity architecture. Deep neural networks have recently shown remarkable effectiveness and performance in various machine learning applications, including intrusion detection. However, deep learning models have been shown to be highly susceptible to a wide range of attacks during both the training and testing phases. These attacks can compromise the security and reliability of deep learning models: poisoning attacks degrade the performance of the target model during training, while evasion attacks undermine it at test time. Numerous studies have been conducted to understand and mitigate these attacks, and to propose more effective techniques with higher success rates and accuracy across tasks that rely on deep learning models, such as image classification, face recognition, network intrusion detection, and healthcare applications. Despite these considerable efforts, such attacks and vulnerabilities have received comparatively little attention in the network domain. This paper aims to address this gap by proposing a framework for adversarial attacks against network intrusion detection systems (NIDS). The proposed framework focuses on poisoning and evasion attacks and explores combining the two. We evaluate the proposed framework on three datasets: CIC-IDS2017, CIC-IDS2018, and CIC-UNSW.
Poisoning and Evasion: Deep Learning-Based NIDS Under Adversarial Attacks