Explainable Neural Networks for Interpretable Cybersecurity Decisions

EasyChair Preprint no. 14013

22 pages
Date: July 17, 2024

Abstract

In recent years, cybersecurity has seen a significant increase in the use of complex machine learning models, such as neural networks, to detect and prevent cyber threats. A major obstacle to adopting these models, however, is their lack of interpretability, which hinders decision-making and undermines trust in their outcomes. This paper presents Explainable Neural Networks (XNNs) as a response to this challenge. XNNs are designed not only to produce accurate predictions but also to explain their decisions, making them more interpretable to human operators. We discuss techniques and methodologies for enhancing the interpretability of neural networks, including feature importance analysis, rule extraction, and model-agnostic explanations. We further highlight the importance of transparency and accountability in cybersecurity decision-making and offer recommendations for adopting and implementing XNNs in real-world cybersecurity systems. Through the use of XNNs, we can bridge the gap between the black-box nature of neural networks and the need for interpretable decision-making in cybersecurity.
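As an illustrative aside (not part of the paper), the following minimal sketch shows the kind of model-agnostic explanation the abstract refers to: it trains a small feed-forward network on synthetic data and estimates per-feature importance by permutation, measuring how much the test score drops when each feature is shuffled. The synthetic dataset, model size, and scikit-learn calls are assumptions chosen for illustration, not the authors' method.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for network-traffic features (e.g., packet counts, durations);
# the real paper's data and feature set are not reproduced here.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network acting as the "black-box" detector.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Permutation importance: shuffle each feature and record the drop in test score.
# Large drops indicate features the model relies on for its decisions.
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.4f}")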

Keyphrases: cybersecurity, neural networks

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:14013,
  author       = {Kaledio Potter and Dylan Stilinki and Selorm Adablanu},
  title        = {Explainable Neural Networks for Interpretable Cybersecurity Decisions},
  howpublished = {EasyChair Preprint no. 14013},
  year         = {EasyChair, 2024}}