Explainable AI in Network Anomaly Detection: Enhancing Transparency and Trust

EasyChair Preprint 14124 • 20 pages • Date: July 25, 2024

Abstract

Network anomaly detection plays a crucial role in ensuring the security and reliability of computer networks. With the rapid advancement of Artificial Intelligence (AI) techniques, AI algorithms, particularly deep learning models, have shown great promise in detecting network anomalies. However, the lack of transparency and interpretability of these AI models has raised concerns about their trustworthiness and acceptance in practical applications. This research article explores the concept of explainable AI in the context of network anomaly detection. It highlights the importance of transparency and interpretability in AI models, especially when they are applied to critical systems such as network security. The article discusses various techniques and approaches that can be employed to enhance the explainability of AI-based network anomaly detection systems. Furthermore, this study emphasizes the benefits of explainable AI in improving trust and acceptance among users, network administrators, and other stakeholders. By providing clear explanations of how AI models detect network anomalies, these systems can foster a deeper understanding of the underlying processes and enhance confidence in their outputs.

Keyphrases: AI algorithms, AI-based network, Anomaly Detection Systems
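The abstract does not specify which explainability techniques the paper evaluates. As a concrete illustration of the kind of approach discussed, the sketch below applies a simple perturbation-based (occlusion) attribution to an Isolation Forest anomaly detector: each feature of a flagged network flow is replaced with its training median, and the resulting recovery in anomaly score indicates how much that feature contributed to the alert. All feature names and data here are hypothetical, not taken from the preprint.

```python
# Minimal sketch: perturbation-based feature attribution for an
# Isolation Forest anomaly detector (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-flow network features.
feature_names = ["duration_s", "bytes", "packets", "port_entropy"]
X_train = rng.normal(loc=[1.0, 5e3, 40, 2.0],
                     scale=[0.3, 1e3, 10, 0.4],
                     size=(1000, 4))

model = IsolationForest(random_state=0).fit(X_train)

# A suspicious flow: huge byte count and unusually low port entropy.
x_anom = np.array([[1.1, 5e5, 42, 0.1]])
base_score = model.score_samples(x_anom)[0]  # lower = more anomalous

# Attribute the anomaly to each feature: replace it with the training
# median and measure how much the anomaly score recovers (occlusion).
medians = np.median(X_train, axis=0)
for i, name in enumerate(feature_names):
    x_ref = x_anom.copy()
    x_ref[0, i] = medians[i]
    recovery = model.score_samples(x_ref)[0] - base_score
    print(f"{name:>14s}: score recovery {recovery:+.4f}")
```

Features whose occlusion most restores the score (here, bytes and port_entropy) are reported as the drivers of the alert, giving administrators a human-readable rationale alongside the detection itself.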