Abstract
An intrusion detection system (IDS) is a fundamental tool when deploying cyber defence within an organisation. The ever-evolving landscape of cyber threats has driven the application of artificial intelligence (AI) within such tools to pioneer more sophisticated detection techniques (Wang et al. in IEEE Access 8:73127–73141, 2020). Cyber security analysts rely on these technologies to make critical decisions and correctly identify and prevent malicious threats to their organisation. It is therefore imperative that analysts can understand, trust, and have confidence in the IDS decisions (Neupane et al. in “Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities”, arXiv, Ithaca, NY, 2022). However, the advancement of these technologies has led to complex AI systems that lack transparency and are difficult for human analysts to comprehend. This research explores how these issues can be alleviated by using explainable AI (XAI) to help make complex AI models more understandable (Kelley and George, “How to solve the Black Box AI problem through transparency,” 16 August 2021. [Online]. Available: https://www.techtarget.com/searchenterpriseai/feature/How-to-solve-the-black-box-AI-problem-through-transparency) and to add clarity and context to their decisions. Ways in which trust can be measured, and the factors affecting trust, are identified through this research to analyse how the perceived ease of use, trust, and confidence of an analyst can be increased through the adoption of XAI. Key findings are demonstrated through a recommended implementation approach for an XAI model and a proof-of-concept user-interface (UI) design. This research brings recognition to the need for explainability within IDSs and provides a user-centric approach to achieving it.
References
Wang M, Zheng K, Yang Y, Wang X (2020) An explainable machine learning framework for intrusion detection systems. IEEE Access 8:73127–73141
Neupane S, Ables J, Anderson W, Mittal S, Rahimi S, Banicescu I, Seale M (2022) Explainable intrusion detection systems (X-IDS): a survey of current methods, challenges, and opportunities. arXiv, Ithaca, NY
Kelley K, George B, How to solve the Black Box AI problem through transparency, 16 August 2021. https://www.techtarget.com/searchenterpriseai/feature/How-to-solve-the-black-box-AI-problem-through-transparency
Brooks C, Alarming cyber statistics for mid-year 2022 that you need to know, 3 June 2022. https://www.forbes.com/sites/chuckbrooks/2022/06/03/alarming-cyber-statistics-for-mid-year-2022-that-you-need-to-know/?sh=174b40547864
Kleinman L, Cyberattacks: just how sophisticated have they become?, 3 November 2020. https://www.forbes.com/sites/forbestechcouncil/2020/11/03/cyberattacks-just-how-sophisticated-have-they-become/?sh=5eaa9bc44c3e
Das A, Rad P (2020) Opportunities and challenges in explainable artificial intelligence (XAI): a survey. arXiv, Ithaca, NY
Mahbooba B, Timilsina M, Sahal R, Serrano M (2021) Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model. Complexity, p 11
Darktrace (2022) 5 AI and cybersecurity predictions for 2022
Darktrace (2022) Darktrace AI: combining unsupervised and supervised machine learning [White paper]. https://darktrace.com/resources
Brown S, Machine learning, explained, 21 April 2021. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
Choung H, David P, Ross A (2022) Trust in AI and its role in the acceptance of AI technologies. Int J Human-Comput Interact 1–13
Ashoori M, Weisz J (2019) In AI we trust? Factors that influence trustworthiness of AI-infused decision-making processes. arXiv
McKendrick J, AI adoption skyrocketed over the last 18 months, 27 September 2021. https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months
Glikson E, Woolley A (2020) Human trust in artificial intelligence: review of empirical research. Acad Manage Ann 627–660
Molnar C (2022) Interpretable machine learning. Independent, Munich
CheckPoint (2022) Check point 2022 cyber security report. CheckPoint, Tel Aviv-Yafo
Khraisat A, Gondal I, Vamplew P, Kamruzzaman J (2019) Survey of intrusion detection systems: techniques, datasets and challenges. Cybersecurity 1–22
Barnard P, Marchetti N, DaSilva L (2022) Robust network intrusion detection through explainable artificial intelligence (XAI). IEEE Netw Lett
Colaner N (2022) Is explainable artificial intelligence intrinsically valuable? AI Soc 231–238
Sutton D, Deep learning and the new frontiers of model explainability, 15 November 2021. https://www.featurespace.com/newsroom/deep-learning-and-the-new-frontiers-of-model-explainability
Perez I, Skalski P, Barns-Graham A, Wong J, Sutton D (2022) Attribution of predictive uncertainties in classification models. In: Proceedings of the thirty-eighth conference on uncertainty in artificial intelligence, pp 1582–1591
Nickerson C, Interpretivism paradigm & research philosophy, 5 April 2022. https://simplysociology.com/interpretivism-paradigm.html
McCombes S, Descriptive research design|definition, methods & examples, 5 May 2022. https://www.scribbr.co.uk/research-methods/descriptive-research-design/
Fujs D, Mihelič A, Vrhovec S (2019) The power of interpretation: qualitative methods in cybersecurity research. In: ARES ’19: proceedings of the 14th international conference on availability, reliability and security, pp 1–10
Gillath O, Ai T, Branicky M, Keshmiri S, Davison R, Spaulding R (2021) Attachment and trust in artificial intelligence. Comput Hum Behav
Nayyar S, Why you need to build trust with your security vendor before signing the purchase order, 8 September 2022. https://www.forbes.com/sites/forbestechcouncil/2022/09/08/why-you-need-to-build-trust-with-your-security-vendor-before-signing-the-purchase-order/
Sukumar R, SHAP Part 3: Tree SHAP, 30 March 2020. https://medium.com/analytics-vidhya/shap-part-3-tree-shap-3af9bcd7cd9b
Durgia C, Using SHAP for explainability—understand these limitations first, 31 December 2021. https://towardsdatascience.com/using-shap-for-explainability-understand-these-limitations-first-1bed91c9d21
Vee A, Importance of a good User Interface (UI), 27 February 2020. https://www.linkedin.com/pulse/importance-good-user-interface-ui-anna-v/
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Shand, C., Fong, R., Butt, U. (2024). How Explainable Artificial Intelligence (XAI) Models Can Be Used Within Intrusion Detection Systems (IDS) to Enhance an Analyst's Trust and Understanding. In: Jahankhani, H. (eds) Cybersecurity Challenges in the Age of AI, Space Communications and Cyborgs. ICGS3 2023. Advanced Sciences and Technologies for Security Applications. Springer, Cham. https://doi.org/10.1007/978-3-031-47594-8_17
DOI: https://doi.org/10.1007/978-3-031-47594-8_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47593-1
Online ISBN: 978-3-031-47594-8
eBook Packages: Physics and Astronomy (R0)