How Explainable Artificial Intelligence (XAI) Models Can Be Used Within Intrusion Detection Systems (IDS) to Enhance an Analyst's Trust and Understanding

  • Conference paper
Cybersecurity Challenges in the Age of AI, Space Communications and Cyborgs (ICGS3 2023)

Abstract

An intrusion detection system (IDS) is a fundamental tool when deploying cyber defence within an organisation. The ever-evolving landscape of cyber threats has driven the adoption of artificial intelligence (AI) within such tools to enable more sophisticated detection techniques [1]. Cyber security analysts rely on these technologies to make critical decisions and to correctly identify and prevent malicious threats to their organisation. It is therefore imperative that analysts can understand, trust, and have confidence in IDS decisions [2]. However, the advancement of these technologies has produced complex AI systems that lack transparency and are difficult for human analysts to comprehend. This research explores how these issues can be alleviated by using explainable AI (XAI) to make complex AI models more understandable [3] and to add clarity and context to their decisions. It identifies ways in which trust can be measured, and the factors affecting trust, in order to analyse how an analyst's perceived ease of use, trust, and confidence can be increased through the adoption of XAI. Key findings are demonstrated through a recommended implementation approach for an XAI model and a proof-of-concept user interface (UI) design. This research highlights the need for explainability within IDSs and provides a user-centric approach to achieving it.
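
The paper's references point to Tree SHAP [27, 28] as one route to the per-alert explanations discussed above. The snippet below is a minimal sketch only, not the authors' implementation: it shows how SHAP feature attributions could accompany an IDS verdict. The random-forest detector, the synthetic flows, and the feature names (duration, src_bytes, dst_bytes, failed_logins) are all illustrative assumptions.

```python
# Minimal sketch: attributing one IDS alert to its input features with
# Tree SHAP. Model, data, and feature names are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins"]

# Synthetic stand-in for labelled network-flow records (0 = benign, 1 = attack).
X = rng.random((500, len(feature_names)))
y = (X[:, 1] + X[:, 3] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Tree SHAP computes exact Shapley values for tree ensembles [27].
explainer = shap.TreeExplainer(model)
flow = X[:1]  # one flagged flow to explain
sv = explainer.shap_values(flow)

# shap returns a per-class list or a 3-D array depending on version;
# keep the contributions towards the positive ("attack") class.
attack_sv = sv[1] if isinstance(sv, list) else sv[..., 1]

# Signed per-feature contributions a UI could render beside the alert.
for name, value in zip(feature_names, np.ravel(attack_sv)):
    print(f"{name:14s} {value:+.3f}")
```

A proof-of-concept UI of the kind the paper proposes could render these signed contributions next to each alert, giving the analyst the context needed to accept or challenge the detection.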


References

  1. Wang M, Zheng K, Yang Y, Wang X (2020) An explainable machine learning framework for intrusion detection systems. IEEE Access 8:73127–73141

  2. Neupane S, Ables J, Anderson W, Mittal S, Rahimi S, Banicescu I, Seale M (2022) Explainable intrusion detection systems (X-IDS): a survey of current methods, challenges, and opportunities. arXiv, Ithaca, NY

  3. Kelley K, George B (2021) How to solve the black box AI problem through transparency. https://www.techtarget.com/searchenterpriseai/feature/How-to-solve-the-black-box-AI-problem-through-transparency

  4. Brooks C (2022) Alarming cyber statistics for mid-year 2022 that you need to know. https://www.forbes.com/sites/chuckbrooks/2022/06/03/alarming-cyber-statistics-for-mid-year-2022-that-you-need-to-know/?sh=174b40547864

  5. Kleinman L (2020) Cyberattacks: just how sophisticated have they become? https://www.forbes.com/sites/forbestechcouncil/2020/11/03/cyberattacks-just-how-sophisticated-have-they-become/?sh=5eaa9bc44c3e

  6. Das A, Rad P (2020) Opportunities and challenges in explainable artificial intelligence (XAI): a survey. arXiv, Ithaca, NY

  7. Mahbooba B, Timilsina M, Sahal R, Serrano M (2021) Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model. Complexity, p 11

  8. Darktrace (2022) 5 AI and cybersecurity predictions for 2022

  9. Darktrace (2022) Darktrace AI: combining unsupervised and supervised machine learning [White paper]. https://darktrace.com/resources

  10. Brown S (2021) Machine learning, explained. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained

  11. Choung H, David P, Ross A (2022) Trust in AI and its role in the acceptance of AI technologies. Int J Human-Comput Interact 1–13

  12. Ashoori M, Weisz J (2019) In AI we trust? Factors that influence trustworthiness of ai-infused decision-making processes, arXiv

  13. McKendrick J (2021) AI adoption skyrocketed over the last 18 months. https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months

  14. Glikson E, Woolley A (2020) Human trust in artificial intelligence: review of empirical research. Acad Manage Ann 627–660

  15. Molnar C (2022) Interpretable machine learning. Independent, Munich

  16. Check Point (2022) Check Point 2022 cyber security report. Check Point, Tel Aviv-Yafo

  17. Khraisat A, Gondal I, Vamplew P, Kamruzzaman J (2019) Survey of intrusion detection systems: techniques, datasets and challenges. Cybersecurity 1–22

  18. Barnard P, Marchetti N, DaSilva L (2022) Robust network intrusion detection through explainable artificial intelligence (XAI). IEEE Netw Lett

  19. Colaner N (2022) Is explainable artificial intelligence intrinsically valuable?. AI and Society 231–238

  20. Sutton D (2021) Deep learning and the new frontiers of model explainability. https://www.featurespace.com/newsroom/deep-learning-and-the-new-frontiers-of-model-explainability

  21. Perez I, Skalski P, Barns-Graham A, Wong J, Sutton D (2022) Attribution of predictive uncertainties in classification models. In: Proceedings of the thirty-eighth conference on uncertainty in artificial intelligence, pp 1582–1591

  22. Nickerson C (2022) Interpretivism paradigm & research philosophy. https://simplysociology.com/interpretivism-paradigm.html

  23. McCombes S (2022) Descriptive research design: definition, methods & examples. https://www.scribbr.co.uk/research-methods/descriptive-research-design/

  24. Fujs D, Mihelič A, Vrhovec S (2019) The power of interpretation: qualitative methods in cybersecurity research. In: ARES '19: proceedings of the 14th international conference on availability, reliability and security, pp 1–10

  25. Gillath O, Ai T, Branicky M, Keshmiri S, Davison R, Spaulding R (2021) Attachment and trust in artificial intelligence. Computers in Human Behavior

  26. Nayyar S (2022) Why you need to build trust with your security vendor before signing the purchase order. https://www.forbes.com/sites/forbestechcouncil/2022/09/08/why-you-need-to-build-trust-with-your-security-vendor-before-signing-the-purchase-order/

  27. Sukumar R (2020) SHAP part 3: Tree SHAP. https://medium.com/analytics-vidhya/shap-part-3-tree-shap-3af9bcd7cd9b

  28. Durgia C (2021) Using SHAP for explainability: understand these limitations first. https://towardsdatascience.com/using-shap-for-explainability-understand-these-limitations-first-1bed91c9d21

  29. Vee A (2020) Importance of a good user interface (UI). https://www.linkedin.com/pulse/importance-good-user-interface-ui-anna-v/


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Shand, C., Fong, R., Butt, U. (2024). How Explainable Artificial Intelligence (XAI) Models Can Be Used Within Intrusion Detection Systems (IDS) to Enhance an Analyst's Trust and Understanding. In: Jahankhani, H. (eds) Cybersecurity Challenges in the Age of AI, Space Communications and Cyborgs. ICGS3 2023. Advanced Sciences and Technologies for Security Applications. Springer, Cham. https://doi.org/10.1007/978-3-031-47594-8_17
