
Domain Knowledge-Aided Explainable Artificial Intelligence

Explainable Artificial Intelligence for Cyber Security

Part of the book series: Studies in Computational Intelligence ((SCI,volume 1025))

Abstract

The lack of human-friendly explanations from Artificial Intelligence (AI)-based decisions is a major concern for high-stakes applications. Explainable AI (XAI) is an emerging area of research that aims to mitigate this concern. In pre-modeling explainability, one of the notions of explainability, explainability is introduced before training the model. Explainability can also be introduced during or after training of the model; both approaches are generally known as post-hoc explainability. Unfortunately, post-hoc explainability is not readily transparent: because it explains a decision after it has been made, it can be optimized to placate a subjective demand, which is a form of bias, so the resulting explanation can be misleading even though it seems plausible. Explainability should incorporate knowledge from different domains such as philosophy, psychology, and cognitive science, so that the explanation is not just based on the researcher’s intuition of what constitutes a good explanation. Domain knowledge is an abstract, fuzzy, high-level concept over the problem domain. For instance, in an image classification problem, the domain knowledge could be that a dog has four legs or that a zebra has stripes. However, the use of domain knowledge for explainability is under-explored and bound to problem-specific requirements. This chapter focuses on the notion of pre-modeling explainability of AI-based “black box” models using domain knowledge. We demonstrate the collection and application of domain knowledge, along with the quantification of explainability, on an intrusion detection problem. Although AI-based Intrusion Detection Systems (IDS) provide accelerated speeds in intrusion detection, the response is still at human speed when there is a human in the loop.
The lack of explainability of an AI-based model is a key reason for this bottleneck, as a human analyst has to understand the prediction before making the final decision toward mitigating a problem. To address this issue, in this chapter we incorporate the CIA principle (confidentiality, integrity, and availability, i.e., domain knowledge) into an AI-based black box model for better explainability and generalizability of the model, and demonstrate the process in detail. A major portion of this chapter is a compilation of our previously published work [1,2,3,4]. We start with a brief discussion of the problem, network intrusion detection and prevention, that we use to demonstrate our approach. Then we briefly introduce the relevant domain knowledge and how it can be integrated into the overall architecture. In Sect. 3, we describe our experiments, followed by Sect. 4, which contains a discussion of results from the experiments. We conclude with limitations and future work in Sect. 5.
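The pre-modeling infusion described above can be sketched in a few lines: low-level network-flow features are mapped onto the CIA triad and the aggregated component scores are appended as extra inputs before training. Note that the feature names, the mapping, and the mean aggregation below are hypothetical illustrations for exposition, not the chapter's actual construction.

```python
# Minimal sketch of pre-modeling domain-knowledge infusion:
# map network-flow features onto the CIA triad and append one
# aggregated score per component as an extra input feature.
import numpy as np

# Hypothetical mapping of flow features to CIA components.
CIA_MAP = {
    "confidentiality": ["dst_port", "flow_bytes_per_s"],
    "integrity": ["fwd_header_len", "packet_len_std"],
    "availability": ["flow_duration", "flow_packets_per_s"],
}

def add_cia_features(X, feature_names):
    """Append one aggregated (mean) column per CIA component."""
    idx = {name: i for i, name in enumerate(feature_names)}
    extra = []
    for component, feats in CIA_MAP.items():
        cols = [idx[f] for f in feats if f in idx]
        extra.append(X[:, cols].mean(axis=1))
    return np.column_stack([X] + extra)

feature_names = ["dst_port", "flow_bytes_per_s", "fwd_header_len",
                 "packet_len_std", "flow_duration", "flow_packets_per_s"]
X = np.random.rand(4, len(feature_names))
X_aug = add_cia_features(X, feature_names)
print(X_aug.shape)  # (4, 9): six original features plus three CIA scores
```

Any classifier (e.g., a scikit-learn model, as in [20]) can then be trained on `X_aug`; because each appended column corresponds to a named security concept, its contribution to a prediction can be reported to the analyst directly.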



References

  1. S.R. Islam, W. Eberle, S.K. Ghafoor, A. Siraj, M. Rogers, Domain knowledge aided explainable artificial intelligence for intrusion detection and response, in AAAI-MAKE 2020 Combining Machine Learning and Knowledge Engineering in Practice—Volume I: Spring Symposium (2020a)


  2. S.R. Islam, W. Eberle, S. Bundy, S.K. Ghafoor, Infusing domain knowledge in AI-based “black box” models for better explainability with application in bankruptcy prediction (2019), arXiv preprint arXiv:1905.11474

  3. S.R. Islam, W. Eberle, Implications of combining domain knowledge in explainable artificial intelligence, in AAAI-MAKE 2021 (2021)


  4. S.R. Islam, W. Eberle, S.K. Ghafoor, Towards quantification of explainability in explainable artificial intelligence methods, in AAAI Publications, The Thirty-Third International Flairs Conference (2020b)


  5. M. Doyle, Don’t be lulled into a false sense of security (2019), https://www.securityroundtable.org/dont-lulled-false-sense-cybersecurity/

  6. E. Hodo, X. Bellekens, A. Hamilton, P.-L. Dubouilh, E. Iorkyase, C. Tachtatzis, R. Atkinson, Threat analysis of IoT networks using artificial neural network intrusion detection system, in 2016 International Symposium on Networks, Computers and Communications (ISNCC) (IEEE, 2016), pp. 1–6


  7. T. Alladi, V. Kohli, V. Chamola, F.R. Yu, M. Guizani, Artificial intelligence (AI)-empowered intrusion detection architecture for the internet of vehicles. IEEE Wirel. Commun. 28(3), 144–149 (2021)


  8. T.S. Ustun, S.M. Suhail Hussain, L. Yavuz, A. Onen, Artificial intelligence based intrusion detection system for IEC 61850 sampled values under symmetric and asymmetric faults. IEEE Access 9, 56486–56495 (2021)


  9. V. Kanimozhi, T.P. Jacob, Artificial intelligence outflanks all other machine learning classifiers in network intrusion detection system on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing. ICT Express 7(3), 366–370 (2021)


  10. I.F. Kilincer, F. Ertam, A. Sengur, Machine learning methods for cyber security intrusion detection: Datasets and comparative study. Comput. Netw. 188, 107840 (2021)


  11. X. Luo, Model design artificial intelligence and research of adaptive network intrusion detection and defense system using fuzzy logic. J. Intell. Fuzzy Syst. (Preprint), 1–9 (2021)


  12. N. Shone, T.N. Ngoc, V.D. Phai, Q. Shi, A deep learning approach to network intrusion detection. IEEE Trans. Emerg. Top. Comput. Intell. 2(1), 41–50 (2018)


  13. J. Kim, J. Kim, H.L. Thi Thu, H. Kim, Long short term memory recurrent neural network classifier for intrusion detection, in 2016 International Conference on Platform Technology and Service (PlatCon) (IEEE, 2016), pp. 1–5


  14. A. Javaid, Q. Niyaz, W. Sun, M. Alam, A deep learning approach for network intrusion detection system, in Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS) (ICST (Institute for Computer Sciences, Social-Informatics and ..., 2016), pp. 21–26


  15. Z. Li, W. Sun, L. Wang, A neural network based distributed intrusion detection system on cloud platform, in 2012 IEEE 2nd international conference on Cloud Computing and Intelligence Systems, vol. 1 (IEEE, 2012), pp. 75–79


  16. B. Dong, X. Wang, Comparison deep learning method to traditional methods using for network intrusion detection, in 2016 8th IEEE International Conference on Communication Software and Networks (ICCSN) (IEEE, 2016), pp. 581–585


  17. A.H. Lashkari, G. Draper-Gil, M.S.I. Mamun, A.A. Ghorbani, Characterization of Tor traffic using time based features, in ICISSP (2017), pp. 253–262


  18. I. Sharafaldin, A.H. Lashkari, A.A. Ghorbani, Toward generating a new intrusion detection dataset and intrusion traffic characterization, in ICISSP (2018), pp. 108–116


  19. M. Bishop, Introduction to Computer Security (Pearson Education India, 2006)


  20. Scikit-learn: Machine learning in python (2019), https://scikit-learn.org/stable

  21. Tensorflow (2019), https://www.tensorflow.org

  22. Domain-knowledge-aided dataset (2019), https://github.com/SheikhRabiul/domain-knowledge-aided-explainable-ai-for-intrusion-detection-and-response/tree/master/data/combined_sampled.zip

  23. N.V. Chawla, K.W. Bowyer, L.O. Hall, W.P. Kegelmeyer, SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)


  24. H. Zhang, The optimality of Naive Bayes. AA 1(2), 3 (2004)


  25. J. Chen, K. Li, Z. Tang, K. Bilal, S. Yu, C. Weng, K. Li, A parallel random forest algorithm for big data in a Spark cloud computing environment. IEEE Trans. Parallel Distrib. Syst. 28(4), 919–933 (2016)


  26. G.A. Miller, The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 63(2), 81 (1956)



Acknowledgements

This chapter presents work from multiple sources [1,2,3,4]. The authors would like to thank the following who contributed to the previous efforts: Sheikh K. Ghafoor, Ambareen Siraj, Mike Rogers, and Sid Bundy.


Corresponding author

Correspondence to Sheikh Rabiul Islam.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Islam, S.R., Eberle, W. (2022). Domain Knowledge-Aided Explainable Artificial Intelligence. In: Ahmed, M., Islam, S.R., Anwar, A., Moustafa, N., Pathan, AS.K. (eds) Explainable Artificial Intelligence for Cyber Security. Studies in Computational Intelligence, vol 1025. Springer, Cham. https://doi.org/10.1007/978-3-030-96630-0_4
