
Black Box Models for eXplainable Artificial Intelligence

Chapter in: Explainable AI: Foundations, Methodologies and Applications

Abstract

Machine learning algorithms are increasingly popular in cyber security applications such as Intrusion Detection Systems (IDS). Most of these models operate as black boxes: models whose internal logic is hidden from the user. Overcoming this crucial weakness carries costs that vary from case to case, and it raises both ethical and practical problems. Explainable Artificial Intelligence (XAI) is essential for making machine learning algorithms interpretable, enabling human experts to understand the data evidence behind a decision. An important role of trust management is to assess the impact of malicious data when identifying intrusions. This chapter addresses XAI methods that support trust management using decision tree models. Basic decision trees mimic human decision making by splitting a choice into a sequence of smaller choices, and are applied here to the IDS domain. The chapter also organizes the issues identified across the various black box methods; this survey helps researchers understand the classification of black box models.
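To make the decision-tree approach concrete, below is a minimal sketch of an interpretable tree-based IDS classifier. It is not the chapter's implementation: the feature names and synthetic traffic statistics are illustrative assumptions, and scikit-learn's DecisionTreeClassifier and export_text stand in for whichever tooling the authors used.

```python
# Minimal sketch: an interpretable decision tree for intrusion detection.
# The features and data below are illustrative assumptions, not the
# chapter's dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical connection-level features (loosely KDD-style IDS data).
feature_names = ["duration_s", "bytes_sent", "failed_logins", "conn_rate"]

rng = np.random.default_rng(0)
n = 200
# Synthetic "normal" vs. "attack" traffic with different statistics.
X_normal = rng.normal([10, 500, 0, 2], [5, 200, 0.5, 1], size=(n, 4))
X_attack = rng.normal([1, 5000, 3, 20], [1, 1000, 1.0, 5], size=(n, 4))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * n + [1] * n)  # 0 = normal, 1 = intrusion

# A shallow tree keeps every decision path short enough for a human
# analyst to audit: each split is one small, inspectable choice.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned if/then rules, the transparent logic
# that a black box model cannot offer.
print(export_text(tree, feature_names=feature_names))
```

Restricting max_depth is the design choice that preserves interpretability here: the printed rules read as a short checklist an operator can verify against raw traffic.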



Author information

Correspondence to Krishna Keerthi Chennam.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Chennam, K.K., Mudrakola, S., Maheswari, V.U., Aluvalu, R., Rao, K.G. (2023). Black Box Models for eXplainable Artificial Intelligence. In: Mehta, M., Palade, V., Chatterjee, I. (eds) Explainable AI: Foundations, Methodologies and Applications. Intelligent Systems Reference Library, vol 232. Springer, Cham. https://doi.org/10.1007/978-3-031-12807-3_1
