Abstract
The rapid dissemination of Artificial Intelligence (AI) and machine learning into real-world problems has raised concerns about the reliability of such models. The sub-branch of AI concerned with reliability is known as Explainable AI (XAI). XAI analyses the cause and impact of decisions made by AI systems. Neural networks play a significant part in AI's ability to improve with more recent data; however, this learning capability leaves the model opaque for most of its decisions. Breaking the black-box nature of a neural network model and giving insight into its functionality will mitigate this uncertainty. In this research, we design an object-oriented neural representation and, from the correlation between the loss and the weight distribution, devise a "feature importance" technique. The devised method effectively interprets decisions for end users and AI practitioners with optimal time complexity (TC = (L−1) × (E × C)). The proposed neural representation is also extended to incorporate domain/business rules, from which we introduce a new loss, termed the business loss. By incorporating the business loss, we obtain an earlier decline in the overall loss and improved performance in the ablation study.
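The core idea of deriving feature importance from the correlation between loss and weight distribution can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes we track, per training epoch, the loss and the mean absolute weight leaving each input feature, and it scores a feature as important when its weight mass grows as the loss falls (strong negative Pearson correlation). All function and variable names here are hypothetical.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

def feature_importance(loss_per_epoch, weight_history):
    """loss_per_epoch: shape (epochs,) training loss per epoch.
    weight_history: shape (epochs, n_features), where entry [e, j] is the
    mean |weight| leaving input feature j at epoch e.
    Returns one importance score per feature: the negated correlation, so
    a feature whose weights grow while the loss shrinks scores highest."""
    L = np.asarray(loss_per_epoch, float)
    W = np.asarray(weight_history, float)
    return np.array([-pearson(L, W[:, j]) for j in range(W.shape[1])])

# Synthetic example: feature 0's weights grow steadily as the loss
# declines (perfectly anti-correlated); feature 1 just oscillates.
loss = np.linspace(1.0, 0.1, 10)
weights = np.column_stack([np.linspace(0.1, 1.0, 10),
                           np.tile([0.5, 0.6], 5)])
imp = feature_importance(loss, weights)
```

Under these assumptions `imp[0]` is 1.0 (perfect anti-correlation) and dominates `imp[1]`, matching the intuition that weights which absorb mass while the loss falls belong to decision-relevant features.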
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Cite this article
Arulprakash, E., Martin, A. An object-oriented neural representation and its implication towards explainable AI. Int. j. inf. tecnol. 16, 1303–1318 (2024). https://doi.org/10.1007/s41870-023-01432-2