
An object-oriented neural representation and its implication towards explainable AI

  • Original Research
  • Published in: International Journal of Information Technology

Abstract

Rapid dissemination of Artificial Intelligence (AI) and machine learning into real-world problems has raised concerns about the reliability of such models. The sub-branch of AI concerned with reliability is known as Explainable AI (XAI). XAI analyses the cause and impact of the decisions made by AI systems. The neural network plays a significant part in AI's ability to improve with more recent data; however, this learning capability leaves the model opaque with respect to most of its decisions. Breaking the black-box nature of the neural network model, and giving understanding and insight into its functionality, mitigates this uncertainty. In this research, we design an object-oriented neural representation and use it to devise a "feature importance" technique based on the correlation between the loss and the weight distribution. The devised method is effective in interpreting decisions for end users and AI practitioners with optimal time complexity (TC = (L − 1) × (E × C)). The proposed neural representation is also extended to incorporate domain/business rules, from which we introduce a new loss, known as the business loss. By accounting for the impact of the business loss, we obtain an earlier decline in the overall loss and improved performance in the ablation study.
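To make the feature-importance idea concrete, here is a minimal sketch (not the paper's implementation): it scores each input feature by the Pearson correlation between the per-epoch training loss and that feature's weight magnitude over the same epochs. The array shapes, the per-feature mean-absolute-weight summary, and the absolute-value scoring are all illustrative assumptions.

```python
import numpy as np

def feature_importance(loss_per_epoch, weight_snapshots):
    """Score features by how strongly their weight magnitudes
    co-vary with the training loss across epochs.

    loss_per_epoch   : shape (E,)   -- loss recorded after each epoch
    weight_snapshots : shape (E, C) -- per-epoch mean |weight| for each
                                       of C incoming connections/features
    """
    loss = np.asarray(loss_per_epoch, dtype=float)
    w = np.asarray(weight_snapshots, dtype=float)
    loss_c = loss - loss.mean()      # centre the loss curve
    w_c = w - w.mean(axis=0)         # centre each weight trajectory
    # Pearson correlation of the loss curve with each weight trajectory
    num = (w_c * loss_c[:, None]).sum(axis=0)
    den = np.sqrt((w_c ** 2).sum(axis=0) * (loss_c ** 2).sum()) + 1e-12
    return np.abs(num / den)         # higher = stronger coupling
```

Applied independently to each of the L − 1 weight layers, such a scan touches each epoch–connection pair once, which is consistent with the (L − 1) × (E × C) complexity quoted above.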
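The business loss can likewise be read as an additive, weighted penalty in the overall objective. The sketch below assumes a mean-squared task loss, a per-sample rule-violation penalty, and a weighting factor lam; all three are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def overall_loss(y_true, y_pred, rule_violation, lam=0.1):
    """Overall loss = task loss + weighted business loss.

    rule_violation : per-sample penalty, 0 where the prediction satisfies
                     the domain/business rule, > 0 where it violates it
    lam            : hypothetical weighting factor (not from the paper)
    """
    task = np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)  # MSE task loss
    business = np.mean(np.asarray(rule_violation))                  # business loss
    return task + lam * business
```

Under this reading, a nonzero business term steepens the gradient whenever a domain rule is violated, which would explain the earlier decline in the overall loss reported in the ablation study.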





Author information


Corresponding author

Correspondence to Enoch Arulprakash.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Arulprakash, E., Martin, A. An object-oriented neural representation and its implication towards explainable AI. Int. j. inf. tecnol. 16, 1303–1318 (2024). https://doi.org/10.1007/s41870-023-01432-2


