Explainable Artificial Intelligence Model: Analysis of Neural Network Parameters

  • Conference paper
  • In: Applied Advanced Analytics

Abstract

In recent years, the artificial neural network has become a popular technology for extracting extremely complex patterns from data across many research areas and industrial applications. Much of the artificial intelligence research community is now focused on building smart, user-friendly applications that help humans make appropriate business decisions. The main aim of these applications is to reduce human error and minimise the influence of individual perception in the decision-making process. There is little doubt that this technology can lead to a world in which AI-driven applications support our day-to-day lives and help us make important decisions more accurately. But what if we want to know the explanation and reasoning behind a decision made by an AI system? What if we want to understand the most important factors in the decision-making processes of such applications? Owing to the complexity of their internal structure, researchers usually describe artificial neural networks as “black boxes”, whereas traditional statistical learning models are more transparent, interpretable and explainable with respect to the data and the underlying business hypothesis. In this article, we present the TRAnsparent Neural Network (TRANN), which examines and explains the network structure (model size) using statistical methods. Our aim is to create a framework for deriving the right size and the relevant connections of a network that can explain the data and address business queries. In this paper, we restrict ourselves to analysing the feed-forward neural network as a nonlinear regression model and studying the properties of its parameters, guided by statistical distributions, information-theoretic criteria and simulation techniques.
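
To make the modelling idea concrete, the following is a minimal illustrative sketch (not the paper's TRANN implementation): a one-hidden-layer feed-forward network is treated as a nonlinear regression model, fitted by least squares, and candidate hidden-layer sizes are compared with an information criterion (here a Gaussian-error AIC). The toy data, the chosen hidden-layer sizes and the helper functions (unpack, predict, fit_and_score) are hypothetical and only for illustration; NumPy and SciPy are assumed to be available.

# Illustrative sketch only: a feed-forward network viewed as nonlinear regression,
# with model size selected by an information criterion. Not the authors' code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))                       # toy inputs
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.1, size=200)   # toy responses

def unpack(theta, p, h):
    """Split the flat parameter vector into input weights, hidden biases, output weights, output bias."""
    W1 = theta[: p * h].reshape(p, h)
    b1 = theta[p * h : p * h + h]
    w2 = theta[p * h + h : p * h + 2 * h]
    b2 = theta[-1]
    return W1, b1, w2, b2

def predict(theta, X, h):
    """One-hidden-layer feed-forward network with tanh activation and linear output."""
    W1, b1, w2, b2 = unpack(theta, X.shape[1], h)
    return np.tanh(X @ W1 + b1) @ w2 + b2

def rss(theta, X, y, h):
    """Residual sum of squares: the nonlinear least-squares objective."""
    resid = y - predict(theta, X, h)
    return float(resid @ resid)

def fit_and_score(X, y, h):
    """Fit the network by least squares and return a Gaussian-error AIC."""
    p, n = X.shape[1], len(y)
    k = p * h + 2 * h + 1                       # number of free parameters
    theta0 = rng.normal(scale=0.5, size=k)
    res = minimize(rss, theta0, args=(X, y, h), method="BFGS")
    return n * np.log(res.fun / n) + 2 * k      # AIC = n log(RSS/n) + 2k

for h in (1, 2, 4, 8):
    print(f"hidden units = {h}: AIC = {fit_and_score(X, y, h):.1f}")

Under this sketch, the hidden-layer size with the lowest AIC would be preferred; the paper's framework additionally examines the statistical distribution of the fitted parameters and uses simulation to decide which individual connections are relevant.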



Acknowledgements

We take this opportunity to express our gratitude to everyone who supported us in this work; we are thankful for their intellectual guidance, invaluable constructive criticism and friendly advice during the project. We are sincerely grateful to them for sharing their truthful and illuminating views on a number of issues related to the project. We extend our warm thanks to our colleagues Koushik Khan and Sachin Verma for their help in writing the Python and R code. We would also like to thank Prof. Debasis Kundu of IIT Kanpur, who provided valuable references and suggestions for this work.

Author information

Corresponding author

Correspondence to Sandip Kumar Pal.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Pal, S.K., Bhave, A.A., Banerjee, K. (2021). Explainable Artificial Intelligence Model: Analysis of Neural Network Parameters. In: Laha, A.K. (eds) Applied Advanced Analytics. Springer Proceedings in Business and Economics. Springer, Singapore. https://doi.org/10.1007/978-981-33-6656-5_4
