
Integrating Deep Learning and Bayesian Reasoning

  • Conference paper

Dependability in Sensor, Cloud, and Big Data Systems and Applications (DependSys 2019)

Abstract

Deep learning (DL) is an excellent function estimator that achieves impressive results on perception tasks such as visual recognition and text recognition. However, its internal architecture acts as a black box: users cannot understand why particular decisions are made. Bayesian reasoning (BR) provides explanation facilities and causal reasoning under uncertainty, which can overcome this shortcoming of DL. This paper proposes a framework for integrating DL and BR that leverages their complementary merits, based on their inherent internal architectures. The migration from a deep neural network (DNN) to a Bayesian network (BN) involves extracting rules from the DNN and constructing an efficient BN from the generated rules, in order to provide intelligent decision support with accurate recommendations and logical explanations to the users.
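To make the DNN-to-BN migration concrete, here is a minimal Python sketch. Nothing in it is the authors' implementation: the toy XOR network with step activations, the exhaustive-enumeration rule extraction, and the names forward, extract_cpts, and p_y_given_x are all assumptions chosen for brevity. The idea it illustrates is that each binary unit of the network becomes a BN node, the extracted input/output behavior fills the conditional probability tables (CPTs), and the BN then answers queries such as P(y = 1 | x) by enumerating hidden states.

```python
# Illustrative sketch (assumed, not the paper's method): migrate a tiny DNN
# to a Bayesian network by extracting its rules and filling CPTs from them.
import itertools

import numpy as np

# A toy "trained" DNN: 2 binary inputs -> 2 hidden units -> 1 output.
# Step activations make every unit itself a binary variable (it computes XOR).
W1 = np.array([[2.0, -2.0],
               [-2.0, 2.0]])
b1 = np.array([-1.0, -1.0])
W2 = np.array([2.0, 2.0])
b2 = -1.0

def step(z):
    return (np.asarray(z) > 0).astype(int)

def forward(x):
    """Run the toy DNN on a binary input vector; return (hidden, output)."""
    h = step(W1 @ x + b1)
    y = int(step(W2 @ h + b2))
    return h, y

def extract_cpts():
    """Decompositional rule extraction: enumerate all input patterns and
    record each unit's behavior as a CPT (deterministic here, soft in general)."""
    cpt_h = {}  # (x0, x1) -> (P(h0=1 | x), P(h1=1 | x))
    cpt_y = {}  # (h0, h1) -> P(y=1 | h)
    for x in itertools.product([0, 1], repeat=2):
        h, y = forward(np.array(x))
        cpt_h[x] = tuple(int(v) for v in h)
        cpt_y[cpt_h[x]] = float(y)
    # Hidden configurations never observed get an uninformative prior.
    for h in itertools.product([0, 1], repeat=2):
        cpt_y.setdefault(h, 0.5)
    return cpt_h, cpt_y

def p_y_given_x(x, cpt_h, cpt_y):
    """BN inference by enumeration: P(y=1 | x) = sum_h P(y=1 | h) * P(h | x),
    with hidden nodes conditionally independent given x (edges x->h_j, h->y)."""
    p_h1 = cpt_h[x]  # activation probability of each hidden node given x
    total = 0.0
    for h in itertools.product([0, 1], repeat=2):
        p_h = 1.0
        for j, hj in enumerate(h):
            p_h *= p_h1[j] if hj else 1.0 - p_h1[j]
        total += p_h * cpt_y[h]
    return total

cpt_h, cpt_y = extract_cpts()
for x in itertools.product([0, 1], repeat=2):
    print(f"x={x}  P(y=1 | x) = {p_y_given_x(x, cpt_h, cpt_y):.2f}")
    # Prints the XOR truth table: 0.00, 1.00, 1.00, 0.00.
```

In a realistic migration the CPT entries would be soft probabilities estimated from data rather than 0/1 values, and the extraction step would have to scale beyond exhaustive enumeration, e.g., via decompositional rule-extraction methods in the spirit of DeepRED; the enumeration above is only to keep the sketch self-contained.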



Acknowledgments

This work was supported by the Fundamental Research Grant Scheme (FRGS) from the Ministry of Education and Multimedia University, Malaysia (Project ID: FRGS/1/2018/ICT02/MMU/02/1).

Author information

Corresponding author

Correspondence to Wooi Ping Cheah.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Tan, S.Y., Cheah, W.P., Tan, S.C. (2019). Integrating Deep Learning and Bayesian Reasoning. In: Wang, G., Bhuiyan, M.Z.A., De Capitani di Vimercati, S., Ren, Y. (eds) Dependability in Sensor, Cloud, and Big Data Systems and Applications. DependSys 2019. Communications in Computer and Information Science, vol 1123. Springer, Singapore. https://doi.org/10.1007/978-981-15-1304-6_10


  • DOI: https://doi.org/10.1007/978-981-15-1304-6_10


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-1303-9

  • Online ISBN: 978-981-15-1304-6

  • eBook Packages: Computer Science, Computer Science (R0)
