
Improving transparency of deep neural inference process

  • Regular Paper
  • Published in: Progress in Artificial Intelligence

Abstract

Deep learning techniques have advanced rapidly in recent years and are becoming a necessary component of a wide range of systems. However, the inference process of deep learning is a black box, which makes it poorly suited to safety-critical systems that must exhibit high transparency. In this paper, to address this black-box limitation, we develop a simple analysis method consisting of (1) structural feature analysis, which lists the features contributing to the inference process; (2) linguistic feature analysis, which lists natural-language labels describing the visual attributes of each contributing feature; and (3) consistency analysis, which measures the consistency among the input data, the inference (label), and the results of our structural and linguistic feature analyses. Our analysis is kept simple so that it reflects the actual inference process, ensuring high transparency, and it avoids additional black-box mechanisms such as LSTMs so that its results remain highly human readable. We conduct experiments, discuss the results of our analysis qualitatively and quantitatively, and conclude that our work improves the transparency of neural networks. Evaluated through 12,800 human tasks, 75% of workers answered that the input data and the result of our feature analysis were consistent, and 70% answered that the inference (label) and the result of our feature analysis were consistent. Beyond this evaluation, we find that our analysis also suggests possible next actions for improving a neural network, such as increasing its complexity or collecting additional training data.
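To make the three steps concrete, the sketch below shows one way the pipeline could look in Python/PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: structural feature analysis is approximated with a CAM-style contribution score (classifier weight times globally pooled activation), `resnet18` stands in for whatever network is being analyzed, and `ATTRIBUTE_LABELS` is a hypothetical channel-to-attribute lookup used for linguistic feature analysis. Consistency analysis in the paper is a human evaluation, so the sketch only assembles the artifacts a worker would judge.

```python
# Minimal sketch (not the authors' exact method) of the three analysis steps.
# For a CNN whose classifier is a linear layer over globally pooled conv
# features, the contribution of channel c to class y is approximated as
# fc.weight[y, c] * pooled_activation[c] (a CAM-style score).

import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Hypothetical channel -> visual-attribute labels, for illustration only;
# in practice these would come from annotating what each unit responds to.
ATTRIBUTE_LABELS = {i: f"attribute_{i}" for i in range(512)}

def analyze(x: torch.Tensor, top_k: int = 5):
    """Return the predicted class and the top contributing features
    together with their linguistic (attribute) labels."""
    with torch.no_grad():
        # Forward pass through the conv stack (drop avgpool and fc).
        feats = torch.nn.Sequential(*list(model.children())[:-2])(x)  # (1, 512, H, W)
        pooled = feats.mean(dim=(2, 3)).squeeze(0)                    # (512,)
        logits = model.fc(pooled.unsqueeze(0)).squeeze(0)
        pred = int(logits.argmax())

        # (1) Structural feature analysis: per-channel contribution to pred.
        contrib = model.fc.weight[pred] * pooled                      # (512,)
        top = contrib.topk(top_k).indices.tolist()

        # (2) Linguistic feature analysis: attach attribute labels.
        labeled = [(c, ATTRIBUTE_LABELS[c], float(contrib[c])) for c in top]

    # (3) Consistency analysis (a human evaluation in the paper): present
    # the input image, `pred`, and `labeled` to workers, who judge whether
    # they agree with one another.
    return pred, labeled

pred, labeled = analyze(torch.randn(1, 3, 224, 224))
print(pred, labeled)
```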

Author information

Corresponding author

Correspondence to Hiroshi Kuwajima.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Kuwajima, H., Tanaka, M. & Okutomi, M. Improving transparency of deep neural inference process. Prog Artif Intell 8, 273–285 (2019). https://doi.org/10.1007/s13748-019-00179-x
