
“Why Did You Do That?”

Explaining Black Box Models with Inductive Synthesis
  • Görkem Paçacı
  • David Johnson
  • Steve McKeever
  • Andreas Hamfelt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11540)

Abstract

By their nature, black box models are opaque: their internal composition is hidden, which makes generating explanations for their responses to stimuli challenging. Explaining black box models has become increasingly important given the prevalence of AI and ML systems and the need to build legal and regulatory frameworks around them. Such explanations can also increase trust in these otherwise uncertain systems. In this paper we present RICE, a method for generating explanations of the behaviour of black box models by (1) probing a model with sensitivity analysis to extract input-output examples; (2) applying CNPInduce, a method for inductive logic program synthesis, to generate logic programs from critical input-output pairs; and (3) interpreting the synthesised program as a human-readable explanation. We demonstrate the method by generating explanations of an artificial neural network trained to follow simple traffic rules in a hypothetical self-driving car simulation. We conclude with a discussion of the scalability and usability of our approach and its potential applications to explanation-critical scenarios.
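To make step (1) more concrete, the sketch below shows, in Python, one way a black box might be probed over a small input grid so that critical input-output pairs (those where the decision flips) can be collected for a synthesiser such as CNPInduce. The toy model, helper names, and traffic-rule logic are illustrative assumptions only and are not taken from the paper's implementation.

```python
# Hypothetical sketch of probing a black box and collecting critical
# input-output pairs. All names here are illustrative, not the paper's API.
import itertools

def black_box(light_colour, distance_to_light):
    """Toy stand-in for a trained network: decide whether to brake."""
    return "brake" if light_colour == "red" and distance_to_light < 50 else "drive"

def probe(model, colours, distances):
    """Query the model over a small input grid and record every response."""
    return [((colour, dist), model(colour, dist))
            for colour, dist in itertools.product(colours, distances)]

def critical_pairs(examples):
    """Keep adjacent probes whose outputs differ; these boundary cases are
    the most informative examples to hand to the program synthesiser."""
    critical = []
    for (inp_a, out_a), (inp_b, out_b) in zip(examples, examples[1:]):
        if out_a != out_b:
            critical.extend([(inp_a, out_a), (inp_b, out_b)])
    return critical

if __name__ == "__main__":
    examples = probe(black_box, ["red", "green"], [10, 40, 60, 90])
    for pair in critical_pairs(examples):
        print(pair)  # e.g. (('red', 40), 'brake') and (('red', 60), 'drive')
```

In the method described above, such examples would then be passed to the inductive synthesiser (step 2), whose output program is read back as an explanation (step 3); the interface to CNPInduce itself is not shown here.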

Keywords

Artificial intelligence · Machine learning · Black box models · Explanation · Inductive logic · Program synthesis

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Görkem Paçacı
  • David Johnson
  • Steve McKeever
  • Andreas Hamfelt

  1. Department of Informatics and Media, Uppsala University, Uppsala, Sweden
