
Explaining AI Decisions Using Efficient Methods for Learning Sparse Boolean Formulae

  • Susmit Jha
  • Tuhin Sahai
  • Vasumathi Raman
  • Alessandro Pinto
  • Michael Francis

Abstract

In this paper, we consider the problem of learning Boolean formulae from examples obtained by actively querying an oracle that labels these examples as either positive or negative. This problem has received attention in both the machine learning and formal methods communities, and it has been shown to have exponential worst-case complexity in the general case as well as under many restrictions. Here, we focus on learning sparse Boolean formulae, which depend on only a small (but unknown) subset of the overall vocabulary of atomic propositions. We propose two algorithms, the first based on binary search in the Hamming space and the second based on random walks on the Boolean hypercube, that learn such sparse Boolean formulae with a given confidence. The sparsity assumption is motivated by the problem of mining explanations for decisions made by artificially intelligent (AI) algorithms, where the explanation of an individual decision may depend on a small but unknown subset of all the inputs to the algorithm. We demonstrate the use of these algorithms in automatically generating explanations of such decisions. These explanations make intelligent systems more understandable and accountable to human users, facilitate easier audits, and provide diagnostic information in the case of failure. The proposed approach treats the AI algorithm as a black-box oracle; it is therefore broadly applicable and agnostic to the specific AI algorithm. We show that the number of examples needed by both proposed algorithms grows only logarithmically with the size of the vocabulary of atomic propositions. We illustrate the practical effectiveness of our approach on a diverse set of case studies.
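
To make the first algorithm concrete, here is a minimal sketch in Python (our illustration, not the authors' implementation; the helper find_relevant_bit, the toy oracle f, and the example pair are assumptions for exposition). It shows how a binary search along a Hamming path between a positive and a negative example isolates one variable the oracle depends on, using a number of membership queries logarithmic in the vocabulary size:

```python
def find_relevant_bit(oracle, pos, neg):
    """Given bit tuples with oracle(pos) != oracle(neg), return the index
    of one variable the oracle provably depends on, in O(log n) queries."""
    assert oracle(pos) != oracle(neg)
    diff = [i for i in range(len(pos)) if pos[i] != neg[i]]
    while len(diff) > 1:
        half = diff[: len(diff) // 2]
        # Move `pos` toward `neg` on the first half of the differing bits.
        mid = list(pos)
        for i in half:
            mid[i] = neg[i]
        mid = tuple(mid)
        if oracle(mid) != oracle(pos):
            # Label flipped: a relevant bit lies among the flipped half.
            neg, diff = mid, half
        else:
            # Label unchanged: a relevant bit lies in the remaining half.
            pos, diff = mid, diff[len(diff) // 2:]
    return diff[0]

# Toy oracle: a 3-sparse formula over a 16-variable vocabulary.
f = lambda x: (x[2] and not x[7]) or x[11]
pos = tuple(int(i in (2, 11)) for i in range(16))  # f(pos) is truthy
neg = (0,) * 16                                    # f(neg) is falsy
print(find_relevant_bit(f, pos, neg))              # prints 11
```

Repeating this subroutine from fresh example pairs recovers the formula's full support, which is what yields the logarithmic query growth claimed above; on our reading of the abstract, the random-walk variant instead takes single-bit steps on the hypercube and flags a coordinate as relevant whenever a step flips the oracle's label.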

Keywords

Explainable AI · Boolean formula learning · Machine learning · Formal methods · Interpretable AI · Sparse learning

Acknowledgements

The authors acknowledge support from the National Science Foundation (NSF) Cyber-Physical Systems Project #1740079, the NSF Software & Hardware Foundations Project #1750009, and US ARL Cooperative Agreement W911NF-17-2-0196 on the Internet of Battle Things (IoBT).

Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. SRI International, Menlo Park, USA
  2. United Technologies Research Center, Berkeley, USA
