
ICIE 1.0: A Novel Tool for Interactive Contextual Interaction Explanations

  • Simon B. van der Zon
  • Wouter Duivesteijn
  • Werner van Ipenburg
  • Jan Veldsink
  • Mykola Pechenizkiy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11054)

Abstract

With the rise of new laws around privacy and awareness, explanation of automated decision making becomes increasingly important. Nowadays, machine learning models are used to aid experts in domains such as banking and insurance to find suspicious transactions and to approve loans and credit card applications. Companies using such systems must be able to provide the rationale behind their decisions; blindly relying on the trained model is not sufficient. A number of methods currently provide insights into models and their decisions, but they are typically good at showing either global or local behavior, not both. Global behavior is often too complex to visualize or comprehend, so approximations are shown, while visualizing local behavior is often misleading because it is difficult to define what “local” exactly means (i.e., our methods don’t “know” how easily a feature value can be changed; which ones are flexible, and which ones are static). We introduce the ICIE framework (Interactive Contextual Interaction Explanations), which enables users to view explanations of individual instances under different contexts. We will see that various contexts for the same case lead to different explanations, revealing different feature interactions.
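The core idea — that the same case can receive different explanations depending on which reference population (context) it is compared against — can be illustrated with a minimal sketch. This is not the ICIE implementation (the abstract does not specify the method's internals); the toy model, feature names, and the `contextual_contribution` helper are all hypothetical. The sketch computes a perturbation-based contribution for one feature against a user-chosen background set:

```python
# Illustrative sketch only (not the ICIE algorithm): a feature's contribution
# is the average change in the model output when that feature is replaced by
# values drawn from a context-restricted background set. Narrowing the
# context changes the contribution for the very same instance.
from statistics import mean

def model(x):
    # Hypothetical toy loan-approval rule: approve when income exceeds
    # debt by more than 20 (units arbitrary).
    return 1.0 if x["income"] - x["debt"] > 20 else 0.0

def contextual_contribution(instance, feature, background, model):
    """Average drop in the model output when `feature` is swapped with
    values observed in the chosen context (background instances)."""
    deltas = []
    for ref in background:
        perturbed = dict(instance)
        perturbed[feature] = ref[feature]
        deltas.append(model(instance) - model(perturbed))
    return mean(deltas)

case = {"income": 50, "debt": 10}  # approved by the toy model
everyone = [
    {"income": 20, "debt": 15},
    {"income": 80, "debt": 5},
    {"income": 30, "debt": 25},
]
# A narrower context: compare only against higher earners.
high_earners = [b for b in everyone if b["income"] >= 30]

broad = contextual_contribution(case, "income", everyone, model)
narrow = contextual_contribution(case, "income", high_earners, model)
```

Against the full population, "income" explains more of the approval than it does within the high-earner context, mirroring the abstract's point that the choice of context shapes the explanation.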

Keywords

Explanations · Feature contributions · Feature interactions · Model transparency · Awareness · Trust · Responsible analytics


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Eindhoven University of Technology, Eindhoven, The Netherlands
  2. Coöperatieve Rabobank U.A., Utrecht, The Netherlands
