Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers

  • Riccardo Guidotti
  • Anna Monreale
  • Leonardo Cariaggi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11439)

Abstract

Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black box decision systems is nowadays a crucial topic. We address the problem of providing meaningful explanations in the widely applied task of image classification. In particular, we explore the impact of changing the neighborhood generation function of a local interpretable model-agnostic explanator by proposing four different variants. All the proposed methods are based on a grid-based segmentation of the images, but each adopts a different strategy for generating the neighborhood of the image for which an explanation is required. An extensive experimental evaluation shows both the improvements and the weaknesses of each proposed approach.
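To make the idea concrete, the sketch below shows (in Python with NumPy) how a grid-based neighborhood generator for a LIME-style local explainer might look. This is not the authors' implementation: the function names (grid_segments, generate_neighborhood), the default grid size, the masking probability, and the use of a constant fill value for "switched off" cells are all illustrative assumptions.

```python
# Minimal sketch of grid-based neighborhood generation for a
# LIME-style local explainer. All names and defaults are hypothetical.
import numpy as np

def grid_segments(image, cells_per_side=4):
    """Partition an image into a regular grid of cells, returning a
    segment-id map with the same height and width as the image."""
    h, w = image.shape[:2]
    rows = np.minimum(np.arange(h) * cells_per_side // h, cells_per_side - 1)
    cols = np.minimum(np.arange(w) * cells_per_side // w, cells_per_side - 1)
    return rows[:, None] * cells_per_side + cols[None, :]

def generate_neighborhood(image, segments, n_samples=100, p_off=0.5,
                          fill_value=0, rng=None):
    """Create perturbed copies of `image` by switching random grid
    cells 'off' (replacing their pixels with `fill_value`). Returns
    the perturbed images plus the binary on/off masks, which act as
    the interpretable representation for the local surrogate model."""
    rng = np.random.default_rng(rng)
    n_segments = int(segments.max()) + 1
    # One row per neighbor: True means the grid cell is kept.
    masks = rng.random((n_samples, n_segments)) >= p_off
    neighborhood = np.empty((n_samples,) + image.shape, dtype=image.dtype)
    for i, mask in enumerate(masks):
        perturbed = image.copy()
        perturbed[~mask[segments]] = fill_value  # blank the 'off' cells
        neighborhood[i] = perturbed
    return neighborhood, masks.astype(int)
```

Each row of the returned masks is the binary interpretable representation of one neighbor; fitting a linear surrogate on these masks against the black box's predictions for the perturbed images then yields per-cell importance weights, in the spirit of LIME. The four variants studied in the paper differ precisely in how such a neighborhood is generated from the grid cells.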

Acknowledgements

This work is partially supported by the European Community's H2020 program under the funding scheme "INFRAIA-1-2014-2015: Research Infrastructures" G.A. 654024 "SoBigData", http://www.sobigdata.eu, by the European Union's H2020 program under G.A. 78835, "Pro-Res", http://prores-project.eu/, and by the European Union's H2020 program under G.A. 780754, "Track & Know".


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Riccardo Guidotti (1, 2)
  • Anna Monreale (2)
  • Leonardo Cariaggi (2)
  1. ISTI-CNR, Pisa, Italy
  2. University of Pisa, Pisa, Italy
