
Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers

  • Conference paper

Advances in Knowledge Discovery and Data Mining (PAKDD 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11439)


Abstract

Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black box decision systems is nowadays a crucial topic. We address the problem of providing meaningful explanations for the widely applied task of image classification. In particular, we explore the impact of changing the neighborhood generation function of a local interpretable model-agnostic explanator by proposing four different variants. All the proposed methods rely on a grid-based segmentation of the image, but each adopts a different strategy for generating the neighborhood of the image for which an explanation is required. An extensive experimentation shows both the improvements and the weaknesses of each proposed approach.
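To make the grid-based neighborhood idea concrete, here is a minimal Python sketch of one possible generator: the image is partitioned into a regular grid of patches, and each neighbor is produced by switching patches off at random and replacing them with a neutral fill colour. The helper names (grid_segments, perturb), the 16x16 grid, the selection probability, and the grey fill value are illustrative assumptions, not the exact procedure evaluated in the paper.

import numpy as np

def grid_segments(image, cells=16):
    """Assign each pixel to one of cells x cells grid patches (hypothetical helper)."""
    h, w = image.shape[:2]
    rows = np.minimum(np.arange(h) * cells // h, cells - 1)
    cols = np.minimum(np.arange(w) * cells // w, cells - 1)
    return rows[:, None] * cells + cols[None, :]

def perturb(image, segments, rng, p_off=0.5, fill=127):
    """Create one neighbor: switch each grid patch off with probability p_off.

    'Off' patches are replaced by a neutral fill value; the binary vector z
    records which patches were kept.
    """
    n_cells = segments.max() + 1
    z = rng.random(n_cells) >= p_off          # True = patch kept
    neighbor = image.copy()
    neighbor[~z[segments]] = fill             # grey out the removed patches
    return z.astype(int), neighbor

# Usage: build a neighborhood of 100 (binary vector, perturbed image) pairs around x
rng = np.random.default_rng(0)
x = np.zeros((224, 224, 3), dtype=np.uint8)   # placeholder for the image to be explained
segments = grid_segments(x, cells=16)
neighborhood = [perturb(x, segments, rng) for _ in range(100)]

The binary vectors returned alongside the perturbed images are the interpretable representation on which a local surrogate model would then be fit.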



Notes

  1. IDRs do not necessarily correspond to the features used by the black box in the prediction process. Indeed, such features may not be suitable for being shown as an explanation.

  2. In the rest of this work, the terms feature, patch, area, and piece are used interchangeably to denote the same concept.

  3. For the neighborhood \(N_x\), LIME generates m vectors w uniformly at random, assigning to each a weight that decreases with its distance from the original image. The distance is used to give less importance to noisy images (which are too far away to be considered neighbors) and to focus on the samples that are close to the original picture; see the sketch after these notes.

  4. For the sake of space, we do not report experiments varying the probability of selection.

  5. Source code and dataset can be found at: https://github.com/leqo-c/Tesi.

  6. For the sake of simplicity of exposition and due to length constraints, we analyze both parameters in the same plots and refer interested readers to the repository for further details.

  7. We do not report results using grid sizes smaller than \(8\times 8\) (i.e., \(2\times 2\), \(4\times 4\), \(6\times 6\)) or larger than \(32\times 32\) (i.e., \(64\times 64\), \(128\times 128\)), as they have poor performance compared to those reported.
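To complement note 3, the sketch below illustrates the kind of distance-based weighting it describes, assuming an exponential kernel on the cosine distance between each binary neighbor vector and the all-ones vector that represents the original image (a common choice in LIME-style explainers). The kernel width and the helper name neighborhood_weights are illustrative assumptions.

import numpy as np

def neighborhood_weights(Z, kernel_width=0.25):
    """Weight each binary neighbor vector by its closeness to the original image.

    Z has shape (m, n_patches); the original image corresponds to the
    all-ones vector (every patch kept). Weights decrease with distance, so
    samples far from the original count less when fitting the surrogate.
    """
    original = np.ones(Z.shape[1])
    # cosine distance between each row of Z and the all-ones vector
    dists = 1.0 - (Z @ original) / (np.linalg.norm(Z, axis=1) * np.linalg.norm(original) + 1e-12)
    return np.exp(-(dists ** 2) / kernel_width ** 2)   # exponential kernel

# Usage: sample m binary vectors uniformly at random and weight them
rng = np.random.default_rng(0)
m, n_patches = 100, 256
Z = rng.integers(0, 2, size=(m, n_patches))
weights = neighborhood_weights(Z)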



Acknowledgements

This work is partially supported by the European Community H2020 program under the funding scheme “INFRAIA-1-2014-2015: Research Infrastructures”, G.A. 654024 “SoBigData”, http://www.sobigdata.eu, by the European Union's H2020 program under G.A. 78835, “Pro-Res”, http://prores-project.eu/, and by the European Union's H2020 program under G.A. 780754, “Track & Know”.

Author information


Corresponding author

Correspondence to Riccardo Guidotti.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Guidotti, R., Monreale, A., Cariaggi, L. (2019). Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers. In: Yang, Q., Zhou, Z.-H., Gong, Z., Zhang, M.-L., Huang, S.-J. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2019. Lecture Notes in Computer Science (LNAI), vol 11439. Springer, Cham. https://doi.org/10.1007/978-3-030-16148-4_5


  • DOI: https://doi.org/10.1007/978-3-030-16148-4_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16147-7

  • Online ISBN: 978-3-030-16148-4

  • eBook Packages: Computer Science, Computer Science (R0)
