
Contextual Semantic Interpretability

  • Conference paper
Computer Vision – ACCV 2020 (ACCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12625)

Abstract

Convolutional neural networks (CNNs) are known to learn an image representation that captures concepts relevant to the task, but they do so in an implicit way that hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be recombined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision. In this paper, we look into semantic bottlenecks that capture context: we want attributes to form groups of a few meaningful elements that participate jointly in the final decision. We use a two-layer semantic bottleneck that gathers attributes into interpretable, sparse groups, allowing them to contribute differently to the final output depending on the context. We test our contextual semantic interpretable bottleneck (CSIB) on the task of landscape scenicness estimation and train the semantic interpretable bottleneck using an auxiliary database (SUN Attributes). Our model yields predictions as accurate as a non-interpretable baseline when applied to a real-world test set of Flickr images, all while providing clear and interpretable explanations for each prediction.
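
To make the two-layer bottleneck idea concrete, below is a minimal PyTorch sketch: a backbone first predicts interpretable attributes (the semantic bottleneck), the attributes are then recombined into a small number of group activations, and a final linear layer maps the groups to a scenicness score. The layer names, sizes (102 attributes, 8 groups) and the plain ReLU grouping are illustrative assumptions, not the exact CSIB architecture or its sparsity mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class ContextualSemanticBottleneckSketch(nn.Module):
    """Backbone -> interpretable attributes -> sparse groups -> scenicness score."""

    def __init__(self, num_attributes: int = 102, num_groups: int = 8):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to and including global average pooling.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        # Semantic bottleneck: each unit is supervised to predict one
        # human-interpretable attribute (e.g. from the SUN Attributes vocabulary).
        self.attribute_head = nn.Linear(2048, num_attributes)
        # Second bottleneck layer: attributes are recombined into a few groups;
        # in practice a sparsity constraint (not shown) would keep each group
        # built from only a handful of attributes.
        self.grouping = nn.Linear(num_attributes, num_groups)
        # Linear read-out from group activations to the final score.
        self.score = nn.Linear(num_groups, 1)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x).flatten(1)                # (B, 2048)
        attrs = torch.sigmoid(self.attribute_head(feats))  # attribute probabilities
        groups = F.relu(self.grouping(attrs))              # group activations
        return self.score(groups).squeeze(1), attrs, groups


# Usage: every prediction comes with the attribute and group activations
# that produced it, which is where the explanation is read off.
model = ContextualSemanticBottleneckSketch()
score, attrs, groups = model(torch.randn(2, 3, 224, 224))
```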

R. Flamary—Partially funded through the project OATMIL ANR-17-CE23-0012 and 3IA Cote d’Azur Investments ANR-19-P3IA-0002 of the French National Research Agency.



Author information

Corresponding author

Correspondence to Diego Marcos.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 7586 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Marcos, D., Fong, R., Lobry, S., Flamary, R., Courty, N., Tuia, D. (2021). Contextual Semantic Interpretability. In: Ishikawa, H., Liu, C.L., Pajdla, T., Shi, J. (eds.) Computer Vision – ACCV 2020. ACCV 2020. Lecture Notes in Computer Science, vol. 12625. Springer, Cham. https://doi.org/10.1007/978-3-030-69538-5_22

  • DOI: https://doi.org/10.1007/978-3-030-69538-5_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69537-8

  • Online ISBN: 978-3-030-69538-5

  • eBook Packages: Computer Science, Computer Science (R0)
