Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9887)

Abstract

Layer-wise relevance propagation is a framework that decomposes the prediction of a deep neural network computed for a sample, e.g. an image, into relevance scores for the individual input dimensions of the sample, such as the subpixels of an image. While this approach can be applied directly to generalized linear mappings, product-type non-linearities are not covered. This paper proposes an approach that extends layer-wise relevance propagation to neural networks with local renormalization layers, a very common product-type non-linearity in convolutional neural networks. We evaluate the proposed method for local renormalization layers on the CIFAR-10, ImageNet and MIT Places datasets.
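As a rough illustration of the relevance-redistribution idea described in the abstract, the sketch below shows the basic epsilon-stabilized LRP rule for a single linear layer in NumPy. It is not the paper's extended rule for local renormalization layers; the function name lrp_linear, the eps stabilizer, and the toy dimensions are illustrative assumptions, not part of the paper.

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out from the outputs of y = W x + b back to the inputs x.

    x     : (d_in,)        layer input
    W     : (d_out, d_in)  weights
    b     : (d_out,)       biases
    R_out : (d_out,)       relevance assigned to the layer outputs
    """
    z = W * x                              # contributions z_kj of input j to output k, shape (d_out, d_in)
    z_sum = z.sum(axis=1) + b              # total pre-activation per output neuron
    z_sum = z_sum + eps * np.sign(z_sum)   # stabilize against division by values near zero
    # R_j = sum_k z_kj / z_k * R_k : each output's relevance is split in proportion to the contributions
    return (z / z_sum[:, None] * R_out[:, None]).sum(axis=0)

# Toy usage: relevance is (approximately) conserved, sum(R_in) ≈ sum(R_out)
rng = np.random.default_rng(0)
x, W, b = rng.normal(size=4), rng.normal(size=(3, 4)), rng.normal(size=3)
R_out = np.maximum(W @ x + b, 0)           # e.g. initialize relevance with the rectified layer output
R_in = lrp_linear(x, W, b, R_out)
print(R_in, R_in.sum(), R_out.sum())
```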

Keywords

Neural networks · Image classification · Interpretability


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Alexander Binder (1)
  • Grégoire Montavon (2)
  • Sebastian Lapuschkin (3)
  • Klaus-Robert Müller (2, 4)
  • Wojciech Samek (3)

  1. ISTD Pillar, Singapore University of Technology and Design, Singapore, Singapore
  2. Machine Learning Group, Technische Universität Berlin, Berlin, Germany
  3. Machine Learning Group, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
  4. Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
